Predicting missing links via correlation between nodes
NASA Astrophysics Data System (ADS)
Liao, Hao; Zeng, An; Zhang, Yi-Cheng
2015-10-01
As a fundamental problem in many different fields, link prediction aims to estimate the likelihood that a link exists between two nodes, based on the observed information. Since this problem is related to many applications ranging from uncovering missing data to predicting the evolution of networks, link prediction has been intensively investigated recently and many methods have been proposed. The essential challenge of link prediction is to estimate the similarity between nodes. Most of the existing methods are based on the common neighbor index and its variants. In this paper, we propose to calculate the similarity between nodes by the Pearson correlation coefficient. This method is found to be very effective when applied to calculate similarity based on high-order paths. We finally fuse the correlation-based method with the resource allocation method, and find that the combined method can substantially outperform the existing methods, especially in sparse networks.
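A minimal sketch of the idea described above, not the authors' implementation: each node's row of the adjacency matrix (or of a matrix power, to capture higher-order paths) is treated as a feature vector, candidate links are scored by the Pearson correlation between those vectors, and the score is optionally blended with the resource-allocation index. The path order, the fusion weight `alpha`, and the toy network are illustrative assumptions.

```python
import numpy as np

def pearson_similarity(A, order=2):
    """Pearson correlation between rows of A^order, used as a node-similarity matrix."""
    P = np.linalg.matrix_power(A, order).astype(float)
    P -= P.mean(axis=1, keepdims=True)                  # center each row
    norms = np.linalg.norm(P, axis=1)
    norms[norms == 0] = 1.0                             # guard against constant rows
    S = (P @ P.T) / np.outer(norms, norms)
    np.fill_diagonal(S, 0.0)
    return S

def resource_allocation(A):
    """RA index: sum over common neighbours z of 1/degree(z)."""
    deg = A.sum(axis=1)
    inv_deg = np.divide(1.0, deg, out=np.zeros(len(deg)), where=deg > 0)
    S = A @ np.diag(inv_deg) @ A
    np.fill_diagonal(S, 0.0)
    return S

def combined_score(A, alpha=0.5):
    """Illustrative linear fusion of the two similarity matrices."""
    return alpha * pearson_similarity(A) + (1 - alpha) * resource_allocation(A)

# toy 5-node network; non-existing pairs with the highest score are the predicted links
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0],
              [1, 1, 0, 0, 1],
              [0, 1, 0, 0, 1],
              [0, 0, 1, 1, 0]])
print(np.round(combined_score(A), 3))
```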
Braschel, Melissa C; Svec, Ivana; Darlington, Gerarda A; Donner, Allan
2016-04-01
Many investigators rely on previously published point estimates of the intraclass correlation coefficient rather than on their associated confidence intervals to determine the required size of a newly planned cluster randomized trial. Although confidence interval methods for the intraclass correlation coefficient that can be applied to community-based trials have been developed for a continuous outcome variable, fewer methods exist for a binary outcome variable. The aim of this study is to evaluate confidence interval methods for the intraclass correlation coefficient applied to binary outcomes in community intervention trials enrolling a small number of large clusters. Existing methods for confidence interval construction are examined and compared to a new ad hoc approach based on dividing clusters into a large number of smaller sub-clusters and subsequently applying existing methods to the resulting data. Monte Carlo simulation is used to assess the width and coverage of confidence intervals for the intraclass correlation coefficient based on Smith's large sample approximation of the standard error of the one-way analysis of variance estimator, an inverted modified Wald test for the Fleiss-Cuzick estimator, and intervals constructed using a bootstrap-t applied to a variance-stabilizing transformation of the intraclass correlation coefficient estimate. In addition, a new approach is applied in which clusters are randomly divided into a large number of smaller sub-clusters with the same methods applied to these data (with the exception of the bootstrap-t interval, which assumes large cluster sizes). These methods are also applied to a cluster randomized trial on adolescent tobacco use for illustration. When applied to a binary outcome variable in a small number of large clusters, existing confidence interval methods for the intraclass correlation coefficient provide poor coverage. However, confidence intervals constructed using the new approach combined with Smith's method provide nominal or close to nominal coverage when the intraclass correlation coefficient is small (<0.05), as is the case in most community intervention trials. This study concludes that when a binary outcome variable is measured in a small number of large clusters, confidence intervals for the intraclass correlation coefficient may be constructed by dividing existing clusters into sub-clusters (e.g. groups of 5) and using Smith's method. The resulting confidence intervals provide nominal or close to nominal coverage across a wide range of parameters when the intraclass correlation coefficient is small (<0.05). Application of this method should provide investigators with a better understanding of the uncertainty associated with a point estimator of the intraclass correlation coefficient used for determining the sample size needed for a newly designed community-based trial. © The Author(s) 2015.
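A hedged sketch of the sub-clustering idea described above: each large cluster of binary outcomes is randomly split into sub-clusters (here of size 5) and the standard one-way ANOVA estimator of the intraclass correlation coefficient is applied to the resulting data. The confidence-interval step based on Smith's standard-error approximation is not reproduced here, and the simulated prevalences and cluster sizes are assumptions.

```python
import numpy as np

def split_into_subclusters(cluster, size=5, rng=None):
    """Randomly divide one cluster's binary outcomes into sub-clusters of about `size`."""
    rng = rng or np.random.default_rng(0)
    y = rng.permutation(np.asarray(cluster))
    return [y[i:i + size] for i in range(0, len(y), size)]

def anova_icc(clusters):
    """One-way ANOVA (analysis of variance) point estimator of the ICC."""
    k = len(clusters)
    n_i = np.array([len(c) for c in clusters], dtype=float)
    N = n_i.sum()
    means = np.array([np.mean(c) for c in clusters])
    grand = np.concatenate(clusters).mean()
    msb = np.sum(n_i * (means - grand) ** 2) / (k - 1)
    msw = sum(np.sum((np.asarray(c) - m) ** 2) for c, m in zip(clusters, means)) / (N - k)
    n0 = (N - np.sum(n_i ** 2) / N) / (k - 1)        # adjusted average cluster size
    return (msb - msw) / (msb + (n0 - 1) * msw)

# illustrative data: 4 large clusters of binary outcomes, then split into sub-clusters of 5
rng = np.random.default_rng(1)
big_clusters = [rng.binomial(1, p, 200) for p in (0.10, 0.12, 0.15, 0.11)]
sub_clusters = [s for c in big_clusters for s in split_into_subclusters(c, 5, rng)]
print("ICC estimate from sub-clusters:", round(anova_icc(sub_clusters), 4))
```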
Role of short-range correlation in facilitation of wave propagation in a long-range ladder chain
NASA Astrophysics Data System (ADS)
Farzadian, O.; Niry, M. D.
2018-09-01
We extend a method for generating a random chain that has a kind of short-range correlation, induced by a repeated sequence, while retaining long-range correlation. Three distinct methods are considered to study the localization-delocalization transition of mechanical waves in one-dimensional disordered media in which short- and long-range correlations coexist. First, a transfer-matrix method was used to calculate numerically the localization length of a wave in a binary chain. We found that the existence of short-range correlation in a long-range correlated chain can increase the localization length at the resonance frequency Ωc. Then, we carried out an analytical study of the delocalization properties of the waves in correlated disordered media around Ωc. Finally, we applied a dynamical method based on direct numerical simulation of the wave equation to study the propagation of waves in the correlated chain. Imposing short-range correlation on the long-range-correlated background leads the propagation to super-diffusive transport. The results obtained with all three methods are in agreement with each other.
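A hedged sketch of the transfer-matrix step for a harmonic binary mass chain with a uniform spring constant: the localization length is taken as the inverse Lyapunov exponent of the amplitude recursion. The masses, spring constant, frequency grid, and the uncorrelated binary disorder used here are illustrative assumptions (the correlated chains of the abstract would be substituted for the disorder generator).

```python
import numpy as np

def binary_chain(n, m_light=1.0, m_heavy=2.0, p=0.5, rng=None):
    """Uncorrelated binary chain of masses; a correlated chain would replace this."""
    rng = rng or np.random.default_rng(0)
    return np.where(rng.random(n) < p, m_light, m_heavy)

def localization_length(masses, omega, spring_k=1.0):
    """Inverse Lyapunov exponent of u_{n+1} = (2 - m_n w^2 / k) u_n - u_{n-1}."""
    v = np.array([1.0, 1.0])                 # state vector (u_n, u_{n-1})
    log_growth = 0.0
    for m in masses:
        T = np.array([[2.0 - m * omega**2 / spring_k, -1.0],
                      [1.0, 0.0]])
        v = T @ v
        norm = np.linalg.norm(v)
        log_growth += np.log(norm)
        v /= norm                            # renormalize to avoid overflow
    gamma = log_growth / len(masses)         # Lyapunov exponent
    return 1.0 / gamma if gamma > 0 else np.inf

chain = binary_chain(50_000)
for omega in (0.2, 0.5, 1.0):
    print(f"omega = {omega}: localization length ~ {localization_length(chain, omega):.1f}")
```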
Ma, Chuang; Wang, Xiangfeng
2012-09-01
One of the computational challenges in plant systems biology is to accurately infer transcriptional regulation relationships based on correlation analyses of gene expression patterns. Despite several correlation methods that are applied in biology to analyze microarray data, concerns regarding the compatibility of these methods with the gene expression data profiled by high-throughput RNA transcriptome sequencing (RNA-Seq) technology have been raised. These concerns are mainly due to the fact that the distribution of read counts in RNA-Seq experiments is different from that of fluorescence intensities in microarray experiments. Therefore, a comprehensive evaluation of the existing correlation methods and, if necessary, introduction of novel methods into biology is appropriate. In this study, we compared four existing correlation methods used in microarray analysis and one novel method called the Gini correlation coefficient on previously published microarray-based and sequencing-based gene expression data in Arabidopsis (Arabidopsis thaliana) and maize (Zea mays). The comparisons were performed on more than 11,000 regulatory relationships in Arabidopsis, including 8,929 pairs of transcription factors and target genes. Our analyses pinpointed the strengths and weaknesses of each method and indicated that the Gini correlation can compensate for the shortcomings of the Pearson correlation, the Spearman correlation, the Kendall correlation, and the Tukey's biweight correlation. The Gini correlation method, with the other four evaluated methods in this study, was implemented as an R package named rsgcc that can be utilized as an alternative option for biologists to perform clustering analyses of gene expression patterns or transcriptional network analyses.
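A minimal sketch of one common form of the Gini correlation (values of one variable paired with the ranks of the other), offered as an illustration rather than the exact rsgcc implementation; note the measure is asymmetric, so gcc(x, y) generally differs from gcc(y, x). The simulated skewed expression values are an assumption.

```python
import numpy as np
from scipy.stats import rankdata

def gini_correlation(x, y):
    """Gini correlation of x with respect to y: cov(x, rank(y)) / cov(x, rank(x))."""
    x = np.asarray(x, dtype=float)
    num = np.cov(x, rankdata(y))[0, 1]
    den = np.cov(x, rankdata(x))[0, 1]
    return num / den

rng = np.random.default_rng(0)
expr_a = rng.gamma(2.0, 2.0, 50)                 # skewed, RNA-Seq-like values
expr_b = 1.5 * expr_a + rng.normal(0, 1.0, 50)   # co-regulated partner gene
print(gini_correlation(expr_a, expr_b), gini_correlation(expr_b, expr_a))
```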
Multilabel learning via random label selection for protein subcellular multilocations prediction.
Wang, Xiao; Li, Guo-Zheng
2013-01-01
Prediction of protein subcellular localization is an important but challenging problem, particularly when proteins may simultaneously exist at, or move between, two or more different subcellular location sites. Most of the existing protein subcellular localization methods are only used to deal with the single-location proteins. In the past few years, only a few methods have been proposed to tackle proteins with multiple locations. However, they only adopt a simple strategy, that is, transforming the multilocation proteins to multiple proteins with single location, which does not take correlations among different subcellular locations into account. In this paper, a novel method named random label selection (RALS) (multilabel learning via RALS), which extends the simple binary relevance (BR) method, is proposed to learn from multilocation proteins in an effective and efficient way. RALS does not explicitly find the correlations among labels, but rather implicitly attempts to learn the label correlations from data by augmenting the original feature space with randomly selected labels as additional input features. Through the fivefold cross-validation test on a benchmark data set, we demonstrate that our proposed method, which takes label correlations into consideration, clearly outperforms the baseline BR method without consideration of label correlations, indicating that correlations among different subcellular locations really exist and contribute to improvement of prediction performance. Experimental results on two benchmark data sets also show that our proposed methods achieve significantly higher performance than some other state-of-the-art methods in predicting subcellular multilocations of proteins. The prediction web server is available at http://levis.tongji.edu.cn:8080/bioinfo/MLPred-Euk/ for public use.
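A hedged sketch of the random-label-selection idea with scikit-learn: each label's binary classifier sees the original features augmented by a random subset of the other labels, taken from the training data at fit time and from first-pass binary-relevance predictions at test time. The test-time ordering, the number of selected labels `k`, the base classifier, and the simulated data are assumptions, not necessarily the paper's exact choices.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

class RandomLabelSelectionBR:
    """Binary relevance where each per-label model also sees k randomly chosen other labels."""

    def __init__(self, k=1, seed=0):
        self.k, self.rng = k, np.random.default_rng(seed)

    def fit(self, X, Y):
        n_labels = Y.shape[1]
        self.base = [LogisticRegression(max_iter=1000).fit(X, Y[:, j]) for j in range(n_labels)]
        self.extra, self.models = [], []
        for j in range(n_labels):
            others = [i for i in range(n_labels) if i != j]
            chosen = self.rng.choice(others, size=min(self.k, len(others)), replace=False)
            self.extra.append(chosen)
            Xa = np.hstack([X, Y[:, chosen]])          # true labels as extra features
            self.models.append(LogisticRegression(max_iter=1000).fit(Xa, Y[:, j]))
        return self

    def predict(self, X):
        # first-pass estimates from plain binary relevance feed the augmented models
        Y0 = np.column_stack([m.predict(X) for m in self.base])
        return np.column_stack(
            [m.predict(np.hstack([X, Y0[:, idx]])) for m, idx in zip(self.models, self.extra)]
        )

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))
latent = X[:, :1]                                      # shared driver -> correlated labels
Y = (latent + 0.5 * rng.normal(size=(200, 3)) > 0).astype(int)
print(RandomLabelSelectionBR(k=1).fit(X, Y).predict(X[:5]))
```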
Pacini, Clare; Ajioka, James W; Micklem, Gos
2017-04-12
Correlation matrices are important in inferring relationships and networks between regulatory or signalling elements in biological systems. With currently available technology, sample sizes for experiments are typically small, meaning that these correlations can be difficult to estimate. At a genome-wide scale, estimation of correlation matrices can also be computationally demanding. We develop an empirical Bayes approach to improve covariance estimates for gene expression, where we assume the covariance matrix takes a block diagonal form. Our method shows lower false discovery rates than existing methods on simulated data. Applied to a real data set from Bacillus subtilis, we demonstrate its ability to detect known regulatory units and interactions between them. We demonstrate that, compared to existing methods, our method is able to find significant covariances and also to control false discovery rates, even when the sample size is small (n=10). The method can be used to find potential regulatory networks, and it may also be used as a pre-processing step for methods that calculate, for example, partial correlations, so enabling the inference of the causal and hierarchical structure of the networks.
Determining the Number of Components from the Matrix of Partial Correlations
ERIC Educational Resources Information Center
Velicer, Wayne F.
1976-01-01
A method is presented for determining the number of components to retain in a principal components or image components analysis which utilizes a matrix of partial correlations. Advantages and uses of the method are discussed and a comparison of the proposed method with existing methods is presented. (JKS)
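A compact sketch of the partial-correlation criterion described above (commonly known as Velicer's MAP test): for each candidate number of components m, the first m principal components are partialled out of the correlation matrix and the average squared off-diagonal partial correlation is recorded; the retained number of components is the m that minimizes this average. The random two-component data used for illustration are an assumption.

```python
import numpy as np

def velicer_map(R):
    """Average squared (partial) correlations for m = 0..p-2 partialled components."""
    p = R.shape[0]
    eigval, eigvec = np.linalg.eigh(R)
    order = np.argsort(eigval)[::-1]
    eigval, eigvec = eigval[order], eigvec[:, order]
    loadings = eigvec * np.sqrt(np.clip(eigval, 0.0, None))   # component loadings
    off = ~np.eye(p, dtype=bool)
    avgs = [np.mean(R[off] ** 2)]                              # m = 0: raw correlations
    for m in range(1, p - 1):
        C = R - loadings[:, :m] @ loadings[:, :m].T            # partial covariance
        d = np.sqrt(np.diag(C))
        partial = C / np.outer(d, d)
        avgs.append(np.mean(partial[off] ** 2))
    return np.array(avgs)

rng = np.random.default_rng(0)
F = rng.normal(size=(300, 2))                                  # two latent components
X = F @ rng.normal(size=(2, 8)) + 0.7 * rng.normal(size=(300, 8))
R = np.corrcoef(X, rowvar=False)
avgs = velicer_map(R)
print("retain", int(np.argmin(avgs)), "components; MAP values:", np.round(avgs, 4))
```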
Wang, Yikai; Kang, Jian; Kemmer, Phebe B.; Guo, Ying
2016-01-01
Currently, network-oriented analysis of fMRI data has become an important tool for understanding brain organization and brain networks. Among the range of network modeling methods, partial correlation has shown great promises in accurately detecting true brain network connections. However, the application of partial correlation in investigating brain connectivity, especially in large-scale brain networks, has been limited so far due to the technical challenges in its estimation. In this paper, we propose an efficient and reliable statistical method for estimating partial correlation in large-scale brain network modeling. Our method derives partial correlation based on the precision matrix estimated via Constrained L1-minimization Approach (CLIME), which is a recently developed statistical method that is more efficient and demonstrates better performance than the existing methods. To help select an appropriate tuning parameter for sparsity control in the network estimation, we propose a new Dens-based selection method that provides a more informative and flexible tool to allow the users to select the tuning parameter based on the desired sparsity level. Another appealing feature of the Dens-based method is that it is much faster than the existing methods, which provides an important advantage in neuroimaging applications. Simulation studies show that the Dens-based method demonstrates comparable or better performance with respect to the existing methods in network estimation. We applied the proposed partial correlation method to investigate resting state functional connectivity using rs-fMRI data from the Philadelphia Neurodevelopmental Cohort (PNC) study. Our results show that partial correlation analysis removed considerable between-module marginal connections identified by full correlation analysis, suggesting these connections were likely caused by global effects or common connection to other nodes. Based on partial correlation, we find that the most significant direct connections are between homologous brain locations in the left and right hemisphere. When comparing partial correlation derived under different sparse tuning parameters, an important finding is that the sparse regularization has more shrinkage effects on negative functional connections than on positive connections, which supports previous findings that many of the negative brain connections are due to non-neurophysiological effects. An R package “DensParcorr” can be downloaded from CRAN for implementing the proposed statistical methods. PMID:27242395
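A hedged illustration of deriving partial correlations from an estimated precision matrix. CLIME itself is not available in scikit-learn, so the sparse precision matrix below comes from GraphicalLassoCV as a stand-in estimator; the conversion pcorr_ij = -Θ_ij / sqrt(Θ_ii Θ_jj) is the standard relation used regardless of the estimator. The simulated time series are an assumption.

```python
import numpy as np
from sklearn.covariance import GraphicalLassoCV

def precision_to_partial_corr(theta):
    """Convert a precision matrix to a partial-correlation matrix."""
    d = np.sqrt(np.diag(theta))
    pcorr = -theta / np.outer(d, d)
    np.fill_diagonal(pcorr, 1.0)
    return pcorr

rng = np.random.default_rng(0)
n_time, n_nodes = 200, 10
X = rng.normal(size=(n_time, n_nodes))
X[:, 1] += 0.8 * X[:, 0]                      # induce a direct connection 0-1
X[:, 2] += 0.8 * X[:, 1]                      # and a direct connection 1-2

model = GraphicalLassoCV().fit(X)             # sparse precision estimate (stand-in for CLIME)
pcorr = precision_to_partial_corr(model.precision_)
print(np.round(pcorr, 2))
```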
Accelerated Dynamic Corrosion Test Method Development
test method has poor correlation to outdoor exposures, particularly for non-chromate primers. As a result, more realistic cyclic environmental...exposures have been developed to more closely resemble actual atmospheric corrosion damage. Several existing tests correlate well with the outdoor performance
Multi-Label Learning via Random Label Selection for Protein Subcellular Multi-Locations Prediction.
Wang, Xiao; Li, Guo-Zheng
2013-03-12
Prediction of protein subcellular localization is an important but challenging problem, particularly when proteins may simultaneously exist at, or move between, two or more different subcellular location sites. Most of the existing protein subcellular localization methods are only used to deal with the single-location proteins. In the past few years, only a few methods have been proposed to tackle proteins with multiple locations. However, they only adopt a simple strategy, that is, transforming the multi-location proteins to multiple proteins with single location, which doesn't take correlations among different subcellular locations into account. In this paper, a novel method named RALS (multi-label learning via RAndom Label Selection), is proposed to learn from multi-location proteins in an effective and efficient way. Through five-fold cross validation test on a benchmark dataset, we demonstrate our proposed method with consideration of label correlations obviously outperforms the baseline BR method without consideration of label correlations, indicating correlations among different subcellular locations really exist and contribute to improvement of prediction performance. Experimental results on two benchmark datasets also show that our proposed methods achieve significantly higher performance than some other state-of-the-art methods in predicting subcellular multi-locations of proteins. The prediction web server is available at http://levis.tongji.edu.cn:8080/bioinfo/MLPred-Euk/ for the public usage.
Wavelet-based image compression using shuffling and bit plane correlation
NASA Astrophysics Data System (ADS)
Kim, Seungjong; Jeong, Jechang
2000-12-01
In this paper, we propose a wavelet-based image compression method using shuffling and bit plane correlation. The proposed method improves coding performance in two steps: (1) removing the sign bit plane by a shuffling process on the quantized coefficients, and (2) choosing the arithmetic coding context according to the direction of maximum correlation. The experimental results are comparable or, for some images with low correlation, superior to those of existing coders.
NASA Astrophysics Data System (ADS)
Wang, Huiqin; Wang, Xue; Cao, Minghua
2017-02-01
Spatial correlation is widespread in multiple-input multiple-output (MIMO) free space optical (FSO) communication systems due to channel fading and antenna space limitations. Wilkinson's method was utilized to investigate the impact of spatial correlation on a MIMO FSO communication system employing multipulse pulse-position modulation. Simulation results show that the existence of spatial correlation reduces the ergodic channel capacity, and that receive diversity is more effective at resisting this kind of performance degradation.
The QAP weighted network analysis method and its application in international services trade
NASA Astrophysics Data System (ADS)
Xu, Helian; Cheng, Long
2016-04-01
Based on QAP (Quadratic Assignment Procedure) correlation and complex network theory, this paper puts forward a new method named the QAP Weighted Network Analysis Method. The core idea of the method is to analyze influences among relations in a social or economic group by building a QAP weighted network over the networks of relations. In the QAP weighted network, a node depicts a relation, and an undirected edge exists between any pair of nodes if there is significant correlation between the corresponding relations. As an application, we study international services trade using a QAP weighted network in which nodes depict 10 kinds of services trade relations. After analyzing international services trade with the QAP weighted network, and by using distance indicators, a hierarchy tree, and a minimum spanning tree, we reach the following conclusions. First, significant correlation exists among all services trades, and the development of any one services trade will stimulate the other nine. Second, as economic globalization deepens, correlations among all services trades have been strengthened continually, and clustering effects exist among them. Third, transportation services trade, computer and information services trade, and communication services trade have the most influence and are at the core of all services trades.
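A minimal sketch of the QAP step underlying the weighted network: the observed correlation between the off-diagonal entries of two relation matrices is compared with the distribution obtained by permuting the rows and columns of one matrix simultaneously; a significant correlation would become a weighted edge between the two relation-nodes. The synthetic trade matrices and the 999-permutation count are assumptions.

```python
import numpy as np

def qap_correlation(A, B, n_perm=999, seed=0):
    """Pearson correlation of off-diagonal entries plus a QAP permutation p-value."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    off = ~np.eye(n, dtype=bool)
    obs = np.corrcoef(A[off], B[off])[0, 1]
    count = 0
    for _ in range(n_perm):
        p = rng.permutation(n)
        Bp = B[np.ix_(p, p)]                         # relabel the nodes of B
        if abs(np.corrcoef(A[off], Bp[off])[0, 1]) >= abs(obs):
            count += 1
    return obs, (count + 1) / (n_perm + 1)

rng = np.random.default_rng(1)
n = 20
transport = rng.lognormal(size=(n, n)); np.fill_diagonal(transport, 0)
telecom = 0.6 * transport + rng.lognormal(size=(n, n)); np.fill_diagonal(telecom, 0)
r, p_value = qap_correlation(transport, telecom)
print(f"QAP correlation = {r:.3f}, p = {p_value:.3f}")
```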
Analyzing Association Mapping in Pedigree-Based GWAS Using a Penalized Multitrait Mixed Model
Liu, Jin; Yang, Can; Shi, Xingjie; Li, Cong; Huang, Jian; Zhao, Hongyu; Ma, Shuangge
2017-01-01
Genome-wide association studies (GWAS) have led to the identification of many genetic variants associated with complex diseases in the past 10 years. Penalization methods, with significant numerical and statistical advantages, have been extensively adopted in analyzing GWAS. This study has been partly motivated by the analysis of Genetic Analysis Workshop (GAW) 18 data, which have two notable characteristics. First, the subjects are from a small number of pedigrees and hence related. Second, for each subject, multiple correlated traits have been measured. Most of the existing penalization methods assume independence between subjects and traits and can be suboptimal. There are a few methods in the literature based on mixed modeling that can accommodate correlations. However, they cannot fully accommodate the two types of correlations while conducting effective marker selection. In this study, we develop a penalized multitrait mixed modeling approach. It accommodates the two different types of correlations and includes several existing methods as special cases. Effective penalization is adopted for marker selection. Simulation demonstrates its satisfactory performance. The GAW 18 data are analyzed using the proposed method. PMID:27247027
Menezes, Everardo Albuquerque; Vasconcelos Júnior, Antônio Alexandre de; Ângelo, Maria Rozzelê Ferreira; Cunha, Maria da Conceição dos Santos Oliveira; Cunha, Francisco Afrânio
2013-01-01
Antifungal susceptibility testing assists in finding the appropriate treatment for fungal infections, which are increasingly common. However, such testing is not very widespread. There are several existing methods, and the correlation between such methods was evaluated in this study. The susceptibility to fluconazole of 35 strains of Candida sp. isolated from blood cultures was evaluated by the following methods: microdilution, Etest, and disk diffusion. The correlation between the methods was around 90%. The disk diffusion test exhibited a good correlation and can be used in laboratory routines to detect strains of Candida sp. that are resistant to fluconazole.
Analysis of Genome-Wide Association Studies with Multiple Outcomes Using Penalization
Liu, Jin; Huang, Jian; Ma, Shuangge
2012-01-01
Genome-wide association studies have been extensively conducted, searching for markers for biologically meaningful outcomes and phenotypes. Penalization methods have been adopted in the analysis of the joint effects of a large number of SNPs (single nucleotide polymorphisms) and marker identification. This study is partly motivated by the analysis of heterogeneous stock mice dataset, in which multiple correlated phenotypes and a large number of SNPs are available. Existing penalization methods designed to analyze a single response variable cannot accommodate the correlation among multiple response variables. With multiple response variables sharing the same set of markers, joint modeling is first employed to accommodate the correlation. The group Lasso approach is adopted to select markers associated with all the outcome variables. An efficient computational algorithm is developed. Simulation study and analysis of the heterogeneous stock mice dataset show that the proposed method can outperform existing penalization methods. PMID:23272092
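An illustrative sketch of joint marker selection across correlated outcomes using an L2,1 ("group") penalty, here via scikit-learn's MultiTaskLasso, which keeps or drops each SNP for all traits together. This is a generic stand-in for the paper's group Lasso formulation, and the simulated genotypes, effect sizes, and penalty level are assumptions.

```python
import numpy as np
from sklearn.linear_model import MultiTaskLasso

rng = np.random.default_rng(0)
n, p, q = 300, 50, 3                                    # subjects, SNPs, correlated traits
G = rng.binomial(2, 0.3, size=(n, p)).astype(float)     # 0/1/2 genotype codes
beta = np.zeros((p, q))
beta[:4] = rng.normal(1.0, 0.2, size=(4, q))            # 4 truly associated SNPs
shared_noise = rng.normal(size=(n, 1))                  # induces correlation among traits
Y = G @ beta + shared_noise + 0.5 * rng.normal(size=(n, q))

model = MultiTaskLasso(alpha=0.1).fit(G, Y)
group_norms = np.linalg.norm(model.coef_.T, axis=1)     # one norm per SNP across traits
print("selected SNP indices:", np.where(group_norms > 1e-8)[0])
```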
Baró, Jordi; Martín-Olalla, José-María; Romero, Francisco Javier; Gallardo, María Carmen; Salje, Ekhard K H; Vives, Eduard; Planes, Antoni
2014-03-26
The existence of temporal correlations during the intermittent dynamics of a thermally driven structural phase transition is studied in a Cu-Zn-Al alloy. The sequence of avalanches is observed by means of two techniques: acoustic emission and high sensitivity calorimetry. Both methods reveal the existence of event clustering in a way that is equivalent to the Omori correlations between aftershocks in earthquakes as are commonly used in seismology.
Parameterizing correlations between hydrometeor species in mixed-phase Arctic clouds
NASA Astrophysics Data System (ADS)
Larson, Vincent E.; Nielsen, Brandon J.; Fan, Jiwen; Ovchinnikov, Mikhail
2011-01-01
Mixed-phase Arctic clouds, like other clouds, contain small-scale variability in hydrometeor fields, such as cloud water or snow mixing ratio. This variability may be worth parameterizing in coarse-resolution numerical models. In particular, for modeling multispecies processes such as accretion and aggregation, it would be useful to parameterize subgrid correlations among hydrometeor species. However, one difficulty is that there exist many hydrometeor species and many microphysical processes, leading to complexity and computational expense. Existing lower and upper bounds on linear correlation coefficients are too loose to serve directly as a method to predict subgrid correlations. Therefore, this paper proposes an alternative method that begins with the spherical parameterization framework of Pinheiro and Bates (1996), which expresses the correlation matrix in terms of its Cholesky factorization. The values of the elements of the Cholesky matrix are populated here using a "cSigma" parameterization that we introduce based on the aforementioned bounds on correlations. The method has three advantages: (1) the computational expense is tolerable; (2) the correlations are, by construction, guaranteed to be consistent with each other; and (3) the methodology is fairly general and hence may be applicable to other problems. The method is tested noninteractively using simulations of three Arctic mixed-phase cloud cases from two field experiments: the Indirect and Semi-Direct Aerosol Campaign and the Mixed-Phase Arctic Cloud Experiment. Benchmark simulations are performed using a large-eddy simulation (LES) model that includes a bin microphysical scheme. The correlations estimated by the new method satisfactorily approximate the correlations produced by the LES.
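A small sketch of the spherical (angle-based) Cholesky parameterization mentioned above: each row of the Cholesky factor is a point on the unit sphere built from angles, so the resulting matrix is a valid correlation matrix by construction. How the angles themselves would be populated from a "cSigma"-style formula is not reproduced; the angles and the three-species interpretation below are illustrative assumptions.

```python
import numpy as np

def cholesky_from_angles(angles):
    """Lower-triangular factor L with unit-norm rows, built from a list of angle vectors.

    angles[i] holds the i+1 angles (in (0, pi)) for row i+1; row 0 is fixed to [1, 0, ...].
    """
    n = len(angles) + 1
    L = np.zeros((n, n))
    L[0, 0] = 1.0
    for i, th in enumerate(angles, start=1):
        s = np.sin(th)
        for j in range(i):
            L[i, j] = np.cos(th[j]) * np.prod(s[:j])
        L[i, i] = np.prod(s[:i])
    return L

# three hydrometeor species (e.g. cloud water, rain, snow mixing ratios) -> 3x3 matrix
angles = [np.array([0.9]),             # row 1: one angle
          np.array([1.1, 0.7])]        # row 2: two angles
L = cholesky_from_angles(angles)
C = L @ L.T                            # positive semi-definite with unit diagonal
print(np.round(C, 3), np.allclose(np.diag(C), 1.0))
```

Because the matrix is assembled from its Cholesky factor, any choice of angles yields mutually consistent correlations, which is the second advantage cited in the abstract.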
Hanson, Jeffery A; Yang, Haw
2008-11-06
The statistical properties of the cross correlation between two time series have been studied. An analytical expression for the variance of the cross correlation function has been derived. On the basis of these results, a statistically robust method has been proposed to detect the existence and determine the direction of cross correlation between two time series. The proposed method has been characterized by computer simulations. Applications to single-molecule fluorescence spectroscopy are discussed. The results may also find immediate applications in fluorescence correlation spectroscopy (FCS) and its variants.
Parameterizing correlations between hydrometeor species in mixed-phase Arctic clouds
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larson, Vincent E.; Nielsen, Brandon J.; Fan, Jiwen
2011-08-16
Mixed-phase Arctic clouds, like other clouds, contain small-scale variability in hydrometeor fields, such as cloud water or snow mixing ratio. This variability may be worth parameterizing in coarse-resolution numerical models. In particular, for modeling processes such as accretion and aggregation, it would be useful to parameterize subgrid correlations among hydrometeor species. However, one difficulty is that there exist many hydrometeor species and many microphysical processes, leading to complexity and computational expense. Existing lower and upper bounds (inequalities) on linear correlation coefficients provide useful guidance, but these bounds are too loose to serve directly as a method to predict subgrid correlations. Therefore, this paper proposes an alternative method that is based on a blend of theory and empiricism. The method begins with the spherical parameterization framework of Pinheiro and Bates (1996), which expresses the correlation matrix in terms of its Cholesky factorization. The values of the elements of the Cholesky matrix are parameterized here using a cosine row-wise formula that is inspired by the aforementioned bounds on correlations. The method has three advantages: 1) the computational expense is tolerable; 2) the correlations are, by construction, guaranteed to be consistent with each other; and 3) the methodology is fairly general and hence may be applicable to other problems. The method is tested non-interactively using simulations of three Arctic mixed-phase cloud cases from two different field experiments: the Indirect and Semi-Direct Aerosol Campaign (ISDAC) and the Mixed-Phase Arctic Cloud Experiment (M-PACE). Benchmark simulations are performed using a large-eddy simulation (LES) model that includes a bin microphysical scheme. The correlations estimated by the new method satisfactorily approximate the correlations produced by the LES.
A cross-correlation-based estimate of the galaxy luminosity function
NASA Astrophysics Data System (ADS)
van Daalen, Marcel P.; White, Martin
2018-06-01
We extend existing methods for using cross-correlations to derive redshift distributions for photometric galaxies, without using photometric redshifts. The model presented in this paper simultaneously yields highly accurate and unbiased redshift distributions and, for the first time, redshift-dependent luminosity functions, using only clustering information and the apparent magnitudes of the galaxies as input. In contrast to many existing techniques for recovering unbiased redshift distributions, the output of our method is not degenerate with the galaxy bias b(z), which is achieved by modelling the shape of the luminosity bias. We successfully apply our method to a mock galaxy survey and discuss improvements to be made before applying our model to real data.
A basic guide to overlay design using nondestructive testing equipment data
NASA Astrophysics Data System (ADS)
Turner, Vernon R.
1990-08-01
The purpose of this paper is to provide a basic and concise guide to designing asphalt concrete (AC) overlays over existing AC pavements. The basis for these designs is deflection data obtained from nondestructive testing (NDT) equipment. These data are used in design procedures that produce the required overlay thickness or an estimate of remaining pavement life. This guide enables one to design overlays or to better monitor the designs being performed by others. This paper discusses three types of NDT equipment; the Asphalt Institute overlay designs by deflection analysis and by the effective thickness method, as well as a method of estimating remaining pavement life; correlations between NDT equipment; and recent correlations in Washington State. Asphalt overlays provide one of the most cost-effective methods of improving existing pavements. Asphalt overlays can be used to strengthen existing pavements, to reduce maintenance costs, to increase pavement life, to provide a smoother ride, and to improve skid resistance.
Initial Ship Design Using a Pearson Correlation Coefficient and Artificial Intelligence Techniques
NASA Astrophysics Data System (ADS)
Moon, Byung Young; Kim, Soo Young; Kang, Gyung Ju
In this paper, we analyzed the correlation between a ship's geometric characteristics and its resistance and effective horsepower by using the Pearson correlation coefficient, one of the data mining methods. The geometric characteristics that correlate strongly with the output data were then used as input data. Effective horsepower and resistance were calculated by using a Neuro-Fuzzy system. To verify the calculation, data from 9 of 11 container ships were used as data for the Neuro-Fuzzy system, and the data of the remaining ships were used as verification data. After analyzing the rate of error between the existing data and the calculated data, we concluded that the calculated data are in sound agreement with the existing data.
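A minimal sketch of the screening step described above: compute the Pearson correlation of each candidate geometric characteristic with the measured effective horsepower and keep the strongly correlated ones as model inputs. The Neuro-Fuzzy system itself is outside the scope of this sketch, and the feature names, synthetic values, and the 0.5 cutoff are assumptions.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_ships = 11
features = {
    "length_bp":   rng.normal(250, 20, n_ships),
    "beam":        rng.normal(32, 3, n_ships),
    "block_coeff": rng.normal(0.65, 0.03, n_ships),
    "wetted_area": rng.normal(9000, 800, n_ships),
}
# synthetic effective horsepower loosely driven by two of the characteristics
ehp = 40 * features["length_bp"] + 300 * features["beam"] + rng.normal(0, 200, n_ships)

selected = []
for name, values in features.items():
    r, _ = pearsonr(values, ehp)
    print(f"{name:12s} r = {r:+.2f}")
    if abs(r) > 0.5:                      # keep characteristics above an illustrative cutoff
        selected.append(name)
print("inputs for the EHP model:", selected)
```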
Background Noise Reduction Using Adaptive Noise Cancellation Determined by the Cross-Correlation
NASA Technical Reports Server (NTRS)
Spalt, Taylor B.; Brooks, Thomas F.; Fuller, Christopher R.
2012-01-01
Background noise due to flow in wind tunnels contaminates desired data by decreasing the Signal-to-Noise Ratio. The use of Adaptive Noise Cancellation to remove background noise at measurement microphones is compromised when the reference sensor measures both background and desired noise. The technique proposed modifies the classical processing configuration based on the cross-correlation between the reference and primary microphone. Background noise attenuation is achieved using a cross-correlation sample width that encompasses only the background noise and a matched delay for the adaptive processing. A present limitation of the method is that a minimum time delay between the background noise and desired signal must exist in order for the correlated parts of the desired signal to be separated from the background noise in the cross-correlation. A simulation yields primary signal recovery which can be predicted from the coherence of the background noise between the channels. Results are compared with two existing methods.
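A toy sketch of the two ingredients: the lag between the reference and primary channels is estimated from their cross-correlation, and an LMS adaptive filter then cancels the background noise using the delayed reference. The filter length, step size, and signal model are assumptions, and the sample-width windowing of the cross-correlation described above is reduced here to a simple peak pick.

```python
import numpy as np

rng = np.random.default_rng(0)
n, true_delay = 4000, 25
background = rng.normal(size=n)                        # broadband tunnel noise
desired = 0.5 * np.sin(2 * np.pi * 0.01 * np.arange(n))
primary = np.roll(background, true_delay) + desired    # measurement microphone
reference = background                                 # reference sensor: background only

# estimate the background-noise delay from the peak of the cross-correlation
xcorr = np.correlate(primary - primary.mean(), reference - reference.mean(), mode="full")
delay = np.arange(-n + 1, n)[np.argmax(xcorr)]
ref_delayed = np.roll(reference, delay)

# LMS adaptive noise cancellation using the matched delay
L, mu = 16, 0.01
w, out = np.zeros(L), np.zeros(n)
for k in range(L, n):
    x = ref_delayed[k - L:k][::-1]     # most recent reference samples
    e = primary[k] - w @ x             # residual: ideally the desired signal
    w += mu * e * x                    # LMS weight update
    out[k] = e

print("estimated delay:", delay)
print("noise power before/after:",
      np.var(primary - desired), np.var(out[2000:] - desired[2000:]))
```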
New Internet search volume-based weighting method for integrating various environmental impacts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ji, Changyoon, E-mail: changyoon@yonsei.ac.kr; Hong, Taehoon, E-mail: hong7@yonsei.ac.kr
Weighting is one of the steps in life cycle impact assessment that integrates various characterized environmental impacts as a single index. Weighting factors should be based on the society's preferences. However, most previous studies consider only the opinion of some people. Thus, this research proposes a new weighting method that determines the weighting factors of environmental impact categories by considering public opinion on environmental impacts using the Internet search volumes for relevant terms. To validate the new weighting method, the weighting factors for six environmental impacts calculated by the new weighting method were compared with the existing weighting factors. The resulting Pearson's correlation coefficient between the new and existing weighting factors was from 0.8743 to 0.9889. It turned out that the new weighting method presents reasonable weighting factors. It also requires less time and lower cost compared to existing methods and likewise meets the main requirements of weighting methods such as simplicity, transparency, and reproducibility. The new weighting method is expected to be a good alternative for determining the weighting factor. - Highlight: • A new weighting method using Internet search volume is proposed in this research. • The new weighting method reflects the public opinion using Internet search volume. • The correlation coefficient between new and existing weighting factors is over 0.87. • The new weighting method can present the reasonable weighting factors. • The proposed method can be a good alternative for determining the weighting factors.
Roth, Philip L; Le, Huy; Oh, In-Sue; Van Iddekinge, Chad H; Bobko, Philip
2018-06-01
Meta-analysis has become a well-accepted method for synthesizing empirical research about a given phenomenon. Many meta-analyses focus on synthesizing correlations across primary studies, but some primary studies do not report correlations. Peterson and Brown (2005) suggested that researchers could use standardized regression weights (i.e., beta coefficients) to impute missing correlations. Indeed, their beta estimation procedures (BEPs) have been used in meta-analyses in a wide variety of fields. In this study, the authors evaluated the accuracy of BEPs in meta-analysis. We first examined how use of BEPs might affect results from a published meta-analysis. We then developed a series of Monte Carlo simulations that systematically compared the use of existing correlations (that were not missing) to data sets that incorporated BEPs (that impute missing correlations from corresponding beta coefficients). These simulations estimated ρ̄ (mean population correlation) and SDρ (true standard deviation) across a variety of meta-analytic conditions. Results from both the existing meta-analysis and the Monte Carlo simulations revealed that BEPs were associated with potentially large biases when estimating ρ̄ and even larger biases when estimating SDρ. Using only existing correlations often substantially outperformed use of BEPs and virtually never performed worse than BEPs. Overall, the authors urge a return to the standard practice of using only existing correlations in meta-analysis. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Graphical correlation of gaging-station records
Searcy, James K.
1960-01-01
A gaging-station record is a sample of the rate of flow of a stream at a given site. This sample can be used to estimate the magnitude and distribution of future flows if the record is long enough to be representative of the long-term flow of the stream. The reliability of a short-term record for estimating future flow characteristics can be improved through correlation with a long-term record. Correlation can be either numerical or graphical, but graphical correlation of gaging-station records has several advantages. The graphical correlation method is described in a step-by-step procedure with an illustrative problem of simple correlation, illustrative problems of three examples of multiple correlation--removing seasonal effect--and two examples of correlation of one record with two other records. Except in the problem on removal of seasonal effect, the same group of stations is used in the illustrative problems. The purpose of the problems is to illustrate the method--not to show the improvement that can result from multiple correlation as compared with simple correlation. Hydrologic factors determine whether a usable relation exists between gaging-station records. Statistics is only a tool for evaluating and using an existing relation, and the investigator must be guided by a knowledge of hydrology.
Ndabarora, Eléazar; Mchunu, Gugu
2014-01-01
Various studies have reported that university students, who are mostly young people, rarely use existing HIV/AIDS preventive methods. Although studies have shown that young university students have a high degree of knowledge about HIV/AIDS and HIV modes of transmission, they are still not utilising the existing HIV prevention methods and still engage in risky sexual practices favourable to HIV. Some variables, such as awareness of existing HIV/AIDS prevention methods, have been associated with utilisation of such methods. The study aimed to explore factors that influence use of existing HIV/AIDS prevention methods among university students residing in a selected campus, using the Health Belief Model (HBM) as a theoretical framework. A quantitative research approach and an exploratory-descriptive design were used to describe perceived factors that influence utilisation by university students of HIV/AIDS prevention methods. A total of 335 students completed online and manual questionnaires. Study findings showed that the factors which influenced utilisation of HIV/AIDS prevention methods were mainly determined by awareness of the existing university-based HIV/AIDS prevention strategies. Most utilised prevention methods were voluntary counselling and testing services and free condoms. Perceived susceptibility and perceived threat of HIV/AIDS score was also found to correlate with HIV risk index score. Perceived susceptibility and perceived threat of HIV/AIDS showed correlation with self-efficacy on condoms and their utilisation. Most HBM variables were not predictors of utilisation of HIV/AIDS prevention methods among students. Intervention aiming to improve the utilisation of HIV/AIDS prevention methods among students at the selected university should focus on removing identified barriers, promoting HIV/AIDS prevention services and providing appropriate resources to implement such programmes.
Local Descriptors of Dynamic and Nondynamic Correlation.
Ramos-Cordoba, Eloy; Matito, Eduard
2017-06-13
Quantitatively accurate electronic structure calculations rely on the proper description of electron correlation. A judicious choice of the approximate quantum chemistry method depends upon the importance of dynamic and nondynamic correlation, which is usually assessed by scalar measures. Existing measures of electron correlation do not consider separately the regions of the Cartesian space where dynamic or nondynamic correlation are most important. We introduce real-space descriptors of dynamic and nondynamic electron correlation that admit orbital decomposition. Integration of the local descriptors yields global numbers that can be used to quantify dynamic and nondynamic correlation. Illustrative examples over different chemical systems with varying electron correlation regimes are used to demonstrate the capabilities of the local descriptors. Since the expressions only require orbitals and occupation numbers, they can be readily applied in the context of local correlation methods, hybrid methods, density matrix functional theory, and fractional-occupancy density functional theory.
Mathematical correlation of modal-parameter-identification methods via system-realization theory
NASA Technical Reports Server (NTRS)
Juang, Jer-Nan
1987-01-01
A unified approach is introduced using system-realization theory to derive and correlate modal-parameter-identification methods for flexible structures. Several different time-domain methods are analyzed and treated. A basic mathematical foundation is presented which provides insight into the field of modal-parameter identification for comparison and evaluation. The relation among various existing methods is established and discussed. This report serves as a starting point to stimulate additional research toward the unification of the many possible approaches for modal-parameter identification.
Multilevel Modeling with Correlated Effects
ERIC Educational Resources Information Center
Kim, Jee-Seon; Frees, Edward W.
2007-01-01
When there exist omitted effects, measurement error, and/or simultaneity in multilevel models, explanatory variables may be correlated with random components, and standard estimation methods do not provide consistent estimates of model parameters. This paper introduces estimators that are consistent under such conditions. By employing generalized…
Seeing and Reading Red: Hue and Color-word Correlation in Images and Attendant Text on the WWW
DOE Office of Scientific and Technical Information (OSTI.GOV)
Newsam, S
2004-07-12
This work represents an initial investigation into determining whether correlations actually exist between metadata and content descriptors in multimedia datasets. We provide a quantitative method for evaluating whether the hue of images on the WWW is correlated with the occurrence of color-words in metadata such as URLs, image names, and attendant text. It turns out that such a correlation does exist: the likelihood that a particular color appears in an image whose URL, name, and/or attendant text contains the corresponding color-word is generally at least twice the likelihood that the color appears in a randomly chosen image on the WWW. While this finding might not be significant in and of itself, it represents an initial step towards quantitatively establishing that other, perhaps more useful correlations exist. These correlations form the basis for exciting novel approaches that leverage semi-supervised datasets, such as the WWW, to overcome the semantic gap that has hampered progress in multimedia information retrieval for some time now.
A method to classify schizophrenia using inter-task spatial correlations of functional brain images.
Michael, Andrew M; Calhoun, Vince D; Andreasen, Nancy C; Baum, Stefi A
2008-01-01
The clinical heterogeneity of schizophrenia (scz) and the overlap of self reported and observed symptoms with other mental disorders makes its diagnosis a difficult task. At present no laboratory-based or image-based diagnostic tool for scz exists and such tools are desired to support existing methods for more precise diagnosis. Functional magnetic resonance imaging (fMRI) is currently employed to identify and correlate cognitive processes related to scz and its symptoms. Fusion of multiple fMRI tasks that probe different cognitive processes may help to better understand hidden networks of this complex disorder. In this paper we utilize three different fMRI tasks and introduce an approach to classify subjects based on inter-task spatial correlations of brain activation. The technique was applied to groups of patients and controls and its validity was checked with the leave-one-out method. We show that the classification rate increases when information from multiple tasks are combined.
Zhou, Yunyi; Tao, Chenyang; Lu, Wenlian; Feng, Jianfeng
2018-04-20
Functional connectivity is among the most important tools to study the brain. The correlation coefficient between time series of different brain areas is the most popular method to quantify functional connectivity. In practice, the correlation coefficient assumes the data to be temporally independent. However, brain time series data can manifest significant temporal auto-correlation. A widely applicable method is proposed for correcting temporal auto-correlation. We considered two types of time series models: (1) the auto-regressive-moving-average model and (2) the nonlinear dynamical system model with noisy fluctuations, and derived their respective asymptotic distributions of the correlation coefficient. These two types of models are the most commonly used in neuroscience studies. We show that the respective asymptotic distributions share a unified expression. We have verified the validity of our method and shown that it exhibits sufficient statistical power for detecting true correlations in numerical experiments. Employing our method on a real dataset yields a more robust functional network and higher classification accuracy than conventional methods. Our method robustly controls the type I error while maintaining sufficient statistical power for detecting true correlations in numerical experiments, where existing methods measuring association (linear and nonlinear) fail. In this work, we proposed a widely applicable approach for correcting the effect of temporal auto-correlation on functional connectivity. Empirical results favor the use of our method in functional network analysis. Copyright © 2018. Published by Elsevier B.V.
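A minimal illustration of why the correction matters and of one standard remedy: when both series are autocorrelated, the variance of the sample correlation is inflated, and an effective sample size based on the product of the two autocorrelation functions restores an approximately valid test. This generic Bartlett-style correction is in the same spirit as, but not identical to, the asymptotic results described above; the AR(1) simulation settings and the 20-lag truncation are assumptions.

```python
import numpy as np
from scipy import stats

def ar1(n, phi, rng):
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal()
    return x

def acf(x, max_lag):
    x = x - x.mean()
    denom = np.dot(x, x)
    return np.array([np.dot(x[:-k], x[k:]) / denom for k in range(1, max_lag + 1)])

def corrected_corr_test(x, y, max_lag=20):
    """Correlation test using an effective sample size for autocorrelated series."""
    n = len(x)
    r = np.corrcoef(x, y)[0, 1]
    n_eff = n / (1 + 2 * np.sum(acf(x, max_lag) * acf(y, max_lag)))
    n_eff = max(min(n_eff, n), 3)
    t = r * np.sqrt((n_eff - 2) / (1 - r**2))
    p = 2 * stats.t.sf(abs(t), df=n_eff - 2)
    return r, n_eff, p

rng = np.random.default_rng(0)
x, y = ar1(500, 0.9, rng), ar1(500, 0.9, rng)   # independent but strongly autocorrelated
print(corrected_corr_test(x, y))                 # test is far less prone to spurious significance
```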
Dunne, Suzanne; Cummins, Niamh Maria; Hannigan, Ailish; Shannon, Bill; Dunne, Colum; Cullen, Walter
2013-08-27
The Internet is a widely used source of information for patients searching for medical/health care information. While many studies have assessed existing medical/health care information on the Internet, relatively few have examined methods for design and delivery of such websites, particularly those aimed at the general public. This study describes a method of evaluating material for new medical/health care websites, or for assessing those already in existence, which is correlated with higher rankings on Google's Search Engine Results Pages (SERPs). A website quality assessment (WQA) tool was developed using criteria related to the quality of the information to be contained in the website in addition to an assessment of the readability of the text. This was retrospectively applied to assess existing websites that provide information about generic medicines. The reproducibility of the WQA tool and its predictive validity were assessed in this study. The WQA tool demonstrated very high reproducibility (intraclass correlation coefficient=0.95) between 2 independent users. A moderate to strong correlation was found between WQA scores and rankings on Google SERPs. Analogous correlations were seen between rankings and readability of websites as determined by Flesch Reading Ease and Flesch-Kincaid Grade Level scores. The use of the WQA tool developed in this study is recommended as part of the design phase of a medical or health care information provision website, along with assessment of readability of the material to be used. This may ensure that the website performs better on Google searches. The tool can also be used retrospectively to make improvements to existing websites, thus, potentially enabling better Google search result positions without incurring the costs associated with Search Engine Optimization (SEO) professionals or paid promotion.
Precise Relative Earthquake Magnitudes from Cross Correlation
Cleveland, K. Michael; Ammon, Charles J.
2015-04-21
We present a method to estimate precise relative magnitudes using cross correlation of seismic waveforms. Our method incorporates the intercorrelation of all events in a group of earthquakes, as opposed to individual event pairings relative to a reference event. This method works well when a reliable reference event does not exist. We illustrate the method using vertical strike-slip earthquakes located in the northeast Pacific and Panama fracture zone regions. Our results are generally consistent with the Global Centroid Moment Tensor catalog, which we use to establish a baseline for the relative event sizes.
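A hedged sketch of the least-squares step implied by intercorrelating all event pairs: each pair's relative amplitude (measured here as the scaling that best matches the two waveforms after cross-correlation alignment) gives one observation of the magnitude difference, and all pairwise differences are inverted jointly under a zero-mean constraint. The waveform simulation, the alignment-and-scaling amplitude measurement, and the log10 amplitude-to-magnitude scaling are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def relative_amplitude(w1, w2):
    """Best-fit scale s so that s*w2 matches w1 after cross-correlation alignment."""
    lag = np.argmax(np.correlate(w1, w2, mode="full")) - (len(w2) - 1)
    w2_aligned = np.roll(w2, lag)
    return np.dot(w1, w2_aligned) / np.dot(w2_aligned, w2_aligned)

rng = np.random.default_rng(0)
template = np.convolve(rng.normal(size=400), np.hanning(25), mode="same")
true_dmag = np.array([0.0, 0.3, -0.2, 0.5])                 # relative magnitudes
waves = [10**m * np.roll(template, rng.integers(-5, 5)) + 0.02 * rng.normal(size=400)
         for m in true_dmag]

# one equation per event pair: m_i - m_j = log10(amplitude ratio)
n = len(waves)
rows, rhs = [], []
for i in range(n):
    for j in range(i + 1, n):
        row = np.zeros(n); row[i], row[j] = 1.0, -1.0
        rows.append(row)
        rhs.append(np.log10(relative_amplitude(waves[i], waves[j])))
rows.append(np.ones(n)); rhs.append(0.0)                     # zero-mean constraint
m_rel, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
print(np.round(m_rel - m_rel.mean(), 3), "vs true", true_dmag - true_dmag.mean())
```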
Mathematical correlation of modal parameter identification methods via system realization theory
NASA Technical Reports Server (NTRS)
Juang, J. N.
1986-01-01
A unified approach is introduced using system realization theory to derive and correlate modal parameter identification methods for flexible structures. Several different time-domain and frequency-domain methods are analyzed and treated. A basic mathematical foundation is presented which provides insight into the field of modal parameter identification for comparison and evaluation. The relation among various existing methods is established and discussed. This report serves as a starting point to stimulate additional research towards the unification of the many possible approaches for modal parameter identification.
Steps toward a Technology for the Diffusion of Innovations.
ERIC Educational Resources Information Center
Stolz, Stephanie B.
Research-based technologies for solving problems currently exist but are not being widely implemented. Although user variables, program effectiveness, and political considerations have been documented as correlates of implementation, general non-implementation of the technology still exists, due to a lack of methods. A technology of dissemination…
Xu, Jiucheng; Mu, Huiyu; Wang, Yun; Huang, Fangzhou
2018-01-01
The selection of feature genes with high recognition ability from the gene expression profiles has gained great significance in biology. However, most of the existing methods have a high time complexity and poor classification performance. Motivated by this, an effective feature selection method, called supervised locally linear embedding and Spearman's rank correlation coefficient (SLLE-SC2), is proposed, which is based on the concepts of locally linear embedding and correlation coefficient algorithms. Supervised locally linear embedding takes class label information into account and improves the classification performance. Furthermore, Spearman's rank correlation coefficient is used to remove co-expressed genes. The experiment results obtained on four public tumor microarray datasets illustrate that our method is valid and feasible.
Correlated Topic Vector for Scene Classification.
Wei, Pengxu; Qin, Fei; Wan, Fang; Zhu, Yi; Jiao, Jianbin; Ye, Qixiang
2017-07-01
Scene images usually involve semantic correlations, particularly when considering large-scale image data sets. This paper proposes a novel generative image representation, the correlated topic vector, to model such semantic correlations. Derived from the correlated topic model, the correlated topic vector naturally exploits the correlations among topics, which are seldom considered in conventional feature encodings, e.g., the Fisher vector, but do exist in scene images. The involvement of correlations is expected to increase the discriminative capability of the learned generative model and consequently improve recognition accuracy. Incorporated with the Fisher kernel method, the correlated topic vector inherits the advantages of the Fisher vector. The contributions of visual words to the topics are further employed within the Fisher kernel framework to indicate the differences among scenes. Combined with deep convolutional neural network (CNN) features and a Gibbs sampling solution, the correlated topic vector shows great potential when processing large-scale and complex scene image data sets. Experiments on two scene image data sets demonstrate that the correlated topic vector significantly improves on the deep CNN features and outperforms existing Fisher kernel-based features.
Pulse transmission receiver with higher-order time derivative pulse correlator
Dress, Jr., William B.; Smith, Stephen F.
2003-09-16
Systems and methods for pulse-transmission low-power communication modes are disclosed. A pulse transmission receiver includes: a higher-order time derivative pulse correlator; a demodulation decoder coupled to the higher-order time derivative pulse correlator; a clock coupled to the demodulation decoder; and a pseudorandom polynomial generator coupled to both the higher-order time derivative pulse correlator and the clock. The systems and methods significantly reduce lower-frequency emissions from pulse transmission spread-spectrum communication modes, which reduces potentially harmful interference to existing radio frequency services and users, and they simultaneously permit transmission of multiple data bits by utilizing specific pulse shapes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rodenbush, C.M.; Viswanath, D.S.; Hsieh, F.H.
Data on thermal conductivity of liquids, as a function of temperature, are essential in the design of heat- and mass-transfer equipment. A number of correlations have been developed to predict thermal conductivity of liquids with limited success. Among the correlations proposed so far, only the correlation due to Nagvekar and Daubert is based on group contributions. In this paper, a new group contribution method is developed based on the Klaas and Viswanath method for prediction of thermal conductivity of liquids and the results are compared to the method of Nagvekar and Daubert and other existing correlations. The present method predicts thermal conductivity of some 228 liquids that encompass 1487 experimental data points with an average absolute deviation of 2.5%. The group contribution method is used to examine the temperature dependence of Prandtl number for vegetable oils.
Cavitation in liquid cryogens. 4: Combined correlations for venturi, hydrofoil, ogives, and pumps
NASA Technical Reports Server (NTRS)
Hord, J.
1974-01-01
The results of a series of experimental and analytical cavitation studies are presented. Cross-correlation is performed of the developed cavity data for a venturi, a hydrofoil and three scaled ogives. The new correlating parameter, MTWO, improves data correlation for these stationary bodies and for pumping equipment. Existing techniques for predicting the cavitating performance of pumping machinery were extended to include variations in flow coefficient, cavitation parameter, and equipment geometry. The new predictive formulations hold promise as a design tool and universal method for correlating pumping machinery performance. Application of these predictive formulas requires prescribed cavitation test data or an independent method of estimating the cavitation parameter for each pump. The latter would permit prediction of performance without testing; potential methods for evaluating the cavitation parameter prior to testing are suggested.
Density scaling for multiplets
NASA Astrophysics Data System (ADS)
Nagy, Á.
2011-02-01
Generalized Kohn-Sham equations are presented for lowest-lying multiplets. The way of treating non-integer particle numbers is coupled with an earlier method of the author. The fundamental quantity of the theory is the subspace density. The Kohn-Sham equations are similar to the conventional Kohn-Sham equations. The difference is that the subspace density is used instead of the density and the Kohn-Sham potential is different for different subspaces. The exchange-correlation functional is studied using density scaling. It is shown that there exists a value of the scaling factor ζ for which the correlation energy disappears. Generalized OPM and Krieger-Li-Iafrate (KLI) methods incorporating correlation are presented. The ζKLI method, being as simple as the original KLI method, is proposed for multiplets.
Open-source platform to benchmark fingerprints for ligand-based virtual screening
2013-01-01
Similarity-search methods using molecular fingerprints are an important tool for ligand-based virtual screening. A huge variety of fingerprints exist and their performance, usually assessed in retrospective benchmarking studies using data sets with known actives and known or assumed inactives, depends largely on the validation data sets used and the similarity measure used. Comparing new methods to existing ones in any systematic way is rather difficult due to the lack of standard data sets and evaluation procedures. Here, we present a standard platform for the benchmarking of 2D fingerprints. The open-source platform contains all source code, structural data for the actives and inactives used (drawn from three publicly available collections of data sets), and lists of randomly selected query molecules to be used for statistically valid comparisons of methods. This allows the exact reproduction and comparison of results for future studies. The results for 12 standard fingerprints together with two simple baseline fingerprints assessed by seven evaluation methods are shown together with the correlations between methods. High correlations were found between the 12 fingerprints and a careful statistical analysis showed that only the two baseline fingerprints were different from the others in a statistically significant way. High correlations were also found between six of the seven evaluation methods, indicating that despite their seeming differences, many of these methods are similar to each other. PMID:23721588
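Fingerprint-based similarity search of the kind benchmarked here typically ranks database molecules by Tanimoto similarity to a query fingerprint. Below is a minimal, library-free sketch with made-up bit-set fingerprints, shown only to illustrate the ranking step.

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto similarity of two fingerprints given as sets of 'on' bit indices."""
    union = len(fp_a | fp_b)
    return len(fp_a & fp_b) / union if union else 0.0

query = {1, 5, 9, 42, 77}
database = {
    "active_1":   {1, 5, 9, 42, 80},
    "inactive_1": {2, 6, 14, 90},
    "active_2":   {1, 5, 9, 42, 77, 100},
}
ranked = sorted(database, key=lambda name: tanimoto(query, database[name]), reverse=True)
for name in ranked:
    print(name, round(tanimoto(query, database[name]), 3))
```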
Zhao, Yu Xi; Xie, Ping; Sang, Yan Fang; Wu, Zi Yi
2018-04-01
Hydrological process evaluation is temporally dependent. Hydrological time series that include dependence components do not meet the data consistency assumption required for hydrological computation. Both of these factors cause great difficulty for water research. Given the existence of hydrological dependence variability, we proposed a correlation-coefficient-based method for evaluating the significance of hydrological dependence based on an auto-regression model. By calculating the correlation coefficient between the original series and its dependence component and selecting reasonable thresholds of the correlation coefficient, this method divides the significance degree of dependence into no variability, weak variability, mid variability, strong variability, and drastic variability. By deducing the relationship between the correlation coefficient and the auto-correlation coefficients of the series, we found that the correlation coefficient is mainly determined by the magnitudes of the auto-correlation coefficients from order 1 to order p, which clarifies the theoretical basis of this method. With the first-order and second-order auto-regression models as examples, the reasonability of the deduced formula was verified through Monte Carlo experiments classifying the relationship between the correlation coefficient and the auto-correlation coefficient. This method was used to analyze three observed hydrological time series. The results indicate the coexistence of stochastic and dependence characteristics in hydrological processes.
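A minimal sketch of the core computation described above, restricted to the first-order case: the dependence component is taken as the AR(1) prediction from the previous value, and its Pearson correlation with the original series measures the strength of dependence. The grading thresholds hinted at in the final comment are illustrative assumptions, not the authors' calibrated values.

```python
import numpy as np

def dependence_correlation(x):
    """Fit AR(1) by least squares and correlate the fitted dependence
    component with the original series (first-order case only)."""
    x = np.asarray(x, float)
    x0, x1 = x[:-1], x[1:]
    phi = np.dot(x0 - x0.mean(), x1 - x1.mean()) / np.dot(x0 - x0.mean(), x0 - x0.mean())
    dep = phi * x0                      # dependence component of x1
    return np.corrcoef(x1, dep)[0, 1]

rng = np.random.default_rng(1)
n, phi_true = 500, 0.7
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi_true * x[t - 1] + rng.normal()

r = dependence_correlation(x)
print(f"correlation with dependence component: {r:.2f}")
# Illustrative grading (assumed thresholds): r < 0.2 "no", r < 0.4 "weak", and so on.
```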
Yuan, Ke-Hai; Jiang, Ge; Cheng, Ying
2017-11-01
Data in psychology are often collected using Likert-type scales, and it has been shown that factor analysis of Likert-type data is better performed on the polychoric correlation matrix than on the product-moment covariance matrix, especially when the distributions of the observed variables are skewed. In theory, factor analysis of the polychoric correlation matrix is best conducted using generalized least squares with an asymptotically correct weight matrix (AGLS). However, simulation studies showed that both least squares (LS) and diagonally weighted least squares (DWLS) perform better than AGLS, and thus LS or DWLS is routinely used in practice. In either LS or DWLS, the associations among the polychoric correlation coefficients are completely ignored. To mend such a gap between statistical theory and empirical work, this paper proposes new methods, called ridge GLS, for factor analysis of ordinal data. Monte Carlo results show that, for a wide range of sample sizes, ridge GLS methods yield uniformly more accurate parameter estimates than existing methods (LS, DWLS, AGLS). A real-data example indicates that estimates by ridge GLS are 9-20% more efficient than those by existing methods. Rescaled and adjusted test statistics as well as sandwich-type standard errors following the ridge GLS methods also perform reasonably well. © 2017 The British Psychological Society.
Long-range correlation and market segmentation in bond market
NASA Astrophysics Data System (ADS)
Wang, Zhongxing; Yan, Yan; Chen, Xiaosong
2017-09-01
This paper investigates the long-range auto-correlations and cross-correlations in the bond market. Based on the Detrended Moving Average (DMA) method, empirical results present clear evidence of long-range persistence on the one-year scale. The degree of long-range correlation as a function of maturity shows an upward tendency, with a peak at short maturities. These findings confirm the expectations of the fractal market hypothesis (FMH). Furthermore, we have developed a method based on a complex network to study the long-range cross-correlation structure and applied it to our data, finding a clear pattern of market segmentation in the long run. We also examined the nature of the long-range correlations in the sub-periods 2007-2012 and 2011-2016. The results show that the long-range auto-correlations have weakened in recent years while the long-range cross-correlations have strengthened.
Kernel-aligned multi-view canonical correlation analysis for image recognition
NASA Astrophysics Data System (ADS)
Su, Shuzhi; Ge, Hongwei; Yuan, Yun-Hao
2016-09-01
Existing kernel-based correlation analysis methods mainly adopt a single kernel in each view. However, only a single kernel is usually insufficient to characterize nonlinear distribution information of a view. To solve the problem, we transform each original feature vector into a 2-dimensional feature matrix by means of kernel alignment, and then propose a novel kernel-aligned multi-view canonical correlation analysis (KAMCCA) method on the basis of the feature matrices. Our proposed method can simultaneously employ multiple kernels to better capture the nonlinear distribution information of each view, so that correlation features learned by KAMCCA can have well discriminating power in real-world image recognition. Extensive experiments are designed on five real-world image datasets, including NIR face images, thermal face images, visible face images, handwritten digit images, and object images. Promising experimental results on the datasets have manifested the effectiveness of our proposed method.
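As a baseline for the multi-view setting described above, the sketch below runs ordinary canonical correlation analysis on two synthetic views with scikit-learn; it illustrates only the plain CCA step, not the proposed kernel-aligned KAMCCA method, and the synthetic views are assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 2))                                   # shared structure
view1 = latent @ rng.normal(size=(2, 10)) + 0.1 * rng.normal(size=(200, 10))
view2 = latent @ rng.normal(size=(2, 15)) + 0.1 * rng.normal(size=(200, 15))

cca = CCA(n_components=2)
z1, z2 = cca.fit_transform(view1, view2)       # projections of each view
for k in range(2):
    r = np.corrcoef(z1[:, k], z2[:, k])[0, 1]
    print(f"canonical correlation {k + 1}: {r:.2f}")
```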
Joint Concept Correlation and Feature-Concept Relevance Learning for Multilabel Classification.
Zhao, Xiaowei; Ma, Zhigang; Li, Zhi; Li, Zhihui
2018-02-01
In recent years, multilabel classification has attracted significant attention in multimedia annotation. However, most multilabel classification methods focus only on the inherent correlations existing among multiple labels and concepts and ignore the relevance between features and the target concepts. To obtain more robust multilabel classification results, we propose a new multilabel classification method that aims to capture the correlations among multiple concepts by leveraging a hypergraph, which has been shown to be beneficial for relational learning. Moreover, we consider mining feature-concept relevance, which is often overlooked by many multilabel learning algorithms. To better capture the feature-concept relevance, we impose a sparsity constraint on the proposed method. We compare the proposed method with several other multilabel classification methods and evaluate the classification performance by mean average precision on several data sets. The experimental results show that the proposed method outperforms the state-of-the-art methods.
ERIC Educational Resources Information Center
Nock, Matthew K.; Kazdin, Alan E.; Hiripi, Eva; Kessler, Ronald C.
2007-01-01
Background: Oppositional defiant disorder (ODD) is a leading cause of referral for youth mental health services; yet, many uncertainties exist about ODD given it is rarely examined as a distinct psychiatric disorder. We examined the lifetime prevalence, onset, persistence, and correlates of ODD. Methods: Lifetime prevalence of ODD and 18 other…
NASA Astrophysics Data System (ADS)
Teramae, Tatsuya; Kushida, Daisuke; Takemori, Fumiaki; Kitamura, Akira
The authors previously proposed an estimation method combining the k-means algorithm and a neural network (NN) for evaluating massage. However, this estimation method has the problem that the discrimination ratio decreases for a new user. There are two causes of this problem: the generalization ability of the NN is poor, and the clusters produced by the k-means algorithm do not have high within-class correlation coefficients. This research therefore proposes a k-means algorithm guided by the correlation coefficient, together with incremental learning for the NN. The proposed k-means algorithm includes an evaluation function based on the correlation coefficient. In the incremental learning scheme, the NN is trained on new data with weights initialized from those learned on the existing data. The effectiveness of the proposed methods is verified by estimation results on EEG data recorded while a subject receives massage.
A hidden two-locus disease association pattern in genome-wide association studies
2011-01-01
Background Recent association analyses in genome-wide association studies (GWAS) mainly focus on single-locus association tests (marginal tests) and two-locus interaction detection. These analysis methods have provided strong evidence of associations between genetic variants and complex diseases. However, there exists a type of association pattern, which often occurs within local regions in the genome, that is unlikely to be detected by either marginal tests or interaction tests. This association pattern involves a group of correlated single-nucleotide polymorphisms (SNPs). The correlation among SNPs can lead to weak marginal effects, and interaction does not play a role in this association pattern. This phenomenon is due to the existence of unfaithfulness: the marginal effects of correlated SNPs do not express their significant joint effects faithfully, owing to correlation cancellation. Results In this paper, we develop a computational method to detect this association pattern masked by unfaithfulness. We have applied our method to analyze seven data sets from the Wellcome Trust Case Control Consortium (WTCCC). The analysis for each data set takes about one week to finish the examination of all pairs of SNPs. Based on the empirical results from these real data, we show that this type of association masked by unfaithfulness widely exists in GWAS. Conclusions These newly identified associations enrich the discoveries of GWAS, which may provide new insights both in the analysis of tagSNPs and in the experimental design of GWAS. Since these associations may be easily missed by existing analysis tools, we can only connect some of them to publicly available findings from other association studies. Because independent data sets are limited at this moment, we have also had difficulty replicating these findings. More biological implications need further investigation. Availability The software is freely available at http://bioinformatics.ust.hk/hidden_pattern_finder.zip. PMID:21569557
Integrated Modeling of Themes, Targeting Claims and Networks in Insurgent Rhetoric
2016-06-09
axis range. In addition, the correlation r and p-value p are shown. The first plot below is for the LIB issue, which aligns with Network Dimension 1... set to one). Additional constraints on the value are made by the various methods. λ has dimensions N×N. If the method yields an issue-dependent LOA... existence of a statistically significant correlation between the components of the first eigenvector u1 and the node variable values xi would provide
Robust Statistical Detection of Power-Law Cross-Correlation.
Blythe, Duncan A J; Nikulin, Vadim V; Müller, Klaus-Robert
2016-06-02
We show that widely used approaches in statistical physics incorrectly indicate the existence of power-law cross-correlations between financial stock market fluctuations measured over several years and the neuronal activity of the human brain lasting for only a few minutes. While such cross-correlations are nonsensical, no current methodology allows them to be reliably discarded, leaving researchers at greater risk when the spurious nature of cross-correlations is not clear from the unrelated origin of the time series and rather requires careful statistical estimation. Here we propose a theory and method (PLCC-test) which allows us to rigorously and robustly test for power-law cross-correlations, correctly detecting genuine and discarding spurious cross-correlations, thus establishing meaningful relationships between processes in complex physical systems. Our method reveals for the first time the presence of power-law cross-correlations between amplitudes of the alpha and beta frequency ranges of the human electroencephalogram. PMID:27250630
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Peng; Barajas-Solano, David A.; Constantinescu, Emil
Wind and solar power generators are commonly described by a system of stochastic ordinary differential equations (SODEs) in which random input parameters represent uncertainty in wind and solar energy. The existing methods for SODEs are mostly limited to delta-correlated random parameters (white noise). Here we use the probability density function (PDF) method to derive a closed-form deterministic partial differential equation (PDE) for the joint probability density function of the SODEs describing a power generator with time-correlated power input. The resulting PDE is solved numerically. Good agreement with Monte Carlo simulations demonstrates the accuracy of the PDF method.
NASA Astrophysics Data System (ADS)
Karima, H. R.; Majidi, M. A.
2018-04-01
Excitons are quasiparticles associated with bound states between an electron and a hole, and they are typically created when photons with a suitable energy are absorbed in a solid-state material. We propose to study a possible emergence of excitons created not by photon absorption but by the effect of strong electronic correlations. This study is motivated by a recent experimental study of the substrate material SrTiO3 (STO) that reveals strong excitonic signals in its optical conductivity. Here we conjecture that some excitons may already exist in the ground state as a result of the electronic correlations, before additional excitons are created by photon absorption. To investigate the existence of excitons in the ground state, we propose to study a simple 4-energy-level model that mimics the situation in strongly-correlated semiconductors. The four levels are divided into two groups, a lower and an upper group separated by an energy gap, Eg, mimicking the valence and conduction bands, respectively. Further, we incorporate repulsive Coulomb interactions between the electrons. The model is then solved by the exact diagonalization method. Our results show that the toy model can demonstrate band gap widening or narrowing and the existence of excitons in the ground state, depending on the interaction parameter values.
Chertkov, Michael; Gabitov, Ildar
2004-03-02
The present invention provides methods and optical fibers for periodically pinning the actual (random) accumulated chromatic dispersion of an optical fiber to a predicted accumulated dispersion of the fiber through relatively simple modifications of fiber-optic manufacturing methods or retrofitting of existing fibers. If the pinning occurs with sufficient frequency (at a distance less than or equal to a correlation scale), pulse degradation resulting from random chromatic dispersion is minimized. Alternatively, pinning may occur quasi-periodically, i.e., the pinning distance is distributed between approximately zero and approximately two to three times the correlation scale.
Multiscale Detrended Cross-Correlation Analysis of STOCK Markets
NASA Astrophysics Data System (ADS)
Yin, Yi; Shang, Pengjian
2014-06-01
In this paper, we employ detrended cross-correlation analysis (DCCA) to investigate the cross-correlations between different stock markets. We report the cross-correlated behaviors of the US, Chinese and European stock markets in the period 1997-2012 using the DCCA method. DCCA reveals the cross-correlated behaviors of intra-regional and inter-regional stock markets in the short and long term, which display, in a simple and rough way, the similarities and differences of the cross-correlated behaviors and the persistence of the cross-correlated fluctuations. Then, because of the limitations and inapplicability of the DCCA method, we propose multiscale detrended cross-correlation analysis (MSDCCA) to avoid "a priori" selection of the ranges of scales over which the two coefficients of the classical DCCA method are identified, and we employ MSDCCA to reanalyze these cross-correlations to exhibit important details, such as the existence and position of minima, maxima and bimodal distributions, which are lost if the scale structure is described by two coefficients only, as well as essential differences and similarities in the scale structures of cross-correlation of intra-regional and inter-regional markets. The additional statistical characteristics of cross-correlation obtained by the MSDCCA method help us understand how two different stock markets influence each other and allow the influence of two inter-regional markets on the cross-correlation to be analyzed in detail, thus providing richer and more detailed knowledge of the complex evolution of the dynamics of the cross-correlations between stock markets. The application of MSDCCA is important for promoting our understanding of the internal mechanisms and structures of financial markets and helps to forecast stock indices based on the cross-correlations demonstrated by our current results. We also discuss MSDCCA with secant rolling windows of different sizes and, lastly, provide some relevant implications and issues.
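The detrended cross-correlation coefficient at a single window size can be sketched as follows; this is a simplified, non-overlapping-window variant with linear detrending for illustration, and the original DCCA and MSDCCA formulations differ in windowing and in how the scale structure is aggregated.

```python
import numpy as np

def dcca_coefficient(x, y, scale):
    """Detrended cross-correlation coefficient at one window size.
    Simplified sketch: non-overlapping windows, linear detrending."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    X, Y = np.cumsum(x - x.mean()), np.cumsum(y - y.mean())   # integrated profiles
    n_win = len(X) // scale
    t = np.arange(scale)
    fxx = fyy = fxy = 0.0
    for w in range(n_win):
        seg = slice(w * scale, (w + 1) * scale)
        rx = X[seg] - np.polyval(np.polyfit(t, X[seg], 1), t)  # detrended residuals
        ry = Y[seg] - np.polyval(np.polyfit(t, Y[seg], 1), t)
        fxx += np.mean(rx * rx)
        fyy += np.mean(ry * ry)
        fxy += np.mean(rx * ry)
    return fxy / np.sqrt(fxx * fyy)

rng = np.random.default_rng(2)
common = rng.normal(size=2000)
x = common + rng.normal(size=2000)
y = common + rng.normal(size=2000)
print(round(dcca_coefficient(x, y, scale=50), 2))   # positive, well below 1
```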
The Relationship of Grade Span in 9th Grade to Math Achievement in High School
ERIC Educational Resources Information Center
West, John; Miller, Mary Lou; Myers, Jim; Norton, Timothy
2015-01-01
Purpose, Scope, and Method of Study: The purpose of this study was to determine if a correlation exists between grade span for ninth grade and gains in math achievement test scores in 10th grade and 12th grade. A quantitative, longitudinal, correlational research design was employed to investigate the research questions. The population was high…
Experimental Demonstration of In-Place Calibration for Time Domain Microwave Imaging System
NASA Astrophysics Data System (ADS)
Kwon, S.; Son, S.; Lee, K.
2018-04-01
In this study, the experimental demonstration of in-place calibration was conducted using the developed time domain measurement system. Experiments were conducted using three calibration methods—in-place calibration and two existing calibrations, that is, array rotation and differential calibration. The in-place calibration uses dual receivers located at an equal distance from the transmitter. The received signals at the dual receivers contain similar unwanted signals, that is, the directly received signal and antenna coupling. In contrast to the simulations, the antennas are not perfectly matched and there might be unexpected environmental errors. Thus, we experimented with the developed experimental system to demonstrate the proposed method. The possible problems with low signal-to-noise ratio and clock jitter, which may exist in time domain systems, were rectified by averaging repeatedly measured signals. The tumor was successfully detected using the three calibration methods according to the experimental results. The cross correlation was calculated using the reconstructed image of the ideal differential calibration for a quantitative comparison between the existing rotation calibration and the proposed in-place calibration. The mean value of cross correlation between the in-place calibration and ideal differential calibration was 0.80, and the mean value of cross correlation of the rotation calibration was 0.55. Furthermore, the results of simulation were compared with the experimental results to verify the in-place calibration method. A quantitative analysis was also performed, and the experimental results show a tendency similar to the simulation.
Mousavi Kahaki, Seyed Mostafa; Nordin, Md Jan; Ashtari, Amir H.; J. Zahra, Sophia
2016-01-01
An invariant feature matching method is proposed as a spatially invariant approach to feature matching. Deformation effects, such as affine and homography transformations, change the local information within the image and can result in ambiguous local information pertaining to image points. A new method based on dissimilarity values, which measure the dissimilarity of features along the path based on eigenvector properties, is proposed. Evidence shows that existing matching techniques using similarity metrics, such as normalized cross-correlation, the squared sum of intensity differences and the correlation coefficient, are insufficient for achieving adequate results under different image deformations. Thus, new descriptor similarity metrics based on normalized eigenvector correlation and signal directional differences, which are robust under local variation of the image information, are proposed to establish an efficient feature matching technique. The method proposed in this study measures the dissimilarity in the signal frequency along the path between two features. Moreover, these dissimilarity values are accumulated in a 2D dissimilarity space, allowing accurate corresponding features to be extracted based on the cumulative space using a voting strategy. This method can be used in image registration applications, as it overcomes the limitations of the existing approaches. The output results demonstrate that the proposed technique outperforms the other methods when evaluated using a standard dataset, in terms of precision-recall and corner correspondence. PMID:26985996
Correlation functions in first-order phase transitions
NASA Astrophysics Data System (ADS)
Garrido, V.; Crespo, D.
1997-09-01
Most of the physical properties of systems underlying first-order phase transitions can be obtained from the spatial correlation functions. In this paper, we obtain expressions that allow us to calculate all the correlation functions from the droplet size distribution. Nucleation and growth kinetics is considered, and exact solutions are obtained for the case of isotropic growth by using self-similarity properties. The calculation is performed by using the particle size distribution obtained by a recently developed model (populational Kolmogorov-Johnson-Mehl-Avrami model). Since this model is less restrictive than that used in previously existing theories, the result is that the correlation functions can be obtained for any dependence of the kinetic parameters. The validity of the method is tested by comparison with the exact correlation functions, which had been obtained in the available cases by the time-cone method. Finally, the correlation functions corresponding to the microstructure developed in partitioning transformations are obtained.
CORRELATION ANALYSIS BETWEEN TIBET AS-γ TeV COSMIC RAY AND WMAP NINE-YEAR DATA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yin, Qian-Qing; Zhang, Shuang-Nan, E-mail: zhangsn@ihep.ac.cn
2015-08-01
The WMAP team subtracted template-based foreground models to produce foreground-reduced maps, and masked point sources and uncertain sky regions directly; however, whether foreground residuals exist in the WMAP foreground-reduced maps is still an open question. Here, we use Pearson correlation coefficient analysis with AS-γ TeV cosmic ray (CR) data to probe possible foreground residuals in the WMAP nine-year data. The correlation results between the CR and foreground-contained maps (WMAP foreground-unreduced maps, WMAP template-based, and Maximum Entropy Method foreground models) suggest that: (1) CRs can trace foregrounds in the WMAP data; (2) at least some TeV CRs originate from the Milky Way; (3) foregrounds may be related to the existence of CR anisotropy (loss-cone and tail-in structures); (4) there exist differences among different types of foregrounds in the decl. range of <15°. Then, we generate 10,000 mock cosmic microwave background (CMB) sky maps to describe the cosmic variance, which is used to measure the effect of the fluctuations of all possible CMB maps on the correlations between CR and CMB maps. Finally, we do correlation analysis between the CR and WMAP foreground-reduced maps, and find that: (1) there are significant anticorrelations; and (2) the WMAP foreground-reduced maps are credible. However, the significant anticorrelations may be accidental, and the higher signal-to-noise ratio Planck SMICA map cannot reject the hypothesis of accidental correlations. We therefore can only conclude that the foreground residuals exist with ∼95% probability.
Correlations of stock price fluctuations under multi-scale and multi-threshold scenarios
NASA Astrophysics Data System (ADS)
Sui, Guo; Li, Huajiao; Feng, Sida; Liu, Xueyong; Jiang, Meihui
2018-01-01
The multi-scale method is widely used in analyzing time series of financial markets and it can provide market information for different economic entities who focus on different periods. Through constructing multi-scale networks of price fluctuation correlation in the stock market, we can detect the topological relationships between the time series. Previous research has not addressed the problem that the original fluctuation correlation networks are fully connected networks, and more information exists within these networks than is currently being utilized. Here we use listed coal companies as a case study. First, we decompose the original stock price fluctuation series into different time scales. Second, we construct the stock price fluctuation correlation networks at different time scales. Third, we delete the edges of the network based on thresholds and analyze the network indicators. By combining the multi-scale method with the multi-threshold method, we bring to light the implicit information of the fully connected networks.
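The thresholding step described above can be sketched directly: starting from a fully connected correlation network, edges whose absolute correlation falls below a chosen threshold are removed and the network indicators are recomputed. The sketch uses random returns and networkx; the number of stocks and the threshold values are illustrative assumptions.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(3)
returns = rng.normal(size=(250, 8))            # 250 days, 8 hypothetical coal stocks
corr = np.corrcoef(returns, rowvar=False)

for threshold in (0.0, 0.1, 0.2):
    g = nx.Graph()
    g.add_nodes_from(range(corr.shape[0]))
    for i in range(corr.shape[0]):
        for j in range(i + 1, corr.shape[0]):
            if abs(corr[i, j]) >= threshold:    # keep only sufficiently strong edges
                g.add_edge(i, j, weight=corr[i, j])
    print(threshold, g.number_of_edges(), round(nx.density(g), 2))
```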
On an additive partial correlation operator and nonparametric estimation of graphical models.
Lee, Kuang-Yao; Li, Bing; Zhao, Hongyu
2016-09-01
We introduce an additive partial correlation operator as an extension of partial correlation to the nonlinear setting, and use it to develop a new estimator for nonparametric graphical models. Our graphical models are based on additive conditional independence, a statistical relation that captures the spirit of conditional independence without having to resort to high-dimensional kernels for its estimation. The additive partial correlation operator completely characterizes additive conditional independence, and has the additional advantage of putting marginal variation on appropriate scales when evaluating interdependence, which leads to more accurate statistical inference. We establish the consistency of the proposed estimator. Through simulation experiments and analysis of the DREAM4 Challenge dataset, we demonstrate that our method performs better than existing methods in cases where the Gaussian or copula Gaussian assumption does not hold, and that a more appropriate scaling for our method further enhances its performance.
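For reference, the classical linear partial correlation that the additive operator generalizes can be computed by correlating regression residuals. The sketch below covers only this linear special case, not the additive partial correlation operator proposed in the paper.

```python
import numpy as np

def partial_correlation(x, y, z):
    """Linear partial correlation of x and y given controls z
    (z has shape (n, p)); computed from ordinary least-squares residuals."""
    z = np.column_stack([np.ones(len(x)), z])
    rx = x - z @ np.linalg.lstsq(z, x, rcond=None)[0]
    ry = y - z @ np.linalg.lstsq(z, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(4)
z = rng.normal(size=(500, 1))
x = 2.0 * z[:, 0] + rng.normal(size=500)
y = -1.5 * z[:, 0] + rng.normal(size=500)
print(round(np.corrcoef(x, y)[0, 1], 2))        # strongly negative, induced by z
print(round(partial_correlation(x, y, z), 2))    # near zero after controlling for z
```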
On an additive partial correlation operator and nonparametric estimation of graphical models
Li, Bing; Zhao, Hongyu
2016-01-01
Abstract We introduce an additive partial correlation operator as an extension of partial correlation to the nonlinear setting, and use it to develop a new estimator for nonparametric graphical models. Our graphical models are based on additive conditional independence, a statistical relation that captures the spirit of conditional independence without having to resort to high-dimensional kernels for its estimation. The additive partial correlation operator completely characterizes additive conditional independence, and has the additional advantage of putting marginal variation on appropriate scales when evaluating interdependence, which leads to more accurate statistical inference. We establish the consistency of the proposed estimator. Through simulation experiments and analysis of the DREAM4 Challenge dataset, we demonstrate that our method performs better than existing methods in cases where the Gaussian or copula Gaussian assumption does not hold, and that a more appropriate scaling for our method further enhances its performance. PMID:29422689
NASA Technical Reports Server (NTRS)
Bernacca, P. L.
1971-01-01
The correlation between the equatorial velocities of the components of double stars is studied from a statistical standpoint. A theory of rotational correlation is developed and discussed with regard to its applicability to existing observations. The theory is then applied to a sample of visual binaries which are the least studied for rotational coupling. Consideration of eclipsing systems and spectroscopic binaries is limited to show how the degrees of freedom in the spin parallelism problem can be reduced. The analysis lends support to the existence of synchronism in closely spaced binaries.
Structural Genomics: Correlation Blocks, Population Structure, and Genome Architecture
Hu, Xin-Sheng; Yeh, Francis C.; Wang, Zhiquan
2011-01-01
An integration of the pattern of genome-wide inter-site associations with evolutionary forces is important for gaining insights into genomic evolution in natural or artificial populations. Here, we assess inter-site correlation blocks and their distributions along chromosomes. A correlation block is broadly defined as a DNA segment within which strong correlations exist between the genetic diversities at any two sites. We bring together the population genetic structure and the genomic diversity structure that have been independently built on different scales, and synthesize the existing theories and methods for characterizing genomic structure at the population level. We discuss how population structure could shape correlation blocks and their patterns within and between populations. Effects of evolutionary forces (selection, migration, genetic drift, and mutation) on the pattern of genome-wide correlation blocks are discussed. For eukaryote organisms, we briefly discuss the associations between the pattern of correlation blocks and genome assembly features, including the impacts of multigene families, the perturbation of transposable elements, and repetitive nongenic sequences and GC-rich isochores. Our review suggests that the observable pattern of correlation blocks can refine our understanding of the ecological and evolutionary processes underlying genomic evolution at the population level. PMID:21886455
Analyzing the security of an existing computer system
NASA Technical Reports Server (NTRS)
Bishop, M.
1986-01-01
Most work concerning secure computer systems has dealt with the design, verification, and implementation of provably secure computer systems, or has explored ways of making existing computer systems more secure. The problem of locating security holes in existing systems has received considerably less attention; methods generally rely on thought experiments as a critical step in the procedure. The difficulty is that such experiments require that a large amount of information be available in a format that makes correlating the details of various programs straightforward. This paper describes a method of providing such a basis for the thought experiment by writing a special manual for parts of the operating system, system programs, and library subroutines.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jason L. Wright; Milos Manic
Time synchronization and event time correlation are important in wireless sensor networks. In particular, time is used to create a sequence of events, or time line, to answer questions of cause and effect. Time is also used as a basis for determining the freshness of received packets and the validity of cryptographic certificates. This paper presents a secure method of time synchronization and event time correlation for TESLA-based hierarchical wireless sensor networks. The method demonstrates that events in a TESLA network can be accurately timestamped by adding only a few pieces of data to the existing protocol.
Shrinkage regression-based methods for microarray missing value imputation.
Wang, Hsiuying; Chiu, Chia-Chun; Wu, Yi-Ching; Wu, Wei-Sheng
2013-01-01
Missing values commonly occur in the microarray data, which usually contain more than 5% missing values with up to 90% of genes affected. Inaccurate missing value estimation results in reducing the power of downstream microarray data analyses. Many types of methods have been developed to estimate missing values. Among them, the regression-based methods are very popular and have been shown to perform better than the other types of methods in many testing microarray datasets. To further improve the performances of the regression-based methods, we propose shrinkage regression-based methods. Our methods take the advantage of the correlation structure in the microarray data and select similar genes for the target gene by Pearson correlation coefficients. Besides, our methods incorporate the least squares principle, utilize a shrinkage estimation approach to adjust the coefficients of the regression model, and then use the new coefficients to estimate missing values. Simulation results show that the proposed methods provide more accurate missing value estimation in six testing microarray datasets than the existing regression-based methods do. Imputation of missing values is a very important aspect of microarray data analyses because most of the downstream analyses require a complete dataset. Therefore, exploring accurate and efficient methods for estimating missing values has become an essential issue. Since our proposed shrinkage regression-based methods can provide accurate missing value estimation, they are competitive alternatives to the existing regression-based methods.
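A minimal sketch of the general idea, under simplifying assumptions: the genes most Pearson-correlated with the target gene are selected as predictors, and a ridge-style shrinkage stands in here for the authors' shrinkage adjustment of the regression coefficients. This is not their exact estimator; the toy data and parameter values are assumptions.

```python
import numpy as np

def impute_missing(X, target, missing_rows, k=5, shrink=1.0):
    """Impute the entries of column `target` at `missing_rows` in X (samples x genes):
    select the k genes most Pearson-correlated with the target on the observed rows,
    then fit a ridge-shrunk least-squares regression and predict the missing rows."""
    obs = np.setdiff1d(np.arange(X.shape[0]), missing_rows)
    others = [j for j in range(X.shape[1]) if j != target]
    corrs = [abs(np.corrcoef(X[obs, target], X[obs, j])[0, 1]) for j in others]
    neighbors = [others[i] for i in np.argsort(corrs)[-k:]]
    A = np.column_stack([np.ones(len(obs)), X[np.ix_(obs, neighbors)]])
    # Ridge-style shrinkage of the regression coefficients (illustrative choice).
    beta = np.linalg.solve(A.T @ A + shrink * np.eye(A.shape[1]), A.T @ X[obs, target])
    B = np.column_stack([np.ones(len(missing_rows)), X[np.ix_(missing_rows, neighbors)]])
    return B @ beta

rng = np.random.default_rng(5)
X = rng.normal(size=(30, 10))
X[:, 0] = X[:, 1] + 0.5 * X[:, 2] + 0.1 * rng.normal(size=30)   # gene 0 depends on genes 1 and 2
true = X[[3, 7], 0].copy()
est = impute_missing(X, target=0, missing_rows=np.array([3, 7]))
print(np.round(est, 2), np.round(true, 2))
```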
Yokoyama, Kazuhiko; Itoman, Moritoshi; Uchino, Masataka; Fukushima, Kensuke; Nitta, Hiroshi; Kojima, Yoshiaki
2008-10-01
The purpose of this study was to evaluate, by multivariate analysis, contributing factors affecting deep infection and fracture healing of open tibia fractures treated with locked intramedullary nailing (IMN). We examined 99 open tibial fractures (98 patients) treated with immediate or delayed locked IMN in static fashion from 1991 to 2002. Multivariate analyses following univariate analyses were performed to determine predictors of deep infection, nonunion, and healing time to union. The following predictive variables of deep infection were selected for analysis: age, sex, Gustilo type, fracture grade by AO type, fracture location, timing or method of IMN, reamed or unreamed nailing, debridement time (< or =6 h or >6 h), method of soft-tissue management, skin closure time (< or =1 week or >1 week), existence of polytrauma (ISS <18 or ISS > or =18), existence of floating knee injury, and existence of superficial/pin site infection. The predictive variables of nonunion selected for analysis were the same as those for deep infection, with deep infection substituted for superficial/pin site infection. The predictive variables of time to union selected for analysis were the same as those for nonunion, excluding location, debridement time, and existence of floating knee injury and superficial infection. Six (6.1%; type II Gustilo n=1, type IIIB Gustilo n=5) of the 99 open tibial fractures developed deep infections. Multivariate analysis revealed that timing or method of IMN, debridement time, method of soft-tissue management, and existence of superficial or pin site infection significantly correlated with the occurrence of deep infection (P<0.0001). In the immediate nailing group alone, the deep infection rate in types IIIB + IIIC was significantly higher than that in types I + II and IIIA (P=0.016). Nonunion occurred in 17 fractures (20.3%, 17/84). Multivariate analysis revealed that Gustilo type, skin closure time, and existence of deep infection significantly correlated with the occurrence of nonunion (P<0.05). Gustilo type and existence of deep infection were significantly correlated with healing time to union on multivariate analysis (r(2)=0.263, P=0.0001). Multivariate analyses for open tibial fractures treated with IMN showed that IMN after external fixation (especially in the presence of pin site infection) carried a high risk of deep infection, and that debridement within 6 h and appropriate soft-tissue management were also important factors in preventing deep infections. These analyses indicate that both the Gustilo type and the existence of deep infection are related to fracture healing in open fractures treated with IMN. In addition, immediate IMN for types IIIB and IIIC is potentially risky, and canal reaming did not increase the risk of complications for open tibial fractures treated with IMN.
Evaluation of approaches for estimating the accuracy of genomic prediction in plant breeding
Ould Estaghvirou, Sidi Boubacar; Ogutu, Joseph O; Schulz-Streeck, Torben; Knaak, Carsten; Ouzunova, Milena; Gordillo, Andres; Piepho, Hans-Peter
2013-12-06
Background In genomic prediction, an important measure of accuracy is the correlation between the predicted and the true breeding values. Direct computation of this quantity for real datasets is not possible, because the true breeding value is unknown. Instead, the correlation between the predicted breeding values and the observed phenotypic values, called predictive ability, is often computed. In order to indirectly estimate predictive accuracy, this latter correlation is usually divided by an estimate of the square root of heritability. In this study we use simulation to evaluate estimates of predictive accuracy for seven methods, four (1 to 4) of which use an estimate of heritability to divide predictive ability computed by cross-validation. Between them the seven methods cover balanced and unbalanced datasets as well as correlated and uncorrelated genotypes. We propose one new indirect method (4) and two direct methods (5 and 6) for estimating predictive accuracy and compare their performances and those of four other existing approaches (three indirect (1 to 3) and one direct (7)) with simulated true predictive accuracy as the benchmark and with each other. Results The size of the estimated genetic variance and hence heritability exerted the strongest influence on the variation in the estimated predictive accuracy. Increasing the number of genotypes considerably increases the time required to compute predictive accuracy by all the seven methods, most notably for the five methods that require cross-validation (Methods 1, 2, 3, 4 and 6). A new method that we propose (Method 5) and an existing method (Method 7) used in animal breeding programs were the fastest and gave the least biased, most precise and stable estimates of predictive accuracy. Of the methods that use cross-validation Methods 4 and 6 were often the best. Conclusions The estimated genetic variance and the number of genotypes had the greatest influence on predictive accuracy. Methods 5 and 7 were the fastest and produced the least biased, the most precise, robust and stable estimates of predictive accuracy. These properties argue for routinely using Methods 5 and 7 to assess predictive accuracy in genomic selection studies. PMID:24314298
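The indirect estimate described above (predictive ability divided by the square root of heritability) can be sketched as follows. This is a generic illustration of that ratio using ridge regression and simulated marker data; it is not a re-implementation of any of the seven compared methods, and heritability is taken as known rather than estimated.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(6)
n_geno, n_markers, h2 = 200, 500, 0.5
M = rng.binomial(2, 0.5, size=(n_geno, n_markers)).astype(float)   # marker genotypes
effects = rng.normal(size=n_markers) * np.sqrt(h2 / n_markers)
g = M @ effects                                      # true breeding values (simulation only)
noise_sd = np.sqrt(np.var(g) * (1 - h2) / h2)        # scaled so heritability equals h2
y = g + rng.normal(scale=noise_sd, size=n_geno)      # observed phenotypes

pred = cross_val_predict(Ridge(alpha=10.0), M, y, cv=5)
predictive_ability = np.corrcoef(pred, y)[0, 1]
accuracy_indirect = predictive_ability / np.sqrt(h2)  # indirect estimate of accuracy
accuracy_true = np.corrcoef(pred, g)[0, 1]            # available only because g is simulated
print(round(accuracy_indirect, 2), round(accuracy_true, 2))
```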
Multifractal analysis of the Korean agricultural market
NASA Astrophysics Data System (ADS)
Kim, Hongseok; Oh, Gabjin; Kim, Seunghwan
2011-11-01
We have studied the long-term memory effects of the Korean agricultural market using the detrended fluctuation analysis (DFA) method. In general, the return time series of various financial data, including stock indices, foreign exchange rates, and commodity prices, are uncorrelated in time, while the volatility time series are strongly correlated. However, we found that the return time series of Korean agricultural commodity prices are anti-correlated in time, while the volatility time series are correlated. The n-point correlations of time series were also examined, and it was found that a multifractal structure exists in Korean agricultural market prices.
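The DFA exponent used above can be sketched compactly as the slope of log fluctuation versus log scale, with α ≈ 0.5 for uncorrelated returns, α < 0.5 for anti-correlated series and α > 0.5 for persistent series. Below is a minimal first-order (linear-detrending) version on white noise; the scale choices are illustrative.

```python
import numpy as np

def dfa_exponent(x, scales):
    """Detrended fluctuation analysis: slope of log F(n) versus log n."""
    x = np.asarray(x, float)
    profile = np.cumsum(x - x.mean())
    fluctuations = []
    for n in scales:
        n_win = len(profile) // n
        t = np.arange(n)
        f2 = []
        for w in range(n_win):
            seg = profile[w * n:(w + 1) * n]
            res = seg - np.polyval(np.polyfit(t, seg, 1), t)   # remove local linear trend
            f2.append(np.mean(res ** 2))
        fluctuations.append(np.sqrt(np.mean(f2)))
    return np.polyfit(np.log(scales), np.log(fluctuations), 1)[0]

rng = np.random.default_rng(7)
white = rng.normal(size=5000)                     # uncorrelated returns
print(round(dfa_exponent(white, scales=[16, 32, 64, 128, 256]), 2))   # close to 0.5
```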
Leak detection using structure-borne noise
NASA Technical Reports Server (NTRS)
Holland, Stephen D. (Inventor); Roberts, Ronald A. (Inventor); Chimenti, Dale E. (Inventor)
2010-01-01
A method for detection and location of air leaks in a pressure vessel, such as a spacecraft, includes sensing structure-borne ultrasound waveforms associated with turbulence caused by a leak from a plurality of sensors and cross correlating the waveforms to determine existence and location of the leak. Different configurations of sensors and corresponding methods can be used. An apparatus for performing the methods is also provided.
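Localizing a leak from two sensor waveforms reduces to finding the lag of maximum cross-correlation; combined with the wave speed and sensor spacing, the lag gives the leak position between the sensors. The sketch below uses synthetic signals, and the sample rate, wave speed and geometry are made-up numbers for illustration only.

```python
import numpy as np

rng = np.random.default_rng(8)
fs = 100_000.0                    # sample rate, Hz (assumed)
leak = rng.normal(size=4096)      # turbulence-like broadband source
delay_samples = 37                # extra travel time to sensor B

sensor_a = leak + 0.2 * rng.normal(size=leak.size)
sensor_b = (np.concatenate([np.zeros(delay_samples), leak])[:leak.size]
            + 0.2 * rng.normal(size=leak.size))

xcorr = np.correlate(sensor_b, sensor_a, mode="full")
lag = np.argmax(xcorr) - (leak.size - 1)       # positive lag: B hears the leak later
print("estimated delay:", lag / fs * 1e3, "ms")

wave_speed = 3000.0               # structure-borne wave speed, m/s (assumed)
sensor_spacing = 2.0              # metres between sensors (assumed)
# Leak at distance d from A: (spacing - d) - d = wave_speed * delay.
d_from_a = (sensor_spacing - wave_speed * lag / fs) / 2
print("leak is about", round(d_from_a, 3), "m from sensor A")
```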
Methods for the Joint Meta-Analysis of Multiple Tests
ERIC Educational Resources Information Center
Trikalinos, Thomas A.; Hoaglin, David C.; Small, Kevin M.; Terrin, Norma; Schmid, Christopher H.
2014-01-01
Existing methods for meta-analysis of diagnostic test accuracy focus primarily on a single index test. We propose models for the joint meta-analysis of studies comparing multiple index tests on the same participants in paired designs. These models respect the grouping of data by studies, account for the within-study correlation between the tests'…
Why conventional detection methods fail in identifying the existence of contamination events.
Liu, Shuming; Li, Ruonan; Smith, Kate; Che, Han
2016-04-15
Early warning systems are widely used to safeguard water security, but their effectiveness has raised many questions. To understand why conventional detection methods fail to identify contamination events, this study evaluates the performance of three contamination detection methods using data from a real contamination accident and two artificial datasets constructed using a widely applied contamination data construction approach. Results show that the Pearson correlation Euclidean distance (PE) based detection method performs better for real contamination incidents, while the Euclidean distance method (MED) and linear prediction filter (LPF) method are more suitable for detecting sudden spike-like variations. This analysis revealed why the conventional MED and LPF methods fail to identify the existence of contamination events. The analysis also revealed that the widely used contamination data construction approach is misleading. Copyright © 2016 Elsevier Ltd. All rights reserved.
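A heavily simplified sketch of the flavor of detection compared above: the current window of multi-parameter measurements is compared with a clean baseline using both a relative Euclidean distance and a Pearson correlation, and an event is flagged when either drifts past a threshold. The thresholds, the example water-quality vectors, and the way the two quantities are combined are assumptions for illustration; they are not the published PE, MED, or LPF definitions.

```python
import numpy as np

def pe_alarm(window, baseline, dist_threshold=0.1, corr_threshold=0.5):
    """Flag an event when the window drifts away from the baseline pattern:
    a large relative Euclidean distance or a drop in Pearson correlation."""
    dist = np.linalg.norm((window - baseline) / baseline)
    corr = np.corrcoef(window, baseline)[0, 1]
    return dist > dist_threshold or corr < corr_threshold

# Hypothetical water-quality vectors: pH, chlorine, conductivity, turbidity, temperature.
baseline = np.array([7.2, 0.35, 450.0, 0.8, 12.0])
normal = baseline + np.array([0.05, 0.01, 3.0, 0.02, 0.1])
event = np.array([8.0, 0.65, 530.0, 1.2, 17.0])
print(pe_alarm(normal, baseline), pe_alarm(event, baseline))   # False True
```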
Density matrix renormalization group study of Y-junction spin systems
NASA Astrophysics Data System (ADS)
Guo, Haihui
Junction systems are important to understand both from the fundamental and the practical point of view, as they are essential components in existing and future electronic and spintronic devices. With the continuous advance of technology, device sizes will eventually reach the atomic scale. Some of the most interesting and useful junction systems will be strongly correlated. We chose the density matrix renormalization group (DMRG) method to study two types of Y-junction systems, the Y and Y-Delta junctions, on strongly correlated spin chains. With new ideas coming from the quantum information field, we have developed a very efficient Y-junction DMRG algorithm, which improves the overall CPU cost from O(m^6) to O(m^4), where m is the number of states kept per block. We studied the ground state properties and the correlation length, and investigated the degeneracy problem in the Y and Y-Delta junctions. For the excited states, we investigated the existence of magnon bound states under various conditions and have shown that the bound state exists when the central coupling constant is small.
A powerful score-based test statistic for detecting gene-gene co-association.
Xu, Jing; Yuan, Zhongshang; Ji, Jiadong; Zhang, Xiaoshuai; Li, Hongkai; Wu, Xuesen; Xue, Fuzhong; Liu, Yanxun
2016-01-29
The genetic variants identified by genome-wide association studies (GWAS) can only account for a small proportion of the total heritability of complex diseases. The existence of gene-gene joint effects, which contain the main effects and their co-association, is one of the possible explanations for the "missing heritability" problem. Gene-gene co-association refers to the extent to which the joint effects of two genes differ from the main effects, not only due to the traditional interaction under a nearly independent condition but also due to the correlation between genes. Generally, genes tend to work collaboratively within a specific pathway or network contributing to the disease, and the specific disease-associated loci will often be highly correlated (e.g. single nucleotide polymorphisms (SNPs) in linkage disequilibrium). Therefore, we proposed a novel score-based statistic (SBS) as a gene-based method for detecting gene-gene co-association. Various simulations illustrate that, under different sample sizes, marginal effects of causal SNPs and co-association levels, the proposed SBS performs better than other existing methods, including single SNP-based and principal component analysis (PCA)-based logistic regression models, the statistic based on canonical correlations (CCU), kernel canonical correlation analysis (KCCU), partial least squares path modeling (PLSPM) and the delta-square (δ²) statistic. A real data analysis of rheumatoid arthritis (RA) further confirmed its advantages in practice. SBS is a powerful and efficient gene-based method for detecting gene-gene co-association.
Analysis of network clustering behavior of the Chinese stock market
NASA Astrophysics Data System (ADS)
Chen, Huan; Mai, Yong; Li, Sai-Ping
2014-11-01
Random Matrix Theory (RMT) and the decomposition of the correlation matrix are employed to analyze the spatial structure of stock interactions and collective behavior in the Shanghai and Shenzhen stock markets in China. The results show that there exist prominent sector structures, with subsectors including the Real Estate (RE), Commercial Banks (CB), Pharmaceuticals (PH), Distillers & Vintners (DV) and Steel (ST) industries. Furthermore, the RE and CB subsectors are mostly anti-correlated. We further study the temporal behavior of the dataset and find that while the sector structures are relatively stable from 2007 through 2013, the correlation between the real estate and commercial bank stocks shows large variations. By employing the ensemble empirical mode decomposition (EEMD) method, we show that this anti-correlation behavior is closely related to the monetary and austerity policies of the Chinese government during the period of study.
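The RMT step used above compares the eigenvalues of the empirical correlation matrix with the Marchenko-Pastur bounds expected for purely random returns; eigenvalues outside the bounds indicate genuine market or sector modes. Below is a minimal sketch with simulated returns and one injected sector factor; the sizes and factor strength are assumptions.

```python
import numpy as np

rng = np.random.default_rng(9)
T, N = 1000, 50                               # trading days, stocks
returns = rng.normal(size=(T, N))
sector = rng.normal(size=T)
returns[:, :10] += 0.5 * sector[:, None]       # 10 stocks share a common sector factor

corr = np.corrcoef(returns, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)

q = N / T
lambda_max = (1 + np.sqrt(q)) ** 2             # Marchenko-Pastur upper bound
lambda_min = (1 - np.sqrt(q)) ** 2             # Marchenko-Pastur lower bound
outside = eigvals[(eigvals > lambda_max) | (eigvals < lambda_min)]
print("MP bounds:", round(lambda_min, 2), round(lambda_max, 2))
print("eigenvalues outside the bounds:", np.round(outside, 2))
```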
Susceptibility Testing of Medically Important Parasites.
Genetu Bayih, Abebe; Debnath, Anjan; Mitre, Edward; Huston, Christopher D; Laleu, Benoît; Leroy, Didier; Blasco, Benjamin; Campo, Brice; Wells, Timothy N C; Willis, Paul A; Sjö, Peter; Van Voorhis, Wesley C; Pillai, Dylan R
2017-07-01
In the last 2 decades, renewed attention to neglected tropical diseases (NTDs) has spurred the development of antiparasitic agents, especially in light of emerging drug resistance. The need for new drugs has required in vitro screening methods using parasite culture. Furthermore, clinical laboratories sought to correlate in vitro susceptibility methods with treatment outcomes, most notably with malaria. Parasites with their various life cycles present greater complexity than bacteria, for which standardized susceptibility methods exist. This review catalogs the state-of-the-art methodologies used to evaluate the effects of drugs on key human parasites from the point of view of drug discovery as well as the need for laboratory methods that correlate with clinical outcomes. Copyright © 2017 American Society for Microbiology.
NASA Astrophysics Data System (ADS)
Lu, Feng; Liu, Kang; Duan, Yingying; Cheng, Shifen; Du, Fei
2018-07-01
A better characterization of the traffic influence among urban roads is crucial for traffic control and traffic forecasting. Spatial heterogeneity greatly influences how the extent and degree of road traffic correlation should be modeled, which is usually neglected by the traditional distance-based method. In this paper, we propose a traffic-enhanced community detection approach to spatially reveal the traffic correlation in city road networks. First, the road network is modeled as a traffic-enhanced dual graph, with the closeness between two road segments determined not only by their topological connection but also by the traffic correlation between them. Then a flow-based community detection algorithm called Infomap is utilized to identify the road segment clusters. Evaluated by Moran's I, the Calinski-Harabasz index and a traffic interpolation application, we find that, compared to the distance-based method and the community-based method, our proposed traffic-enhanced community-based method better captures the extent of traffic relevance, as both the topological structure of the road network and the traffic correlations among urban roads are considered. It can be used in more traffic-related applications, such as traffic forecasting, traffic control and guidance.
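The community detection step described above can be illustrated with a minimal sketch in python-igraph: a toy dual graph is built whose edge weights blend topological closeness with an assumed traffic-correlation term, and Infomap is run on those weights. The edge list, correlation values and mixing parameter alpha are hypothetical stand-ins, not the paper's actual road-network data or weighting scheme.

```python
import igraph as ig

# Hypothetical dual graph: vertices are road segments, edges connect adjacent segments.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]
topo_weight = [1.0, 1.0, 1.0, 1.0, 1.0]       # topological closeness (assumed)
traffic_corr = [0.9, 0.2, 0.8, 0.1, 0.7]      # correlation of flow series (assumed)

alpha = 0.5  # assumed mixing parameter between topology and traffic correlation
weights = [alpha * t + (1 - alpha) * c for t, c in zip(topo_weight, traffic_corr)]

g = ig.Graph(edges=edges)
clusters = g.community_infomap(edge_weights=weights)
print(clusters.membership)   # cluster label assigned to each road segment
```

In practice the weights would come from measured traffic series, and the resulting clusters could then be scored with Moran's I or the Calinski-Harabasz index as the abstract describes.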
Efficient Strategies for Estimating the Spatial Coherence of Backscatter
Hyun, Dongwoon; Crowley, Anna Lisa C.; Dahl, Jeremy J.
2017-01-01
The spatial coherence of ultrasound backscatter has been proposed to reduce clutter in medical imaging, to measure the anisotropy of the scattering source, and to improve the detection of blood flow. These techniques rely on correlation estimates that are obtained using computationally expensive strategies. In this study, we assess existing spatial coherence estimation methods and propose three computationally efficient modifications: a reduced kernel, a downsampled receive aperture, and the use of an ensemble correlation coefficient. The proposed methods are implemented in simulation and in vivo studies. Reducing the kernel to a single sample improved computational throughput and improved axial resolution. Downsampling the receive aperture was found to have negligible effect on estimator variance, and improved computational throughput by an order of magnitude for a downsample factor of 4. The ensemble correlation estimator demonstrated lower variance than the currently used average correlation. Combining the three methods, the throughput was improved 105-fold in simulation with a downsample factor of 4 and 20-fold in vivo with a downsample factor of 2. PMID:27913342
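As a rough illustration of the estimator comparison above, the sketch below contrasts an average of per-pair correlations with a pooled ensemble correlation across a toy receive aperture. The array sizes are arbitrary, and the ensemble formula is one plausible reading of the pooled estimator, not necessarily the authors' exact implementation.

```python
import numpy as np

def average_correlation(rf, lag):
    """Mean of per-pair normalized correlations at a given element lag.
    rf: (n_elements, n_samples) array of receive-channel signals."""
    n = rf.shape[0]
    vals = []
    for i in range(n - lag):
        a, b = rf[i], rf[i + lag]
        vals.append(np.dot(a, b) / np.sqrt(np.dot(a, a) * np.dot(b, b)))
    return np.mean(vals)

def ensemble_correlation(rf, lag):
    """Single correlation estimate pooling all element pairs at a given lag."""
    n = rf.shape[0]
    num = den_a = den_b = 0.0
    for i in range(n - lag):
        a, b = rf[i], rf[i + lag]
        num += np.dot(a, b)
        den_a += np.dot(a, a)
        den_b += np.dot(b, b)
    return num / np.sqrt(den_a * den_b)

rng = np.random.default_rng(0)
rf = rng.standard_normal((32, 16))  # toy receive-aperture data
print(average_correlation(rf, 1), ensemble_correlation(rf, 1))
```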
A cephalometric study to determine the plane of occlusion in completely edentulous patients.
Hindocha, Amit D; Vartak, Vikas N; Bhandari, Aruna J; Dudani, Mohit T
2013-01-01
Determination of the plane of occlusion in completely edentulous patients with the help of the ala-tragus line (Camper's plane) may be questioned. An attempt to devise an alternative method to determine the orientation of the plane of occlusion was made. Cephalometric analysis was used to identify whether a correlation exists between the plane of occlusion of dentulous Indian individuals and other stable cranial landmarks. A negative correlation was found to exist between the occlusal Plane-FH plane angle and the porion-nasion-anterior nasal spine (PoNANS) angle. From the derived mathematical correlation, it was concluded that the angulation of the occlusal plane in completely edentulous subjects may be determined by taking a cephalogram at the diagnostic stage. Further, the clinical applicability of the derived mathematical formula (while determining the plane of occlusion) was tested on completely edentulous patients.
Multifractal detrended cross-correlation analysis in the MENA area
NASA Astrophysics Data System (ADS)
El Alaoui, Marwane; Benbachir, Saâd
2013-12-01
In this paper, we investigated multifractal cross-correlations qualitatively and quantitatively using a cross-correlation test and the multifractal detrended cross-correlation analysis (MF-DCCA) method for markets in the MENA area. We used cross-correlation coefficients to measure the level of this correlation. The analysis concerns four stock market indices of Morocco, Tunisia, Egypt and Jordan. The countries chosen are signatories to the Agadir Agreement concerning the establishment of a free trade area comprising Arab Mediterranean countries. We computed the bivariate generalized Hurst exponent, the Rényi exponent and the singularity spectrum for each pair of indices to measure the cross-correlations quantitatively. By analyzing the results, we found the existence of multifractal cross-correlations between all of these markets. We also compared the spectrum widths of these indices and identified which pairs of indices have strong multifractal cross-correlations.
Further studies using matched filter theory and stochastic simulation for gust loads prediction
NASA Technical Reports Server (NTRS)
Scott, Robert C.; Pototzky, Anthony S.; Perry, Boyd III
1993-01-01
This paper describes two analysis methods -- one deterministic, the other stochastic -- for computing maximized and time-correlated gust loads for aircraft with nonlinear control systems. The first method is based on matched filter theory; the second is based on stochastic simulation. The paper summarizes the methods, discusses the selection of gust intensity for each method and presents numerical results. A strong similarity between the results from the two methods is seen to exist for both linear and nonlinear configurations.
NASA Astrophysics Data System (ADS)
Matsumoto, Kensaku; Okada, Takashi; Takeuchi, Atsuo; Yazawa, Masato; Uchibori, Sumio; Shimizu, Yoshihiko
Field measurement with the Self Potential Method using copper sulfate electrodes was performed at the base of a riverbank of the WATARASE River, which has a leakage problem, in order to examine the leakage characteristics. The measurement results showed the typical S-shape that indicates the existence of flowing groundwater. The results agreed well with measurements made by the Ministry of Land, Infrastructure and Transport. Results of 1 m depth ground temperature detection and chain-array detection also showed good agreement with the results of the Self Potential Method. The correlation between the self-potential value and groundwater velocity was examined in a model experiment, and the result showed an apparent correlation. These results indicate that the Self Potential Method is an effective way to examine the characteristics of groundwater at the base of a riverbank with a leakage problem.
NASA Astrophysics Data System (ADS)
Hu, Shunren; Chen, Weimin; Liu, Lin; Gao, Xiaoxia
2010-03-01
A bridge structural health monitoring system is a typical multi-sensor measurement system, since multiple parameters of the bridge structure are collected from monitoring sites on river-spanning bridges. The bridge structure monitored by multiple sensors is a single entity; when subjected to external action, different bridge structure parameters respond differently. Therefore, numerous correlation relations should exist among the data acquired by the sensors, and the complexity of these relations is determined by the complexity of the bridge structure. Traditionally, correlation analysis among monitoring sites has mainly been considered in terms of physical locations; unfortunately, this approach is too simple to describe the correlation in detail. This paper analyzes the correlation among bridge monitoring sites according to the bridge structural data, defines the correlation of bridge monitoring sites and describes its several forms, and then integrates correlation theory from data mining and signal systems to establish a correlation model that quantitatively describes the correlation among the bridge monitoring sites. Finally, the Chongqing Mashangxi Yangtze River bridge health measurement system is taken as the research object to diagnose sensor faults, and simulation results verify the effectiveness of the designed method and the theoretical discussion.
Determine the Compressive Strength of Calcium Silicate Bricks by Combined Nondestructive Method
2014-01-01
The paper deals with the application of combined nondestructive method for assessment of compressive strength of calcium silicate bricks. In this case, it is a combination of the rebound hammer method and ultrasonic pulse method. Calibration relationships for determining compressive strength of calcium silicate bricks obtained from nondestructive parameter testing for the combined method as well as for the L-type Schmidt rebound hammer and ultrasonic pulse method are quoted here. Calibration relationships are known for their close correlation and are applicable in practice. The highest correlation between parameters from nondestructive measurement and predicted compressive strength is obtained using the SonReb combined nondestructive method. Combined nondestructive SonReb method was proved applicable for determination of compressive strength of calcium silicate bricks at checking tests in a production plant and for evaluation of bricks built in existing masonry structures. PMID:25276864
Yoshie, Ayano; Kanda, Ayato; Nakamura, Takahiro; Igusa, Hisao; Hara, Setsuko
2009-01-01
Although there are various determination methods for gamma-oryzanol contained in rice bran oil by absorptiometry, normal-phase HPLC, and reversed-phase HPLC, their accuracies and the correlations among them have not been revealed yet. Chloroform-containing mixed solvents are widely used as mobile phases in some HPLC methods, but researchers have been apprehensive about their use in terms of safety for the human body and the environment. In the present study, a simple and accurate determination method was developed by improving the reversed-phase HPLC method. This novel HPLC method uses methanol/acetonitrile/acetic acid (52/45/3 v/v/v), a non-chlorinated solvent, as the mobile phase, and shows an excellent linearity (y = 0.9527x + 0.1241, R² = 0.9974) with absorptiometry. The mean relative errors among the three existing methods and the novel method, determined by adding fixed amounts of gamma-oryzanol into refined rice salad oil, were -4.7% for the absorptiometry, -6.8% for the existing normal-phase HPLC, +4.6% for the existing reversed-phase HPLC, and -1.6% for the novel reversed-phase HPLC method. Gamma-oryzanol contents in 12 kinds of crude rice bran oils obtained from different sources were determined by the four methods. The mean contents of those oils were 1.75±0.18% for the absorptiometry, 1.29±0.11% for the existing normal-phase HPLC, 1.51±0.10% for the existing reversed-phase HPLC, and 1.54±0.19% for the novel reversed-phase HPLC method.
Correlation complementarity yields Bell monogamy relations.
Kurzyński, P; Paterek, T; Ramanathan, R; Laskowski, W; Kaszlikowski, D
2011-05-06
We present a method to derive Bell monogamy relations by connecting the complementarity principle with quantum nonlocality. The resulting monogamy relations are stronger than those obtained from the no-signaling principle alone. In many cases, they yield tight quantum bounds on the amount of violation of single and multiple qubit correlation Bell inequalities. In contrast with the two-qubit case, a rich structure of possible violation patterns is shown to exist in the multipartite scenario.
A Ranking Approach to Genomic Selection.
Blondel, Mathieu; Onogi, Akio; Iwata, Hiroyoshi; Ueda, Naonori
2015-01-01
Genomic selection (GS) is a recent selective breeding method which uses predictive models based on whole-genome molecular markers. Until now, existing studies formulated GS as the problem of modeling an individual's breeding value for a particular trait of interest, i.e., as a regression problem. To assess predictive accuracy of the model, the Pearson correlation between observed and predicted trait values was used. In this paper, we propose to formulate GS as the problem of ranking individuals according to their breeding value. Our proposed framework allows us to employ machine learning methods for ranking which had previously not been considered in the GS literature. To assess ranking accuracy of a model, we introduce a new measure originating from the information retrieval literature called normalized discounted cumulative gain (NDCG). NDCG rewards more strongly models which assign a high rank to individuals with high breeding value. Therefore, NDCG reflects a prerequisite objective in selective breeding: accurate selection of individuals with high breeding value. We conducted a comparison of 10 existing regression methods and 3 new ranking methods on 6 datasets, consisting of 4 plant species and 25 traits. Our experimental results suggest that tree-based ensemble methods including McRank, Random Forests and Gradient Boosting Regression Trees achieve excellent ranking accuracy. RKHS regression and RankSVM also achieve good accuracy when used with an RBF kernel. Traditional regression methods such as Bayesian lasso, wBSR and BayesC were found less suitable for ranking. Pearson correlation was found to correlate poorly with NDCG. Our study suggests two important messages. First, ranking methods are a promising research direction in GS. Second, NDCG can be a useful evaluation measure for GS.
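A minimal sketch of the NDCG measure discussed above is given below, using a common gain/discount formulation from the information retrieval literature; the paper's exact definitions of gain and truncation may differ, and the breeding values and predictions shown are invented for illustration.

```python
import numpy as np

def dcg(relevance):
    """Discounted cumulative gain of a relevance list given in ranked order."""
    relevance = np.asarray(relevance, dtype=float)
    discounts = np.log2(np.arange(2, relevance.size + 2))
    return np.sum((2 ** relevance - 1) / discounts)

def ndcg(true_values, predicted_scores, k=None):
    """NDCG of ranking individuals by predicted score against true breeding values."""
    order = np.argsort(predicted_scores)[::-1][:k]      # predicted ranking, top-k
    ideal = np.sort(true_values)[::-1][:k]              # best possible ranking
    return dcg(np.asarray(true_values)[order]) / dcg(ideal)

true_bv = np.array([3.0, 1.0, 2.0, 0.0])     # hypothetical breeding values
pred    = np.array([2.5, 0.5, 2.6, 0.1])     # hypothetical model predictions
print(ndcg(true_bv, pred, k=2))
```

As the abstract argues, this score rewards models that place high-breeding-value individuals near the top of the ranking, which is exactly the selection decision breeders have to make.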
n-Dimensional Discrete Cat Map Generation Using Laplace Expansions.
Wu, Yue; Hua, Zhongyun; Zhou, Yicong
2016-11-01
Different from existing methods that use matrix multiplications and have high computation complexity, this paper proposes an efficient generation method of n-dimensional (nD) Cat maps using Laplace expansions. New parameters are also introduced to control the spatial configurations of the nD Cat matrix. Thus, the proposed method provides an efficient way to mix dynamics of all dimensions at one time. To investigate its implementations and applications, we further introduce a fast implementation algorithm of the proposed method with time complexity O(n⁴) and a pseudorandom number generator using the Cat map generated by the proposed method. The experimental results show that, compared with existing generation methods, the proposed method has a larger parameter space and simpler algorithm complexity, generates nD Cat matrices with a lower inner correlation, and thus yields more random and unpredictable outputs of nD Cat maps.
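For context, the sketch below iterates the classical two-dimensional Arnold Cat map on a toy lattice. It only illustrates the family of maps being generalized; it is not the paper's n-dimensional Laplace-expansion construction or its parameterization.

```python
import numpy as np

# Classical 2D Arnold Cat map on an N x N integer lattice.
A = np.array([[1, 1],
              [1, 2]])          # det(A) = 1, so the map is invertible mod N

def cat_map(points, N):
    """Apply one iteration of the Cat map to integer lattice points, shape (2, M)."""
    return (A @ points) % N

N = 101
pts = np.array([[3, 7, 50],
                [9, 2, 60]])
print(cat_map(pts, N))
```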
DOE Office of Scientific and Technical Information (OSTI.GOV)
R.I. Rudyka; Y.E. Zingerman; K.G. Lavrov
Up-to-date mathematical methods, such as correlation analysis and expert systems, are employed in creating a model of the coking process. Automatic coking-control systems developed by Giprokoks rule out human error. At an existing coke battery, after introducing automatic control, the heating-gas consumption is reduced by ≥5%.
NASA Astrophysics Data System (ADS)
Farquharson, Michael J.; Bagshaw, Andrew P.; Porter, John B.; Abeysinghe, R. D.
2000-05-01
A system based on the detection of K-shell x-ray fluorescence (XRF) has been used to investigate whether a correlation exists between the concentration of iron in the skin and the concentration of iron in the liver, as the degree of iron loading increases. The motivation behind this work is to develop a non-invasive method of determining the extent of the body's iron stores via measurements on the skin, in order to monitor the efficacy of chelation therapy administered to patients with β-thalassaemia. Sprague-Dawley rats were iron loaded via injections of iron dextran and subsequently treated with the iron chelator CP94. The non-haem iron concentrations of the liver, heart and spleen were determined using bathophenanthroline sulphonate as the chromogen reagent. Samples of abdominal skin were taken and the iron concentrations determined using XRF. A strong correlation between the skin iron concentration and the liver iron concentration has been demonstrated (R² = 0.86). Similar correlations exist for the heart and the spleen. These results show that this method holds great potential as a tool in the diagnosis and treatment of hereditary haemochromatosis and β-thalassaemia.
Computing thermal Wigner densities with the phase integration method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beutier, J.; Borgis, D.; Vuilleumier, R.
2014-08-28
We discuss how the Phase Integration Method (PIM), recently developed to compute symmetrized time correlation functions [M. Monteferrante, S. Bonella, and G. Ciccotti, Mol. Phys. 109, 3015 (2011)], can be adapted to sampling/generating the thermal Wigner density, a key ingredient, for example, in many approximate schemes for simulating quantum time dependent properties. PIM combines a path integral representation of the density with a cumulant expansion to represent the Wigner function in a form calculable via existing Monte Carlo algorithms for sampling noisy probability densities. The method is able to capture highly non-classical effects such as correlation among the momenta and coordinates parts of the density, or correlations among the momenta themselves. By using alternatives to cumulants, it can also indicate the presence of negative parts of the Wigner density. Both properties are demonstrated by comparing PIM results to those of reference quantum calculations on a set of model problems.
Computing thermal Wigner densities with the phase integration method.
Beutier, J; Borgis, D; Vuilleumier, R; Bonella, S
2014-08-28
We discuss how the Phase Integration Method (PIM), recently developed to compute symmetrized time correlation functions [M. Monteferrante, S. Bonella, and G. Ciccotti, Mol. Phys. 109, 3015 (2011)], can be adapted to sampling/generating the thermal Wigner density, a key ingredient, for example, in many approximate schemes for simulating quantum time dependent properties. PIM combines a path integral representation of the density with a cumulant expansion to represent the Wigner function in a form calculable via existing Monte Carlo algorithms for sampling noisy probability densities. The method is able to capture highly non-classical effects such as correlation among the momenta and coordinates parts of the density, or correlations among the momenta themselves. By using alternatives to cumulants, it can also indicate the presence of negative parts of the Wigner density. Both properties are demonstrated by comparing PIM results to those of reference quantum calculations on a set of model problems.
Methods for meta-analysis of multiple traits using GWAS summary statistics.
Ray, Debashree; Boehnke, Michael
2018-03-01
Genome-wide association studies (GWAS) for complex diseases have focused primarily on single-trait analyses for disease status and disease-related quantitative traits. For example, GWAS on risk factors for coronary artery disease analyze genetic associations of plasma lipids such as total cholesterol, LDL-cholesterol, HDL-cholesterol, and triglycerides (TGs) separately. However, traits are often correlated and a joint analysis may yield increased statistical power for association over multiple univariate analyses. Recently several multivariate methods have been proposed that require individual-level data. Here, we develop metaUSAT (where USAT is unified score-based association test), a novel unified association test of a single genetic variant with multiple traits that uses only summary statistics from existing GWAS. Although the existing methods either perform well when most correlated traits are affected by the genetic variant in the same direction or are powerful when only a few of the correlated traits are associated, metaUSAT is designed to be robust to the association structure of correlated traits. metaUSAT does not require individual-level data and can test genetic associations of categorical and/or continuous traits. One can also use metaUSAT to analyze a single trait over multiple studies, appropriately accounting for overlapping samples, if any. metaUSAT provides an approximate asymptotic P-value for association and is computationally efficient for implementation at a genome-wide level. Simulation experiments show that metaUSAT maintains proper type-I error at low error levels. It has similar and sometimes greater power to detect association across a wide array of scenarios compared to existing methods, which are usually powerful for some specific association scenarios only. When applied to plasma lipids summary data from the METSIM and the T2D-GENES studies, metaUSAT detected genome-wide significant loci beyond the ones identified by univariate analyses. Evidence from larger studies suggest that the variants additionally detected by our test are, indeed, associated with lipid levels in humans. In summary, metaUSAT can provide novel insights into the genetic architecture of a common disease or traits. © 2017 WILEY PERIODICALS, INC.
Hammerly, Susan C; Morrow, Michael E; Johnson, Jeff A
2013-11-01
The primary goal of captive breeding programmes for endangered species is to prevent extinction, a component of which includes the preservation of genetic diversity and avoidance of inbreeding. This is typically accomplished by minimizing mean kinship in the population, thereby maintaining equal representation of the genetic founders used to initiate the captive population. If errors in the pedigree do exist, such an approach becomes less effective for minimizing inbreeding depression. In this study, both pedigree- and DNA-based methods were used to assess whether inbreeding depression existed in the captive population of the critically endangered Attwater's Prairie-chicken (Tympanuchus cupido attwateri), a subspecies of prairie grouse that has experienced a significant decline in abundance and concurrent reduction in neutral genetic diversity. When examining the captive population for signs of inbreeding, variation in pedigree-based inbreeding coefficients (f(pedigree)) was less than that obtained from DNA-based methods (f(DNA)). Mortality of chicks and adults in captivity were also positively correlated with parental relatedness (r(DNA)) and f(DNA), respectively, while no correlation was observed with pedigree-based measures when controlling for additional variables such as age, breeding facility, gender and captive/release status. Further, individual homozygosity by loci (HL) and parental rDNA values were positively correlated with adult mortality in captivity and the occurrence of a lethal congenital defect in chicks, respectively, suggesting that inbreeding may be a contributing factor increasing the frequency of this condition among Attwater's Prairie-chickens. This study highlights the importance of using DNA-based methods to better inform management decisions when pedigrees are incomplete or errors may exist due to uncertainty in pairings. © 2013 John Wiley & Sons Ltd.
Stability of phases of a square-well fluid within superposition approximation
NASA Astrophysics Data System (ADS)
Piasecki, Jarosław; Szymczak, Piotr; Kozak, John J.
2013-04-01
The analytic and numerical methods introduced previously to study the phase behavior of hard sphere fluids starting from the Yvon-Born-Green (YBG) equation under the Kirkwood superposition approximation (KSA) are adapted to the square-well fluid. We are able to show conclusively that the YBG equation under the KSA closure when applied to the square-well fluid: (i) predicts the existence of an absolute stability limit corresponding to freezing where undamped oscillations appear in the long-distance behavior of correlations, (ii) in accordance with earlier studies reveals the existence of a liquid-vapor transition by the appearance of a "near-critical region" where monotonically decaying correlations acquire very long range, although the system never loses stability.
CFD-Predicted Tile Heating Bump Factors Due to Tile Overlay Repairs
NASA Technical Reports Server (NTRS)
Lessard, Victor R.
2006-01-01
A Computational Fluid Dynamics investigation of the Orbiter's Tile Overlay Repair (TOR) is performed to assess the aeroheating Damage Assessment Team's (DAT) existing heating correlation method for protuberance interference heating on the surrounding thermal protection system. Aerothermodynamic heating analyses are performed for TORs at the design reference damage locations, body points 1800 and 1075, for a Mach 17.9, α = 39 deg STS-107 flight trajectory point with laminar flow. Six different cases are considered. The computed peak heating bump factors on the surrounding tiles are below the DAT's heating bump factor values for the smooth tile cases. However, for the uneven tile cases the peak interference heating is shown to be considerably higher than the existing correlation prediction.
Sector Identification in a Set of Stock Return Time Series Traded at the London Stock Exchange
NASA Astrophysics Data System (ADS)
Coronnello, C.; Tumminello, M.; Lillo, F.; Micciche, S.; Mantegna, R. N.
2005-09-01
We compare some methods recently used in the literature to detect the existence of a certain degree of common behavior of stock returns belonging to the same economic sector. Specifically, we discuss methods based on random matrix theory and hierarchical clustering techniques. We apply these methods to a portfolio of stocks traded at the London Stock Exchange. The investigated time series are recorded both at a daily time horizon and at a 5-minute time horizon. The correlation coefficient matrix is very different at different time horizons confirming that more structured correlation coefficient matrices are observed for long time horizons. All the considered methods are able to detect economic information and the presence of clusters characterized by the economic sector of stocks. However, different methods present a different degree of sensitivity with respect to different sectors. Our comparative analysis suggests that the application of just a single method could not be able to extract all the economic information present in the correlation coefficient matrix of a stock portfolio.
PageRank as a method to rank biomedical literature by importance.
Yates, Elliot J; Dixon, Louise C
2015-01-01
Optimal ranking of literature importance is vital in overcoming article overload. Existing ranking methods are typically based on raw citation counts, giving a sum of 'inbound' links with no consideration of citation importance. PageRank, an algorithm originally developed for ranking webpages at the search engine Google, could potentially be adapted to bibliometrics to quantify the relative importance weightings of a citation network. This article seeks to validate such an approach on the freely available, PubMed Central open access subset (PMC-OAS) of biomedical literature. On-demand cloud computing infrastructure was used to extract a citation network from over 600,000 full-text PMC-OAS articles. PageRanks and citation counts were calculated for each node in this network. PageRank is highly correlated with citation count (R = 0.905, P < 0.01) and we thus validate the former as a surrogate of literature importance. Furthermore, the algorithm can be run in trivial time on cheap, commodity cluster hardware, lowering the barrier to entry for resource-limited open access organisations. PageRank can be trivially computed on commodity cluster hardware and is linearly correlated with citation count. Given its putative benefits in quantifying relative importance, we suggest it may enrich the citation network, thereby overcoming the existing inadequacy of citation counts alone. We thus suggest PageRank as a feasible supplement to, or replacement of, existing bibliometric ranking methods.
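A minimal sketch of the comparison above, using networkx on a toy directed citation graph; the article identifiers are hypothetical and the damping factor 0.85 is the conventional default, not necessarily the value used in the study.

```python
import networkx as nx

# Toy citation graph: an edge A -> B means article A cites article B.
citations = [("A", "B"), ("C", "B"), ("D", "B"), ("B", "E"), ("D", "E")]
g = nx.DiGraph(citations)

pagerank = nx.pagerank(g, alpha=0.85)      # damping factor as in the original algorithm
counts = dict(g.in_degree())               # raw 'inbound' citation counts
for node in g:
    print(node, counts[node], round(pagerank[node], 3))
```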
NASA Astrophysics Data System (ADS)
Fan, Qingju; Wu, Yonghong
2015-08-01
In this paper, we develop a new method for the multifractal characterization of two-dimensional nonstationary signals, based on detrended fluctuation analysis (DFA). By applying it to two artificially generated signals, a two-component ARFIMA process and a binomial multifractal model, we show that the new method can reliably determine the multifractal scaling behavior of two-dimensional signals. We also illustrate the applications of this method in finance and physiology. The results show that the two-dimensional signals under investigation exhibit power-law correlations; the electricity market series, consisting of electricity price and trading volume, is multifractal, while the two-dimensional EEG signal in sleep recorded for a single patient is only weakly multifractal. The new method based on detrended fluctuation analysis may add diagnostic power to existing statistical methods.
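Since several entries above build on detrended fluctuation analysis, a minimal one-dimensional DFA sketch is given below; it uses order-1 detrending on a synthetic white-noise series and is not the paper's two-dimensional extension.

```python
import numpy as np

def dfa(x, scales):
    """One-dimensional detrended fluctuation analysis with order-1 detrending."""
    profile = np.cumsum(x - np.mean(x))
    flucts = []
    for s in scales:
        n_seg = len(profile) // s
        f2 = []
        for v in range(n_seg):
            seg = profile[v * s:(v + 1) * s]
            t = np.arange(s)
            coeffs = np.polyfit(t, seg, 1)                     # local linear trend
            f2.append(np.mean((seg - np.polyval(coeffs, t)) ** 2))
        flucts.append(np.sqrt(np.mean(f2)))
    return np.array(flucts)

rng = np.random.default_rng(1)
x = rng.standard_normal(4096)                                  # uncorrelated test series
scales = np.array([16, 32, 64, 128, 256])
F = dfa(x, scales)
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]            # scaling exponent (~0.5 for white noise)
print(alpha)
```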
NASA Astrophysics Data System (ADS)
Hong, Wei; Wang, Shaoping; Liu, Haokuo; Tomovic, Mileta M.; Chao, Zhang
2017-01-01
Inductive debris detection is an effective method for monitoring mechanical wear and can be used to prevent serious accidents. However, debris detection during the early phase of mechanical wear, when small debris (<100 μm) is generated, requires a sensor with high sensitivity with respect to background noise. In order to detect smaller debris with existing sensors, this paper presents a hybrid method which combines a band-pass filter and a correlation algorithm to improve the sensor signal-to-noise ratio (SNR). The simulation results indicate that the SNR is improved by a factor of at least 2.67 after signal processing. In other words, this method ensures debris identification when the sensor's SNR is greater than -3 dB. Thus, smaller debris can be detected at the same SNR. Finally, the effectiveness of the proposed method is experimentally validated.
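A rough sketch of the two-stage idea described above, assuming a hypothetical sampling rate, pass band and debris-signature frequency; the filter order, pulse amplitude and reference template are illustrative choices, not the authors' actual sensor parameters.

```python
import numpy as np
from scipy import signal

fs = 10_000.0                        # assumed sampling rate, Hz
t = np.arange(0, 0.2, 1 / fs)
f0 = 200.0                           # assumed debris-signature frequency, Hz
template = np.sin(2 * np.pi * f0 * t[:200])            # reference debris pulse

rng = np.random.default_rng(2)
pulse = 0.2 * np.sin(2 * np.pi * f0 * (t - 0.05)) * ((t > 0.05) & (t < 0.07))
raw = pulse + rng.standard_normal(t.size)              # weak pulse buried in noise

# Stage 1: band-pass filter around the debris signature.
sos = signal.butter(4, [100, 300], btype="bandpass", fs=fs, output="sos")
filtered = signal.sosfiltfilt(sos, raw)

# Stage 2: correlate with the reference pulse to localize the debris event.
corr = signal.correlate(filtered, template, mode="same")
print("peak correlation index:", np.argmax(np.abs(corr)))
```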
NASA Astrophysics Data System (ADS)
Bekkouche, Toufik; Bouguezel, Saad
2018-03-01
We propose a real-to-real image encryption method. It is a double random amplitude encryption method based on the parametric discrete Fourier transform coupled with chaotic maps to perform the scrambling. The main idea behind this method is the introduction of a complex-to-real conversion by exploiting the inherent symmetry property of the transform in the case of real-valued sequences. This conversion allows the encrypted image to be real-valued instead of being a complex-valued image as in all existing double random phase encryption methods. The advantage is to store or transmit only one image instead of two images (real and imaginary parts). Computer simulation results and comparisons with the existing double random amplitude encryption methods are provided for peak signal-to-noise ratio, correlation coefficient, histogram analysis, and key sensitivity.
NASA Astrophysics Data System (ADS)
Fitzpatrick, Matthew R. C.; Kennett, Malcolm P.
2018-05-01
We develop a formalism that allows the study of correlations in space and time in both the superfluid and Mott insulating phases of the Bose-Hubbard Model. Specifically, we obtain a two particle irreducible effective action within the contour-time formalism that allows for both equilibrium and out of equilibrium phenomena. We derive equations of motion for both the superfluid order parameter and two-point correlation functions. To assess the accuracy of this formalism, we study the equilibrium solution of the equations of motion and compare our results to existing strong coupling methods as well as exact methods where possible. We discuss applications of this formalism to out of equilibrium situations.
Summary of AH-1G flight vibration data for validation of coupled rotor-fuselage analyses
NASA Technical Reports Server (NTRS)
Dompka, R. V.; Cronkhite, J. D.
1986-01-01
Under a NASA research program designated DAMVIBS (Design Analysis Methods for VIBrationS), four U.S. helicopter industry participants (Bell Helicopter, Boeing Vertol, McDonnell Douglas Helicopter, and Sikorsky Aircraft) are to apply existing analytical methods for calculating coupled rotor-fuselage vibrations of the AH-1G helicopter for correlation with flight test data from an AH-1G Operational Load Survey (OLS) test program. Bell Helicopter, as the manufacturer of the AH-1G, was asked to provide pertinent rotor data and to collect the OLS flight vibration data needed to perform the correlations. The analytical representation of the fuselage structure is based on a NASTRAN finite element model (FEM) developed by Bell which has been extensively documented and correlated with ground vibration tests. The AH-1G FEM was provided to each of the participants for use in their coupled rotor-fuselage analyses. This report describes the AH-1G OLS flight test program and provides the flight conditions and measured vibration data to be used by each participant in their correlation effort. In addition, the mechanical, structural, inertial and aerodynamic data for the AH-1G two-bladed teetering main rotor system are presented. Furthermore, modifications to the NASTRAN FEM of the fuselage structure that are necessary to make it compatible with the OLS test article are described. The AH-1G OLS flight test data were found to be well documented and to provide a sound basis for evaluating currently existing analysis methods used for calculation of coupled rotor-fuselage vibrations.
3D displacement field measurement with correlation based on the micro-geometrical surface texture
NASA Astrophysics Data System (ADS)
Bubaker-Isheil, Halima; Serri, Jérôme; Fontaine, Jean-François
2011-07-01
Image correlation methods are widely used in experimental mechanics to obtain displacement field measurements. Currently, these methods are applied using digital images of the initial and deformed surfaces sprayed with black or white paint. Speckle patterns are then captured and the correlation is performed with a high degree of accuracy to an order of 0.01 pixels. In 3D, however, stereo-correlation leads to a lower degree of accuracy. Correlation techniques are based on the search for a sub-image (or pattern) displacement field. The work presented in this paper introduces a new correlation-based approach for 3D displacement field measurement that uses an additional 3D laser scanner and a CMM (Coordinate Measurement Machine). Unlike most existing methods that require the presence of markers on the observed object (such as black speckle, grids or random patterns), this approach relies solely on micro-geometrical surface textures such as waviness, roughness and aperiodic random defects. The latter are assumed to remain sufficiently small thus providing an adequate estimate of the particle displacement. The proposed approach can be used in a wide range of applications such as sheet metal forming with large strains. The method proceeds by first obtaining cloud points using the 3D laser scanner mounted on a CMM. These points are used to create 2D maps that are then correlated. In this respect, various criteria have been investigated for creating maps consisting of patterns, which facilitate the correlation procedure. Once the maps are created, the correlation between both configurations (initial and moved) is carried out using traditional methods developed for field measurements. Measurement validation was conducted using experiments in 2D and 3D with good results for rigid displacements in 2D, 3D and 2D rotations.
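The core correlation step can be illustrated with a minimal integer-pixel sketch: a patch from a reference height map is matched against a shifted map by zero-normalized cross-correlation over a small search window. The synthetic roughness map and rigid shift stand in for the scanned micro-geometrical texture, and the sub-pixel refinement used in practice is omitted.

```python
import numpy as np

def ncc(a, b):
    """Zero-normalized cross-correlation between two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    return np.sum(a * b) / np.sqrt(np.sum(a ** 2) * np.sum(b ** 2))

def track_patch(ref_map, def_map, top_left, size, search=5):
    """Find the integer-pixel displacement of a patch between two 2-D height maps."""
    r, c = top_left
    patch = ref_map[r:r + size, c:c + size]
    best, best_disp = -np.inf, (0, 0)
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            cand = def_map[r + dr:r + dr + size, c + dc:c + dc + size]
            if cand.shape != patch.shape:
                continue
            score = ncc(patch, cand)
            if score > best:
                best, best_disp = score, (dr, dc)
    return best_disp, best

rng = np.random.default_rng(3)
ref = rng.standard_normal((64, 64))                    # stand-in for a roughness height map
deformed = np.roll(ref, shift=(2, -1), axis=(0, 1))    # rigid shift of the surface texture
print(track_patch(ref, deformed, (20, 20), 16))        # recovers the (2, -1) displacement
```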
ERIC Educational Resources Information Center
Bond, Sarah
2017-01-01
The purpose of this mixed-methods study was to identify and describe what correlations, if any, exist between the composite Phelps Kindergarten Readiness Scale (PKRS) score, the visual-perceptual subtest of the PKRS, and reading achievement by the end of grade one. The quantitative data used in this study were the PKRS scores from 421 students…
NASA Astrophysics Data System (ADS)
Manimaran, P.; Narayana, A. C.
2018-07-01
In this paper, we study the multifractal characteristics and cross-correlation behaviour of Air Pollution Index (API) time series data through the multifractal detrended cross-correlation analysis method. We analyse the daily API records of nine air pollutants from the University of Hyderabad campus for a period of three years (2013-2016). The cross-correlation behaviour has been measured quantitatively from the Hurst scaling exponents and the singularity spectrum. From the results, it is found that the cross-correlation analysis shows anti-correlation behaviour for all 36 possible bivariate time series. We also observe the existence of multifractal nature in all the bivariate time series, many of which show strong multifractal behaviour. In particular, the hazardous particulate matter PM2.5 and the inhalable particulate matter PM10 show anti-correlated behaviour with all air pollutants.
Segmentation of time series with long-range fractal correlations.
Bernaola-Galván, P; Oliver, J L; Hackenberg, M; Coronado, A V; Ivanov, P Ch; Carpena, P
2012-06-01
Segmentation is a standard method of data analysis to identify change-points dividing a nonstationary time series into homogeneous segments. However, for long-range fractal correlated series, most of the segmentation techniques detect spurious change-points which are simply due to the heterogeneities induced by the correlations and not to real nonstationarities. To avoid this oversegmentation, we present a segmentation algorithm which takes as a reference for homogeneity, instead of a random i.i.d. series, a correlated series modeled by a fractional noise with the same degree of correlations as the series to be segmented. We apply our algorithm to artificial series with long-range correlations and show that it systematically detects only the change-points produced by real nonstationarities and not those created by the correlations of the signal. Further, we apply the method to the sequence of the long arm of human chromosome 21, which is known to have long-range fractal correlations. We obtain only three segments that clearly correspond to the three regions of different G + C composition revealed by means of a multi-scale wavelet plot. Similar results have been obtained when segmenting all human chromosome sequences, showing the existence of previously unknown huge compositional superstructures in the human genome.
A broken promise: microbiome differential abundance methods do not control the false discovery rate.
Hawinkel, Stijn; Mattiello, Federico; Bijnens, Luc; Thas, Olivier
2017-08-22
High-throughput sequencing technologies allow easy characterization of the human microbiome, but the statistical methods to analyze microbiome data are still in their infancy. Differential abundance methods aim at detecting associations between the abundances of bacterial species and subject grouping factors. The results of such methods are important to identify the microbiome as a prognostic or diagnostic biomarker or to demonstrate efficacy of prodrug or antibiotic drugs. Because of a lack of benchmarking studies in the microbiome field, no consensus exists on the performance of the statistical methods. We have compared a large number of popular methods through extensive parametric and nonparametric simulation as well as real data shuffling algorithms. The results are consistent over the different approaches and all point to an alarming excess of false discoveries. This raises great doubts about the reliability of discoveries in past studies and imperils reproducibility of microbiome experiments. To further improve method benchmarking, we introduce a new simulation tool that allows to generate correlated count data following any univariate count distribution; the correlation structure may be inferred from real data. Most simulation studies discard the correlation between species, but our results indicate that this correlation can negatively affect the performance of statistical methods. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Predicting aged pork quality using a portable raman device
USDA-ARS?s Scientific Manuscript database
Objectives: A need exists for a better on-line evaluation method for pork quality. Raman spectroscopy evaluates structure and composition of food samples, with advantage of being portable, non-invasive and insensitive to water. The objectives of this study were to evaluate the correlation between Ra...
DOT National Transportation Integrated Search
1968-05-01
Conditions arise during construction of bases with Portland cement stabilized soils which require close programming of work; therefore, time is of significant importance. That is the objective of this report: to evaluate a method by which considera...
Cross-correlations between Renminbi and four major currencies in the Renminbi currency basket
NASA Astrophysics Data System (ADS)
Wang, Gang-Jin; Xie, Chi
2013-03-01
We investigate the cross-correlations between Renminbi (CNY) and four major currencies (USD, EUR, JPY, and KRW) in the Renminbi currency basket, i.e., the cross-correlations of CNY-USD, CNY-EUR, CNY-JPY, and CNY-KRW. Qualitatively, using a statistical test in analogy to the Ljung-Box test, we find that cross-correlations significantly exist in CNY-USD, CNY-EUR, CNY-JPY, and CNY-KRW. Quantitatively, employing the detrended cross-correlation analysis (DCCA) method, we find that the cross-correlations of CNY-USD, CNY-EUR, CNY-JPY, and CNY-KRW are weakly persistent. We use the DCCA cross-correlation coefficient ρ to quantify the level of cross-correlations and find the currency weight in the Renminbi currency basket is arranged in the order of USD > EUR > JPY > KRW. Using the method of rolling windows, which can capture the time-varying cross-correlation scaling exponents, we find that: (i) CNY and USD are positively cross-correlated over time, but the cross-correlations of CNY-USD are anti-persistent during the US sub-prime crisis and the European debt crisis; (ii) the cross-correlation scaling exponents of CNY-EUR exhibit a cyclical fluctuation with a nearly two-year cycle; (iii) CNY-JPY has long-term negative cross-correlations during the European debt crisis, but CNY and KRW are positively cross-correlated.
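A minimal sketch of the DCCA cross-correlation coefficient ρ used above: detrended covariances at a given box size are combined into a correlation-like ratio. The two synthetic series share a common component so ρ comes out strongly positive; the box size and order-1 detrending are illustrative choices rather than the paper's exact settings.

```python
import numpy as np

def detrended_covariance(x, y, s):
    """Detrended covariance of two series at box size s (order-1 detrending)."""
    X, Y = np.cumsum(x - x.mean()), np.cumsum(y - y.mean())
    n_seg = len(X) // s
    t = np.arange(s)
    f2 = []
    for v in range(n_seg):
        xs, ys = X[v * s:(v + 1) * s], Y[v * s:(v + 1) * s]
        rx = xs - np.polyval(np.polyfit(t, xs, 1), t)
        ry = ys - np.polyval(np.polyfit(t, ys, 1), t)
        f2.append(np.mean(rx * ry))
    return np.mean(f2)

def rho_dcca(x, y, s):
    """DCCA cross-correlation coefficient at scale s."""
    fxy = detrended_covariance(x, y, s)
    fxx = detrended_covariance(x, x, s)
    fyy = detrended_covariance(y, y, s)
    return fxy / np.sqrt(fxx * fyy)

rng = np.random.default_rng(4)
common = rng.standard_normal(2048)
x = common + 0.5 * rng.standard_normal(2048)
y = common + 0.5 * rng.standard_normal(2048)
print(rho_dcca(x, y, 64))     # large and positive for strongly coupled series
```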
Dhikav, Vikas; Duraiswamy, Sharmila; Anand, Kuljeet Singh
2017-01-01
Introduction: The hippocampus undergoes atrophy in patients with Alzheimer's disease (AD). Hippocampal volumes can be calculated by a variety of methods using T1-weighted magnetic resonance imaging (MRI) of the brain. Medial temporal lobe atrophy (MTL) can be rated visually using T1-weighted MRI brain images. The present study was done to see if any correlation existed between hippocampal volumes and visual rating scores of the MTL using the Scheltens Visual Rating Method. Materials and Methods: We screened 84 subjects who presented to the Department of Neurology of a tertiary care hospital and enrolled forty subjects meeting the National Institute of Neurological and Communicative Disorders and Stroke-Alzheimer's Disease and Related Disorders Association (NINCDS-ADRDA) criteria. Selected patients underwent brain MRI, and T1-weighted images in a plane perpendicular to the long axis of the hippocampus were obtained. Hippocampal volumes were calculated manually using a standard protocol. The calculated hippocampal volumes were correlated with the Scheltens Visual Rating Method for rating MTL. A total of 32 cognitively normal age-matched subjects were selected to examine the same correlation in healthy subjects as well. The sensitivity and specificity of both methods were calculated and compared. Results: There was a non-significant correlation between the hippocampal volumes and MTL rating scores in the cognitively normal elderly (n = 32; Pearson correlation coefficient = 0.16, P > 0.05). In the AD group, there was a moderately strong correlation between measured hippocampal volumes and MTL rating (Pearson's correlation coefficient = −0.54; P < 0.05). There was also a moderately strong correlation between hippocampal volume and Mini-Mental State Examination score in the AD group. Manual delineation was superior to the visual method (P < 0.05). Conclusions: A good correlation was present between manual hippocampal volume measurements and MTL scores. The sensitivity and specificity of manual measurement of the hippocampus were higher than those of visual rating scores for MTL in patients with AD. PMID:28298839
Community Detection for Correlation Matrices
NASA Astrophysics Data System (ADS)
MacMahon, Mel; Garlaschelli, Diego
2015-04-01
A challenging problem in the study of complex systems is that of resolving, without prior information, the emergent, mesoscopic organization determined by groups of units whose dynamical activity is more strongly correlated internally than with the rest of the system. The existing techniques to filter correlations are not explicitly oriented towards identifying such modules and can suffer from an unavoidable information loss. A promising alternative is that of employing community detection techniques developed in network theory. Unfortunately, this approach has focused predominantly on replacing network data with correlation matrices, a procedure that we show to be intrinsically biased because of its inconsistency with the null hypotheses underlying the existing algorithms. Here, we introduce, via a consistent redefinition of null models based on random matrix theory, the appropriate correlation-based counterparts of the most popular community detection techniques. Our methods can filter out both unit-specific noise and system-wide dependencies, and the resulting communities are internally correlated and mutually anticorrelated. We also implement multiresolution and multifrequency approaches revealing hierarchically nested subcommunities with "hard" cores and "soft" peripheries. We apply our techniques to several financial time series and identify mesoscopic groups of stocks which are irreducible to a standard, sectorial taxonomy; detect "soft stocks" that alternate between communities; and discuss implications for portfolio optimization and risk management.
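As background for the random-matrix reasoning above, the sketch below applies the standard Marchenko-Pastur filter to a toy correlation matrix of synthetic returns, keeping only eigenmodes outside the random bulk. This is the conventional RMT filtering step, not the authors' redefined null models or their community detection algorithm.

```python
import numpy as np

rng = np.random.default_rng(5)
n_stocks, n_obs = 50, 500
returns = rng.standard_normal((n_obs, n_stocks))     # toy return matrix
C = np.corrcoef(returns, rowvar=False)

q = n_obs / n_stocks
lam_max = (1 + 1 / np.sqrt(q)) ** 2                  # Marchenko-Pastur upper edge
lam_min = (1 - 1 / np.sqrt(q)) ** 2                  # Marchenko-Pastur lower edge

vals, vecs = np.linalg.eigh(C)
# Keep only eigenmodes outside the random bulk; they carry group/market structure.
keep = (vals > lam_max) | (vals < lam_min)
C_filtered = (vecs[:, keep] * vals[keep]) @ vecs[:, keep].T
print("eigenvalues outside the MP bulk:", keep.sum())
```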
NASA Astrophysics Data System (ADS)
Mukharror, Darmawan Ahmad; Tiara Baiti, Isnaini; Ichsan, Muhammad; Pridina, Niomi; Triutami, Sanny
2017-10-01
Despite increasing academic research citation on biology, abundance, and the behavior of the blacktip reef sharks, the influence of reef fish population on the density of reef sharks: Carcharhinus melanopterus and Triaenodon obesus population in its habitat were largely unassessed. This present study examined the correlation between abundance of reef fishes family/species with the population of reef sharks in Southern Waters of Morotai Island. The existence of reef sharks was measured with the Audible Stationary Count (ASC) methods and the abundance of reef fishes was surveyed using Underwater Visual Census (UVC) combined with Diver Operated Video (DOV) census. The coefficient of Determination (R2) was used to investigate the degree of relationships between sharks and the specific reef fishes species. The research from 8th April to 4th June 2015 showed the strong positive correlations between the existence of reef sharks with abundance of reef fishes. The correlation values between Carcharhinus melanopterus/Triaenodon obesus with Chaetodon auriga was 0.9405, blacktip/whitetip reef sharks versus Ctenochaetus striatus was 0.9146, and Carcharhinus melanopterus/Triaenodon obesus to Chaetodon kleinii was 0.8440. As the shark can be worth more alive for shark diving tourism than dead in a fish market, the abundance of these reef fishes was important as an early indication parameter of shark existence in South Water of Morotai Island. In the long term, this highlights the importance of reef fishes abundance management in Morotai Island’s Waters to enable the establishment of appropriate and effective reef sharks conservation.
Adaptive Distributed Video Coding with Correlation Estimation using Expectation Propagation
Cui, Lijuan; Wang, Shuang; Jiang, Xiaoqian; Cheng, Samuel
2013-01-01
Distributed video coding (DVC) is rapidly increasing in popularity by the way of shifting the complexity from encoder to decoder, whereas no compression performance degrades, at least in theory. In contrast with conventional video codecs, the inter-frame correlation in DVC is explored at decoder based on the received syndromes of Wyner-Ziv (WZ) frame and side information (SI) frame generated from other frames available only at decoder. However, the ultimate decoding performances of DVC are based on the assumption that the perfect knowledge of correlation statistic between WZ and SI frames should be available at decoder. Therefore, the ability of obtaining a good statistical correlation estimate is becoming increasingly important in practical DVC implementations. Generally, the existing correlation estimation methods in DVC can be classified into two main types: pre-estimation where estimation starts before decoding and on-the-fly (OTF) estimation where estimation can be refined iteratively during decoding. As potential changes between frames might be unpredictable or dynamical, OTF estimation methods usually outperforms pre-estimation techniques with the cost of increased decoding complexity (e.g., sampling methods). In this paper, we propose a low complexity adaptive DVC scheme using expectation propagation (EP), where correlation estimation is performed OTF as it is carried out jointly with decoding of the factor graph-based DVC code. Among different approximate inference methods, EP generally offers better tradeoff between accuracy and complexity. Experimental results show that our proposed scheme outperforms the benchmark state-of-the-art DISCOVER codec and other cases without correlation tracking, and achieves comparable decoding performance but with significantly low complexity comparing with sampling method. PMID:23750314
Adaptive distributed video coding with correlation estimation using expectation propagation
NASA Astrophysics Data System (ADS)
Cui, Lijuan; Wang, Shuang; Jiang, Xiaoqian; Cheng, Samuel
2012-10-01
Distributed video coding (DVC) is rapidly increasing in popularity by the way of shifting the complexity from encoder to decoder, whereas no compression performance degrades, at least in theory. In contrast with conventional video codecs, the inter-frame correlation in DVC is explored at decoder based on the received syndromes of Wyner-Ziv (WZ) frame and side information (SI) frame generated from other frames available only at decoder. However, the ultimate decoding performances of DVC are based on the assumption that the perfect knowledge of correlation statistic between WZ and SI frames should be available at decoder. Therefore, the ability of obtaining a good statistical correlation estimate is becoming increasingly important in practical DVC implementations. Generally, the existing correlation estimation methods in DVC can be classified into two main types: pre-estimation where estimation starts before decoding and on-the-fly (OTF) estimation where estimation can be refined iteratively during decoding. As potential changes between frames might be unpredictable or dynamical, OTF estimation methods usually outperforms pre-estimation techniques with the cost of increased decoding complexity (e.g., sampling methods). In this paper, we propose a low complexity adaptive DVC scheme using expectation propagation (EP), where correlation estimation is performed OTF as it is carried out jointly with decoding of the factor graph-based DVC code. Among different approximate inference methods, EP generally offers better tradeoff between accuracy and complexity. Experimental results show that our proposed scheme outperforms the benchmark state-of-the-art DISCOVER codec and other cases without correlation tracking, and achieves comparable decoding performance but with significantly low complexity comparing with sampling method.
Adaptive Distributed Video Coding with Correlation Estimation using Expectation Propagation.
Cui, Lijuan; Wang, Shuang; Jiang, Xiaoqian; Cheng, Samuel
2012-10-15
Distributed video coding (DVC) is rapidly increasing in popularity by the way of shifting the complexity from encoder to decoder, whereas no compression performance degrades, at least in theory. In contrast with conventional video codecs, the inter-frame correlation in DVC is explored at decoder based on the received syndromes of Wyner-Ziv (WZ) frame and side information (SI) frame generated from other frames available only at decoder. However, the ultimate decoding performances of DVC are based on the assumption that the perfect knowledge of correlation statistic between WZ and SI frames should be available at decoder. Therefore, the ability of obtaining a good statistical correlation estimate is becoming increasingly important in practical DVC implementations. Generally, the existing correlation estimation methods in DVC can be classified into two main types: pre-estimation where estimation starts before decoding and on-the-fly (OTF) estimation where estimation can be refined iteratively during decoding. As potential changes between frames might be unpredictable or dynamical, OTF estimation methods usually outperforms pre-estimation techniques with the cost of increased decoding complexity (e.g., sampling methods). In this paper, we propose a low complexity adaptive DVC scheme using expectation propagation (EP), where correlation estimation is performed OTF as it is carried out jointly with decoding of the factor graph-based DVC code. Among different approximate inference methods, EP generally offers better tradeoff between accuracy and complexity. Experimental results show that our proposed scheme outperforms the benchmark state-of-the-art DISCOVER codec and other cases without correlation tracking, and achieves comparable decoding performance but with significantly low complexity comparing with sampling method.
On the interpretation of domain averaged Fermi hole analyses of correlated wavefunctions.
Francisco, E; Martín Pendás, A; Costales, Aurora
2014-03-14
Few methods allow for a physically sound analysis of chemical bonds in cases where electron correlation may be a relevant factor. The domain averaged Fermi hole (DAFH) analysis, a tool firstly proposed by Robert Ponec in the 1990's to provide interpretations of the chemical bonding existing between two fragments Ω and Ω' that divide the real space exhaustively, is one of them. This method allows for a partition of the delocalization index or bond order between Ω and Ω' into one electron contributions, but the chemical interpretation of its parameters has been firmly established only for single determinant wavefunctions. In this paper we report a general interpretation based on the concept of excluded density that is also valid for correlated descriptions. Both analytical models and actual computations on a set of simple molecules (H2, N2, LiH, and CO) are discussed, and a classification of the possible DAFH situations is presented. Our results show that this kind of analysis may reveal several correlated assisted bonding patterns that might be difficult to detect using other methods. In agreement with previous knowledge, we find that the effective bond order in covalent links decreases due to localization of electrons driven by Coulomb correlation.
Stojnic, Robert; Fu, Audrey Qiuyan; Adryan, Boris
2012-01-01
Inferring the combinatorial regulatory code of transcription factors (TFs) from genome-wide TF binding profiles is challenging. A major reason is that TF binding profiles significantly overlap and are therefore highly correlated. Clustered occurrence of multiple TFs at genomic sites may arise from chromatin accessibility and local cooperation between TFs, or binding sites may simply appear clustered if the profiles are generated from diverse cell populations. Overlaps in TF binding profiles may also result from measurements taken at closely related time intervals. It is thus of great interest to distinguish TFs that directly regulate gene expression from those that are indirectly associated with gene expression. Graphical models, in particular Bayesian networks, provide a powerful mathematical framework to infer different types of dependencies. However, existing methods do not perform well when the features (here: TF binding profiles) are highly correlated, when their association with the biological outcome is weak, and when the sample size is small. Here, we develop a novel computational method, the Neighbourhood Consistent PC (NCPC) algorithms, which deal with these scenarios much more effectively than existing methods do. We further present a novel graphical representation, the Direct Dependence Graph (DDGraph), to better display the complex interactions among variables. NCPC and DDGraph can also be applied to other problems involving highly correlated biological features. Both methods are implemented in the R package ddgraph, available as part of Bioconductor (http://bioconductor.org/packages/2.11/bioc/html/ddgraph.html). Applied to real data, our method identified TFs that specify different classes of cis-regulatory modules (CRMs) in Drosophila mesoderm differentiation. Our analysis also found depletion of the early transcription factor Twist binding at the CRMs regulating expression in visceral and somatic muscle cells at later stages, which suggests a CRM-specific repression mechanism that so far has not been characterised for this class of mesodermal CRMs. PMID:23144600
CCLasso: correlation inference for compositional data through Lasso.
Fang, Huaying; Huang, Chengcheng; Zhao, Hongyu; Deng, Minghua
2015-10-01
Direct analysis of microbial communities in the environment and human body has become more convenient and reliable owing to the advancements of high-throughput sequencing techniques for 16S rRNA gene profiling. Inferring the correlation relationships among members of microbial communities is of fundamental importance for genomic survey studies. Traditional Pearson correlation analysis treating the observed data as absolute abundances of the microbes may lead to spurious results because the data only represent relative abundances. Special care and appropriate methods are required prior to correlation analysis for these compositional data. In this article, we first discuss the correlation definition of latent variables for compositional data. We then propose a novel method called CCLasso based on least squares with an l1 penalty to infer the correlation network for latent variables of compositional data from metagenomic data. An effective alternating direction algorithm from the augmented Lagrangian method is used to solve the optimization problem. The simulation results show that CCLasso outperforms existing methods, e.g. SparCC, in edge recovery for compositional data. It also compares well with SparCC in estimating the correlation network of microbe species from the Human Microbiome Project. CCLasso is open source and freely available from https://github.com/huayingfang/CCLasso under GNU LGPL v3. Contact: dengmh@pku.edu.cn. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved.
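As a minimal illustration of the compositionality problem the abstract warns about (not a sketch of the CCLasso estimator itself), the following Python fragment simulates truly independent absolute abundances, converts them to relative abundances, and shows that naive Pearson analysis of the compositions reports spurious negative correlations; all names and parameter values are hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)

    # Simulate absolute abundances of 5 taxa that are truly independent.
    n_samples, n_taxa = 500, 5
    absolute = rng.lognormal(mean=0.0, sigma=1.0, size=(n_samples, n_taxa))

    # Sequencing yields only relative abundances (each row sums to 1).
    relative = absolute / absolute.sum(axis=1, keepdims=True)

    # Naive Pearson correlations on compositions vs. on absolute abundances.
    naive_corr = np.corrcoef(relative, rowvar=False)
    true_corr = np.corrcoef(absolute, rowvar=False)

    off_diag = np.triu_indices(n_taxa, k=1)
    print("mean off-diagonal correlation, compositions:", round(naive_corr[off_diag].mean(), 3))
    print("mean off-diagonal correlation, absolute:    ", round(true_corr[off_diag].mean(), 3))

The compositional correlations come out systematically negative even though the underlying taxa are independent, which is the spurious-association effect that latent-variable approaches such as CCLasso are designed to avoid.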
Max-margin multiattribute learning with low-rank constraint.
Zhang, Qiang; Chen, Lin; Li, Baoxin
2014-07-01
Attribute learning has attracted much interest in recent years for its advantage of being able to model high-level concepts with a compact set of midlevel attributes. Real-world objects often demand multiple attributes for effective modeling. Most existing methods learn attributes independently without explicitly considering their intrinsic relatedness. In this paper, we propose max-margin multiattribute learning with a low-rank constraint, which learns a set of attributes simultaneously, using only relative ranking of the attributes for the data. By learning all the attributes simultaneously through the low-rank constraint, the proposed method is able to capture their intrinsic correlation for improved learning; by requiring only relative ranking, the method avoids restrictive binary labels of attributes that are often assumed by many existing techniques. The proposed method is evaluated on both synthetic data and real visual data, including a challenging video data set. Experimental results demonstrate the effectiveness of the proposed method.
DISSCO: direct imputation of summary statistics allowing covariates
Xu, Zheng; Duan, Qing; Yan, Song; Chen, Wei; Li, Mingyao; Lange, Ethan; Li, Yun
2015-01-01
Background: Imputation of individual level genotypes at untyped markers using an external reference panel of genotyped or sequenced individuals has become standard practice in genetic association studies. Direct imputation of summary statistics can also be valuable, for example in meta-analyses where individual level genotype data are not available. Two methods (DIST and ImpG-Summary/LD), which assume a multivariate Gaussian distribution for the association summary statistics, have been proposed for imputing association summary statistics. However, both methods assume that the correlations between association summary statistics are the same as the correlations between the corresponding genotypes. This assumption can be violated in the presence of confounding covariates. Methods: We analytically show that in the absence of covariates, correlation among association summary statistics is indeed the same as that among the corresponding genotypes, thus serving as a theoretical justification for the recently proposed methods. We continue to prove that in the presence of covariates, correlation among association summary statistics becomes the partial correlation of the corresponding genotypes controlling for covariates. We therefore develop direct imputation of summary statistics allowing covariates (DISSCO). Results: We consider two real-life scenarios where the correlation and partial correlation likely make a practical difference: (i) association studies in admixed populations; (ii) association studies in the presence of other confounding covariate(s). Application of DISSCO to real datasets under both scenarios shows at least comparable, if not better, performance compared with existing correlation-based methods, particularly for lower frequency variants. For example, DISSCO can reduce the absolute deviation from the truth by 3.9–15.2% for variants with minor allele frequency <5%. Availability and implementation: http://www.unc.edu/∼yunmli/DISSCO. Contact: yunli@med.unc.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25810429
Piao, Xinglin; Zhang, Yong; Li, Tingshu; Hu, Yongli; Liu, Hao; Zhang, Ke; Ge, Yun
2016-01-01
The Received Signal Strength (RSS) fingerprint-based indoor localization is an important research topic in wireless network communications. Most current RSS fingerprint-based indoor localization methods do not explore and utilize the spatial or temporal correlation existing in fingerprint data and measurement data, which is helpful for improving localization accuracy. In this paper, we propose an RSS fingerprint-based indoor localization method by integrating the spatio-temporal constraints into the sparse representation model. The proposed model utilizes the inherent spatial correlation of fingerprint data in the fingerprint matching and uses the temporal continuity of the RSS measurement data in the localization phase. Experiments on the simulated data and the localization tests in the real scenes show that the proposed method improves the localization accuracy and stability effectively compared with state-of-the-art indoor localization methods. PMID:27827882
Curing conditions to inactivate Trichinella spiralis muscle larvae in ready-to-eat pork sausage
USDA-ARS?s Scientific Manuscript database
Curing processes for ready to eat (RTE) pork products currently require individual validation of methods to demonstrate inactivation of Trichinella spiralis. This is a major undertaking for each process; currently no model of meat chemistry exists that can be correlated with inactivation of Trichin...
Nayak, US Krishna; Hegde, Gautam
2010-01-01
Background and objectives: Orthodontic diagnosis and treatment planning for growing children must involve growth prediction, especially in the treatment of skeletal problems. Studies have shown that a strong association exists between skeletal maturity and dental calcification stages. The present study was therefore taken up to provide a simple and practical method for assessing skeletal maturity using a dental periapical film and a standard dental X-ray machine, to compare the developmental stages of the mandibular canine with the developmental stages of the modified MP3 and to find out if any correlation exists, and to determine if the developmental stages of the mandibular canine alone can be used as a reliable indicator for assessment of skeletal maturity. Methods: A total of 160 periapical radiographs (80 males and 80 females) of the mandibular right canine and the MP3 region were taken and assessed according to Demirjian's stages of dental calcification and the modified MP3 stages. Results: The correlation between the developmental stages of MP3 and the mandibular right canine in the male and female groups is of high statistical significance (p = 0.001). The correlation of chronological age with the MP3 stages and with the developmental stages of the mandibular canine was not significant in either males or females. Conclusions: The correlation between the mandibular canine calcification stages and the MP3 stages was found to be significant. The developmental stages of the mandibular canine could be used very reliably as a sole indicator for assessment of skeletal maturity. PMID:27625553
Hegde, Gautham; Hegde, Nanditha; Kumar, Anil; Keshavaraj
2014-01-01
Objective: Orthodontic diagnosis and treatment planning for growing children must involve growth prediction, especially in the treatment of skeletal problems. Studies have shown that a strong association exists between skeletal maturity and dental calcification stages. The present study was therefore taken up to provide a simple and practical method for assessing skeletal maturity using a dental periapical film and a standard dental X-ray machine, to compare the developmental stages of the mandibular canine with the developmental stages of the modified MP3 and to find out if any correlation exists, and to determine if the developmental stages of the mandibular canine alone can be used as a reliable indicator for assessment of skeletal maturity. Materials and Methods: A total of 160 periapical radiographs of the mandibular right canine and the MP3 region were taken and assessed according to Demirjian's stages of dental calcification and the modified MP3 stages. Results and Discussion: The correlation coefficient between MP3 stages and developmental stages of the mandibular canine was found to be significant in both male and female groups. When the canine calcification stages were compared with the MP3 stages, it was found that, with the exception of the D stage of canine calcification, the remaining stages showed a very high correlation with the modified MP3 stages. Conclusion: The correlation between the mandibular canine calcification stages and the MP3 stages was found to be significant. The canine calcification could be used as a sole indicator for assessment of skeletal maturity. PMID:25210386
A Multi-Objective Partition Method for Marine Sensor Networks Based on Degree of Event Correlation.
Huang, Dongmei; Xu, Chenyixuan; Zhao, Danfeng; Song, Wei; He, Qi
2017-09-21
Existing marine sensor networks acquire data from sea areas that are geographically divided, and store the data independently in their affiliated sea area data centers. In the case of marine events across multiple sea areas, the current network structure needs to retrieve data from multiple data centers, which severely affects real-time decision making. In this study, in order to provide a fast data retrieval service for a marine sensor network, we use all the marine sensors as vertices, establish edges based on marine events, and abstract the marine sensor network as a graph. Then, we construct a multi-objective balanced partition method to partition the abstract graph into multiple regions and store them on a cloud computing platform. This method effectively increases the correlation of the sensors and decreases the retrieval cost. On this basis, an incremental optimization strategy is designed to dynamically optimize existing partitions when new sensors are added to the network. Experimental results show that the proposed method can achieve the optimal layout for distributed storage in the process of disaster data retrieval in the China Sea area, and effectively optimize the result of partitions when new buoys are deployed, which will eventually provide efficient data access services for marine events.
Adaptive intercolor error prediction coder for lossless color (RGB) picture compression
NASA Astrophysics Data System (ADS)
Mann, Y.; Peretz, Y.; Mitchell, Harvey B.
2001-09-01
Most of the current lossless compression algorithms, including the new international baseline JPEG-LS algorithm, do not exploit the interspectral correlations that exist between the color planes of an input color picture. To improve the compression performance (i.e., lower the bit rate) it is necessary to exploit these correlations. A major concern is to find efficient methods for exploiting the correlations that, at the same time, are compatible with and can be incorporated into the JPEG-LS algorithm. One such algorithm is the method of intercolor error prediction (IEP), which, when used with the JPEG-LS algorithm, results on average in a reduction of 8% in the overall bit rate. We show how the IEP algorithm can be simply modified so that it nearly doubles the reduction in bit rate, to 15%.
Statistical physics in foreign exchange currency and stock markets
NASA Astrophysics Data System (ADS)
Ausloos, M.
2000-09-01
Problems in economy and finance have attracted the interest of statistical physicists all over the world. Fundamental problems pertain to the existence or not of long-, medium- and/or short-range power-law correlations in various economic systems, to the presence of financial cycles, and to economic considerations, including economic policy. A method like the detrended fluctuation analysis is recalled, emphasizing its value in sorting out correlation ranges, thereby leading to predictability at short horizons. The (m, k)-Zipf method is presented for sorting out short-range correlations in the sign and amplitude of the fluctuations. A well-known financial analysis technique, the so-called moving average, is shown to raise questions for physicists about fractional Brownian motion properties. Among spectacular results, the possibility of crash predictions has been demonstrated through the log-periodicity of financial index oscillations.
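A minimal sketch of detrended fluctuation analysis as recalled above, assuming order-1 detrending and a synthetic, uncorrelated return series; the slope of log F(n) versus log n estimates the scaling exponent, which is about 0.5 when no long-range correlations are present. Function and variable names are illustrative only.

    import numpy as np

    def dfa(x, window_sizes):
        """Order-1 DFA: fluctuation function F(n) for each window size n."""
        y = np.cumsum(x - np.mean(x))               # integrated profile
        F = []
        for n in window_sizes:
            rms = []
            for i in range(len(y) // n):
                seg = y[i * n:(i + 1) * n]
                t = np.arange(n)
                trend = np.polyval(np.polyfit(t, seg, 1), t)   # local linear trend
                rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
            F.append(np.mean(rms))
        return np.array(F)

    rng = np.random.default_rng(1)
    returns = rng.normal(size=4096)                 # surrogate, uncorrelated "returns"
    sizes = np.unique(np.logspace(1, 3, 15).astype(int))
    alpha = np.polyfit(np.log(sizes), np.log(dfa(returns, sizes)), 1)[0]
    print("DFA scaling exponent:", round(alpha, 2))  # close to 0.5 for white noise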
A Method for Correlation of Gravestone Weathering and Air Quality (SO2), West Midlands, UK
NASA Astrophysics Data System (ADS)
Carlson, Michael John
From the beginning of the Industrial Revolution through the environmental revolution of the 1970s, Britain suffered the effects of poor air quality, primarily from particulate matter and acid in the form of NOx and SOx compounds. Air quality stations across the region recorded SO2 beginning in the 1960s; however, direct measurement of air quality prior to 1960 is lacking and only anecdotal notations exist. Proxy records including lung tissue samples, particulates in sediment cores, lake acidification studies and gravestone weathering have all been used to reconstruct the history of air quality. A 120-year record of acid deposition reconstructed from lead-lettered marble gravestone weathering, combined with SO2 measurements from the air monitoring network across the West Midlands, UK region beginning in the 1960s, forms the framework for this study. The study seeks to create a spatial and temporal correlation between the gravestone weathering and measured SO2. Successful correlation of the dataset from the 1960s to the 2000s would allow a paleo-air quality record to be generated from the 120-year record of gravestone weathering. Decadal gravestone weathering rates can be estimated by non-linear regression analysis of stone loss at individual cemeteries. Gravestone weathering rates are interpolated across the region through Empirical Bayesian Kriging (EBK) methods performed in ArcGIS and through a land-use-based approach built on digitized maps of land use. Both methods of interpolation allow a direct correlation of gravestone weathering and measured SO2 to be made. Decadal-scale correlations of gravestone weathering rates and measured SO2 are very weak to non-existent for both EBK and the land-use-based approach. Decadal results combined on a larger scale for each respective method display a better visual correlation. However, the relative clustering of data at lower SO2 concentrations and the lack of data at higher SO2 concentrations leave the resulting correlations too weak to rely on. The relation between surrounding land use and gravestone weathering rates was very strong for the 1960s-1980s, with diminishing correlations approaching the 2000s. Gravestone weathering at cemeteries is highly influenced by the amount of industrial sources of pollution within a 7 km radius. The reduced correlation of land use and weathering beyond the 1980s provides solid grounds for concluding that the environmental regulation and controls put in place across the UK during the later parts of the 20th century were successful.
Protein structure recognition: From eigenvector analysis to structural threading method
NASA Astrophysics Data System (ADS)
Cao, Haibo
In this work, we try to understand the protein folding problem using the pair-wise hydrophobic interaction as the dominant interaction in the protein folding process. We found a strong correlation between the amino acid sequence and the corresponding native structure of the protein. Applications of this correlation discussed in this dissertation include domain partition and a new structural threading method, as well as the performance of this method in the CASP5 competition. In the first part, we give a brief introduction to the protein folding problem. Some essential knowledge and progress from other research groups is discussed; this part includes discussions of interactions among amino acid residues, the lattice HP model, and the designability principle. In the second part, we establish the correlation between the amino acid sequence and the corresponding native structure of the protein. This correlation was observed in our eigenvector study of the protein contact matrix. We believe the correlation is universal, so it can be used in the automatic partition of protein structures into folding domains. In the third part, we discuss a threading method based on the correlation between the amino acid sequence and the dominant eigenvector of the structure contact matrix. A mathematically straightforward iteration scheme provides a self-consistent optimum global sequence-structure alignment. The computational efficiency of this method makes it possible to search whole protein structure databases for structural homology without relying on sequence similarity. The sensitivity and specificity of this method are discussed, along with a case of blind test prediction. In the appendix, we list the overall performance of this threading method in the CASP5 blind test in comparison with other existing approaches.
Rare Variant Association Test with Multiple Phenotypes
Lee, Selyeong; Won, Sungho; Kim, Young Jin; Kim, Yongkang; Kim, Bong-Jo; Park, Taesung
2016-01-01
Although genome-wide association studies (GWAS) have now discovered thousands of genetic variants associated with common traits, such variants cannot explain the large degree of "missing heritability," likely due to rare variants. The advent of next generation sequencing technology has allowed rare variant detection and association with common traits, often by investigating specific genomic regions for rare variant effects on a trait. Although multiple correlated phenotypes are often concurrently observed in GWAS, most studies analyze only single phenotypes, which may lessen statistical power. To increase power, multivariate analyses, which consider correlations between multiple phenotypes, can be used. However, few existing multivariate analyses can identify rare variants associated with multiple phenotypes. Here, we propose Multivariate Association Analysis using Score Statistics (MAAUSS), to identify rare variants associated with multiple phenotypes, based on the widely used Sequence Kernel Association Test (SKAT) for a single phenotype. We applied MAAUSS to Whole Exome Sequencing (WES) data from a Korean population of 1,058 subjects, to discover genes associated with multiple traits of liver function. We then assessed validation of those genes by a replication study, using an independent dataset of 3,445 individuals. Notably, we detected the gene ZNF620 among five significant genes. We then performed a simulation study to compare MAAUSS's performance with existing methods. Overall, MAAUSS successfully conserved type I error rates and, in many cases, had higher power than the existing methods. This study illustrates a feasible and straightforward approach for identifying rare variants correlated with multiple phenotypes, with likely relevance to missing heritability. PMID:28039885
NASA Astrophysics Data System (ADS)
Aoki, Hirooki; Ichimura, Shiro; Fujiwara, Toyoki; Kiyooka, Satoru; Koshiji, Kohji; Tsuzuki, Keishi; Nakamura, Hidetoshi; Fujimoto, Hideo
We propose a calculation method for the ventilation threshold using non-contact respiration measurement with dot-matrix pattern light projection during pedaling exercise. The validity and effectiveness of the proposed method are examined by simultaneous measurement with an expiration gas analyzer. The experimental results show that a correlation exists between the quasi ventilation thresholds calculated by the proposed method and the ventilation thresholds calculated by the expiration gas analyzer. This result indicates the possibility of non-contact measurement of the ventilation threshold by the proposed method.
Recommended Practice for Securing Control System Modems
DOE Office of Scientific and Technical Information (OSTI.GOV)
James R. Davidson; Jason L. Wright
2008-01-01
This paper addresses an often overlooked “backdoor” into critical infrastructure control systems created by modem connections. A modem’s connection to the public telephone system is similar to a corporate network connection to the Internet. By tracing typical attack paths into the system, this paper provides the reader with an analysis of the problem and then guides the reader through methods to evaluate existing modem security. Following the analysis, a series of methods for securing modems is provided. These methods are correlated to well-known networking security methods.
A general statistical test for correlations in a finite-length time series.
Hanson, Jeffery A; Yang, Haw
2008-06-07
The statistical properties of the autocorrelation function from a time series composed of independently and identically distributed stochastic variables have been studied. Analytical expressions for the autocorrelation function's variance have been derived. It has been found that two common ways of calculating the autocorrelation, moving-average and Fourier transform, exhibit different uncertainty characteristics. For periodic time series, the Fourier transform method is preferred because it gives smaller uncertainties that are uniform through all time lags. Based on these analytical results, a statistically robust method has been proposed to test the existence of correlations in a time series. The statistical test is verified by computer simulations and an application to single-molecule fluorescence spectroscopy is discussed.
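A short sketch, under assumed conventions, of the two autocorrelation estimators contrasted above for an i.i.d. series: direct (moving-average style) summation over lags, and the Fourier-transform route, which here uses the circular (periodic) convention. Names and the synthetic input are illustrative.

    import numpy as np

    def acf_direct(x, max_lag):
        """Autocorrelation by direct summation (linear, non-periodic convention)."""
        x = x - x.mean()
        c = np.array([np.sum(x[:len(x) - k] * x[k:]) for k in range(max_lag + 1)])
        return c / c[0]

    def acf_fft(x, max_lag):
        """Autocorrelation via the Fourier transform (circular, periodic convention)."""
        x = x - x.mean()
        f = np.fft.rfft(x)
        c = np.fft.irfft(f * np.conj(f), n=len(x))[:max_lag + 1]
        return c / c[0]

    rng = np.random.default_rng(2)
    series = rng.normal(size=5000)                  # i.i.d. series: no true correlation
    print("lag-1 estimates (direct, FFT):",
          round(acf_direct(series, 5)[1], 4), round(acf_fft(series, 5)[1], 4))

For an uncorrelated series both estimates fluctuate around zero; the paper's point is that their sampling uncertainties behave differently, which matters when testing whether small apparent correlations are real.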
Zhang, Juping; Yang, Chan; Jin, Zhen; Li, Jia
2018-07-14
In this paper, the correlation coefficients between nodes in states are used as dynamic variables, and we construct SIR epidemic dynamic models with correlation coefficients by using the pair approximation method in static networks and dynamic networks, respectively. Considering the clustering coefficient of the network, we analytically investigate the existence and the local asymptotic stability of each equilibrium of these models and derive threshold values for the prevalence of diseases. Additionally, we obtain two equivalent epidemic thresholds in dynamic networks, which are compared with the results of the mean field equations. Copyright © 2018 Elsevier Ltd. All rights reserved.
Kim, Seung Hyup
2008-01-01
Objective: To evaluate the correlations between prostate volumes estimated by transabdominal, transrectal, and three-dimensional US and the factors affecting the differences. Materials and Methods: The prostate volumes of 94 consecutive patients were measured by both transabdominal and transrectal US. Next, the prostate volumes of 58 other patients were measured by both transrectal and three-dimensional US. We evaluated the degree of correlation and the mean difference in each comparison. We also analyzed possible factors affecting the differences, such as the examiners' experience in transrectal US, bladder volume, and prostate volume. Results: In the comparison of the transabdominal and transrectal US methods, the mean difference was 8.4 ± 10.5 mL and the correlation coefficient (r) was 0.775 (p < 0.01). The examiner experienced in transrectal US had the highest correlation (r = 0.967) and the significantly smallest difference (5.4 ± 3.9 mL) compared to the other examiners (the beginner and the trained; p < 0.05). Prostate volume measured by transrectal US showed a weak correlation with the difference (r = 0.360, p < 0.05). Bladder volume did not show a significant correlation with the difference (r = -0.043, p > 0.05). The comparison between the transrectal and three-dimensional US methods revealed a mean difference of 3.7 ± 3.4 mL, and the correlation coefficient was 0.924 for the experienced examiner. Furthermore, no significant difference existed between examiners (p > 0.05). Prostate volume measured by transrectal US showed a positive correlation with the difference for the beginner only (r = 0.405, p < 0.05). Conclusion: In prostate volume estimation by US, experience in transrectal US is important for the correlation with transabdominal US, but not with three-dimensional US. Also, less experienced examiners' assessment of the prostate volume can be affected by the prostate volume itself. PMID:18385560
Dhikav, Vikas; Duraiswamy, Sharmila; Anand, Kuljeet Singh
2017-01-01
The hippocampus undergoes atrophy in patients with Alzheimer's disease (AD). Calculation of hippocampal volumes can be done by a variety of methods using T1-weighted images from magnetic resonance imaging (MRI) of the brain. Medial temporal lobe atrophy (MTL) can be rated visually using T1-weighted MRI brain images. The present study was done to see if any correlation existed between hippocampal volumes and visual rating scores of the MTL using the Scheltens Visual Rating Method. We screened 84 subjects presenting to the Department of Neurology of a tertiary care hospital and enrolled forty subjects meeting the National Institute of Neurological and Communicative Disorders and Stroke-Alzheimer's Disease and Related Disorders Association (NINCDS-ADRDA) criteria. Selected patients underwent brain MRI, and T1-weighted images in a plane perpendicular to the long axis of the hippocampus were obtained. Hippocampal volumes were calculated manually using a standard protocol. The calculated hippocampal volumes were correlated with the Scheltens Visual Rating Method for rating the MTL. A total of 32 cognitively normal age-matched subjects were selected to examine the same correlation in healthy subjects as well. The sensitivity and specificity of both methods were calculated and compared. There was an insignificant correlation between the hippocampal volumes and MTL rating scores in the cognitively normal elderly (n = 32; Pearson correlation coefficient = 0.16, P > 0.05). In the AD group, there was a moderately strong correlation between measured hippocampal volumes and MTL rating (Pearson's correlation coefficient = -0.54; P < 0.05). There was a moderately strong correlation between hippocampal volume and Mini-Mental State Examination score in the AD group. Manual delineation was superior compared to the visual method (P < 0.05). Good correlation was present between manual hippocampal volume measurements and MTL scores. The sensitivity and specificity of manual measurement of the hippocampus were higher compared to visual rating scores for the MTL in patients with AD.
Analysis of noise-induced temporal correlations in neuronal spike sequences
NASA Astrophysics Data System (ADS)
Reinoso, José A.; Torrent, M. C.; Masoller, Cristina
2016-11-01
We investigate temporal correlations in sequences of noise-induced neuronal spikes, using a symbolic method of time-series analysis. We focus on the sequence of time-intervals between consecutive spikes (inter-spike-intervals, ISIs). The analysis method, known as ordinal analysis, transforms the ISI sequence into a sequence of ordinal patterns (OPs), which are defined in terms of the relative ordering of consecutive ISIs. The ISI sequences are obtained from extensive simulations of two neuron models (FitzHugh-Nagumo, FHN, and integrate-and-fire, IF), with correlated noise. We find that, as the noise strength increases, temporal order gradually emerges, revealed by the existence of more frequent ordinal patterns in the ISI sequence. While in the FHN model the most frequent OP depends on the noise strength, in the IF model it is independent of the noise strength. In both models, the correlation time of the noise affects the OP probabilities but does not modify the most probable pattern.
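A minimal sketch of the ordinal mapping described above: each group of D consecutive inter-spike intervals is replaced by the permutation that orders it, and the histogram of ordinal patterns is inspected for over-represented orderings. The order D = 3 and the surrogate exponential ISI sequence are assumptions for illustration.

    import numpy as np
    from collections import Counter

    def ordinal_patterns(isi, order=3):
        """Map overlapping windows of `order` consecutive ISIs to ordinal patterns."""
        return [tuple(np.argsort(isi[i:i + order]))
                for i in range(len(isi) - order + 1)]

    rng = np.random.default_rng(3)
    isi = rng.exponential(scale=1.0, size=10000)    # surrogate inter-spike intervals
    counts = Counter(ordinal_patterns(isi, order=3))
    total = sum(counts.values())
    for pattern, n in sorted(counts.items()):
        print(pattern, round(n / total, 3))          # ~1/6 each for an uncorrelated series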
Tang, Rongnian; Chen, Xupeng; Li, Chuang
2018-05-01
Near-infrared spectroscopy is an efficient, low-cost technology that has potential as an accurate method for detecting the nitrogen content of natural rubber leaves. The successive projections algorithm (SPA) is a widely used variable selection method for multivariate calibration, which uses projection operations to select a variable subset with minimum multi-collinearity. However, due to the fluctuation of correlation between variables, high collinearity may still exist among non-adjacent variables of the subset obtained by the basic SPA. Based on an analysis of the correlation matrix of the spectral data, this paper proposes a correlation-based SPA (CB-SPA) that applies the successive projections algorithm within regions of consistent correlation. The results show that CB-SPA can select variable subsets with more valuable variables and less multi-collinearity. Meanwhile, models established on the CB-SPA subset outperform those on basic SPA subsets in predicting nitrogen content in terms of both cross-validation and external prediction. Moreover, CB-SPA is more efficient: the time cost of its selection procedure is one-twelfth that of the basic SPA.
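A sketch of the basic successive projections algorithm referred to above (not of the CB-SPA variant): starting from an initial wavelength, each step projects the remaining columns onto the orthogonal complement of the last selected column and picks the column with the largest residual norm, which limits collinearity in the selected subset. Matrix sizes and names are hypothetical.

    import numpy as np

    def spa(X, n_select, start=0):
        """Basic SPA: greedily select columns of X with minimal collinearity."""
        P = X.astype(float).copy()
        selected = [start]
        for _ in range(n_select - 1):
            v = P[:, selected[-1]]
            # Project all columns onto the orthogonal complement of the last pick.
            P = P - np.outer(v, v @ P) / (v @ v)
            P[:, selected] = 0.0                     # never re-select a chosen column
            selected.append(int(np.argmax(np.linalg.norm(P, axis=0))))
        return selected

    rng = np.random.default_rng(4)
    spectra = rng.normal(size=(60, 200))             # 60 samples x 200 wavelengths
    print(spa(spectra, n_select=5))

CB-SPA, as described in the abstract, would additionally restrict these projections to wavelength regions whose mutual correlations are consistent, rather than running a single pass over the full spectrum.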
2013-09-01
... existing MR scanning systems, providing the ability to visualize structures that are impossible with current methods. Using techniques to concurrently stain ... and unique system for analysis of affected brain regions, coupled with other imaging techniques and molecular measurements, holds significant ...
Segmentation of time series with long-range fractal correlations
Bernaola-Galván, P.; Oliver, J.L.; Hackenberg, M.; Coronado, A.V.; Ivanov, P.Ch.; Carpena, P.
2012-01-01
Segmentation is a standard method of data analysis to identify change-points dividing a nonstationary time series into homogeneous segments. However, for long-range fractal correlated series, most of the segmentation techniques detect spurious change-points which are simply due to the heterogeneities induced by the correlations and not to real nonstationarities. To avoid this oversegmentation, we present a segmentation algorithm which takes as a reference for homogeneity, instead of a random i.i.d. series, a correlated series modeled by a fractional noise with the same degree of correlations as the series to be segmented. We apply our algorithm to artificial series with long-range correlations and show that it systematically detects only the change-points produced by real nonstationarities and not those created by the correlations of the signal. Further, we apply the method to the sequence of the long arm of human chromosome 21, which is known to have long-range fractal correlations. We obtain only three segments that clearly correspond to the three regions of different G + C composition revealed by means of a multi-scale wavelet plot. Similar results have been obtained when segmenting all human chromosome sequences, showing the existence of previously unknown huge compositional superstructures in the human genome. PMID:23645997
Kim, Min Kyung; Lane, Anatoliy; Kelley, James J; Lun, Desmond S
2016-01-01
Several methods have been developed to predict system-wide and condition-specific intracellular metabolic fluxes by integrating transcriptomic data with genome-scale metabolic models. While powerful in many settings, existing methods have several shortcomings, and it is unclear which method has the best accuracy in general because of limited validation against experimentally measured intracellular fluxes. We present a general optimization strategy for inferring intracellular metabolic flux distributions from transcriptomic data coupled with genome-scale metabolic reconstructions. It consists of two different template models called DC (determined carbon source model) and AC (all possible carbon sources model) and two different new methods called E-Flux2 (E-Flux method combined with minimization of the l2 norm) and SPOT (Simplified Pearson cOrrelation with Transcriptomic data), which can be chosen and combined depending on the availability of knowledge on the carbon source or objective function. This enables us to simulate a broad range of experimental conditions. We examined E. coli and S. cerevisiae as representative prokaryotic and eukaryotic microorganisms, respectively. The predictive accuracy of our algorithm was validated by calculating the uncentered Pearson correlation between predicted fluxes and measured fluxes. To this end, we compiled 20 experimental conditions (11 in E. coli and 9 in S. cerevisiae) of transcriptome measurements coupled with corresponding central carbon metabolism intracellular flux measurements determined by 13C metabolic flux analysis (13C-MFA), which is the largest dataset assembled to date for the purpose of validating inference methods for predicting intracellular fluxes. In both organisms, our method achieves an average correlation coefficient ranging from 0.59 to 0.87, outperforming a representative sample of competing methods. Easy-to-use implementations of E-Flux2 and SPOT are available as part of the open-source package MOST (http://most.ccib.rutgers.edu/). Our method represents a significant advance over existing methods for inferring intracellular metabolic flux from transcriptomic data. It not only achieves higher accuracy, but it also combines into a single method a number of other desirable characteristics including applicability to a wide range of experimental conditions, production of a unique solution, fast running time, and the availability of a user-friendly implementation.
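The validation metric named above, the uncentered Pearson correlation between predicted and measured flux vectors, is simply a cosine similarity; a minimal sketch with hypothetical flux values follows.

    import numpy as np

    def uncentered_pearson(u, v):
        """Uncentered Pearson correlation (cosine similarity) of two flux vectors."""
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

    predicted = np.array([1.2, 0.4, 3.1, 0.0, 2.2])   # hypothetical predicted fluxes
    measured = np.array([1.0, 0.5, 2.8, 0.1, 2.5])    # hypothetical 13C-MFA fluxes
    print(round(uncentered_pearson(predicted, measured), 3))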
Pearson, Clare; Verne, Julia; Wells, Claudia; Polato, Giovanna M; Higginson, Irene J; Gao, Wei
2017-01-26
Geographical accessibility is important in accessing healthcare services. Measuring it has evolved alongside technological and data analysis advances. High correlations between different methods have been detected, but no comparisons exist in the context of palliative and end of life care (PEoLC) studies. To assess how geographical accessibility can affect PEoLC, selection of an appropriate method to capture it is crucial. We therefore aimed to compare methods of measuring geographical accessibility of decedents to PEoLC-related facilities in South London, an area with well-developed specialist palliative care (SPC) provision. Individual-level death registration data in 2012 (n = 18,165) from the Office for National Statistics (ONS) were linked to area-level PEoLC-related facilities from various sources. Simple and more complex measures of geographical accessibility were calculated using the residential postcodes of the decedents and the postcodes of the nearest hospital, care home and hospice. Distance measures (straight-line, travel network) and travel times along the road network were compared using geographic information system (GIS) mapping and correlation analysis (Spearman rho). Borough-level maps demonstrate similarities in geographical accessibility measures. Strong positive correlations exist between straight-line and travel distances to the nearest hospital (rho = 0.97), care home (rho = 0.94) and hospice (rho = 0.99). Travel times were also highly correlated with distance measures to the nearest hospital (rho range = 0.84-0.88), care home (rho = 0.88-0.95) and hospice (rho = 0.93-0.95). All correlations were significant at the p < 0.001 level. Distance-based and travel-time measures of geographical accessibility to PEoLC-related facilities in South London are similar, suggesting the choice of measure can be based on the ease of calculation.
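A sketch of the simplest pairing compared above: straight-line (great-circle) distances from residences to a facility, correlated against travel times using Spearman's rho. The coordinates and travel times below are fabricated for illustration, and scipy is assumed to be available.

    import numpy as np
    from scipy.stats import spearmanr

    def haversine_km(lat1, lon1, lat2, lon2):
        """Great-circle ('straight-line') distance between two points, in km."""
        lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
        a = (np.sin((lat2 - lat1) / 2) ** 2
             + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
        return 2 * 6371.0 * np.arcsin(np.sqrt(a))

    rng = np.random.default_rng(5)
    # Hypothetical residences and one facility, all placed near South London.
    res_lat = 51.45 + 0.10 * rng.random(200)
    res_lon = -0.10 + 0.10 * rng.random(200)
    fac_lat, fac_lon = 51.47, -0.05
    straight = haversine_km(res_lat, res_lon, fac_lat, fac_lon)
    # Hypothetical travel times, roughly proportional to distance plus noise.
    travel_time = 2.5 * straight + rng.normal(scale=0.5, size=200)

    rho, p = spearmanr(straight, travel_time)
    print("Spearman rho:", round(rho, 2), "significant:", p < 0.001)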
Analytical Fuselage and Wing Weight Estimation of Transport Aircraft
NASA Technical Reports Server (NTRS)
Chambers, Mark C.; Ardema, Mark D.; Patron, Anthony P.; Hahn, Andrew S.; Miura, Hirokazu; Moore, Mark D.
1996-01-01
A method of estimating the load-bearing fuselage weight and wing weight of transport aircraft based on fundamental structural principles has been developed. This method of weight estimation represents a compromise between the rapid assessment of component weight using empirical methods based on actual weights of existing aircraft, and detailed, but time-consuming, analysis using the finite element method. The method was applied to eight existing subsonic transports for validation and correlation. Integration of the resulting computer program, PDCYL, has been made into the weights-calculating module of the AirCraft SYNThesis (ACSYNT) computer program. ACSYNT has traditionally used only empirical weight estimation methods; PDCYL adds to ACSYNT a rapid, accurate means of assessing the fuselage and wing weights of unconventional aircraft. PDCYL also allows flexibility in the choice of structural concept, as well as a direct means of determining the impact of advanced materials on structural weight. Using statistical analysis techniques, relations between the load-bearing fuselage and wing weights calculated by PDCYL and corresponding actual weights were determined.
Quantifying Nanoscale Order in Amorphous Materials via Fluctuation Electron Microscopy
ERIC Educational Resources Information Center
Bogle, Stephanie Nicole
2009-01-01
Fluctuation electron microscopy (FEM) has been used to study the nanoscale order in various amorphous materials. The method is explicitly sensitive to 3- and 4-body atomic correlation functions in amorphous materials; this is sufficient to establish the existence of structural order on the nanoscale, even when the radial distribution function…
Social, Attitudinal, and Demographic Correlates of Adolescent vs College-Age Tobacco Use Initiation
ERIC Educational Resources Information Center
Stockdale, Margaret S.; Dawson-Owens, Hayley L.; Sagrestano, Lynda M.
2005-01-01
Objective: To examine associations between social influences and smoking-related attitudes and age of cigarette use initiation among college students. Methods: Responses from 3 campus surveys (2 random, 1 convenience) were analyzed. The surveys were modeled from existing state or national tobacco surveys and other psychometrically valid surveys.…
Hu, Yi; Cheng, Xuanhong; Daniel Ou-Yang, H
2013-01-01
Fluorescence correlation spectroscopy (FCS) is one of the most sensitive methods for enumerating low concentration nanoparticles in a suspension. However, biological nanoparticles such as viruses often exist at a concentration much lower than the FCS detection limit. While optically generated trapping potentials are shown to effectively enhance the concentration of nanoparticles, feasibility of FCS for enumerating field-enriched nanoparticles requires understanding of the nanoparticle behavior in the external field. This paper reports an experimental study that combines optical trapping and FCS to examine existing theoretical predictions of particle concentration. Colloidal suspensions of polystyrene (PS) nanospheres and HIV-1 virus-like particles are used as model systems. Optical trapping energies and statistical analysis are used to discuss the applicability of FCS for enumerating nanoparticles in a potential well produced by a force field.
Filatov, Michael; Liu, Fang; Kim, Kwang S.; ...
2016-12-22
Here, the spin-restricted ensemble-referenced Kohn-Sham (REKS) method is based on an ensemble representation of the density and is capable of correctly describing the non-dynamic electron correlation stemming from (near-)degeneracy of several electronic configurations. The existing REKS methodology describes systems with two electrons in two fractionally occupied orbitals. In this work, the REKS methodology is extended to treat systems with four fractionally occupied orbitals accommodating four electrons, and a self-consistent implementation of the REKS(4,4) method with simultaneous optimization of the orbitals and their fractional occupation numbers is reported. The new method is applied to a number of molecular systems where simultaneous dissociation of several chemical bonds takes place, as well as to the singlet ground states of the organic tetraradicals 2,4-didehydrometaxylylene and 1,4,6,9-spiro[4.4]nonatetrayl.
Badea, Radu; Zaro, Răzvan; Tanțău, Marcel; Chiorean, Liliana
2015-09-01
Ultrasonography is generally accepted and performed as a first choice imaging technique in patients with jaundice. The method allows the discrimination between cholestatic and mechanical jaundice. The existing procedures are multiple: gray scale, Doppler, i.v. contrast enhancement, elastography, tridimensional ultrasonography, each of these with different contribution to the positive and differential diagnosis regarding the nature of the jaundice. The final diagnosis is a multimodal one and the efficiency is dependent on the level of the available technology, the examiner's experience, the degree and modality of integration of the data within the clinical context, as well as on the portfolio of available imaging procedures. This review shows the main ultrasonographic methods consecrated in the evaluation of the biliary tree. It also underlines the integrated character of the procedures, as well as the necessity to correlate with other imaging methods and the clinical situation.
Culpepper, Steven Andrew
2016-06-01
Standardized tests are frequently used for selection decisions, and the validation of test scores remains an important area of research. This paper builds upon prior literature about the effect of nonlinearity and heteroscedasticity on the accuracy of standard formulas for correcting correlations in restricted samples. Existing formulas for direct range restriction require three assumptions: (1) the criterion variable is missing at random; (2) a linear relationship between independent and dependent variables; and (3) constant error variance or homoscedasticity. The results in this paper demonstrate that the standard approach for correcting restricted correlations is severely biased in cases of extreme monotone quadratic nonlinearity and heteroscedasticity. This paper offers at least three significant contributions to the existing literature. First, a method from the econometrics literature is adapted to provide more accurate estimates of unrestricted correlations. Second, derivations establish bounds on the degree of bias attributed to quadratic functions under the assumption of a monotonic relationship between test scores and criterion measurements. New results are presented on the bias associated with using the standard range restriction correction formula, and the results show that the standard correction formula yields estimates of unrestricted correlations that deviate by as much as 0.2 for high to moderate selectivity. Third, Monte Carlo simulation results demonstrate that the new procedure for correcting restricted correlations provides more accurate estimates in the presence of quadratic and heteroscedastic test score and criterion relationships.
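For reference, a sketch of the standard correction for direct range restriction that the paper examines (commonly written as Thorndike's Case II formula), assuming the usual inputs of a restricted correlation and the ratio of unrestricted to restricted predictor standard deviations; the numerical values are illustrative only.

    import math

    def correct_range_restriction(r_restricted, sd_unrestricted, sd_restricted):
        """Standard (Thorndike Case II) correction for direct range restriction."""
        u = sd_unrestricted / sd_restricted
        return (u * r_restricted) / math.sqrt(
            1.0 - r_restricted ** 2 + (u ** 2) * (r_restricted ** 2))

    # Illustrative values: restricted validity 0.30, applicant SD twice the incumbent SD.
    print(round(correct_range_restriction(0.30, sd_unrestricted=2.0, sd_restricted=1.0), 3))

The paper's point is that this formula relies on linearity and homoscedasticity; under monotone quadratic relationships or heteroscedastic errors, the corrected value can be badly biased.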
Spatially Regularized Machine Learning for Task and Resting-state fMRI
Song, Xiaomu; Panych, Lawrence P.; Chen, Nan-kuei
2015-01-01
Background: Reliable mapping of brain function across sessions and/or subjects in task- and resting-state has been a critical challenge for quantitative fMRI studies, although it has been intensively addressed in the past decades. New Method: A spatially regularized support vector machine (SVM) technique was developed for reliable brain mapping in task- and resting-state. Unlike most existing SVM-based brain mapping techniques, which implement supervised classifications of specific brain functional states or disorders, the proposed method performs a semi-supervised classification for general brain function mapping, where the spatial correlation of fMRI is integrated into the SVM learning. The method can adapt to intra- and inter-subject variations induced by fMRI nonstationarity, and identify a true boundary between active and inactive voxels, or between functionally connected and unconnected voxels, in a feature space. Results: The method was evaluated using synthetic and experimental data at the individual and group level. Multiple features were evaluated in terms of their contributions to the spatially regularized SVM learning. Reliable mapping results in both task- and resting-state were obtained from individual subjects and at the group level. Comparison with Existing Methods: A comparison study was performed with independent component analysis, general linear model, and correlation analysis methods. Experimental results indicate that the proposed method can provide a better or comparable mapping performance at the individual and group level. Conclusions: The proposed method can provide accurate and reliable mapping of brain function in task- and resting-state, and is applicable to a variety of quantitative fMRI studies. PMID:26470627
Analyses and assessments of span wise gust gradient data from NASA B-57B aircraft
NASA Technical Reports Server (NTRS)
Frost, Walter; Chang, Ho-Pen; Ringnes, Erik A.
1987-01-01
Analysis of turbulence measured across the airfoil of a Canberra B-57 aircraft is reported. The aircraft is instrumented with probes for measuring wind at both wing tips and at the nose. Statistical properties of the turbulence are reported. These consist of the standard deviations of turbulence measured by each individual probe, standard deviations and probability distributions of differences in turbulence measured between probes, and auto- and two-point spatial correlations and spectra. Procedures associated with calculating two-point spatial correlations and spectra from the data are also addressed. Methods and correction procedures for assuring the accuracy of aircraft-measured winds are also described. Results are found, in general, to agree with correlations existing in the literature. The velocity spatial differences fit a Gaussian/Bessel-type probability distribution. The turbulence agrees with the von Karman turbulence correlation and with two-point spatial correlations developed from the von Karman correlation.
NASA Astrophysics Data System (ADS)
Ruan, Qingsong; Zhang, Shuhua; Lv, Dayong; Lu, Xinsheng
2018-02-01
Based on the implementation of Shanghai-Hong Kong Stock Connect in China, this paper examines the effects of financial liberalization on stock market comovement using both multifractal detrended fluctuation analysis (MF-DFA) and multifractal detrended cross-correlation analysis (MF-DCCA) methods. Results based on MF-DFA confirm the multifractality of Shanghai and Hong Kong stock markets, and the market efficiency of Shanghai stock market increased after the implementation of this connect program. Besides, analysis based on MF-DCCA has verified the existence of persistent cross-correlation between Shanghai and Hong Kong stock markets, and the cross-correlation gets stronger after the launch of this liberalization program. Finally, we find that fat-tail distribution is the main source of multifractality in the cross-correlations before the stock connect program, while long-range correlation contributes to the multifractality after this program.
Memory and long-range correlations in chess games
NASA Astrophysics Data System (ADS)
Schaigorodsky, Ana L.; Perotti, Juan I.; Billoni, Orlando V.
2014-01-01
In this paper we report the existence of long-range memory in the opening moves of a chronologically ordered set of chess games, using an extensive chess database. We used two mapping rules to build discrete time series and analyzed them using two methods for detecting long-range correlations: rescaled range analysis and detrended fluctuation analysis. We found that long-range memory is related to the level of the players. When the database is filtered according to player level, we found differences in the persistence of the different subsets. For high-level players, correlations are stronger at long time scales, whereas in intermediate- and low-level players they reach the maximum value at shorter time scales. This can be interpreted as a signature of the different strategies used by players with different levels of expertise. These results are robust against the choice of mapping rule and the method employed in the analysis of the time series.
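A minimal sketch of rescaled range (R/S) analysis, one of the two methods named above, applied to a surrogate series; the slope of log(R/S) against log(window size) estimates the Hurst exponent, which is close to 0.5 for an uncorrelated series. All inputs are synthetic.

    import numpy as np

    def rescaled_range(x, window_sizes):
        """Average R/S statistic of series x for each window size."""
        rs = []
        for n in window_sizes:
            vals = []
            for i in range(len(x) // n):
                seg = x[i * n:(i + 1) * n]
                dev = np.cumsum(seg - seg.mean())
                r, s = dev.max() - dev.min(), seg.std()
                if s > 0:
                    vals.append(r / s)
            rs.append(np.mean(vals))
        return np.array(rs)

    rng = np.random.default_rng(6)
    series = rng.normal(size=8192)                   # surrogate game-derived time series
    sizes = np.unique(np.logspace(1.2, 3.2, 12).astype(int))
    hurst = np.polyfit(np.log(sizes), np.log(rescaled_range(series, sizes)), 1)[0]
    print("Hurst exponent:", round(hurst, 2))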
Local-feature analysis for automated coarse-graining of bulk-polymer molecular dynamics simulations.
Xue, Y; Ludovice, P J; Grover, M A
2012-12-01
A method for automated coarse-graining of bulk polymers is presented, using the data-mining tool of local feature analysis. Most existing methods for polymer coarse-graining define superatoms based on their covalent bonding topology along the polymer backbone, but here superatoms are defined based only on their correlated motions, as observed in molecular dynamics simulations. Correlated atomic motions are identified in the simulation data using local feature analysis, between atoms in the same or in different polymer chains. Groups of highly correlated atoms constitute the superatoms in the coarse-graining scheme, and the positions of their seed coordinates are then projected forward in time. Based on only the seed positions, local feature analysis enables the full reconstruction of all atomic positions. This reconstruction suggests an iterative scheme to reduce the computational cost of the simulations: initialize another short molecular dynamics simulation, identify new superatoms, and again project forward in time.
Analysis-Preserving Video Microscopy Compression via Correlation and Mathematical Morphology
Shao, Chong; Zhong, Alfred; Cribb, Jeremy; Osborne, Lukas D.; O’Brien, E. Timothy; Superfine, Richard; Mayer-Patel, Ketan; Taylor, Russell M.
2015-01-01
The large amount of video data produced by multi-channel, high-resolution microscopy systems drives the need for a new high-performance, domain-specific video compression technique. We describe a novel compression method for video microscopy data. The method is based on Pearson's correlation and mathematical morphology, and makes use of the point-spread function (PSF) in the microscopy video acquisition phase. We compare our method to other lossless compression methods and to lossy JPEG, JPEG2000 and H.264 compression for various kinds of video microscopy data, including fluorescence video and brightfield video. We find that for certain data sets, the new method compresses much better than lossless compression with no impact on analysis results. It achieved a best compressed size of 0.77% of the original size, 25× smaller than the best lossless technique (which yields 20% for the same video). The compressed size scales with the video's scientific data content. Further testing showed that existing lossy algorithms greatly impacted data analysis at similar compression sizes. PMID:26435032
Population coding and decoding in a neural field: a computational study.
Wu, Si; Amari, Shun-Ichi; Nakahara, Hiroyuki
2002-05-01
This study uses a neural field model to investigate computational aspects of population coding and decoding when the stimulus is a single variable. A general prototype model for the encoding process is proposed, in which neural responses are correlated, with strength specified by a Gaussian function of their difference in preferred stimuli. Based on the model, we study the effect of correlation on the Fisher information, compare the performances of three decoding methods that differ in the amount of encoding information being used, and investigate the implementation of the three methods by using a recurrent network. This study not only rediscovers main results in the existing literature in a unified way, but also reveals important new features, especially when the neural correlation is strong. As the neural correlation of firing becomes larger, the Fisher information decreases drastically. We confirm that as the width of correlation increases, the Fisher information saturates and no longer increases in proportion to the number of neurons. However, we prove that as the width increases further--wider than sqrt(2) times the effective width of the tuning function--the Fisher information increases again, and it increases without limit in proportion to the number of neurons. Furthermore, we clarify the asymptotic efficiency of the maximum likelihood inference (MLI) type of decoding methods for correlated neural signals. It shows that when the correlation covers a nonlocal range of the population (except for uniform correlation or when the noise is extremely small), the MLI type of method, whose decoding error follows a Cauchy-type distribution, is not asymptotically efficient. This implies that the variance is no longer adequate to measure decoding accuracy.
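A sketch of the central quantity in the analysis above, the Fisher information J(theta) = f'(theta)^T C^{-1} f'(theta), for Gaussian tuning curves whose noise correlations decay as a Gaussian function of the difference in preferred stimuli; the tuning and correlation parameters below are assumed values for illustration.

    import numpy as np

    n_neurons = 100
    preferred = np.linspace(-np.pi, np.pi, n_neurons)   # preferred stimuli
    width, gain = 0.5, 10.0                             # tuning parameters (assumed)
    b, corr_width = 0.2, 0.3                            # correlation strength / width (assumed)

    def tuning_derivative(theta):
        """Derivative of the Gaussian tuning curves with respect to the stimulus."""
        f = gain * np.exp(-(theta - preferred) ** 2 / (2 * width ** 2))
        return f * (preferred - theta) / width ** 2

    # Noise covariance: correlation decays with the difference in preferred stimuli.
    diff = preferred[:, None] - preferred[None, :]
    C = b * np.exp(-diff ** 2 / (2 * corr_width ** 2))
    np.fill_diagonal(C, 1.0)

    fprime = tuning_derivative(0.3)                     # stimulus value (assumed)
    J = float(fprime @ np.linalg.solve(C, fprime))
    print("Fisher information:", round(J, 1))

Increasing the correlation strength b lowers J, while widening corr_width reproduces the saturation effect discussed in the abstract.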
DISSCO: direct imputation of summary statistics allowing covariates.
Xu, Zheng; Duan, Qing; Yan, Song; Chen, Wei; Li, Mingyao; Lange, Ethan; Li, Yun
2015-08-01
Imputation of individual level genotypes at untyped markers using an external reference panel of genotyped or sequenced individuals has become standard practice in genetic association studies. Direct imputation of summary statistics can also be valuable, for example in meta-analyses where individual level genotype data are not available. Two methods (DIST and ImpG-Summary/LD), which assume a multivariate Gaussian distribution for the association summary statistics, have been proposed for imputing association summary statistics. However, both methods assume that the correlations between association summary statistics are the same as the correlations between the corresponding genotypes. This assumption can be violated in the presence of confounding covariates. We analytically show that in the absence of covariates, correlation among association summary statistics is indeed the same as that among the corresponding genotypes, thus serving as a theoretical justification for the recently proposed methods. We continue to prove that in the presence of covariates, correlation among association summary statistics becomes the partial correlation of the corresponding genotypes controlling for covariates. We therefore develop direct imputation of summary statistics allowing covariates (DISSCO). We consider two real-life scenarios where the correlation and partial correlation likely make a practical difference: (i) association studies in admixed populations; (ii) association studies in the presence of other confounding covariate(s). Application of DISSCO to real datasets under both scenarios shows at least comparable, if not better, performance compared with existing correlation-based methods, particularly for lower frequency variants. For example, DISSCO can reduce the absolute deviation from the truth by 3.9-15.2% for variants with minor allele frequency <5%. © The Author 2015. Published by Oxford University Press. All rights reserved.
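A sketch of the key distinction derived above: in the presence of a confounding covariate, the relevant quantity is the partial correlation of the two genotypes controlling for the covariate, obtained here by residualizing both genotypes on the covariate before correlating. The simulated data and all names are hypothetical and do not reproduce DISSCO itself.

    import numpy as np

    def partial_correlation(g1, g2, covariate):
        """Partial correlation of g1 and g2 controlling for a covariate."""
        X = np.column_stack([np.ones(len(g1)), covariate])
        resid = lambda y: y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
        return float(np.corrcoef(resid(g1), resid(g2))[0, 1])

    rng = np.random.default_rng(7)
    n = 2000
    ancestry = rng.normal(size=n)                       # confounding covariate
    # Two SNP genotypes whose allele frequencies both depend on ancestry.
    p = 1.0 / (1.0 + np.exp(-0.8 * ancestry))
    g1 = rng.binomial(2, p, size=n).astype(float)
    g2 = rng.binomial(2, p, size=n).astype(float)

    print("raw correlation:    ", round(float(np.corrcoef(g1, g2)[0, 1]), 3))
    print("partial correlation:", round(partial_correlation(g1, g2, ancestry), 3))

The raw correlation is inflated by the shared dependence on ancestry, whereas the partial correlation is close to zero; the latter is the quantity DISSCO uses when covariates are present.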
Roy, Vandana; Shukla, Shailja; Shukla, Piyush Kumar; Rawat, Paresh
2017-01-01
Motion generated during the capture of the electroencephalography (EEG) signal leads to artifacts, which may reduce the quality of the obtained information. Existing artifact removal methods use canonical correlation analysis (CCA) for removing artifacts along with ensemble empirical mode decomposition (EEMD) and wavelet transform (WT). A new approach is proposed to further analyse and improve the filtering performance and to reduce the filter computation time under highly noisy environments. This new CCA approach is based on the Gaussian elimination method, which is used for calculating the correlation coefficients via the backslash operation, and is designed for EEG signal motion artifact removal. Gaussian elimination is used for solving the linear equations to calculate eigenvalues, which reduces the computational cost of the CCA method. This novel proposed method is tested against currently available artifact removal techniques using EEMD-CCA and wavelet transform. The performance is tested on synthetic and real EEG signal data. The proposed artifact removal technique is evaluated using efficiency metrics such as del signal-to-noise ratio (DSNR), lambda (λ), root mean square error (RMSE), elapsed time, and ROC parameters. The results indicate the suitability of the proposed algorithm for use as a supplement to algorithms currently in use.
Development of Advanced Methods of Structural and Trajectory Analysis for Transport Aircraft
NASA Technical Reports Server (NTRS)
Ardema, Mark D.
1996-01-01
In this report the author describes: (1) development of advanced methods of structural weight estimation, and (2) development of advanced methods of flight path optimization. A method of estimating the load-bearing fuselage weight and wing weight of transport aircraft based on fundamental structural principles has been developed. This method of weight estimation represents a compromise between the rapid assessment of component weight using empirical methods based on actual weights of existing aircraft and detailed, but time-consuming, analysis using the finite element method. The method was applied to eight existing subsonic transports for validation and correlation. The resulting computer program, PDCYL, has been integrated into the weights-calculating module of the AirCraft SYNThesis (ACSYNT) computer program. ACSYNT has traditionally used only empirical weight estimation methods; PDCYL adds to ACSYNT a rapid, accurate means of assessing the fuselage and wing weights of unconventional aircraft. PDCYL also allows flexibility in the choice of structural concept, as well as a direct means of determining the impact of advanced materials on structural weight.
Optical long baseline intensity interferometry: prospects for stellar physics
NASA Astrophysics Data System (ADS)
Rivet, Jean-Pierre; Vakili, Farrokh; Lai, Olivier; Vernet, David; Fouché, Mathilde; Guerin, William; Labeyrie, Guillaume; Kaiser, Robin
2018-06-01
More than sixty years after the first intensity correlation experiments by Hanbury Brown and Twiss, there is renewed interest in intensity interferometry techniques for high angular resolution studies of celestial sources. We report on a successful attempt to measure the bunching peak in the intensity correlation function for bright stellar sources with 1 meter telescopes (I2C project). We propose further improvements to our preliminary spatial interferometry experiments between two 1 m telescopes, and discuss the possibility of exporting our method to existing large arrays of telescopes.
NASA Astrophysics Data System (ADS)
Zhang, Baocheng; Cai, Qing-yu; You, Li; Zhan, Ming-sheng
2009-05-01
Using standard statistical methods, we discover the existence of correlations among Hawking radiations (of tunneled particles) from a black hole. The information carried by such correlations is quantified by the mutual information between sequential emissions. Through a careful counting of the entropy taken out by the emitted particles, we show that black hole radiation as tunneling is an entropy conservation process. While information is leaked out through the radiation, the total entropy is conserved. Thus, we conclude that the black hole evaporation process is unitary.
NASA Astrophysics Data System (ADS)
Valkov, V. V.; Dzebisashvili, D. M.; Barabanov, A. F.
2017-05-01
The spin-fermion model, which is an effective low-energy realization of the three-band Emery model after passing to the Wannier representation for the px and py orbitals of the subsystem of oxygen ions, reduces to the generalized Kondo lattice model. A specific feature of this model is the existence of spin-correlated hoppings of the current carriers between distant cells. Numerical calculations of the spectrum of spin-electron excitations highlight the important role of the long-range spin-correlated hoppings.
Masud, Mohammad Shahed; Borisyuk, Roman; Stuart, Liz
2017-07-15
This study analyses multiple spike train (MST) data, defines its functional connectivity and subsequently visualises an accurate diagram of connections. This is a challenging problem. For example, it is difficult to distinguish the common input and the direct functional connection of two spike trains. The new method presented in this paper is based on the traditional pairwise cross-correlation function (CCF) and a new combination of statistical techniques. First, the CCF is used to create the Advanced Correlation Grid (ACG), where both the significant peak of the CCF and the corresponding time delay are used for detailed analysis of connectivity. Second, these two features of functional connectivity are used to classify connections. Finally, a visualization technique is used to represent the topology of functional connections. Examples are presented in the paper to demonstrate the new Advanced Correlation Grid method and to show how it enables discrimination between (i) influence from one spike train to another through an intermediate spike train and (ii) influence from one common spike train to another pair of analysed spike trains. The ACG method enables scientists to automatically distinguish direct connections from spurious connections such as common-source connections and indirect connections, whereas existing methods require in-depth analysis to identify such connections. The ACG is a new and effective method for studying functional connectivity of multiple spike trains. This method can identify accurately all the direct connections and can distinguish common-source and indirect connections automatically. Copyright © 2017 Elsevier B.V. All rights reserved.
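The two CCF features the ACG is built from, the significant peak of the cross-correlogram and its time delay, can be computed with a plain histogram-based sketch like the one below. This is a generic illustration rather than the published ACG code; the spike trains, bin width and lag window are hypothetical.

```python
import numpy as np

def ccf_peak(train_a, train_b, bin_size=0.001, max_lag=0.05, duration=None):
    """Pairwise cross-correlogram of two spike trains (spike times in seconds).

    Returns the peak correlation value and the lag (s) at which it occurs --
    the two features the Advanced Correlation Grid builds on. Plain histogram-
    based CCF sketch, not the published ACG implementation.
    """
    if duration is None:
        duration = max(train_a.max(), train_b.max())
    bins = np.arange(0, duration + bin_size, bin_size)
    a = np.histogram(train_a, bins)[0].astype(float)
    b = np.histogram(train_b, bins)[0].astype(float)
    a -= a.mean()
    b -= b.mean()
    lags = np.arange(-int(max_lag / bin_size), int(max_lag / bin_size) + 1)
    # circular shift via np.roll is used for simplicity; edge effects are tiny here
    ccf = np.array([np.dot(a, np.roll(b, -k)) for k in lags]) / len(a)
    peak = np.argmax(ccf)
    return ccf[peak], lags[peak] * bin_size

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    pre = np.sort(rng.uniform(0, 10, 800))                            # presynaptic spikes
    post = np.sort(pre + 0.004 + rng.normal(0, 0.001, pre.size))      # driven, ~4 ms delay
    print(ccf_peak(pre, post))   # the peak should sit near a +4 ms lag
```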
Duncan, Candice M; Brusseau, Mark L
2018-03-01
The majority of prior phytoscreening applications have employed the method as a tool to qualitatively determine the presence of contamination in the subsurface. Although qualitative data is quite useful, this study explores the potential for using phytoscreening quantitatively. The existence of site-specific and non-site-specific (master) correlations between VOC concentrations in tree tissue and groundwater is investigated using data collected from several phytoscreening studies. The aggregated data comprise 100 measurements collected from 12 sites that span a wide range of site conditions. Significant site-specific correlations are observed between tetrachloroethene (PCE) and trichloroethene (TCE) concentrations measured for tree tissue and those measured in groundwater for three sites. A moderately significant correlation (r² = 0.56) exists for the entire aggregate data set. Parsing the data by groundwater depth produced a highly significant correlation (r² = 0.88) for sites with shallow (<4 m) groundwater. Such a significant correlation for data collected by different investigators from multiple sites with a wide range of tree species and subsurface conditions indicates that groundwater concentration is the predominant factor mediating tree-tissue concentrations for these sites. This may be a result of trees likely directly tapping groundwater for these shallow groundwater conditions. This master correlation may provide reasonable order-of-magnitude estimates of VOC concentrations in groundwater for such sites, thereby allowing the use of phytoscreening in a more quantitative mode. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Pal, Mayukha; Madhusudana Rao, P.; Manimaran, P.
2014-12-01
We apply the recently developed multifractal detrended cross-correlation analysis method to investigate the cross-correlation behavior and fractal nature between two non-stationary time series. We analyze the daily return prices of gold, West Texas Intermediate and Brent crude oil, and foreign exchange rate data, over a period of 18 years. The cross-correlation has been measured quantitatively from the Hurst scaling exponents and the singularity spectrum. From the results, the existence of multifractal cross-correlation between all of these time series is found. We also found that the cross-correlation between gold and oil prices shows uncorrelated behavior, while the remaining bivariate time series show persistent behavior. It was observed for five bivariate series that the cross-correlation exponents are less than the calculated average generalized Hurst exponents (GHE) for q<0 and greater than the GHE for q>0, while for one bivariate series the cross-correlation exponent is greater than the GHE for all q values.
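For readers unfamiliar with the procedure, the sketch below computes the q = 2 detrended cross-correlation fluctuation function for two synthetic series and estimates the bivariate scaling exponent from its log-log slope. It is a stripped-down illustration of the DCCA step underlying the multifractal analysis, not the authors' full MF-DXA pipeline; the series, scales and noise levels are made up.

```python
import numpy as np

def dcca_fluctuation(x, y, scales):
    """Detrended cross-correlation fluctuation F(s) for two series (q = 2 only).
    The slope of log F(s) vs log s estimates the bivariate scaling exponent."""
    X = np.cumsum(x - x.mean())
    Y = np.cumsum(y - y.mean())
    F = []
    for s in scales:
        n_seg = len(X) // s
        f2 = []
        for v in range(n_seg):
            seg = slice(v * s, (v + 1) * s)
            t = np.arange(s)
            # local linear detrending of both profiles within the window
            dx = X[seg] - np.polyval(np.polyfit(t, X[seg], 1), t)
            dy = Y[seg] - np.polyval(np.polyfit(t, Y[seg], 1), t)
            f2.append(np.mean(dx * dy))
        F.append(np.sqrt(np.mean(np.abs(f2))))
    return np.array(F)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    common = rng.normal(size=4096)
    x = common + 0.5 * rng.normal(size=4096)
    y = common + 0.5 * rng.normal(size=4096)
    scales = np.array([16, 32, 64, 128, 256])
    F = dcca_fluctuation(x, y, scales)
    hxy = np.polyfit(np.log(scales), np.log(F), 1)[0]
    print(round(hxy, 2))   # ~0.5 for white-noise increments (random-walk profiles)
```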
Helsel, D.R.
2006-01-01
The most commonly used method in environmental chemistry to deal with values below detection limits is to substitute a fraction of the detection limit for each nondetect. Two decades of research have shown that this fabrication of values produces poor estimates of statistics and commonly obscures patterns and trends in the data. Papers using substitution may conclude that significant differences, correlations, and regression relationships do not exist, when in fact they do. The reverse may also be true. Fortunately, good alternative methods for dealing with nondetects already exist, and are summarized here with references to original sources. Substituting values for nondetects should be used rarely, and should generally be considered unacceptable in scientific research. There are better ways.
The correlation structure of several popular pseudorandom number generators
NASA Technical Reports Server (NTRS)
Neuman, F.; Merrick, R.; Martin, C. F.
1973-01-01
One of the desirable properties of a pseudorandom number generator is that the sequence of numbers it generates should have very low autocorrelation for all shifts except for zero shift and those that are multiples of its cycle length. Due to the simple methods of constructing random numbers, this ideal is often not quite fulfilled. A simple method of examining any random number generator for previously unsuspected regularities is discussed. Once they are discovered, it is often easy to derive the mathematical relationships which describe the regular behavior. As examples, it is shown that high correlation exists in mixed and multiplicative congruential random number generators and prime-modulus Lehmer generators for shifts that are a fraction of their cycle lengths.
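The half-cycle regularity of a prime-modulus Lehmer generator is easy to reproduce: because x_{i+k} = (a^k mod m) x_i mod m, the sequence half a cycle ahead can be evaluated by jump-ahead without generating the whole cycle, and for the "minimal standard" parameters it is an exact mirror image of the original. The snippet below is an illustrative check of this kind of regularity, not the authors' analysis.

```python
import numpy as np

def lehmer_corr_at_shift(a=16807, m=2**31 - 1, shift=None, n=200_000, seed=12345):
    """Correlation between a prime-modulus Lehmer sequence and itself shifted by a
    fixed lag. The jump-ahead identity x_{i+k} = (a^k mod m) * x_i mod m lets us
    evaluate huge shifts (e.g. half the cycle length) without generating the
    whole cycle."""
    if shift is None:
        shift = (m - 1) // 2           # half of the full cycle length m - 1
    a_k = pow(a, shift, m)             # multiplier that jumps 'shift' steps ahead
    x = np.empty(n, dtype=np.int64)
    xi = seed
    for i in range(n):
        xi = (a * xi) % m
        x[i] = xi
    y = (a_k * x) % m                  # the same sequence, 'shift' steps later
    return np.corrcoef(x / m, y / m)[0, 1]

if __name__ == "__main__":
    # a = 16807 is a primitive root mod 2^31 - 1, so a^((m-1)/2) = m - 1 = -1 (mod m)
    # and the half-cycle-shifted sequence is m - x: a perfect mirror image.
    print(round(lehmer_corr_at_shift(), 3))   # ~ -1.0
```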
Quantum correlation exists in any non-product state
Guo, Yu; Wu, Shengjun
2014-01-01
Simultaneous existence of correlation in complementary bases is a fundamental feature of quantum correlation, and we show that this characteristic is present in any non-product bipartite state. We propose a measure via mutually unbiased bases to study this feature of quantum correlation, and compare it with other measures of quantum correlation for several families of bipartite states. PMID:25434458
Degree Correlations Optimize Neuronal Network Sensitivity to Sub-Threshold Stimuli
Schmeltzer, Christian; Kihara, Alexandre Hiroaki; Sokolov, Igor Michailovitsch; Rüdiger, Sten
2015-01-01
Information processing in the brain crucially depends on the topology of the neuronal connections. We investigate how the topology influences the response of a population of leaky integrate-and-fire neurons to a stimulus. We devise a method to calculate firing rates from a self-consistent system of equations taking into account the degree distribution and degree correlations in the network. We show that assortative degree correlations strongly improve the sensitivity for weak stimuli and propose that such networks possess an advantage in signal processing. We moreover find that there exists an optimum in assortativity at an intermediate level leading to a maximum in input/output mutual information. PMID:26115374
Spatial correlation of auroral zone geomagnetic variations
NASA Astrophysics Data System (ADS)
Jackel, B. J.; Davalos, A.
2016-12-01
Magnetic field perturbations in the auroral zone are produced by a combination of distant ionospheric and local ground induced currents. Spatial and temporal structure of these currents is scientifically interesting and can also have a significant influence on critical infrastructure. Ground-based magnetometer networks are an essential tool for studying these phenomena, with the existing complement of instruments in Canada providing extended local time coverage. In this study we examine the spatial correlation between magnetic field observations over a range of scale lengths. Principal component and canonical correlation analysis are used to quantify relationships between multiple sites. Results could be used to optimize network configurations, validate computational models, and improve methods for empirical interpolation.
[The problem of economic and population development (author's transl)].
Maier, W
1977-04-01
The question is raised whether the apparent complexity and differentiation excludes chance. Generally, a chance occurrence is one without prior determination. The laws established by economics and demography with mathematical methods are therefore not capable of determining all facts, because of their hypothetical arrangement. Reality can only be expressed in a categorical judgment. It is possible to pass from hypothetical judgments (if A, then B) to the existence of a categorical B. Any theory of economic balance, and any calculation in a demographic model aimed at a balanced population development, are methods which correlate facts with facts and allow conclusions only when the correlation actually exists. However, no conclusion can be drawn that generative behavior is determined by economic production. Conversely, the conclusion that sociological developments and generative behavior are completely independent of economic production is also not possible. Hypothetical thinking within a model does not exclude chance, and each fact has an element of chance in it. This appears to be a heuristic circle. Noncircular conclusions for the practice of life, needed in order to advise, decide and act, do not exist without assumptions and without value judgments. In order to influence the development of the population, continued scientific reflection, observation and argumentation are necessary. Correct technical functioning is translated into social realms. New reflections and interpretations are constantly required.
Probabilistic finite elements for fatigue and fracture analysis
NASA Technical Reports Server (NTRS)
Belytschko, Ted; Liu, Wing Kam
1992-01-01
Attention is focused on the development of the Probabilistic Finite Element Method (PFEM), which combines the finite element method with statistics and reliability methods, and on its application to linear and nonlinear structural mechanics problems and fracture mechanics problems. A computational tool based on the Stochastic Boundary Element Method is also given for the reliability analysis of curvilinear fatigue crack growth. The existing PFEMs have been applied to solve two types of problems: (1) determination of the response uncertainty in terms of the means, variances and correlation coefficients; and (2) determination of the probability of failure associated with prescribed limit states.
NASA Astrophysics Data System (ADS)
Dong, Xiabin; Huang, Xinsheng; Zheng, Yongbin; Bai, Shengjian; Xu, Wanying
2014-07-01
Infrared moving target detection is an important part of infrared technology. We introduce a novel method for detecting small moving infrared targets based on tracking interest points under complicated backgrounds. First, Difference of Gaussians (DOG) filters are used to detect a group of interest points (including the moving targets). Second, a small-target tracking method inspired by the Human Visual System (HVS) is used to track these interest points over several frames, and the correlations between interest points in the first frame and the last frame are obtained. Finally, a new clustering method named R-means is proposed to divide these interest points into two groups according to the correlations, one of target points and the other of background points. In the experiments, the target-to-clutter ratio (TCR) and receiver operating characteristic (ROC) curves are computed to compare the performance of the proposed method with five other sophisticated methods. The results show that the proposed method discriminates targets from clutter better and has a lower false alarm rate than the existing moving target detection methods.
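A minimal version of the first stage, DOG filtering followed by thresholded local maxima to pick candidate interest points, might look like the scipy-based sketch below. The sigmas, threshold and synthetic frame are made-up assumptions, and the HVS tracking and R-means clustering stages are not reproduced.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def dog_interest_points(frame, sigma1=1.0, sigma2=2.0, thresh=3.0):
    """Difference-of-Gaussians detector for candidate small targets in one IR frame.
    Returns (row, col) coordinates of local DOG maxima above a threshold."""
    f = frame.astype(float)
    dog = gaussian_filter(f, sigma1) - gaussian_filter(f, sigma2)
    # keep pixels that are both a local maximum in a 5x5 neighbourhood and above threshold
    peaks = (dog == maximum_filter(dog, size=5)) & (dog > thresh)
    return np.argwhere(peaks)

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    frame = rng.normal(0, 1, (128, 128))     # cluttered background
    frame[40, 60] += 40.0                    # a bright point-like target
    print(dog_interest_points(frame))        # should report a point at/near (40, 60)
```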
The Correlation between Global Citizenship Perceptions and Cultural Intelligence Levels of Teachers
ERIC Educational Resources Information Center
Yüksel, Azize; Eres, Figen
2018-01-01
The increase of communication methods in the globalized world, the reduction of locality to a minimum in the economy and as a result of this, the migration from less economically developed countries to developed countries which in turn results in close interaction between ethnicities, all make it impossible for a homogenous society to exist and…
ERIC Educational Resources Information Center
Samruayruen, Buncha
2013-01-01
The purpose of this study was to identify the existing level of self-regulated learning (SRL) among Thai online learners, to examine the relationship between SRL and academic achievement based on a) course completion and b) course grades, and to investigate differences in SRL as they correlate to demographic factors. A mixed-methods research…
Flow Boiling Critical Heat Flux in Reduced Gravity
NASA Technical Reports Server (NTRS)
Mudawar, Issam; Zhang, Hui; Hasan, Mohammad M.
2004-01-01
This study provides a systematic method for reducing power consumption in reduced gravity systems by adopting the minimum velocity required to provide adequate CHF and preclude the detrimental effects of reduced gravity. This study also proves that it is possible to use existing 1 g_e (Earth gravity) flow boiling and CHF correlations and models to design reduced gravity systems, provided the minimum velocity criteria are met.
3D Markov Process for Traffic Flow Prediction in Real-Time.
Ko, Eunjeong; Ahn, Jinyoung; Kim, Eun Yi
2016-01-25
Recently, the correct estimation of traffic flow has begun to be considered an essential component in intelligent transportation systems. In this paper, a new statistical method to predict traffic flows using time series analyses and geometric correlations is proposed. The novelty of the proposed method is two-fold: (1) a 3D heat map is designed to describe the traffic conditions between roads, which can effectively represent the correlations between spatially- and temporally-adjacent traffic states; and (2) the relationship between the adjacent roads on the spatiotemporal domain is represented by cliques in a Markov random field (MRF), and the clique parameters are obtained by example-based learning. In order to assess the validity of the proposed method, it is tested using data from expressway traffic that are provided by the Korean Expressway Corporation, and the performance of the proposed method is compared with existing approaches. The results demonstrate that the proposed method can predict traffic conditions with an accuracy of 85%, and this accuracy can be improved further.
Carbon isotope ratios and isotopic correlations between components in fruit juices
NASA Astrophysics Data System (ADS)
Wierzchnicki, Ryszard
2013-04-01
Nowadays food products are defined by geographical origin, method of production and by regulations concerning the terms of their authenticity. Important data for confirming the authenticity of a product are provided by isotopic methods of food control. The method checks crucial criteria which characterize the authenticity of the inspected product. The European Union regulations clearly show the tendency toward application of isotopic methods for food authenticity control (wine, honey, juice). The aim of these legislative steps is the protection of the European market from possible commercial fraud. Isotope ratio mass spectrometry is a very effective tool for distinguishing food products of various geographical origins. The basic problem in identifying the origin of a sample is the lack of databases of the isotopic composition of components and of information about the correlations among the data. The subject of this work was to study the isotopic correlations existing between components of fruits. Chemical and instrumental methods for separating water, sugars, organic acids and pulp from fruit were implemented. The IRMS technique was used to measure the isotopic composition of the samples. The final results for original samples of fruits (apple, strawberry, etc.) will be presented and discussed. Acknowledgement: This work was supported by the Polish Ministry of Science and Higher Education under grant NR12-0043-10/2010.
Hua, Rui; Sun, Su-Qin; Zhou, Qun; Noda, Isao; Wang, Bao-Qin
2003-09-19
Fritillaria is a traditional Chinese herbal medicine with a long history of use for eliminating phlegm and relieving cough in China and some other Asian countries. The objective of this study is to develop a nondestructive and accurate method to discriminate Fritillaria of different geographical origins, which is troublesome with existing analytical methods. We conducted a systematic study on five kinds of Fritillaria by Fourier transform infrared spectroscopy, second derivative infrared spectroscopy, and two-dimensional (2D) correlation infrared spectroscopy under thermal perturbation. Because Fritillaria bulbs contain a large amount of starch, the conventional IR spectra of different Fritillaria show only very limited spectral differences. Based on these differences, we can separate different Fritillaria to a limited extent, but this method was deemed not very practical. The second derivative IR spectra of Fritillaria could enhance spectral resolution, amplify the differences between the IR spectra of different Fritillaria, and reveal some dissimilarity in their starch content when compared with the spectrum of pure starch. Finally, we applied thermal perturbation to Fritillaria and analyzed the resulting spectra by the 2D correlation method to distinguish different Fritillaria easily and clearly. The distinction of very similar Fritillaria was possible because the spectral resolution was greatly enhanced by the 2D correlation spectroscopy. In addition, with the dynamic information on molecular structure provided by 2D correlation IR spectra, we studied the differences in the stability of the active components of Fritillaria. The differences were reflected mainly in the intensity ratio of the auto-peak at 985 cm(-1) to the other auto-peaks. The 2D correlation IR spectroscopy (2D IR) of Fritillaria can thus be a new and powerful method to discriminate Fritillaria.
Ma, Liheng; Zhan, Dejun; Jiang, Guangwen; Fu, Sihua; Jia, Hui; Wang, Xingshu; Huang, Zongsheng; Zheng, Jiaxing; Hu, Feng; Wu, Wei; Qin, Shiqiao
2015-09-01
The attitude accuracy of a star sensor decreases rapidly when star images become motion-blurred under dynamic conditions. Existing techniques concentrate on a single frame of star images to solve this problem, and improvements are obtained to a certain extent. An attitude-correlated frames (ACF) approach, which concentrates on the features of the attitude transforms of adjacent star image frames, is proposed to improve upon the existing techniques. The attitude transforms between different star image frames are measured precisely by the strap-down gyro unit. With the ACF method, a much larger star image frame is obtained through the combination of adjacent frames. As a result, the degradation of attitude accuracy caused by motion blurring is compensated for. The improvement of the attitude accuracy is approximately proportional to the square root of the number of correlated star image frames. Simulations and experimental results indicate that the ACF approach is effective in removing random noise and improving the attitude determination accuracy of the star sensor under highly dynamic conditions.
THE RELATIONSHIP BETWEEN VARIOUS MODES OF SINGLE LEG POSTURAL CONTROL ASSESSMENT
Schmitz, Randy
2012-01-01
Purpose/Background: While various techniques have been developed to assess the postural control system, little is known about the relationship between single leg static and functional balance. The purpose of the current study was to determine the relationship between the performance measures of several single leg postural stability tests. Methods: Forty-six recreationally active college students (17 males, 29 females, 21±3 yrs, 173±10 cm) performed six single leg tests in a counterbalanced order: 1) Firm Surface-Eyes Open, 2) Firm Surface-Eyes Closed, 3) Multiaxial Surface-Eyes Open, 4) Multiaxial Surface-Eyes Closed, 5) Star Excursion Balance Test (posterior medial reach), 6) Single Leg Hop-Stabilization Test. Bivariate correlations were conducted between the six outcome variables. Results: Mild to moderate correlations existed between the static tests. No significant correlations existed involving either of the functional tests. Conclusions: The results indicate that while performances on static balance tasks are mildly to moderately related, they appear to be unrelated to functional reaching or hopping movements, supporting the utilization of a battery of tests to determine overall postural control performance. Level of Evidence: 3b PMID:22666640
An Adaptive Association Test for Multiple Phenotypes with GWAS Summary Statistics.
Kim, Junghi; Bai, Yun; Pan, Wei
2015-12-01
We study the problem of testing for single marker-multiple phenotype associations based on genome-wide association study (GWAS) summary statistics without access to individual-level genotype and phenotype data. For most published GWASs, because obtaining summary data is substantially easier than accessing individual-level phenotype and genotype data, while often multiple correlated traits have been collected, the problem studied here has become increasingly important. We propose a powerful adaptive test and compare its performance with some existing tests. We illustrate its applications to analyses of a meta-analyzed GWAS dataset with three blood lipid traits and another with sex-stratified anthropometric traits, and further demonstrate its potential power gain over some existing methods through realistic simulation studies. We start from the situation with only one set of (possibly meta-analyzed) genome-wide summary statistics, then extend the method to meta-analysis of multiple sets of genome-wide summary statistics, each from one GWAS. We expect the proposed test to be useful in practice as more powerful than or complementary to existing methods. © 2015 WILEY PERIODICALS, INC.
Yubo Wang; Tatinati, Sivanagaraja; Liyu Huang; Kim Jeong Hong; Shafiq, Ghufran; Veluvolu, Kalyana C; Khong, Andy W H
2017-07-01
Extracranial robotic radiotherapy employs external markers and a correlation model to trace tumor motion caused by respiration. Real-time tracking of tumor motion, however, requires a prediction model to compensate for the latencies induced by the software (image data acquisition and processing) and hardware (mechanical and kinematic) limitations of the treatment system. A new prediction algorithm based on local receptive fields extreme learning machines (pLRF-ELM) is proposed for respiratory motion prediction. Existing respiratory motion prediction methods model the non-stationary respiratory motion traces directly to predict future values. Unlike these methods, pLRF-ELM performs prediction by modeling higher-level features obtained by mapping the raw respiratory motion into the random feature space of the ELM, instead of modeling the raw respiratory motion directly. The developed method is evaluated using a dataset acquired from 31 patients for two prediction horizons in line with the latencies of treatment systems such as CyberKnife. Results showed that pLRF-ELM is superior to existing prediction methods. Results further highlight that the abstracted higher-level features are suitable for approximating the nonlinear and non-stationary characteristics of respiratory motion for accurate prediction.
Wiener Chaos and Nonlinear Filtering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lototsky, S.V.
2006-11-15
The paper discusses two algorithms for solving the Zakai equation in the time-homogeneous diffusion filtering model with possible correlation between the state process and the observation noise. Both algorithms rely on the Cameron-Martin version of the Wiener chaos expansion, so that the approximate filter is a finite linear combination of the chaos elements generated by the observation process. The coefficients in the expansion depend only on the deterministic dynamics of the state and observation processes. For real-time applications, computing the coefficients in advance improves the performance of the algorithms in comparison with most other existing methods of nonlinear filtering. The paper summarizes the main existing results about these Wiener chaos algorithms and resolves some open questions concerning the convergence of the algorithms in the noise-correlated setting. The presentation includes the necessary background on the Wiener chaos and optimal nonlinear filtering.
Fast large scale structure perturbation theory using one-dimensional fast Fourier transforms
Schmittfull, Marcel; Vlah, Zvonimir; McDonald, Patrick
2016-05-01
The usual fluid equations describing the large-scale evolution of mass density in the universe can be written as local in the density, velocity divergence, and velocity potential fields. As a result, the perturbative expansion in small density fluctuations, usually written in terms of convolutions in Fourier space, can be written as a series of products of these fields evaluated at the same location in configuration space. Based on this, we establish a new method to numerically evaluate the 1-loop power spectrum (i.e., Fourier transform of the 2-point correlation function) with one-dimensional fast Fourier transforms. This is exact and a few orders of magnitude faster than previously used numerical approaches. Numerical results of the new method are in excellent agreement with the standard quadrature integration method. This fast model evaluation can in principle be extended to higher loop order where existing codes become painfully slow. Our approach follows by writing higher order corrections to the 2-point correlation function as, e.g., the correlation between two second-order fields or the correlation between a linear and a third-order field. These are then decomposed into products of correlations of linear fields and derivatives of linear fields. In conclusion, the method can also be viewed as evaluating three-dimensional Fourier space convolutions using products in configuration space, which may also be useful in other contexts where similar integrals appear.
Hu, Hongxing; Alsron, Bahargul; Xu, Bin; Hao, Wei
2016-12-25
The Cognitive Emotion Regulation Questionnaire (CERQ) is a cognitive and emotional tool measuring how individuals deal with stressful life events. However differences exist in the results of CERQ among individuals. This study was conducted to investigate the CERQ results and depressive symptoms of students at our university (both local and international students) in order to provide further guidance for psychological interventions. 255 sophomore and junior international students (171 male and 84 female) and 262 sophomore and junior Chinese students (124 male and 138 female) were investigated using CERQ, ASLEC and SDS questionnaires. Results were analyzed using SPSS 16.0. Compared to Chinese students, international students more often used cognitive adjustment methods such as "positive refocusing", "re-focus on planning" and "catastrophizing". In regression equations where depression symptoms were used as the dependent variable, "self-blaming" and "catastrophizing" positively contributed to depression symptoms in international students, while "acceptance" was negatively correlated with depression symptoms. In Chinese students, "life events score" and "catastrophizing" were positively correlated with depression symptoms, while "positive re-evaluating" was negatively correlated with depression symptoms. Among students of different races, positive coping methods were negatively correlated with depression symptoms and could possibly prevent the occurrence of depression, while negative coping methods were positively correlated with depression. Encouraging students to use adaptive coping methods during psychological intervention is an effective way to adjust cognitions and behavior for depression prevention.
Modified schirmer test--a screening tool for xerostomia among subjects on antidepressants.
Kumar, Nerella Narendra; Panchaksharappa, Mamatha Gowda; Annigeri, Rajeshwari G
2014-08-01
The aim of the present study is to assess the salivary flow rate in subjects on antidepressant medications and compare it with healthy controls, to assess the unstimulated salivary flow rate by the modified Schirmer test (MST) and by a volumetric (spitting) method for the evaluation of xerostomia, and to determine whether any correlation exists between the two methods. Thirty subjects who were on antidepressants were divided into two groups, tricyclic antidepressants (TCA) and selective serotonin reuptake inhibitors (SSRI), of 15 each, and compared with 30 age- and gender-matched controls. Unstimulated salivary flow rate was measured by both MST and the spitting method. The unstimulated salivary flow rate measured by MST at the end of the 3rd minute was 13.7 ± 10.08, 19.86 ± 8.95 and 31.0 ± 5.4 mm, and by the spitting method was 0.12 ± 0.07, 0.19 ± 0.10 and 0.30 ± 0.75 ml/min in TCA, SSRI and controls respectively (p<0.001). The Pearson correlation coefficient of r=0.85 shows excellent correlation between the two screening tests. Sensitivity and specificity of MST were 90.9% and 31.5%. Salivary flow rate was lower in antidepressant subjects when compared to the healthy controls. Results of the present study showed an excellent correlation between the two screening tests, which suggests that MST can be routinely used as a chair-side screening tool to evaluate hyposalivation; it is time-saving, patient-friendly and specific to salivary secretions. Copyright © 2014 Elsevier Ltd. All rights reserved.
Jain, Rahi; Venkatasubramanian, Padma
2014-01-01
Quality Ayurvedic herbal medicines are potential, low-cost solutions for addressing contemporary healthcare needs of both the Indian and global community. Correlating Ayurvedic herbal preparations with modern processing principles (MPPs) can help in developing new technology, and in using appropriate existing technology, for scaling up production of the medicines, which is necessary to meet the growing demand. Understanding the fundamental Ayurvedic principles behind formulation and processing is also important for improving the dosage forms. Even though the Ayurvedic industry has adopted technologies from the food, chemical and pharmaceutical industries, there is no systematic study correlating the traditional and modern processing methods. This study is an attempt to provide a possible correlation between the Ayurvedic processing methods and MPPs. A systematic literature review was performed to identify the Ayurvedic processing methods by collecting information from English editions of classical Ayurveda texts on medicine preparation methods. Correlation between traditional and modern processing principles was done based on the techniques used in Ayurvedic drug processing. It was observed that in Ayurvedic medicine preparations there were two major types of processes, namely extraction and separation. Extraction uses membrane rupturing and solute diffusion principles, while separation uses volatility, adsorption, and size-exclusion principles. The study provides systematic documentation of methods used in Ayurveda for herbal drug preparation along with their interpretation in terms of MPPs. This is the first step towards enabling the improvement or replacement of traditional techniques. New technologies, or existing technologies, can be used to improve the dosage forms and scale up production while maintaining Ayurvedic principles, similar to traditional techniques.
Correlation among oxygen vacancies in bismuth titanate ferroelectric ceramics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li Wei; Chen Kai; Yao Yangyang
2004-11-15
Pure Bi4Ti3O12 ceramics were prepared using the conventional solid-state reaction method and their dielectric properties were investigated. A dielectric loss peak with the relaxation-type characteristic was observed at about 370 K at 100 Hz frequency. This peak was confirmed to be associated with the migration of oxygen vacancies inside ceramics. The Cole-Cole fitting to this peak reveals a strong correlation among oxygen vacancies and this strong correlation is considered to commonly exist among oxygen vacancies in ferroelectrics. Therefore, the migration of oxygen vacancies in ferroelectric materials would demonstrate a collective behavior instead of an individual one due to this strong correlation. Furthermore, this correlation is in proportion to the concentration and in inverse proportion to the activation energy of oxygen vacancies. These results could be helpful to the understanding of the fatigue mechanisms in ferroelectric materials.
Bruno, Thiers; Abrahão, Julia
2012-01-01
This study examines the actions taken by operators aimed at preventing and combating information security incidents at a banking organization. The work utilizes the theoretical framework of ergonomics and cognitive psychology. The method is workplace ergonomic analysis. Its focus is directed towards examining the cognitive dimension of the work environment, with special attention to the occurrence of correlations between variability in incident frequency and the results of signal detection actions. It categorizes 45,142 operator decisions according to the theory of signal detection (Sternberg, 2000). It analyzes the correlation between incident proportions (indirectly associated with the cognitive efforts demanded from the operator) and operator decisions. The study demonstrated the existence of a positive correlation between incident proportions and false-positive decisions (false alarms). However, this correlation could not be observed in relation to decisions of the false-negative type (incorrect rejections).
The Origin and Evolution of the Galaxy Star Formation Rate-Stellar Mass Correlation
NASA Astrophysics Data System (ADS)
Gawiser, Eric; Iyer, Kartheik
2018-01-01
The existence of a tight correlation between galaxies’ star formation rates and stellar masses is far more surprising than usually noted. However, a simple analytical calculation illustrates that the evolution of the normalization of this correlation is driven primarily by the inverse age of the universe, and that the underlying correlation is one between galaxies’ instantaneous star formation rates and their average star formation rates since the Big Bang.Our new Dense Basis method of SED fitting (Iyer & Gawiser 2017, ApJ 838, 127) allows star formation histories (SFHs) to be reconstructed, along with uncertainties, for >10,000 galaxies in the CANDELS and 3D-HST catalogs at 0.5
DGCA: A comprehensive R package for Differential Gene Correlation Analysis.
McKenzie, Andrew T; Katsyv, Igor; Song, Won-Min; Wang, Minghui; Zhang, Bin
2016-11-15
Dissecting the regulatory relationships between genes is a critical step towards building accurate predictive models of biological systems. A powerful approach towards this end is to systematically study the differences in correlation between gene pairs in more than one distinct condition. In this study we develop an R package, DGCA (for Differential Gene Correlation Analysis), which offers a suite of tools for computing and analyzing differential correlations between gene pairs across multiple conditions. To minimize parametric assumptions, DGCA computes empirical p-values via permutation testing. To understand differential correlations at a systems level, DGCA performs higher-order analyses such as measuring the average difference in correlation and multiscale clustering analysis of differential correlation networks. Through a simulation study, we show that the straightforward z-score based method that DGCA employs significantly outperforms the existing alternative methods for calculating differential correlation. Application of DGCA to the TCGA RNA-seq data in breast cancer not only identifies key changes in the regulatory relationships between TP53 and PTEN and their target genes in the presence of inactivating mutations, but also reveals an immune-related differential correlation module that is specific to triple negative breast cancer (TNBC). DGCA is an R package for systematically assessing the difference in gene-gene regulatory relationships under different conditions. This user-friendly, effective, and comprehensive software tool will greatly facilitate the application of differential correlation analysis in many biological studies and thus will help identification of novel signaling pathways, biomarkers, and targets in complex biological systems and diseases.
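The z-score comparison that the abstract refers to is the standard Fisher transformation test for a difference between two correlations; a minimal sketch of that test is given below. DGCA itself is an R package and additionally computes empirical p-values by permutation, which this snippet does not; the synthetic expression vectors are assumptions for the demo.

```python
import numpy as np
from scipy import stats

def diff_corr_z(x_a, y_a, x_b, y_b):
    """Fisher z-test for a difference in Pearson correlation of the same gene pair
    between two conditions A and B (the textbook z-score comparison the abstract
    says DGCA builds on)."""
    r_a = np.corrcoef(x_a, y_a)[0, 1]
    r_b = np.corrcoef(x_b, y_b)[0, 1]
    z_a, z_b = np.arctanh(r_a), np.arctanh(r_b)          # Fisher transformation
    se = np.sqrt(1.0 / (len(x_a) - 3) + 1.0 / (len(x_b) - 3))
    z = (z_a - z_b) / se
    p = 2 * stats.norm.sf(abs(z))                        # two-sided normal p-value
    return r_a, r_b, z, p

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    n = 200
    x_a = rng.normal(size=n); y_a = 0.8 * x_a + 0.6 * rng.normal(size=n)  # correlated in A
    x_b = rng.normal(size=n); y_b = rng.normal(size=n)                     # uncorrelated in B
    print(diff_corr_z(x_a, y_a, x_b, y_b))
```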
Physiologic measures of sexual function in women: a review.
Woodard, Terri L; Diamond, Michael P
2009-07-01
To review and describe physiologic measures of assessing sexual function in women. Literature review. Studies that use instruments designed to measure female sexual function. Women participating in studies of female sexual function. Various instruments that measure physiologic features of female sexual function. Appraisal of the various instruments, including their advantages and disadvantages. Many unique physiologic methods of evaluating female sexual function have been developed during the past four decades. Each method has its benefits and limitations. Many physiologic methods exist, but most are not well-validated. In addition there has been an inability to correlate most physiologic measures with subjective measures of sexual arousal. Furthermore, given the complex nature of the sexual response in women, physiologic measures should be considered in context of other data, including the history, physical examination, and validated questionnaires. Nonetheless, the existence of appropriate physiologic measures is vital to our understanding of female sexual function and dysfunction.
Johnson, Quentin R; Lindsay, Richard J; Shen, Tongye
2018-02-21
A computational method which extracts the dominant motions from an ensemble of biomolecular conformations via a correlation analysis of residue-residue contacts is presented. The algorithm first renders the structural information into contact matrices, then constructs the collective modes based on the correlated dynamics of a selected set of dynamic contacts. Associated programs can bridge the results for further visualization using graphics software. The aim of this method is to provide an analysis of conformations of biopolymers from the contact viewpoint. It may assist a systematical uncovering of conformational switching mechanisms existing in proteins and biopolymer systems in general by statistical analysis of simulation snapshots. In contrast to conventional correlation analyses of Cartesian coordinates (such as distance covariance analysis and Cartesian principal component analysis), this program also provides an alternative way to locate essential collective motions in general. Herein, we detail the algorithm in a stepwise manner and comment on the importance of the method as applied to decoding allosteric mechanisms. © 2018 Wiley Periodicals, Inc. © 2018 Wiley Periodicals, Inc.
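A generic sketch of the idea, turning each snapshot into a binary residue-residue contact vector and extracting collective "contact modes" by PCA, is shown below. It assumes synthetic C-alpha coordinates and a made-up distance cutoff, and it is not the authors' program.

```python
import numpy as np

def contact_pca(frames, cutoff=8.0, n_modes=2):
    """Principal components of residue-residue contact fluctuations across an
    ensemble of conformations.

    frames : (n_frames, n_residues, 3) array of (e.g. C-alpha) coordinates
    Each frame is turned into a binary contact vector (upper triangle of the
    contact matrix); PCA on the mean-centred vectors yields collective modes.
    """
    n_frames, n_res, _ = frames.shape
    iu = np.triu_indices(n_res, k=1)
    contacts = np.empty((n_frames, iu[0].size))
    for f in range(n_frames):
        d = np.linalg.norm(frames[f, :, None, :] - frames[f, None, :, :], axis=-1)
        contacts[f] = (d[iu] < cutoff).astype(float)
    centred = contacts - contacts.mean(axis=0)
    # PCA via SVD of the frame-by-contact fluctuation matrix
    _, s, vt = np.linalg.svd(centred, full_matrices=False)
    return vt[:n_modes], (s**2) / np.sum(s**2)

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    base = rng.normal(scale=5.0, size=(20, 3))                 # a toy 20-residue structure
    frames = base + rng.normal(scale=0.8, size=(100, 20, 3))   # 100 perturbed snapshots
    modes, var_frac = contact_pca(frames)
    print(modes.shape, np.round(var_frac[:3], 2))
```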
Method of determining forest production from remotely sensed forest parameters
Corey, J.C.; Mackey, H.E. Jr.
1987-08-31
A method of determining forest production entirely from remotely sensed data in which remotely sensed multispectral scanner (MSS) data on forest composition is combined with remotely sensed radar imaging data on forest stand biophysical parameters to provide a measure of forest production. A high correlation has been found to exist between the remotely sensed radar imaging data and on-site measurements of biophysical parameters such as stand height, diameter at breast height, total tree height, mean area per tree, and timber stand volume.
Bedrosian, P.A.; Maercklin, N.; Weckmann, U.; Bartov, Y.; Ryberg, T.; Ritter, O.
2007-01-01
Magnetotelluric and seismic methods provide complementary information about the resistivity and velocity structure of the subsurface on similar scales and resolutions. No global relation, however, exists between these parameters, and correlations are often valid for only a limited target area. Independently derived inverse models from these methods can be combined using a classification approach to map geologic structure. The method employed is based solely on the statistical correlation of physical properties in a joint parameter space and is independent of theoretical or empirical relations linking electrical and seismic parameters. Regions of high correlation (classes) between resistivity and velocity can in turn be mapped back and re-examined in depth section. The spatial distribution of these classes, and the boundaries between them, provide structural information not evident in the individual models. This method is applied to a 10 km long profile crossing the Dead Sea Transform in Jordan. Several prominent classes are identified with specific lithologies in accordance with local geology. An abrupt change in lithology across the fault, together with vertical uplift of the basement suggest the fault is sub-vertical within the upper crust. © 2007 The Authors. Journal compilation © 2007 RAS.
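As a stand-in for the joint-parameter-space classification step, the sketch below clusters co-located (log-resistivity, velocity) pairs with k-means and maps the class labels back onto the model grid. The two-layer toy model and the choice of k-means are illustrative assumptions, not the specific classification used in the paper.

```python
import numpy as np
from scipy.cluster.vq import whiten, kmeans2

def classify_joint_models(log_resistivity, velocity, n_classes=4):
    """Cluster co-located (log-resistivity, velocity) pairs from two independent
    inverse models into classes; mapping the labels back onto the 2-D model grid
    highlights structure shared by both methods."""
    pts = np.column_stack([log_resistivity.ravel(), velocity.ravel()])
    pts = whiten(pts)                                # scale each property to unit variance
    _, labels = kmeans2(pts, n_classes, minit='points')
    return labels.reshape(log_resistivity.shape)

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    # toy 2-layer model: conductive slow upper layer over resistive fast basement
    res = np.vstack([rng.normal(1.0, 0.1, (10, 40)), rng.normal(3.0, 0.1, (10, 40))])
    vel = np.vstack([rng.normal(2.0, 0.1, (10, 40)), rng.normal(5.0, 0.1, (10, 40))])
    labels = classify_joint_models(res, vel, n_classes=2)
    print(np.unique(labels[:10]), np.unique(labels[10:]))   # the two layers fall into distinct classes
```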
Friesner, Daniel L.
2012-01-01
Objective. To determine whether a correlation exists between third-year PharmD students’ perceived pharmacy knowledge and actual pharmacy knowledge as assessed by the Pharmacy Curricular Outcomes Assessment (PCOA). Methods. In 2010 and 2011, the PCOA was administered in a low-stakes environment to third-year pharmacy students at North Dakota State University College of Pharmacy, Nursing, and Allied Sciences (COPNAS). A survey instrument was also administered on which students self-assessed their perceived competencies in each of the core areas covered by the PCOA examination. Results. The pharmacy students rated their competencies slightly higher than average. Performance on the PCOA was similar to but slightly higher than national averages. Correlations between each of the 4 content areas (basic biomedical sciences, pharmaceutical sciences, social/administrative sciences, and clinical sciences) mirrored those reported nationally by the National Association of Boards of Pharmacy (NABP). Student performance on the basic biomedical sciences portion of the PCOA was significantly correlated with students’ perceived competencies in the biomedical sciences. No other correlations between actual and perceived competencies were significant. Conclusion. A lack of correlation exists between what students perceive they know and what they actually know in the areas of pharmaceutical science; social, behavioral, and administrative science; and clinical science. Therefore, additional standardized measures are needed to assess curricular effectiveness and provide comparisons among pharmacy programs. PMID:22611272
Mounet, Fabien; Moing, Annick; Garcia, Virginie; Petit, Johann; Maucourt, Michael; Deborde, Catherine; Bernillon, Stéphane; Le Gall, Gwénaëlle; Colquhoun, Ian; Defernez, Marianne; Giraudel, Jean-Luc; Rolin, Dominique; Rothan, Christophe; Lemaire-Chamley, Martine
2009-01-01
Variations in early fruit development and composition may have major impacts on the taste and the overall quality of ripe tomato (Solanum lycopersicum) fruit. To get insights into the networks involved in these coordinated processes and to identify key regulatory genes, we explored the transcriptional and metabolic changes in expanding tomato fruit tissues using multivariate analysis and gene-metabolite correlation networks. To this end, we demonstrated and took advantage of the existence of clear structural and compositional differences between expanding mesocarp and locular tissue during fruit development (12–35 d postanthesis). Transcriptome and metabolome analyses were carried out with tomato microarrays and analytical methods including proton nuclear magnetic resonance and liquid chromatography-mass spectrometry, respectively. Pairwise comparisons of metabolite contents and gene expression profiles detected up to 37 direct gene-metabolite correlations involving regulatory genes (e.g. the correlations between glutamine, bZIP, and MYB transcription factors). Correlation network analyses revealed the existence of major hub genes correlated with 10 or more regulatory transcripts and embedded in a large regulatory network. This approach proved to be a valuable strategy for identifying specific subsets of genes implicated in key processes of fruit development and metabolism, which are therefore potential targets for genetic improvement of tomato fruit quality. PMID:19144766
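The gene-metabolite correlation step can be sketched as follows: Pearson correlations between every transcript profile and every metabolite profile across samples, keeping pairs above a threshold as candidate network edges. Names such as bZIP_TF and the threshold are hypothetical, and the study's multiple-testing correction and network analyses are not reproduced.

```python
import numpy as np

def gene_metabolite_edges(expr, metab, gene_names, metab_names, r_thresh=0.8):
    """Simple gene-metabolite correlation network: Pearson r between every
    transcript profile and every metabolite profile across samples, keeping
    pairs whose |r| exceeds a threshold."""
    n_samples = expr.shape[0]
    ez = (expr - expr.mean(0)) / expr.std(0)
    mz = (metab - metab.mean(0)) / metab.std(0)
    r = ez.T @ mz / n_samples                     # genes x metabolites correlation matrix
    return [(gene_names[i], metab_names[j], round(r[i, j], 2))
            for i, j in zip(*np.where(np.abs(r) >= r_thresh))]

if __name__ == "__main__":
    rng = np.random.default_rng(8)
    n = 24                                        # e.g. tissues x developmental stages
    glutamine = rng.normal(size=n)
    expr = np.column_stack([2.0 * glutamine + 0.3 * rng.normal(size=n),  # co-varies with glutamine
                            rng.normal(size=n)])                         # unrelated transcript
    metab = glutamine[:, None]
    print(gene_metabolite_edges(expr, metab, ["bZIP_TF", "unrelated_gene"], ["glutamine"]))
```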
[Correlation between soil-transmitted nematode infections and children's growth].
Wang, Xiao-Bing; Wang, Guo-Fei; Zhang, Lin-Xiu; Luo, Ren-Fu; Wang, Ju-Jun; Medina, Alexis; Eggleston, Karen; Rozelle, Scott; Smith, Scott
2013-06-01
To understand the infection status of soil-transmitted nematodes in southwest China and the correlation between soil-transmitted nematode infections and children's growth. The prevalence of soil-transmitted nematode infections was determined by the Kato-Katz technique, and in part of the children, the examination of Enterobius vermicularis eggs was performed using the cellophane swab method. The influencing factors were surveyed using a standardized questionnaire. The relationship between soil-transmitted nematode infections and children's growth was analyzed by the ordinary least squares (OLS) method. A total of 1,707 children were examined, with a soil-transmitted nematode infection rate of 22.2%. The results of the OLS analysis showed that there existed a negative correlation between soil-transmitted nematode infections and indexes of children's growth including BMI, the weight-for-age Z score and the height-for-age Z score. Furthermore, other correlated variables included age, gender, educational level of the mother and raising livestock and poultry, etc. Children's growth retardation is still a serious issue in the poor areas of southwest China and is correlated with soil-transmitted nematode infections. For improving children's growth, it is of great significance to enhance deworming and health education about parasitic diseases among mothers.
Microstructural Effects on Initiation Behavior in HMX
NASA Astrophysics Data System (ADS)
Molek, Christopher; Welle, Eric; Hardin, Barrett; Vitarelli, Jim; Wixom, Ryan; Samuels, Philip
Understanding the role microstructure plays on ignition and growth behavior has been the subject of a significant body of research within the detonation physics community. The pursuit of this understanding is important because safety and performance characteristics have been shown to strongly correlate to particle morphology. Historical studies have often correlated bulk powder characteristics to the performance or safety characteristics of pressed materials. We believe that a clearer and more relevant correlation is made between the pressed microstructure and the observed detonation behavior. This type of assessment is possible, as techniques now exist for the quantification of the pressed microstructures. Our talk will report on experimental efforts that correlate directly measured microstructural characteristics to initiation threshold behavior of HMX based materials. The internal microstructures were revealed using an argon ion cross-sectioning technique. This technique enabled the quantification of density and interface area of the pores within the pressed bed using methods of stereology. These bed characteristics are compared to the initiation threshold behavior of three HMX based materials using an electric gun based test method. Finally, a comparison of experimental threshold data to supporting theoretical efforts will be made.
A new theoretical approach to analyze complex processes in cytoskeleton proteins.
Li, Xin; Kolomeisky, Anatoly B
2014-03-20
Cytoskeleton proteins are filament structures that support a large number of important biological processes. These dynamic biopolymers exist in nonequilibrium conditions stimulated by hydrolysis chemical reactions in their monomers. Current theoretical methods provide a comprehensive picture of biochemical and biophysical processes in cytoskeleton proteins. However, the description is only qualitative under biologically relevant conditions because utilized theoretical mean-field models neglect correlations. We develop a new theoretical method to describe dynamic processes in cytoskeleton proteins that takes into account spatial correlations in the chemical composition of these biopolymers. Our approach is based on analysis of probabilities of different clusters of subunits. It allows us to obtain exact analytical expressions for a variety of dynamic properties of cytoskeleton filaments. By comparing theoretical predictions with Monte Carlo computer simulations, it is shown that our method provides a fully quantitative description of complex dynamic phenomena in cytoskeleton proteins under all conditions.
Structure and inference in annotated networks
Newman, M. E. J.; Clauset, Aaron
2016-01-01
For many networks of scientific interest we know both the connections of the network and information about the network nodes, such as the age or gender of individuals in a social network. Here we demonstrate how this ‘metadata' can be used to improve our understanding of network structure. We focus in particular on the problem of community detection in networks and develop a mathematically principled approach that combines a network and its metadata to detect communities more accurately than can be done with either alone. Crucially, the method does not assume that the metadata are correlated with the communities we are trying to find. Instead, the method learns whether a correlation exists and correctly uses or ignores the metadata depending on whether they contain useful information. We demonstrate our method on synthetic networks with known structure and on real-world networks, large and small, drawn from social, biological and technological domains. PMID:27306566
Zhou, Guoxu; Yang, Zuyuan; Xie, Shengli; Yang, Jun-Mei
2011-04-01
Online blind source separation (BSS) is proposed to overcome the high computational cost that limits the practical application of traditional batch BSS algorithms. However, existing online BSS methods are mainly used to separate independent or uncorrelated sources. Recently, nonnegative matrix factorization (NMF) has shown great potential for separating correlated sources, where constraints are often imposed to overcome the non-uniqueness of the factorization. In this paper, an incremental NMF with a volume constraint is derived and applied to online BSS. The volume constraint on the mixing matrix enhances the identifiability of the sources, while the incremental learning mode reduces the computational cost. The proposed method takes advantage of the natural-gradient-based multiplicative update rule, and it performs especially well in the recovery of dependent sources. Simulations of BSS on dual-energy X-ray images, online encrypted speech signals, and highly correlated face images show the validity of the proposed method.
Constraining compensated isocurvature perturbations using the CMB
NASA Astrophysics Data System (ADS)
Smith, Tristan L.; Smith, Rhiannon; Yee, Kyle; Munoz, Julian; Grin, Daniel
2017-01-01
Compensated isocurvature perturbations (CIPs) are variations in the cosmic baryon fraction which leave the total non-relativistic matter (and radiation) density unchanged. They are predicted by models of inflation which involve more than one scalar field, such as the curvaton scenario. At linear order, they leave the CMB two-point correlation function nearly unchanged: this is why existing constraints to CIPs are so much more permissive than constraints to typical isocurvature perturbations. Recent work articulated an efficient way to calculate the second order CIP effects on the CMB two-point correlation. We have implemented this method in order to explore constraints to the CIP amplitude using current Planck temperature and polarization data. In addition, we have computed the contribution of CIPs to the CMB lensing estimator which provides us with a novel method to use CMB data to place constraints on CIPs. We find that Planck data places a constraint to the CIP amplitude which is competitive with other methods.
Liang, Yunyun; Liu, Sanyang; Zhang, Shengli
2016-12-01
Apoptosis, or programmed cell death, plays a central role in the development and homeostasis of an organism. Obtaining information on the subcellular location of apoptosis proteins is very helpful for understanding the apoptosis mechanism. Predicting the subcellular localization of an apoptosis protein is still a challenging task, and existing methods are mainly based on protein primary sequences. In this paper, we introduce a new position-specific scoring matrix (PSSM)-based method that uses the detrended cross-correlation (DCCA) coefficient of non-overlapping windows. A 190-dimensional (190D) feature vector is then constructed on two widely used datasets, CL317 and ZD98, and a support vector machine is adopted as the classifier. To evaluate the proposed method, objective and rigorous jackknife cross-validation tests are performed on the two datasets. The results show that our approach offers a novel and reliable PSSM-based tool for the prediction of apoptosis protein subcellular localization. Copyright © 2016 Elsevier Inc. All rights reserved.
Tabu Search enhances network robustness under targeted attacks
NASA Astrophysics Data System (ADS)
Sun, Shi-wen; Ma, Yi-lin; Li, Rui-qi; Wang, Li; Xia, Cheng-yi
2016-03-01
We focus on the optimization of network robustness with respect to intentional attacks on high-degree nodes. Given an existing network, this problem can be considered as a typical single-objective combinatorial optimization problem. Based on the heuristic Tabu Search optimization algorithm, a link-rewiring method is applied to reconstruct the network while keeping the degree of every node unchanged. Through numerical simulations, a BA scale-free network and two real-world networks are investigated to verify the effectiveness of the proposed optimization method. Meanwhile, we analyze how the optimization affects other topological properties of the networks, including natural connectivity, the clustering coefficient and degree-degree correlation. The current results can help to improve the robustness of existing complex real-world systems, as well as to provide some insights into the design of robust networks.
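As a rough illustration of the kind of degree-preserving optimization described above, the sketch below uses a plain greedy accept/reject loop rather than a full Tabu Search (no tabu list or aspiration criteria), and scores robustness as the fraction of nodes remaining in the giant component after the top 20% highest-degree nodes are attacked. The network size, swap count and attack fraction are illustrative choices, not the paper's settings.

    # Degree-preserving rewiring toward attack robustness (greedy sketch).
    import networkx as nx

    def robustness(G, frac=0.2):
        """Fraction of nodes left in the giant component after removing
        the top `frac` highest-degree nodes."""
        H = G.copy()
        k = int(frac * H.number_of_nodes())
        hubs = sorted(H.degree, key=lambda kv: kv[1], reverse=True)[:k]
        H.remove_nodes_from(n for n, _ in hubs)
        if H.number_of_nodes() == 0:
            return 0.0
        giant = max(nx.connected_components(H), key=len)
        return len(giant) / G.number_of_nodes()

    G = nx.barabasi_albert_graph(200, 3, seed=1)
    best = robustness(G)
    for _ in range(500):
        trial = G.copy()
        nx.double_edge_swap(trial, nswap=1, max_tries=100)  # keeps every degree fixed
        r = robustness(trial)
        if r > best:                                        # accept only improvements
            G, best = trial, r
    print("robustness after rewiring:", best)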
Currency co-movement and network correlation structure of foreign exchange market
NASA Astrophysics Data System (ADS)
Mai, Yong; Chen, Huan; Zou, Jun-Zhong; Li, Sai-Ping
2018-02-01
We study the correlations of exchange rate volatility in the global foreign exchange (FX) market based on complex network graphs. Correlation matrices (CM) and the information-flow-based community detection method Infomap are employed to analyze the modular structure of the global foreign exchange network. The analysis demonstrates that currency modules exist in the network, which is consistent with the geographical nature of currencies. The European and East Asian currency modules in the FX network are the most significant. We introduce a measure of the impact of an individual currency based on its partial correlations with other currencies. We further incorporate an impact elimination method to filter out the impact of core nodes and construct subnetworks after the removal of these core nodes. The results reveal that (i) the US Dollar has a prominent global influence on the FX market while the Euro has a great impact on European currencies, and (ii) the East Asian currency module is more strongly correlated than the European currency module; this strong correlation is a result of the strong co-movement of currencies in the region. The co-movement of currencies is further used to study the formation of international monetary blocs, and the result is in good agreement with considerations based on international trade.
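The abstract does not spell out the exact partial-correlation impact measure, so the sketch below follows one common construction: the influence of a currency m is the average amount by which conditioning on m lowers the correlation of the remaining currency pairs. The data are synthetic and the number of currencies is arbitrary.

    # Partial-correlation "impact" sketch (one common construction, not
    # necessarily the authors' exact definition).
    import numpy as np

    def influence(series):
        """series: (T, N) array of exchange-rate volatility series."""
        C = np.corrcoef(series, rowvar=False)
        N = C.shape[0]
        impact = np.zeros(N)
        for m in range(N):
            drops = []
            for i in range(N):
                for j in range(i + 1, N):
                    if m in (i, j):
                        continue
                    denom = np.sqrt((1 - C[i, m] ** 2) * (1 - C[j, m] ** 2))
                    pc = (C[i, j] - C[i, m] * C[j, m]) / denom
                    drops.append(C[i, j] - pc)   # correlation explained by m
            impact[m] = np.mean(drops)
        return impact

    rng = np.random.default_rng(0)
    x = rng.standard_normal((500, 6))            # 6 hypothetical currencies
    print(influence(x))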
Zhang, Qingyang
2018-05-16
Differential co-expression analysis, as a complement to differential expression analysis, offers significant insights into the changes in the molecular mechanisms of different phenotypes. A prevailing approach to detecting differentially co-expressed genes is to compare Pearson's correlation coefficients in two phenotypes. However, due to the limitations of Pearson's correlation measure, this approach lacks the power to detect nonlinear changes in gene co-expression, which are common in gene regulatory networks. In this work, a new nonparametric procedure is proposed to search for differentially co-expressed gene pairs in different phenotypes from large-scale data. Our computational pipeline consists of two main steps, a screening step and a testing step. The screening step reduces the search space by filtering out all the independent gene pairs using the distance correlation measure. In the testing step, we compare the gene co-expression patterns in different phenotypes by a recently developed edge-count test. Both steps are distribution-free and target nonlinear relations. We illustrate the promise of the new approach by analyzing the Cancer Genome Atlas data and the METABRIC data for breast cancer subtypes. Compared with some existing methods, the new method is more powerful in detecting nonlinear types of differential co-expression. The distance correlation screening can greatly improve computational efficiency, facilitating application to large data sets.
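The screening step rests on the sample distance correlation, which can be computed directly from double-centred pairwise distance matrices. The sketch below shows that computation for one gene pair; the screening threshold and the edge-count test of the second step are not reproduced here.

    # Empirical distance correlation between two expression vectors.
    import numpy as np

    def distance_correlation(x, y):
        x = np.asarray(x, float)[:, None]
        y = np.asarray(y, float)[:, None]
        a = np.abs(x - x.T)                                  # pairwise distances
        b = np.abs(y - y.T)
        A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()    # double centring
        B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
        dcov2 = (A * B).mean()
        dvar_x, dvar_y = (A * A).mean(), (B * B).mean()
        if dvar_x * dvar_y == 0:
            return 0.0
        return np.sqrt(dcov2 / np.sqrt(dvar_x * dvar_y))

    rng = np.random.default_rng(1)
    g1 = rng.standard_normal(100)
    g2 = g1 ** 2 + 0.1 * rng.standard_normal(100)            # nonlinear dependence
    print(distance_correlation(g1, g2))                      # clearly above zero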
Bleiziffer, Patrick; Schmidtel, Daniel; Görling, Andreas
2014-11-28
The occurrence of instabilities, in particular singlet-triplet and singlet-singlet instabilities, in the exact-exchange (EXX) Kohn-Sham method is investigated. Hessian matrices of the EXX electronic energy with respect to the expansion coefficients of the EXX effective Kohn-Sham potential in an auxiliary basis set are derived. The eigenvalues of these Hessian matrices determine whether or not instabilities are present. As in the corresponding Hartree-Fock case, instabilities in the EXX method are related to symmetry breaking of the Hamiltonian operator for the EXX orbitals. In the EXX method, symmetry breaking can easily be visualized by displaying the local multiplicative exchange potential. Examples (N2, O2, and the polyyne C10H2) of instabilities and symmetry breaking are discussed. The relation of the stability conditions for EXX methods to approaches calculating the Kohn-Sham correlation energy via the adiabatic-connection fluctuation-dissipation (ACFD) theorem is discussed. The existence or nonexistence of singlet-singlet instabilities in an EXX calculation is shown to indicate whether or not the frequency integration in the evaluation of the correlation energy is singular in the EXX-ACFD method. This method calculates the Kohn-Sham correlation energy through the ACFD theorem, employing, besides the Coulomb kernel, the full frequency-dependent exchange kernel, and yields highly accurate electronic energies. For the case of singular frequency integrands in the EXX-ACFD method, a regularization is suggested. Finally, we present examples of molecular systems for which the self-consistent field procedure of the EXX as well as the Hartree-Fock method can converge to more than one local minimum depending on the initial conditions.
Persson, Karin; Barca, Maria Lage; Cavallin, Lena; Brækhus, Anne; Knapskog, Anne-Brita; Selbæk, Geir; Engedal, Knut
2017-01-01
Background: Different clinically feasible methods for the evaluation of medial temporal lobe atrophy exist and are useful in the diagnostic work-up of Alzheimer's disease (AD). Purpose: To compare the diagnostic properties of two clinically available magnetic resonance imaging (MRI)-based methods, an automated volumetric software, NeuroQuant® (NQ) (evaluation of hippocampal volume), and the Scheltens scale (visual evaluation of medial temporal lobe atrophy [MTA]), in patients with AD dementia, and subjective and mild cognitive impairment (non-dementia). Material and Methods: MRIs from 56 patients (31 AD, 25 non-dementia) were assessed with both methods. Correlations between the methods were calculated, and receiver operating characteristic (ROC) analyses yielding area under the curve (AUC) statistics were conducted. Results: High correlations were found between the two MRI assessments for the total hippocampal volume measured with NQ and the mean MTA score (-0.753, P < 0.001), as well as for the right (-0.767, P < 0.001) and the left (-0.675, P < 0.001) sides. The NQ total measure yielded a somewhat higher AUC (0.88, "good") compared to the MTA mean measure (0.80, "good") in the comparison of patients with AD and non-dementia, but the accuracy was in favor of the MTA scale. Conclusion: The two methods correlated highly and both reached equally "good" power.
All-Versus-Nothing Proof of Einstein-Podolsky-Rosen Steering
Chen, Jing-Ling; Ye, Xiang-Jun; Wu, Chunfeng; Su, Hong-Yi; Cabello, Adán; Kwek, L. C.; Oh, C. H.
2013-01-01
Einstein-Podolsky-Rosen steering is a form of quantum nonlocality intermediate between entanglement and Bell nonlocality. Although Schrödinger already mooted the idea in 1935, steering still defies a complete understanding. In analogy to "all-versus-nothing" proofs of Bell nonlocality, here we present a proof of steering without inequalities, rendering the detection of correlations leading to a violation of steering inequalities unnecessary. We show that, given any two-qubit entangled state, the existence of a certain projective measurement by Alice such that Bob's normalized conditional states can be regarded as two different pure states provides a criterion for Alice-to-Bob steerability. A steering inequality equivalent to the all-versus-nothing proof is also obtained. Our result clearly demonstrates that there exist many quantum states which do not violate any previously known steering inequality but are indeed steerable. Our method offers advantages over the existing methods for experimentally testing steerability, and sheds new light on the asymmetric steering problem. PMID:23828242
Regression methods for spatially correlated data: an example using beetle attacks in a seed orchard
Preisler Haiganoush; Nancy G. Rappaport; David L. Wood
1997-01-01
We present a statistical procedure for studying the simultaneous effects of observed covariates and unmeasured spatial variables on responses of interest. The procedure uses regression type analyses that can be used with existing statistical software packages. An example using the rate of twig beetle attacks on Douglas-fir trees in a seed orchard illustrates the...
Determining a carbohydrate profile for Hansenula polymorpha
NASA Technical Reports Server (NTRS)
Petersen, G. R.
1985-01-01
The determination of the levels of carbohydrates in the yeast Hansenula polymorpha required the development of new analytical procedures. Existing fractionation and analytical methods were adapted to deal with the problems involved with the lysis of whole cells. Using these new procedures, the complete carbohydrate profiles of H. polymorpha and selected mutant strains were determined and shown to correlate favourably with previously published results.
ERIC Educational Resources Information Center
Krumpe, Kati P.
2012-01-01
With the emphasis on high standards and fiscal accountability, there is a heightened need to inform the research linking student achievement to the allocation of resources. This mixed methods inquiry sought to study how schools utilized Title 1 and Title 1 stimulus funding from 2009-2011 to determine if correlations existed between areas of…
Correlation between the resistivity and the atomic clusters in liquid Cu-Sn alloys
NASA Astrophysics Data System (ADS)
Jia, Peng; Zhang, Jinyang; Hu, Xun; Li, Cancan; Zhao, Degang; Teng, XinYing; Yang, Cheng
2018-05-01
The liquid structure of CuxSn100-x (x = 0, 10, 20, 33, 40, 50, 60, 75, 80 and 100, in atomic percent) alloys was investigated with resistivity and viscosity methods. The resistivity data show that the liquid Cu75Sn25 and Cu80Sn20 alloys had a negative temperature coefficient of resistivity (TCR), with the liquid Cu75Sn25 alloy reaching a minimum value of -9.24 μΩ cm K^-1, while the remaining liquid Cu-Sn alloys had a positive TCR. These results indicated that Cu75Sn25 atomic clusters existed in the Cu-Sn alloys. In addition, a method for calculating the percentage of Cu75Sn25 atomic clusters was established on the basis of resistivity theory and the law of conservation of mass. The Cu75Sn25 alloy had the maximum volume of atomic clusters and the highest activation energy, which further supports the existence of Cu75Sn25 atomic clusters. Furthermore, the correlation between the liquid structure and the resistivity was established. These results provide a useful reference for investigating liquid structure via physical properties that are sensitive to it.
The Analysis of Surface EMG Signals with the Wavelet-Based Correlation Dimension Method
Zhang, Yanyan; Wang, Jue
2014-01-01
Many attempts have been made to effectively improve a prosthetic system controlled by the classification of surface electromyographic (SEMG) signals. Recently, the development of methodologies to extract the effective features still remains a primary challenge. Previous studies have demonstrated that the SEMG signals have nonlinear characteristics. In this study, by combining the nonlinear time series analysis and the time-frequency domain methods, we proposed the wavelet-based correlation dimension method to extract the effective features of SEMG signals. The SEMG signals were firstly analyzed by the wavelet transform and the correlation dimension was calculated to obtain the features of the SEMG signals. Then, these features were used as the input vectors of a Gustafson-Kessel clustering classifier to discriminate four types of forearm movements. Our results showed that there are four separate clusters corresponding to different forearm movements at the third resolution level and the resulting classification accuracy was 100%, when two channels of SEMG signals were used. This indicates that the proposed approach can provide important insight into the nonlinear characteristics and the time-frequency domain features of SEMG signals and is suitable for classifying different types of forearm movements. By comparing with other existing methods, the proposed method exhibited more robustness and higher classification accuracy. PMID:24868240
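The two ingredients named above, a wavelet decomposition followed by a correlation-dimension estimate, can be sketched as follows. The wavelet, decomposition level, embedding parameters and radii are illustrative choices rather than those of the paper, and a Grassberger-Procaccia style estimate stands in for whatever refinement the authors used.

    # Wavelet sub-bands + correlation-dimension features (illustrative sketch).
    import numpy as np
    import pywt

    def correlation_dimension(x, m=4, tau=2):
        n = len(x) - (m - 1) * tau
        emb = np.column_stack([x[i * tau:i * tau + n] for i in range(m)])   # delay embedding
        d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
        d = d[np.triu_indices(n, k=1)]
        radii = np.logspace(np.log10(d[d > 0].min()), np.log10(d.max()), 10)
        C = np.array([(d < r).mean() for r in radii])                        # correlation sums
        mask = C > 0
        slope, _ = np.polyfit(np.log(radii[mask]), np.log(C[mask]), 1)       # slope ~ dimension
        return slope

    rng = np.random.default_rng(0)
    semg = rng.standard_normal(2000)                  # stand-in for one SEMG channel
    coeffs = pywt.wavedec(semg, "db4", level=3)       # wavelet sub-bands
    print([correlation_dimension(c) for c in coeffs])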
NASA Astrophysics Data System (ADS)
Kłos, Jacek; Alexander, Millard H.; Kumar, Praveen; Poirier, Bill; Jiang, Bin; Guo, Hua
2016-05-01
We report new and more accurate adiabatic potential energy surfaces (PESs) for the ground X̃ 1A1 and electronically excited C̃ 1B2 (2 1A') states of the SO2 molecule. Ab initio points are calculated using the explicitly correlated internally contracted multi-reference configuration interaction (icMRCI-F12) method. A second, less accurate PES for the ground X̃ state is also calculated using an explicitly correlated single-reference coupled-cluster method with single, double, and non-iterative triple excitations [CCSD(T)-F12]. With these new three-dimensional PESs, we determine energies of the vibrational bound states and compare these values to existing literature data and experiment.
Bessonov, Kyrylo; Walkey, Christopher J.; Shelp, Barry J.; van Vuuren, Hennie J. J.; Chiu, David; van der Merwe, George
2013-01-01
Analyzing time-course expression data captured in microarray datasets is a complex undertaking as the vast and complex data space is represented by a relatively low number of samples as compared to thousands of available genes. Here, we developed the Interdependent Correlation Clustering (ICC) method to analyze relationships that exist among genes conditioned on the expression of a specific target gene in microarray data. Based on Correlation Clustering, the ICC method analyzes a large set of correlation values related to gene expression profiles extracted from given microarray datasets. ICC can be applied to any microarray dataset and any target gene. We applied this method to microarray data generated from wine fermentations and selected NSF1, which encodes a C2H2 zinc finger-type transcription factor, as the target gene. The validity of the method was verified by accurate identifications of the previously known functional roles of NSF1. In addition, we identified and verified potential new functions for this gene; specifically, NSF1 is a negative regulator for the expression of sulfur metabolism genes, the nuclear localization of Nsf1 protein (Nsf1p) is controlled in a sulfur-dependent manner, and the transcription of NSF1 is regulated by Met4p, an important transcriptional activator of sulfur metabolism genes. The inter-disciplinary approach adopted here highlighted the accuracy and relevancy of the ICC method in mining for novel gene functions using complex microarray datasets with a limited number of samples. PMID:24130853
Yan, Zhengbing; Kuang, Te-Hui; Yao, Yuan
2017-09-01
In recent years, multivariate statistical monitoring of batch processes has become a popular research topic, wherein multivariate fault isolation is an important step aiming at the identification of the faulty variables contributing most to the detected process abnormality. Although contribution plots have been commonly used in statistical fault isolation, such methods suffer from the smearing effect between correlated variables. In particular, in batch process monitoring, the high autocorrelations and cross-correlations that exist in variable trajectories make the smearing effect unavoidable. To address such a problem, a variable selection-based fault isolation method is proposed in this research, which transforms the fault isolation problem into a variable selection problem in partial least squares discriminant analysis and solves it by calculating a sparse partial least squares model. Unlike traditional methods, the proposed method emphasizes the relative importance of each process variable. Such information may help process engineers in conducting root-cause diagnosis. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Chen, Jing; Tang, Yuan Yan; Chen, C L Philip; Fang, Bin; Lin, Yuewei; Shang, Zhaowei
2014-12-01
Protein subcellular location prediction aims to predict the location where a protein resides within a cell using computational methods. Considering the main limitations of the existing methods, we propose a hierarchical multi-label learning model FHML for both single-location proteins and multi-location proteins. The latent concepts are extracted through feature space decomposition and label space decomposition under the nonnegative data factorization framework. The extracted latent concepts are used as the codebook to indirectly connect the protein features to their annotations. We construct dual fuzzy hypergraphs to capture the intrinsic high-order relations embedded in not only feature space, but also label space. Finally, the subcellular location annotation information is propagated from the labeled proteins to the unlabeled proteins by performing dual fuzzy hypergraph Laplacian regularization. The experimental results on the six protein benchmark datasets demonstrate the superiority of our proposed method by comparing it with the state-of-the-art methods, and illustrate the benefit of exploiting both feature correlations and label correlations.
Price-volume multifractal analysis of the Moroccan stock market
NASA Astrophysics Data System (ADS)
El Alaoui, Marwane
2017-11-01
In this paper, we analyzed price-volume multifractal cross-correlations of the Moroccan Stock Exchange. We chose the period from January 1st 2000 to January 20th 2017 to investigate the multifractal behavior of the price change and volume change series. We then used the multifractal detrended cross-correlation analysis (MF-DCCA) method and multifractal detrended fluctuation analysis (MF-DFA) to analyze the series. We computed the bivariate generalized Hurst exponent, the Rényi exponent and the singularity spectrum for each pair of indices to quantify cross-correlations. Furthermore, we used the detrended cross-correlation coefficient (DCCA) and the cross-correlation test (Q(m)) to analyze cross-correlation quantitatively and qualitatively. By analyzing the results, we found the existence of price-volume multifractal cross-correlations, and the spectrum width indicates a strong multifractal cross-correlation. We observed that the volume change series is anti-persistent when analyzing the generalized Hurst exponent for all moments q. The cross-correlation test showed the presence of a significant cross-correlation. However, the DCCA coefficient had a small positive value, which means that the level of correlation is not very significant. Finally, we analyzed the sources of multifractality and their degrees of contribution to the series.
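For reference, the DCCA cross-correlation coefficient at a single scale s is the detrended covariance of the two integrated series divided by the product of their detrended fluctuations. A compact sketch with linear detrending in non-overlapping boxes is given below; the scale and the synthetic data are illustrative.

    # rho_DCCA(s) = F2_DCCA(s) / (F_DFA_x(s) * F_DFA_y(s)) at one scale.
    import numpy as np

    def dcca_coefficient(x, y, s=32):
        X = np.cumsum(x - np.mean(x))                       # integrated profiles
        Y = np.cumsum(y - np.mean(y))
        t = np.arange(s)
        f_xy = f_xx = f_yy = 0.0
        for b in range(len(X) // s):
            xs, ys = X[b * s:(b + 1) * s], Y[b * s:(b + 1) * s]
            rx = xs - np.polyval(np.polyfit(t, xs, 1), t)   # residuals of the
            ry = ys - np.polyval(np.polyfit(t, ys, 1), t)   # local linear trends
            f_xy += np.mean(rx * ry)
            f_xx += np.mean(rx * rx)
            f_yy += np.mean(ry * ry)
        return f_xy / np.sqrt(f_xx * f_yy)

    rng = np.random.default_rng(2)
    common = rng.standard_normal(1024)
    price_change = common + 0.5 * rng.standard_normal(1024)
    volume_change = common + 0.5 * rng.standard_normal(1024)
    print(dcca_coefficient(price_change, volume_change))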
Testing for genetic association taking into account phenotypic information of relatives.
Uh, Hae-Won; Wijk, Henk Jan van der; Houwing-Duistermaat, Jeanine J
2009-12-15
We investigated efficient case-control association analysis using family data. The outcome of interest was coronary heart disease. We employed existing and new methods that take into account the correlations among related individuals to obtain the proper type I error rates. The methods considered for autosomal single-nucleotide polymorphisms were: 1) generalized estimating equations-based methods, 2) variance-modified Cochran-Armitage (MCA) trend test incorporating kinship coefficients, and 3) genotypic modified quasi-likelihood score test. Additionally, for X-linked single-nucleotide polymorphisms we proposed a two-degrees-of-freedom test. Performance of these methods was tested using Framingham Heart Study 500 k array data.
Night Sky Weather Monitoring System Using Fish-Eye CCD
NASA Astrophysics Data System (ADS)
Tomida, Takayuki; Saito, Yasunori; Nakamura, Ryo; Yamazaki, Katsuya
The Telescope Array (TA) is an international joint experiment observing ultra-high-energy cosmic rays. TA employs the fluorescence detection technique to observe cosmic rays. In this technique, the existence of clouds significantly affects the quality of the data, so cloud monitoring provides important information. We are developing two new methods for evaluating night sky weather with pictures taken by a charge-coupled device (CCD) camera. One evaluates the amount of cloud from pixel brightness; the other counts the number of stars with a contour detection technique. The results of these methods show a clear correlation, and we conclude that both analyses are reasonable methods for weather monitoring. We discuss the reliability of the star counting method.
NASA Astrophysics Data System (ADS)
Ratnasari, D.; Nazir, F.; Toresano, L. O. H. Z.; Pawiro, S. A.; Soejoko, D. S.
2016-03-01
The prevalence of chronic renal disease in Indonesia shows an increasing annual trend, because it is frequently unrecognized and often co-exists with other diseases. GFR and ERPF are the parameters currently used to estimate renal function in routine 99mTc-DTPA renal scintigraphy. This study used 99mTc-DTPA to measure GFR and ERPF. The purpose of this study was to find the correlation between ERPF (analyzed with Schlegel's method) and GFR (analyzed with Gates' method), as well as to find a correction factor between the two variables. Renal scintigraphy images from thirty patients, acquired from 2014 to 2015 at the Department of Nuclear Medicine, Pertamina Center Hospital, were analyzed retrospectively using a dual-head gamma camera and the counting method of the 99mTc-DTPA renal scintigraphy study. The calculation was carried out by both display and manual computation. Pearson's analysis showed a positive correlation for all data, with ERPF and GFR (display) showing a strongly positive correlation (r = 0.82; p-value < 0.05). The standard deviations were 27.58 and 107.64 for GFR and ERPF (display), respectively. Our results indicated that the use of 99mTc-DTPA to measure ERPF was not recommended.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dong, Xue; Niu, Tianye; Zhu, Lei, E-mail: leizhu@gatech.edu
2014-05-15
Purpose: Dual-energy CT (DECT) is being increasingly used for its capability of material decomposition and energy-selective imaging. A generic problem of DECT, however, is that the decomposition process is unstable in the sense that the relative magnitude of decomposed signals is reduced due to signal cancellation while the image noise is accumulating from the two CT images of independent scans. Direct image decomposition, therefore, leads to severe degradation of signal-to-noise ratio on the resultant images. Existing noise suppression techniques are typically implemented in DECT with the procedures of reconstruction and decomposition performed independently, which do not explore the statistical properties of decomposed images during the reconstruction for noise reduction. In this work, the authors propose an iterative approach that combines the reconstruction and the signal decomposition procedures to minimize the DECT image noise without noticeable loss of resolution. Methods: The proposed algorithm is formulated as an optimization problem, which balances the data fidelity and total variation of decomposed images in one framework, and the decomposition step is carried out iteratively together with reconstruction. The noise in the CT images from the proposed algorithm becomes well correlated even though the noise of the raw projections is independent on the two CT scans. Due to this feature, the proposed algorithm avoids noise accumulation during the decomposition process. The authors evaluate the method performance on noise suppression and spatial resolution using phantom studies and compare the algorithm with conventional denoising approaches as well as combined iterative reconstruction methods with different forms of regularization. Results: On the Catphan©600 phantom, the proposed method outperforms the existing denoising methods on preserving spatial resolution at the same level of noise suppression, i.e., a reduction of noise standard deviation by one order of magnitude. This improvement is mainly attributed to the high noise correlation in the CT images reconstructed by the proposed algorithm. Iterative reconstruction using different regularization, including quadratic or q-generalized Gaussian Markov random field regularization, achieves similar noise suppression from high noise correlation. However, the proposed TV regularization obtains a better edge preserving performance. Studies of electron density measurement also show that our method reduces the average estimation error from 9.5% to 7.1%. On the anthropomorphic head phantom, the proposed method suppresses the noise standard deviation of the decomposed images by a factor of ∼14 without blurring the fine structures in the sinus area. Conclusions: The authors propose a practical method for DECT imaging reconstruction, which combines the image reconstruction and material decomposition into one optimization framework. Compared to the existing approaches, our method achieves a superior performance on DECT imaging with respect to decomposition accuracy, noise reduction, and spatial resolution.
Boyen, Peter; Van Dyck, Dries; Neven, Frank; van Ham, Roeland C H J; van Dijk, Aalt D J
2011-01-01
Correlated motif mining (CMM) is the problem of finding overrepresented pairs of patterns, called motifs, in sequences of interacting proteins. Algorithmic solutions for CMM thereby provide a computational method for predicting binding sites for protein interaction. In this paper, we adopt a motif-driven approach where the support of candidate motif pairs is evaluated in the network. We experimentally establish the superiority of the Chi-square-based support measure over other support measures. Furthermore, we show that CMM is an NP-hard problem for a large class of support measures (including Chi-square) and reformulate the search for correlated motifs as a combinatorial optimization problem. We then present the generic metaheuristic SLIDER, which uses steepest ascent with a neighborhood function based on sliding motifs and employs the Chi-square-based support measure. We show that SLIDER outperforms existing motif-driven CMM methods and scales to large protein-protein interaction networks. The SLIDER implementation and the data used in the experiments are available on http://bioinformatics.uhasselt.be.
a High Precision dem Extraction Method Based on Insar Data
NASA Astrophysics Data System (ADS)
Wang, Xinshuang; Liu, Lingling; Shi, Xiaoliang; Huang, Xitao; Geng, Wei
2018-04-01
In the 13th Five-Year Plan for the geoinformatics business, it is proposed that the new InSAR technology be applied to surveying and mapping production, which will become the innovation driving force of the geoinformatics industry. Following this new outline for surveying and mapping, this paper works with X-band TerraSAR/TanDEM data of Bin County in Shaanxi Province. The processing steps are as follows: first, the baseline is estimated from the orbital data; second, the interferometric pairs of SAR images are accurately registered; third, the interferogram is generated; fourth, the interferometric correlation information is estimated and the flat-earth phase is removed. To deal with the phase noise and phase discontinuities existing in the interferometric phase image, a GAMMA adaptive filtering method is adopted. To address the "hole" problem of missing data in low-coherence areas, an interpolation method with a low-coherence-area mask is used to assist the phase unwrapping. Then, the accuracy of the interferometric baseline is estimated from ground control points. Finally, a 1:50,000 DEM is generated, and existing DEM data are used to verify its accuracy through statistical analysis. The results show that the improved InSAR data processing method in this paper can produce a high-precision DEM of the study area that is in close agreement with the topography of the reference DEM. The R2 reaches 0.9648, showing a strong positive correlation.
Measurement of plasma unbound unconjugated bilirubin.
Ahlfors, C E
2000-03-15
A method is described for measuring the unconjugated fraction of the unbound bilirubin concentration in plasma by combining the peroxidase method for determining unbound bilirubin with a diazo method for measuring conjugated and unconjugated bilirubin. The accuracy of the unbound bilirubin determination is improved by decreasing sample dilution, eliminating interference by conjugated bilirubin, monitoring changes in bilirubin concentration using diazo derivatives, and correcting for rate-limiting dissociation of bilirubin from albumin. The unbound unconjugated bilirubin concentration by the combined method in plasma from 20 jaundiced newborns was significantly greater than and poorly correlated with the unbound bilirubin determined by the existing peroxidase method (r = 0.7), possibly due to differences in sample dilution between the methods. The unbound unconjugated bilirubin was an unpredictable fraction of the unbound bilirubin in plasma samples from patients with similar total bilirubin concentrations but varying levels of conjugated bilirubin. A bilirubin-binding competitor was readily detected at a sample dilution typically used for the combined test but not at the dilution used for the existing peroxidase method. The combined method is ideally suited to measuring unbound unconjugated bilirubin in jaundiced human newborns or animal models of kernicterus. Copyright 2000 Academic Press.
Extracting a respiratory signal from raw dynamic PET data that contain tracer kinetics.
Schleyer, P J; Thielemans, K; Marsden, P K
2014-08-07
Data driven gating (DDG) methods provide an alternative to hardware based respiratory gating for PET imaging. Several existing DDG approaches obtain a respiratory signal by observing the change in PET-counts within specific regions of acquired PET data. Currently, these methods do not allow for tracer kinetics which can interfere with the respiratory signal and introduce error. In this work, we produced a DDG method for dynamic PET studies that exhibit tracer kinetics. Our method is based on an existing approach that uses frequency-domain analysis to locate regions within raw PET data that are subject to respiratory motion. In the new approach, an optimised non-stationary short-time Fourier transform was used to create a time-varying 4D map of motion affected regions. Additional processing was required to ensure that the relationship between the sign of the respiratory signal and the physical direction of movement remained consistent for each temporal segment of the 4D map. The change in PET-counts within the 4D map during the PET acquisition was then used to generate a respiratory curve. Using 26 min dynamic cardiac NH3 PET acquisitions which included a hardware derived respiratory measurement, we show that tracer kinetics can severely degrade the respiratory signal generated by the original DDG method. In some cases, the transition of tracer from the liver to the lungs caused the respiratory signal to invert. The new approach successfully compensated for tracer kinetics and improved the correlation between the data-driven and hardware based signals. On average, good correlation was maintained throughout the PET acquisitions.
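The basic frequency-domain idea behind this family of methods can be sketched as follows: voxels (or regions) whose count-rate spectrum is dominated by the respiratory band form a mask, and the summed counts inside that mask give a respiratory trace. The time-varying 4D map and the sign-consistency step described above are not reproduced, and the band limits, frame duration and percentile threshold are illustrative.

    # Frequency-domain data-driven gating, heavily simplified.
    import numpy as np

    def respiratory_signal(counts, dt=0.5, band=(0.1, 0.5)):
        """counts: (T, V) array of per-frame counts for V voxel regions."""
        detrended = counts - counts.mean(axis=0)
        spec = np.abs(np.fft.rfft(detrended, axis=0)) ** 2
        freqs = np.fft.rfftfreq(counts.shape[0], d=dt)
        in_band = (freqs >= band[0]) & (freqs <= band[1])
        score = spec[in_band].sum(axis=0) / spec.sum(axis=0)   # respiratory-band dominance
        mask = score > np.percentile(score, 90)                # keep the top 10% of regions
        return detrended[:, mask].sum(axis=1)

    rng = np.random.default_rng(3)
    t = np.arange(0, 120, 0.5)                                 # 2 min of 0.5 s frames
    resp = np.sin(2 * np.pi * 0.25 * t)                        # ~15 breaths per minute
    counts = rng.poisson(100, (len(t), 50)).astype(float)
    counts[:, :10] += 20 * resp[:, None]                       # motion-affected regions
    print(respiratory_signal(counts)[:5])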
Chen, Jian-bo; Sun, Su-qin; Zhou, Qun
2015-07-01
The nondestructive and label-free infrared (IR) spectroscopy is a direct tool to characterize the spatial distribution of organic and inorganic compounds in plant. Since plant samples are usually complex mixtures, signal-resolving methods are necessary to find the spectral features of compounds of interest in the signal-overlapped IR spectra. In this research, two approaches using existing data-driven signal-resolving methods are proposed to interpret the IR spectra of plant samples. If the number of spectra is small, "tri-step identification" can enhance the spectral resolution to separate and identify the overlapped bands. First, the envelope bands of the original spectrum are interpreted according to the spectra-structure correlations. Then the spectrum is differentiated to resolve the underlying peaks in each envelope band. Finally, two-dimensional correlation spectroscopy is used to enhance the spectral resolution further. For a large number of spectra, "tri-step decomposition" can resolve the spectra by multivariate methods to obtain the structural and semi-quantitative information about the chemical components. Principal component analysis is used first to explore the existing signal types without any prior knowledge. Then the spectra are decomposed by self-modeling curve resolution methods to estimate the spectra and contents of significant chemical components. At last, targeted methods such as partial least squares target can explore the content profiles of specific components sensitively. As an example, the macroscopic and microscopic distribution of eugenol and calcium oxalate in the bud of clove is studied.
Objectification of perceptual image quality for mobile video
NASA Astrophysics Data System (ADS)
Lee, Seon-Oh; Sim, Dong-Gyu
2011-06-01
This paper presents an objective video quality evaluation method for quantifying the subjective quality of digital mobile video. The proposed method aims to objectify the subjective quality by extracting edgeness and blockiness parameters. To evaluate the performance of the proposed algorithms, we carried out subjective video quality tests with the double-stimulus continuous quality scale method and obtained differential mean opinion score values for 120 mobile video clips. We then compared the performance of the proposed methods with that of existing methods in terms of the differential mean opinion score with 120 mobile video clips. Experimental results showed that the proposed methods were approximately 10% better than the edge peak signal-to-noise ratio of the J.247 method in terms of the Pearson correlation.
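The abstract does not give the exact edgeness and blockiness definitions, so the sketch below uses a common blockiness proxy: the mean absolute luminance jump across 8x8 block boundaries, normalised by the mean jump inside blocks. Values well above 1 indicate visible blocking.

    # Simple blockiness proxy for a single frame (illustrative definition).
    import numpy as np

    def blockiness(frame, block=8):
        diff = np.abs(np.diff(frame.astype(float), axis=1))    # horizontal gradients
        cols = np.arange(diff.shape[1])
        at_boundary = (cols + 1) % block == 0                   # gradients crossing a block edge
        return diff[:, at_boundary].mean() / diff[:, ~at_boundary].mean()

    rng = np.random.default_rng(4)
    frame = rng.integers(0, 256, (144, 176))                    # QCIF-sized test frame
    print(blockiness(frame))                                    # close to 1 for block-free noise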
An effective and efficient compression algorithm for ECG signals with irregular periods.
Chou, Hsiao-Hsuan; Chen, Ying-Jui; Shiau, Yu-Chien; Kuo, Te-Son
2006-06-01
This paper presents an effective and efficient preprocessing algorithm for two-dimensional (2-D) electrocardiogram (ECG) compression to better compress irregular ECG signals by exploiting their inter- and intra-beat correlations. To better reveal the correlation structure, we first convert the ECG signal into a proper 2-D representation, or image. This involves a few steps including QRS detection and alignment, period sorting, and length equalization. The resulting 2-D ECG representation is then ready to be compressed by an appropriate image compression algorithm. We choose the state-of-the-art JPEG2000 for its high efficiency and flexibility. In this way, the proposed algorithm is shown to outperform some existing arts in the literature by simultaneously achieving high compression ratio (CR), low percent root mean squared difference (PRD), low maximum error (MaxErr), and low standard derivation of errors (StdErr). In particular, because the proposed period sorting method rearranges the detected heartbeats into a smoother image that is easier to compress, this algorithm is insensitive to irregular ECG periods. Thus either the irregular ECG signals or the QRS false-detection cases can be better compressed. This is a significant improvement over existing 2-D ECG compression methods. Moreover, this algorithm is not tied exclusively to JPEG2000. It can also be combined with other 2-D preprocessing methods or appropriate codecs to enhance the compression performance in irregular ECG cases.
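The 2-D preprocessing chain described above can be sketched as follows: detect R peaks, cut the signal into beats, sort the beats by period length, equalise their lengths, and stack them into an image-like array ready for an image codec. The JPEG2000 coding step is omitted, and the peak-detection parameters, sampling rate and synthetic test signal are illustrative.

    # ECG-to-image preprocessing sketch (QRS detection, period sorting,
    # length equalisation); the image codec itself is not included.
    import numpy as np
    from scipy.signal import find_peaks, resample

    def ecg_to_image(ecg, fs=360, width=256):
        peaks, _ = find_peaks(ecg, distance=int(0.4 * fs),
                              height=np.percentile(ecg, 95))
        beats = [ecg[a:b] for a, b in zip(peaks[:-1], peaks[1:])]   # one row per beat
        beats.sort(key=len)                                         # period sorting
        rows = [resample(b, width) for b in beats]                  # length equalisation
        return np.vstack(rows)

    fs = 360
    t = np.arange(0, 30, 1 / fs)
    ecg = np.sin(2 * np.pi * 1.1 * t) ** 63        # crude spiky stand-in for an ECG
    print(ecg_to_image(ecg, fs).shape)             # (number of beats, 256)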
Sundaram, Meenakshi; Nayak, Ullal Anand; Ramalingam, Krishnakumar; Reddy, Venugopal; Rao, Arun Prasad; Mathian, Mahesh
2013-01-01
Aims: The aim of this study is to find out whether Oratest can be used as a diagnostic tool for assessing caries activity by evaluating its relationship to the existing caries status and the salivary Streptococcus mutans level. Materials and Methods: The study sample consists of 90 students divided into two groups, Group I (test group, 60 children) and Group II (control group, 30 children). Unstimulated saliva was sampled for the estimation of Streptococcus mutans as per the method suggested by Kohler and Bratthall, and the plates were incubated. Rough-surface colonies identified as Streptococcus mutans on a pre-determined area of the spatula tip (approximately 1.5 cm2) pressed against mitis salivarius bacitracin agar were counted for each side using a digital colony counter. The results were expressed in colony forming units (CFU). Oratest was carried out in the same patients after the collection of the salivary sample for the microbiological method, to evaluate the relationship between the two tests. Statistical Analysis Used: The tests used were ANOVA, the Pearson Chi-square test, Pearson's correlation analysis, the Mann-Whitney U test and Student's independent t-test. Results: In both the control group and the test group, when the Streptococcus mutans count (CFU) and Oratest time (minutes) were correlated using Pearson's correlation analysis, the Streptococcus mutans count was found to be in a statistically significant negative linear relationship with the Oratest time. When the caries status of the children participating in the test group was correlated with the mutans count (CFU) and Oratest time, caries status was found to be in a statistically significant positive linear relationship with the Streptococcus mutans count and in a significant negative linear relationship with the Oratest time. Conclusions: The test proved to be a simple, inexpensive and rapid technique for assessing caries activity, since a significant relationship exists clinically with caries status and microbiologically with the Streptococcus mutans count of the individual. PMID:23946577
Daily sodium and potassium excretion can be estimated by scheduled spot urine collections.
Doenyas-Barak, Keren; Beberashvili, Ilia; Bar-Chaim, Adina; Averbukh, Zhan; Vogel, Ofir; Efrati, Shai
2015-01-01
The evaluation of sodium and potassium intake is part of the optimal management of hypertension, metabolic syndrome, renal stones, and other conditions. To date, no convenient method for its evaluation exists, as the gold standard method of 24-hour urine collection is cumbersome and often incorrectly performed, and methods that use spot or shorter collections are not accurate enough to replace the gold standard. The aim of this study was to evaluate the correlation and agreement between a new method that uses multiple-scheduled spot urine collection and the gold standard method of 24-hour urine collection. The urine sodium or potassium to creatinine ratios were determined for four scheduled spot urine samples. The mean ratios of the four spot samples and the ratios of each of the single spot samples were corrected for estimated creatinine excretion and compared to the gold standard. A significant linear correlation was demonstrated between the 24-hour urinary solute excretions and estimated excretion evaluated by any of the scheduled spot urine samples. The correlation of the mean of the four spots was better than for any of the single spots. Bland-Altman plots showed that the differences between these measurements were within the limits of agreement. Four scheduled spot urine samples can be used as a convenient method for estimation of 24-hour sodium or potassium excretion. © 2015 S. Karger AG, Basel.
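The arithmetic behind the method is essentially: estimated 24-hour excretion = mean spot sodium-to-creatinine ratio x estimated 24-hour creatinine excretion. The creatinine-estimation equation used in the study is not given in the abstract, so the per-kilogram values below are hypothetical placeholders.

    # Scheduled-spot-sample estimate of 24-h sodium excretion (sketch).
    def estimated_24h_sodium(spot_na_mmol_per_g_cr, weight_kg, male=True):
        """spot_na_mmol_per_g_cr: sodium/creatinine ratios (mmol/g) of the
        four scheduled spot samples."""
        mean_ratio = sum(spot_na_mmol_per_g_cr) / len(spot_na_mmol_per_g_cr)
        cr_g_per_kg = 0.023 if male else 0.018       # hypothetical creatinine output, g/kg/day
        est_cr_g_day = cr_g_per_kg * weight_kg
        return mean_ratio * est_cr_g_day             # mmol sodium per day

    print(estimated_24h_sodium([120, 95, 110, 130], weight_kg=75))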
Test-retest stability of the Task and Ego Orientation Questionnaire.
Lane, Andrew M; Nevill, Alan M; Bowes, Neal; Fox, Kenneth R
2005-09-01
Establishing stability, defined as observing minimal measurement error in a test-retest assessment, is vital to validating psychometric tools. Correlational methods, such as Pearson product-moment, intraclass, and kappa are tests of association or consistency, whereas stability or reproducibility (regarded here as synonymous) assesses the agreement between test-retest scores. Indexes of reproducibility using the Task and Ego Orientation in Sport Questionnaire (TEOSQ; Duda & Nicholls, 1992) were investigated using correlational (Pearson product-moment, intraclass, and kappa) methods, repeated measures multivariate analysis of variance, and calculating the proportion of agreement within a referent value of +/-1 as suggested by Nevill, Lane, Kilgour, Bowes, and Whyte (2001). Two hundred thirteen soccer players completed the TEOSQ on two occasions, 1 week apart. Correlation analyses indicated a stronger test-retest correlation for the Ego subscale than the Task subscale. Multivariate analysis of variance indicated stability for ego items but with significant increases in four task items. The proportion of test-retest agreement scores indicated that all ego items reported relatively poor stability statistics with test-retest scores within a range of +/-1, ranging from 82.7-86.9%. By contrast, all task items showed test-retest difference scores ranging from 92.5-99%, although further analysis indicated that four task subscale items increased significantly. Findings illustrated that correlational methods (Pearson product-moment, intraclass, and kappa) are influenced by the range in scores, and calculating the proportion of agreement of test-retest differences with a referent value of +/-1 could provide additional insight into the stability of the questionnaire. It is suggested that the item-by-item proportion of agreement method proposed by Nevill et al. (2001) should be used to supplement existing methods and could be especially helpful in identifying rogue items in the initial stages of psychometric questionnaire validation.
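The agreement statistic referred to above is simple to compute for a single item: the percentage of respondents whose retest score differs from their test score by no more than the referent value of +/-1. The data below are illustrative.

    # Proportion of test-retest agreement within a referent value of +/-1.
    import numpy as np

    def proportion_agreement(test, retest, referent=1):
        diff = np.asarray(retest) - np.asarray(test)
        return 100.0 * np.mean(np.abs(diff) <= referent)

    rng = np.random.default_rng(5)
    test = rng.integers(1, 6, 213)                            # 5-point item, n = 213
    retest = np.clip(test + rng.integers(-1, 2, 213), 1, 5)
    print(proportion_agreement(test, retest))                 # close to 100 here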
Dynamic Time Warping compared to established methods for validation of musculoskeletal models.
Gaspar, Martin; Welke, Bastian; Seehaus, Frank; Hurschler, Christof; Schwarze, Michael
2017-04-11
By means of multi-body musculoskeletal simulation, important variables such as internal joint forces and moments, which cannot be measured directly, can be estimated. Validation can proceed by qualitative or quantitative methods. Especially when comparing time-dependent signals, many methods do not perform well, and validation is often limited to qualitative approaches. The aim of the present study was to investigate the capabilities of the Dynamic Time Warping (DTW) algorithm for comparing time series, as it can quantify phase as well as amplitude errors. We contrast the sensitivity of DTW with other established metrics: the Pearson correlation coefficient, cross-correlation, the metric according to Geers, RMSE and normalized RMSE. This study is based on two data sets, one representing direct validation and the other indirect validation. Direct validation was performed in the context of clinical gait analysis on trans-femoral amputees fitted with a 6-component force-moment sensor; measured forces and moments from the amputees' socket prostheses are compared to simulated forces and moments. Indirect validation was performed in the context of surface EMG measurements on a cohort of healthy subjects, with measurements taken from seven muscles of the leg and compared to simulated muscle activations. For direct validation, a positive linear relation between the results of RMSE and nRMSE and those of DTW can be seen. For indirect validation, a negative linear relation exists between DTW and both the Pearson correlation and cross-correlation. We propose the DTW algorithm for use in both direct and indirect quantitative validation, as it correlates well with the methods that are most suitable for each task. However, in direct validation it should be used together with methods yielding a dimensional error value, in order to interpret the results more comprehensibly. Copyright © 2017 Elsevier Ltd. All rights reserved.
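For readers unfamiliar with the metric, a textbook dynamic-programming DTW distance between two curves (for example a measured and a simulated joint-moment trace) is sketched below; no step weighting, band constraint or normalisation is applied, unlike more refined variants.

    # Plain dynamic-programming DTW distance between two time series.
    import numpy as np

    def dtw_distance(a, b):
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i - 1, j],      # insertion
                                     D[i, j - 1],      # deletion
                                     D[i - 1, j - 1])  # match
        return D[n, m]

    t = np.linspace(0, 1, 200)
    measured = np.sin(2 * np.pi * t)
    simulated = np.sin(2 * np.pi * (t - 0.05))         # same shape, slight phase lag
    print(dtw_distance(measured, simulated))           # small despite the phase lag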
High order neural correlates of social behavior in the honeybee brain.
Duer, Aron; Paffhausen, Benjamin H; Menzel, Randolf
2015-10-30
Honeybees are well-established models of the neural correlates of sensory function, learning and memory formation. Here we report a novel approach that allows recording of high-order mushroom body-extrinsic interneurons in the brain of worker bees within a functional colony. New method: The use of two 100 cm long twisted copper electrodes allowed recording of up to four units of mushroom body-extrinsic neurons simultaneously for up to 24 h in animals moving freely between members of the colony. Every worker, including the recorded bee, hatched in the experimental environment. The group consisted of 200 animals on average. Animals explored different regions of the comb and interacted with other colony members. The activities of the units were not selective for locations on the comb, body directions with respect to gravity, olfactory signals on the comb, or different social interactions. However, combinations of these parameters defined neural activity in a unit-specific way. In addition, units recorded from the same animal co-varied according to unknown factors. Comparison with existing method(s): All electrophysiological studies with honeybees have so far been performed on constrained animals outside their natural behavioral contexts, and no neuronal correlates have been measured in a social context. Free mobility of the recorded insects over a range of a quarter square meter allows addressing questions concerning neural correlates of social communication, planning of tasks within the colony and attention-like processes. The method makes it possible to study neural correlates of social behavior in a near-natural setting within the honeybee colony. Copyright © 2015 Elsevier B.V. All rights reserved.
Anguita, Alberto; García-Remesal, Miguel; Graf, Norbert; Maojo, Victor
2016-04-01
Modern biomedical research relies on the semantic integration of heterogeneous data sources to find data correlations. Researchers access multiple datasets of disparate origin and identify elements (e.g. genes, compounds, pathways) that lead to interesting correlations. Normally, they must refer to additional public databases (e.g. scientific literature, published clinical trial results, etc.) in order to enrich the information about the identified entities. While semantic integration techniques have traditionally focused on providing homogeneous access to private datasets (thus helping automate the first part of the research), and different solutions exist for browsing public data, there is still a need for tools that facilitate merging public repositories with private datasets. This paper presents a framework that automatically locates public data of interest to the researcher and semantically integrates it with existing private datasets. The framework has been designed as an extension of traditional data integration systems, and it has been validated with an existing data integration platform from a European research project by integrating a private biological dataset with data from the National Center for Biotechnology Information (NCBI). Copyright © 2016 Elsevier Inc. All rights reserved.
High-resolution electron microscope observation of voids in amorphous Ge.
NASA Technical Reports Server (NTRS)
Donovan, T. M.; Heinemann, K.
1971-01-01
Electron micrographs have been obtained which clearly show the existence of a void network in amorphous Ge films formed at substrate temperatures of 25 and 150 C, and the absence of a void network in films formed at higher substrate temperatures of 200 and 250 C. These results correlate quite well with density measurements and predictions of void densities by indirect methods.
Lu, Jiwen; Erin Liong, Venice; Zhou, Jie
2017-08-09
In this paper, we propose a simultaneous local binary feature learning and encoding (SLBFLE) approach for both homogeneous and heterogeneous face recognition. Unlike existing hand-crafted face descriptors such as local binary pattern (LBP) and Gabor features which usually require strong prior knowledge, our SLBFLE is an unsupervised feature learning approach which automatically learns face representation from raw pixels. Unlike existing binary face descriptors such as the LBP, discriminant face descriptor (DFD), and compact binary face descriptor (CBFD) which use a two-stage feature extraction procedure, our SLBFLE jointly learns binary codes and the codebook for local face patches so that discriminative information from raw pixels from face images of different identities can be obtained by using a one-stage feature learning and encoding procedure. Moreover, we propose a coupled simultaneous local binary feature learning and encoding (C-SLBFLE) method to make the proposed approach suitable for heterogeneous face matching. Unlike most existing coupled feature learning methods which learn a pair of transformation matrices for each modality, we exploit both the common and specific information from heterogeneous face samples to characterize their underlying correlations. Experimental results on six widely used face datasets are presented to demonstrate the effectiveness of the proposed method.
Mean stress and the exhaustion of fatigue-damage resistance
NASA Technical Reports Server (NTRS)
Berkovits, Avraham
1989-01-01
Mean-stress effects on fatigue life are critical in isothermal and thermomechanically loaded materials and composites. Unfortunately, existing mean-stress life-prediction methods do not incorporate physical fatigue damage mechanisms. An objective is to examine the relation between mean-stress induced damage (as measured by acoustic emission) and existing life-prediction methods. Acoustic emission instrumentation has indicated that, as with static yielding, fatigue damage results from dislocation buildup and motion until dislocation saturation is reached, after which void formation and coalescence predominate. Correlation of damage processes with similar mechanisms under monotonic loading led to a reinterpretation of Goodman diagrams for 40 alloys and a modification of Morrow's formulation for life prediction under mean stresses. Further testing, using acoustic emission to monitor dislocation dynamics, can generate data for developing a more general model for fatigue under mean stress.
Li, Gang; He, Bin; Huang, Hongwei; Tang, Limin
2016-01-01
The spatial–temporal correlation is an important feature of sensor data in wireless sensor networks (WSNs). Most of the existing works based on the spatial–temporal correlation can be divided into two parts: redundancy reduction and anomaly detection. These two parts are pursued separately in existing works. In this work, the combination of temporal data-driven sleep scheduling (TDSS) and spatial data-driven anomaly detection is proposed, where TDSS can reduce data redundancy. The TDSS model is inspired by transmission control protocol (TCP) congestion control. Based on long and linear cluster structure in the tunnel monitoring system, cooperative TDSS and spatial data-driven anomaly detection are then proposed. To realize synchronous acquisition in the same ring for analyzing the situation of every ring, TDSS is implemented in a cooperative way in the cluster. To keep the precision of sensor data, spatial data-driven anomaly detection based on the spatial correlation and Kriging method is realized to generate an anomaly indicator. The experiment results show that cooperative TDSS can realize non-uniform sensing effectively to reduce the energy consumption. In addition, spatial data-driven anomaly detection is quite significant for maintaining and improving the precision of sensor data. PMID:27690035
Koskinen, Heli I
2010-01-01
The Faculty of Veterinary Medicine at the University of Helsinki recognized the lack of systems to measure the quality of education. At the department level, this meant a lack of systems to measure the quality of students' outcomes. The aim of this article was to compare the quality of outcomes of a final examination in veterinary radiology by calculating the correlations between traditional (quantitative scores traditionally given by veterinary teachers) and nontraditional (qualitative Structure of the Observed Learning Outcome, or SOLO, method) grading results. Evaluation of the quality of the questions is also included. The results indicate that SOLO offers criteria for quality evaluation, especially for questions. A correlation of 0.60 (p<0.01) existed between qualitative and quantitative estimations, and a correlation of 0.79 (p<0.01) existed between evaluators, both using traditional scores. Two suggestions are made for a better system to evaluate quality in the future: first, development of problem-solving skills during the learning process should also be assessed; second, both the scoring of factual correctness of answers (knowledge) and the grammatical structure of an answer and the quality of presentation should be included in the quality evaluation process.
Hegde, Gautham; Hegde, Nanditha; Kumar, Anil; Keshavaraj
2014-07-01
Orthodontic diagnosis and treatment planning for growing children must involve growth prediction, especially in the treatment of skeletal problems. Studies have shown that a strong association exists between skeletal maturity and dental calcification stages. The present study was therefore undertaken to provide a simple and practical method for assessing skeletal maturity using a dental periapical film and a standard dental X-ray machine, to compare the developmental stages of the mandibular canine with the developmental stages of the modified MP3 and determine whether any correlation exists, and to determine whether the developmental stages of the mandibular canine alone can be used as a reliable indicator for assessment of skeletal maturity. A total of 160 periapical radiographs of the mandibular right canine and the MP3 region were taken and assessed according to Demirjian's stages of dental calcification and the modified MP3 stages. The correlation coefficient between MP3 stages and developmental stages of the mandibular canine was found to be significant in both male and female groups. When the canine calcification stages were compared with the MP3 stages, it was found that, with the exception of the D stage of canine calcification, the remaining stages showed a very high correlation with the modified MP3 stages. The correlation between the mandibular canine calcification stages and the MP3 stages was found to be significant. Canine calcification could be used as a sole indicator for assessment of skeletal maturity.
Nayak, Reshma; Nayak, Us Krishna; Hegde, Gautam
2010-01-01
Orthodontic diagnosis and treatment planning for growing children must involve growth prediction, especially in the treatment of skeletal problems. Studies have shown that a strong association exists between skeletal maturity and dental calcification stages. The present study was therefore undertaken to provide a simple and practical method for assessing skeletal maturity using a dental periapical film and a standard dental X-ray machine, to compare the developmental stages of the mandibular canine with the developmental stages of the modified MP3 and determine whether any correlation exists, and to determine whether the developmental stages of the mandibular canine alone can be used as a reliable indicator for assessment of skeletal maturity. A total of 160 periapical radiographs (80 males and 80 females) of the mandibular right canine and the MP3 region were taken and assessed according to Demirjian's stages of dental calcification and the modified MP3 stages. The correlation between the developmental stages of MP3 and the mandibular right canine in male and female groups is of high statistical significance (p = 0.001). The correlation coefficient between MP3 stages and developmental stages of the mandibular canine and chronological age in males and females was found to be not significant. The correlation between the mandibular canine calcification stages and MP3 stages was found to be significant. The developmental stages of the mandibular canine could be used very reliably as a sole indicator for assessment of skeletal maturity.
Dalmolin, Graziele de Lima; Lunardi, Valéria Lerch; Lunardi, Guilherme Lerch; Barlem, Edison Luiz Devos; da Silveira, Rosemary Silva
2014-01-01
Objective: to identify relationships between moral distress and Burnout in professional performance from the perceptions of the experiences of nursing workers. Methods: this is a survey-type study with 375 nursing workers working in three different hospitals of southern Rio Grande do Sul, with the application of adaptations of the Moral Distress Scale and the Maslach Burnout Inventory, validated and standardized for use in Brazil. Data validation occurred through factor analysis and Cronbach's alpha. For the data analysis, bivariate analysis using Pearson's correlation and multivariate analysis using multiple regression were performed. Results: the existence of a weak correlation between moral distress and Burnout was verified. A possible positive correlation between Burnout and therapeutic obstinacy, and a negative correlation between professional fulfillment and moral distress, were identified. Conclusion: the need was identified for further studies that include mediating and moderating variables that may explain more clearly the models studied. PMID:24553701
Correlation estimation and performance optimization for distributed image compression
NASA Astrophysics Data System (ADS)
He, Zhihai; Cao, Lei; Cheng, Hui
2006-01-01
Correlation estimation plays a critical role in resource allocation and rate control for distributed data compression. A Wyner-Ziv encoder for distributed image compression is often considered as a lossy source encoder followed by a lossless Slepian-Wolf encoder. The source encoder consists of spatial transform, quantization, and bit plane extraction. In this work, we find that Gray code, which has been extensively used in digital modulation, is able to significantly improve the correlation between the source data and its side information. Theoretically, we analyze the behavior of Gray code within the context of distributed image compression. Using this theoretical model, we are able to efficiently allocate the bit budget and determine the code rate of the Slepian-Wolf encoder. Our experimental results demonstrate that the Gray code, coupled with accurate correlation estimation and rate control, significantly improves the picture quality, by up to 4 dB, over the existing methods for distributed image compression.
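As a rough illustration of why Gray coding can help, the sketch below converts a toy quantized source and its noisy side information to binary-reflected Gray codes and compares bit-plane correlations under the two representations; the data, bit depth, and noise level are assumptions, not the paper's setup.

```python
import numpy as np

def to_gray(x):
    """Convert non-negative integers to their binary-reflected Gray codes."""
    return x ^ (x >> 1)

def bitplane(values, k):
    """Extract bit plane k (0 = least significant) from an integer array."""
    return (values >> k) & 1

rng = np.random.default_rng(0)
# Toy 8-bit source and correlated side information (quantized coefficients).
source = rng.integers(0, 256, size=10000)
side_info = np.clip(source + rng.integers(-8, 9, size=source.size), 0, 255)

# Compare, plane by plane, how the two representations correlate with the side information.
for k in range(7, 2, -1):
    plain = np.corrcoef(bitplane(source, k), bitplane(side_info, k))[0, 1]
    gray = np.corrcoef(bitplane(to_gray(source), k), bitplane(to_gray(side_info), k))[0, 1]
    print(f"bit plane {k}: natural binary r = {plain:.3f}, Gray code r = {gray:.3f}")
```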
Kinematics of velocity and vorticity correlations in turbulent flow
NASA Technical Reports Server (NTRS)
Bernard, P. S.
1983-01-01
The kinematic problem of calculating second-order velocity moments from given values of the vorticity covariance is examined. Integral representation formulas for second-order velocity moments in terms of the two-point vorticity correlation tensor are derived. The special relationships existing between velocity moments in isotropic turbulence are expressed in terms of the integral formulas yielding several kinematic constraints on the two-point vorticity correlation tensor in isotropic turbulence. Numerical evaluation of these constraints suggests that a Gaussian curve may be the only form of the longitudinal velocity correlation coefficient which is consistent with the requirement of isotropy. It is shown that if this is the case, then a family of exact solutions to the decay of isotropic turbulence may be obtained which contains Batchelor's final period solution as a special case. In addition, the computed results suggest a method of approximating the integral representation formulas in general turbulent shear flows.
Consistently Sampled Correlation Filters with Space Anisotropic Regularization for Visual Tracking
Shi, Guokai; Xu, Tingfa; Luo, Jiqiang; Li, Yuankun
2017-01-01
Most existing correlation filter-based tracking algorithms, which use fixed patches and cyclic shifts as training and detection measures, assume that the training samples are reliable and ignore the inconsistencies between training samples and detection samples. We propose to construct and study a consistently sampled correlation filter with space anisotropic regularization (CSSAR) to solve these two problems simultaneously. Our approach constructs a spatiotemporally consistent sample strategy to alleviate the redundancies in training samples caused by the cyclical shifts, eliminate the inconsistencies between training samples and detection samples, and introduce space anisotropic regularization to constrain the correlation filter for alleviating drift caused by occlusion. Moreover, an optimization strategy based on the Gauss-Seidel method was developed for obtaining robust and efficient online learning. Both qualitative and quantitative evaluations demonstrate that our tracker outperforms state-of-the-art trackers in object tracking benchmarks (OTBs). PMID:29231876
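A minimal single-channel, Fourier-domain correlation filter (in the spirit of MOSSE-style trackers) may clarify the framework such methods build on; it is not the CSSAR formulation, and the patch size, regularization constant, and synthetic template below are assumptions.

```python
import numpy as np

def gaussian_response(shape, sigma=2.0):
    """Desired correlation output: a Gaussian peak centred on the target."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    g = np.exp(-((ys - h // 2) ** 2 + (xs - w // 2) ** 2) / (2 * sigma ** 2))
    return np.fft.fftshift(g)  # move the peak to (0, 0), as the FFT expects

def train_filter(patch, lam=1e-2):
    """Closed-form single-sample correlation filter (MOSSE-style)."""
    F = np.fft.fft2(patch)
    G = np.fft.fft2(gaussian_response(patch.shape))
    return (G * np.conj(F)) / (F * np.conj(F) + lam)

def detect(H_conj, patch):
    """Correlate a new patch with the filter and return the response peak location."""
    response = np.real(np.fft.ifft2(H_conj * np.fft.fft2(patch)))
    return np.unravel_index(np.argmax(response), response.shape)

rng = np.random.default_rng(1)
template = rng.random((64, 64))
H_conj = train_filter(template)
# The peak should land near (5, 3), the shift applied to the new patch.
print("estimated shift (rows, cols):",
      detect(H_conj, np.roll(template, shift=(5, 3), axis=(0, 1))))
```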
New advances in the partial-reflection-drifts experiment using microprocessors
NASA Technical Reports Server (NTRS)
Ruggerio, R. L.; Bowhill, S. A.
1982-01-01
Improvements to the partial reflection drifts experiment are completed. The results of the improvements include real time processing and simultaneous measurements of the D region with coherent scatter. Preliminary results indicate a positive correlation between drift velocities calculated by both methods during a two day interval. The possibility now exists for extended observations between partial reflection and coherent scatter. In addition, preliminary measurements could be performed between partial reflection and meteor radar to complete a comparison of methods used to determine velocities in the D region.
[Determination of 10-HDA in honeybee body by HPLC].
Fan, H; He, C; Han, H
1999-05-01
In the present work we found that the honeybee body contains an unsaturated fatty acid, trans-10-hydroxy-2-decenoic acid (10-HDA), which was previously known to be present only in royal jelly. We established an HPLC method for the analysis of 10-HDA in the honeybee body and simplified the extraction of 10-HDA. Under the optimum conditions, the linear range of detection was 10-1,000 ng, the correlation coefficient was 0.9998, the recovery was 96.5%-99.2%, and the detection limit was 0.53 microgram/g.
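A minimal sketch of the linear calibration step implied by the reported linear range and correlation coefficient; the calibration points and the unknown sample are invented placeholders, not data from the study.

```python
import numpy as np

# Hypothetical calibration points within the reported 10-1000 ng linear range.
amount_ng = np.array([10, 50, 100, 250, 500, 750, 1000], dtype=float)
peak_area = np.array([1.9, 9.8, 20.3, 50.1, 101.2, 148.9, 199.5])  # arbitrary units

slope, intercept = np.polyfit(amount_ng, peak_area, deg=1)
r = np.corrcoef(amount_ng, peak_area)[0, 1]
print(f"calibration: area = {slope:.4f} * ng + {intercept:.3f}, r = {r:.4f}")

# Quantify an unknown sample from its peak area via the inverse calibration.
unknown_area = 42.0
estimated_ng = (unknown_area - intercept) / slope
print(f"estimated 10-HDA amount: {estimated_ng:.1f} ng")
```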
Does pop music exist? Hierarchical structure in phonographic markets
NASA Astrophysics Data System (ADS)
Buda, Andrzej
2012-11-01
I find a topological arrangement of assets traded in phonographic markets that has an associated meaningful economic taxonomy. I continue using the Minimal Spanning Tree and the correlations between assets, but now outside the stock markets. This is the first attempt to use these methods on phonographic markets, where we have artists instead of stocks. The value of an artist is defined by record sales. The graph is obtained starting from the matrix of correlation coefficients computed between the world's 30 most popular artists by considering the synchronous time evolution of the difference of the logarithm of weekly record sales. This method provides the hierarchical structure of the phonographic market and information on which music genres are meaningful according to customers. Statistical properties (including the Hurst exponent) of weekly record sales in the phonographic market are also discussed.
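A hedged sketch of the pipeline described above: correlations of log differences of weekly sales are converted to a metric distance and a minimal spanning tree is built from the distance matrix; the synthetic sales series and the number of artists are assumptions for illustration.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

rng = np.random.default_rng(2)
n_artists, n_weeks = 6, 104
# Synthetic weekly record sales, for illustration only.
sales = np.exp(rng.normal(10.0, 0.3, size=(n_artists, n_weeks)))

# Differences of the logarithm of weekly sales, as in the paper.
log_diff = np.diff(np.log(sales), axis=1)
corr = np.corrcoef(log_diff)

# Mantegna-style metric distance derived from the correlation coefficients.
dist = np.sqrt(2.0 * (1.0 - corr))
np.fill_diagonal(dist, 0.0)

mst = minimum_spanning_tree(dist).toarray()
print("MST edges (artist i, artist j, distance):")
for i in range(n_artists):
    for j in range(n_artists):
        if mst[i, j] > 0:
            print(i, j, round(mst[i, j], 3))
```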
On the analysis of very small samples of Gaussian repeated measurements: an alternative approach.
Westgate, Philip M; Burchett, Woodrow W
2017-03-15
The analysis of very small samples of Gaussian repeated measurements can be challenging. First, due to a very small number of independent subjects contributing outcomes over time, statistical power can be quite small. Second, nuisance covariance parameters must be appropriately accounted for in the analysis in order to maintain the nominal test size. However, available statistical strategies that ensure valid statistical inference may lack power, whereas more powerful methods may have the potential for inflated test sizes. Therefore, we explore an alternative approach to the analysis of very small samples of Gaussian repeated measurements, with the goal of maintaining valid inference while also improving statistical power relative to other valid methods. This approach uses generalized estimating equations with a bias-corrected empirical covariance matrix that accounts for all small-sample aspects of nuisance correlation parameter estimation in order to maintain valid inference. Furthermore, the approach utilizes correlation selection strategies with the goal of choosing the working structure that will result in the greatest power. In our study, we show that when accurate modeling of the nuisance correlation structure impacts the efficiency of regression parameter estimation, this method can improve power relative to existing methods that yield valid inference. Copyright © 2017 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Gao, Yan; Liu, Yuyou; Ma, Yifan; Cheng, Xiaobin; Yang, Jun
2018-11-01
One major challenge currently facing pipeline networks across the world is the improvement of leak detection technologies in urban environments. There is an imperative to locate leaks in buried water pipes accurately, to avoid serious environmental, social and economic consequences. Much attention has been paid to time delay estimation (TDE) for determining the position of a leak by utilising cross-correlation, which has been applied with varying degrees of success over the past half century. Previous research in the published literature has demonstrated the effectiveness of the pre-whitening process for accentuating the peak in the cross-correlation associated with the time delay. This paper is concerned with the implementation of a differentiation process for TDE, with particular focus on the problem of locating a leak in pipelines by means of pipe pressure measurements. Rather than the pre-whitening operation, the proposed cross-correlation via the differentiation process, termed here DIF, changes the characteristics of the pipe system so that the pipe effectively acts as a band-pass filter. This method has the potential to eliminate some ambiguity caused by interference at low frequencies and to allow more high-frequency information to pass. Given an appropriate differentiation order, a more pronounced and reliable peak is obtained in the cross-correlation result. The use of the differentiation process may therefore provide a viable cross-correlation method suited to water leak detection. Its performance is further compared to the basic cross-correlation and pre-whitening methods for TDE in detecting a leak in actual PVC water pipes. Experimental results are presented to show an additional property of the DIF, compensation for resonance effects that may exist in cross-spectral density measurements, and hence better performance for TDE.
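A minimal sketch of time delay estimation by cross-correlation, with and without differentiating the signals first; it is not the authors' exact DIF formulation, and the sampling rate, delay, interference, and noise levels below are assumptions chosen only to illustrate how strong low-frequency interference can bias the basic estimate while differentiation tends to suppress it.

```python
import numpy as np

rng = np.random.default_rng(3)
fs = 2000.0                       # assumed sampling rate, Hz
n = 8192
true_delay = 37                   # samples by which the leak noise is delayed at sensor 2

# Synthetic broadband leak noise plus strong common low-frequency interference.
leak = rng.normal(size=n + true_delay)
t = np.arange(n) / fs
interference = 3.0 * np.sin(2 * np.pi * 13.5 * t)
sensor1 = leak[true_delay:] + interference + 0.2 * rng.normal(size=n)
sensor2 = leak[:n] + interference + 0.2 * rng.normal(size=n)   # delayed copy of the leak

def estimate_delay(a, b, order=0):
    """Lag of the cross-correlation peak after differentiating both signals `order` times."""
    for _ in range(order):
        a, b = np.diff(a), np.diff(b)
    a, b = a - a.mean(), b - b.mean()
    cc = np.correlate(a, b, mode="full")
    return int(np.argmax(cc) - (len(b) - 1))

print("true delay:", true_delay)
print("basic cross-correlation estimate:     ", estimate_delay(sensor2, sensor1))
print("estimate after first differentiation: ", estimate_delay(sensor2, sensor1, order=1))
```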
Source-Free Exchange-Correlation Magnetic Fields in Density Functional Theory.
Sharma, S; Gross, E K U; Sanna, A; Dewhurst, J K
2018-03-13
Spin-dependent exchange-correlation energy functionals in use today depend on the charge density and the magnetization density: E_xc[ρ, m]. However, it is also correct to define the functional in terms of the curl of m for physical external fields: E_xc[ρ, ∇ × m]. The exchange-correlation magnetic field, B_xc, then becomes source-free. We study this variation of the theory by uniquely removing the source term from local and generalized gradient approximations to the functional. By doing so, the total Kohn-Sham moments are improved for a wide range of materials for both functionals. Significantly, the moments for the pnictides are now in good agreement with experiment. This source-free method is simple to implement in all existing density functional theory codes.
Two-point correlators revisited: fast and slow scales in multifield models of inflation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghersi, José T. Gálvez; Frolov, Andrei V., E-mail: joseg@sfu.ca, E-mail: frolov@sfu.ca
2017-05-01
We study the structure of two-point correlators of the inflationary field fluctuations in order to improve the accuracy and efficiency of the existing methods to calculate primordial spectra. We present a description motivated by the separation of the fast and slow evolving components of the spectrum, which is based on the Cholesky decomposition of the field correlator matrix. Our purpose is to rewrite all the relevant equations of motion in terms of slowly varying quantities. This is important in order to consider the contribution from high-frequency modes to the spectrum without affecting computational performance. The slow-roll approximation is not required to reproduce the main distinctive features in the power spectrum for each specific model of inflation.
Quantitative optical scanning tests of complex microcircuits
NASA Technical Reports Server (NTRS)
Erickson, J. J.
1980-01-01
An approach for the development of the optical scanner as a screening inspection instrument for microcircuits involves comparing the quantitative differences in photoresponse images and then correlating them with electrical parameter differences in test devices. The existing optical scanner was modified so that the photoresponse data could be recorded and subsequently digitized. A method was devised for applying digital image processing techniques to the digitized photoresponse data in order to quantitatively compare the data. Electrical tests were performed and photoresponse images were recorded before and following life test intervals on two groups of test devices. Correlations were made between differences or changes in the electrical parameters of the test devices.
NASA Astrophysics Data System (ADS)
Sadeghifar, Hamidreza
2015-10-01
Developing general methods that rely on column data for the efficiency estimation of operating (existing) distillation columns has been overlooked in the literature. Most of the available methods are based on empirical mass transfer and hydraulic relations correlated to laboratory data. Therefore, these methods may not be sufficiently accurate when applied to industrial columns. In this paper, an applicable and accurate method was developed for the efficiency estimation of distillation columns filled with trays. This method can calculate efficiency as well as mass and heat transfer coefficients without using any empirical mass transfer or hydraulic correlations and without the need to estimate operational or hydraulic parameters of the column. For example, the method does not need to estimate the tray interfacial area, which can be its most important advantage over all the available methods. The method can be used for the efficiency prediction of any tray in distillation columns. For the efficiency calculation, the method employs the column data and uses the true rates of the mass and heat transfer occurring inside the operating column. It should be emphasized that estimating the efficiency of an operating column has to be distinguished from that of a column being designed.
NASA Technical Reports Server (NTRS)
Sopher, R.; Twomey, W. J.
1990-01-01
NASA-Langley is sponsoring a rotorcraft structural dynamics program with the objective to establish in the U.S. a superior capability to utilize finite element analysis models for calculations to support industrial design of helicopter airframe structures. In the initial phase of the program, teams from the major U.S. manufacturers of helicopter airframes will apply extant finite element analysis methods to calculate loads and vibrations of helicopter airframes, and perform correlations between analysis and measurements. The aforementioned rotorcraft structural dynamics program was given the acronym DAMVIBS (Design Analysis Method for Vibrations). Sikorsky's RDYNE Rotorcraft Dynamics Analysis used for the correlation study, the specifics of the application of RDYNE to the AH-1G, and comparisons of the predictions of the method with flight data for loads and vibrations on the AH-1G are described. RDYNE was able to predict trends of variations of loads and vibrations with airspeed, but in some instances magnitudes differed from measured results by factors of two or three to one. Sensitivities were studied of predictions to rotor inflow modeling, effects of torsional modes, number of blade bending modes, fuselage structural damping, and hub modal content.
Estimating consumer familiarity with health terminology: a context-based approach.
Zeng-Treitler, Qing; Goryachev, Sergey; Tse, Tony; Keselman, Alla; Boxwala, Aziz
2008-01-01
Effective health communication is often hindered by a "vocabulary gap" between language familiar to consumers and jargon used in medical practice and research. To present health information to consumers in a comprehensible fashion, we need to develop a mechanism to quantify health terms as being more likely or less likely to be understood by typical members of the lay public. Prior research has used approaches including syllable count, easy word list, and frequency count, all of which have significant limitations. In this article, we present a new method that predicts consumer familiarity using contextual information. The method was applied to a large query log data set and validated using results from two previously conducted consumer surveys. We measured the correlation between the survey result and the context-based prediction, syllable count, frequency count, and log normalized frequency count. The correlation coefficient between the context-based prediction and the survey result was 0.773 (p < 0.001), which was higher than the correlation coefficients between the survey result and the syllable count, frequency count, and log normalized frequency count (p ≤ 0.012). The context-based approach provides a good alternative to the existing term familiarity assessment methods.
Zam, Azhar; Dsouza, Roshan; Subhash, Hrebesh M; O'Connell, Marie-Louise; Enfield, Joey; Larin, Kirill; Leahy, Martin J
2013-09-01
We propose the use of correlation mapping optical coherence tomography (cmOCT) to deliver additional biometrics associated with the finger that could complement existing fingerprint technology for law enforcement applications. The current study extends the existing fingerprint paradigm by measuring additional biometrics associated with sub-surface finger tissue, such as sub-surface fingerprints, sweat glands, and the pattern of the capillary bed, to yield a user-friendly, cost-effective and anti-spoof multi-mode biometric solution associated with the finger. To our knowledge, no other method has been able to capture the sub-surface fingerprint, papillary pattern and horizontal vessel pattern in a single scan, or to show the correspondence between these patterns in a live adult human fingertip. Unlike many current technologies, this approach incorporates 'liveness' testing by default. The ultimate output is a biometric module which is difficult to defeat and complements fingerprint scanners currently used in border control and law enforcement applications. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Membrane filtration method for enumeration and isolation of Alicyclobacillus spp. from apple juice.
Lee, S-Y; Chang, S-S; Shin, J-H; Kang, D-H
2007-11-01
To evaluate the applicability of filtration membranes for detecting Alicyclobacillus spp. spores in apple juice, ten types of nitrocellulose membrane filters from five manufacturers were used to collect and enumerate five Alicyclobacillus spore isolates, and results were compared to conventional K agar plating. Spore recovery differed among filters, with an average recovery rate of 126.2%. Recovery levels also differed among spore isolates. Although significant differences (P < 0.05) in spore size existed, no correlation could be determined between spore size and membrane filter recovery rate. Recovery of spores using membrane filtration is dependent on the manufacturer and filter pore size. Correlations between spore recovery rate and spore size could not be determined. Low numbers of Alicyclobacillus spores in juice can be effectively detected using membrane filtration, although recovery rates differ among manufacturers. Use of membrane filtration is a simple, fast alternative to the week-long enrichment procedures currently employed in most quality assurance tests.
Sports Participation and Positive Correlates in African American, Latino, and White Girls
Duncan, Susan C.; Strycker, Lisa A.; Chaumeton, Nigel R.
2015-01-01
Purpose: The purpose of the study was to examine relations among sports participation and positive correlates across African American, Latino, and white girls. Positive correlate variables were self-perceptions (self-worth, body attractiveness, athletic competence), less depression, and participation in extracurricular activities. Methods: The sample comprised 372 girls (mean age = 12.03 years). Data were analyzed using multiple-sample structural equation models, controlling for age and income. Results: Across all ethnic groups, greater sports participation was significantly related to higher self-worth, body attractiveness, and athletic competence, and to more extracurricular activity. Among Latino and white girls only, greater sports participation also was related to less depression. There were significant age and income influences on the positive correlates. Conclusions: Findings confirm the existence of significant relationships between organized sports participation and positive correlates among early adolescent African American, Latino, and white girls. Despite a few ethnic differences in relationships, the current study revealed more similarities than differences. PMID:26692758
Ni, Jianhua; Qian, Tianlu; Xi, Changbai; Rui, Yikang; Wang, Jiechen
2016-08-18
The spatial distribution of urban service facilities is largely constrained by the road network. In this study, network point pattern analysis and correlation analysis were used to analyze the relationship between road network and healthcare facility distribution. The weighted network kernel density estimation method proposed in this study identifies significant differences between the outside and inside areas of the Ming city wall. The results of network K-function analysis show that private hospitals are more evenly distributed than public hospitals, and pharmacy stores tend to cluster around hospitals along the road network. After computing the correlation analysis between different categorized hospitals and street centrality, we find that the distribution of these hospitals correlates highly with the street centralities, and that the correlations are higher with private and small hospitals than with public and large hospitals. The comprehensive analysis results could help examine the reasonability of existing urban healthcare facility distribution and optimize the location of new healthcare facilities.
Chen, Zhe; Song, John; Chu, Wei; Soons, Johannes A; Zhao, Xuezeng
2017-11-01
The Congruent Matching Cells (CMC) method was invented at the National Institute of Standards and Technology (NIST) for accurate firearm evidence identification and error rate estimation. The CMC method is based on the principle of discretization. The toolmark image of the reference sample is divided into correlation cells. Each cell is registered to the cell-sized area of the compared image that has maximum surface topography similarity. For each resulting cell pair, one parameter quantifies the similarity of the cell surface topography and three parameters quantify the pattern congruency of the registration position and orientation. An identification (declared match) requires a significant number of CMCs, that is, cell pairs that meet both similarity and pattern congruency requirements. The use of cell correlations reduces the effects of "invalid regions" in the compared image pairs and increases the correlation accuracy. The identification accuracy of the CMC method can be further improved by considering a feature named "convergence," that is, the tendency of the x-y registration positions of the correlated cell pairs to converge at the correct registration angle when comparing same-source samples at different relative orientations. In this paper, the difference of the convergence feature between known matching (KM) and known non-matching (KNM) image pairs is characterized, based on which an improved algorithm is developed for breech face image correlations using the CMC method. Its advantage is demonstrated by comparison with three existing CMC algorithms using four datasets. The datasets address three different brands of consecutively manufactured pistol slides, with significant differences in the distribution overlap of cell pair topography similarity for KM and KNM image pairs. For the same CMC threshold values, the convergence algorithm demonstrates noticeably improved results by reducing the number of false-positive or false-negative CMCs in a comparison. Published by Elsevier B.V.
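The cell-based correlation just described can be illustrated with a much-simplified, translation-only sketch: the reference image is divided into square cells, each cell is registered to the most similar location of the compared image by normalized cross-correlation, and only cells whose similarity and registration offsets agree with the consensus are counted. The cell size, search radius, and thresholds below are placeholder assumptions; the rotation search and the actual NIST thresholds of the CMC method are omitted.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized cells."""
    a, b = a - a.mean(), b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def congruent_matching_cells(ref, cmp_img, cell=16, search=6, t_corr=0.6, t_pos=2):
    """Simplified, translation-only CMC count (no rotation search)."""
    h, w = ref.shape
    results = []
    for y in range(0, h - cell + 1, cell):
        for x in range(0, w - cell + 1, cell):
            ref_cell = ref[y:y + cell, x:x + cell]
            best = (-1.0, 0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy <= h - cell and 0 <= xx <= w - cell:
                        c = ncc(ref_cell, cmp_img[yy:yy + cell, xx:xx + cell])
                        if c > best[0]:
                            best = (c, dy, dx)
            results.append(best)
    results = np.array(results)
    good = results[results[:, 0] >= t_corr]         # similarity requirement
    if len(good) == 0:
        return 0
    # Pattern congruency: registration offsets must agree with the consensus offset.
    med_dy, med_dx = np.median(good[:, 1]), np.median(good[:, 2])
    cmc = np.sum((np.abs(good[:, 1] - med_dy) <= t_pos) &
                 (np.abs(good[:, 2] - med_dx) <= t_pos))
    return int(cmc)

rng = np.random.default_rng(4)
reference = rng.normal(size=(96, 96))
same_source = np.roll(reference, shift=(3, -2), axis=(0, 1)) + 0.1 * rng.normal(size=(96, 96))
print("CMC count (same source):     ", congruent_matching_cells(reference, same_source))
print("CMC count (different source):", congruent_matching_cells(reference, rng.normal(size=(96, 96))))
```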
Rahman, Md Mostafizur; Fattah, Shaikh Anowarul
2017-01-01
In view of the recent increase in brain-computer interface (BCI) based applications, efficient classification of various mental tasks has become increasingly important. In order to obtain effective classification, an efficient feature extraction scheme is necessary, for which, in the proposed method, the interchannel relationship among electroencephalogram (EEG) data is utilized. It is expected that the correlation obtained from different combinations of channels will differ between mental tasks, which can be exploited to extract distinctive features. The empirical mode decomposition (EMD) technique is employed on a test EEG signal obtained from a channel, which provides a number of intrinsic mode functions (IMFs), and correlation coefficients are extracted from interchannel IMF data. Simultaneously, different statistical features are also obtained from each IMF. Finally, the feature matrix is formed utilizing interchannel correlation features and intrachannel statistical features of the selected IMFs of the EEG signal. Different kernels of the support vector machine (SVM) classifier are used to carry out the classification task. An EEG dataset containing ten different combinations of five different mental tasks is utilized to demonstrate the classification performance, and a very high level of accuracy is achieved by the proposed scheme compared to existing methods.
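A hedged sketch of the feature-construction and classification stages: inter-channel correlations and simple statistics are computed per IMF and fed to an SVM. The EMD step is left abstract (stood in for by random components), so the cross-validated accuracy printed here is only a placeholder near chance level; the channel counts, IMF counts, and kernel settings are assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n_trials, n_channels, n_imfs, n_samples = 60, 4, 3, 256

def fake_imfs(trial_seed):
    """Stand-in for an EMD step: returns an array of shape (channels, imfs, samples)."""
    r = np.random.default_rng(trial_seed)
    return r.normal(size=(n_channels, n_imfs, n_samples))

def extract_features(imfs):
    """Inter-channel IMF correlations plus simple per-IMF statistics."""
    feats = []
    for k in range(n_imfs):
        band = imfs[:, k, :]                      # (channels, samples) for one IMF
        corr = np.corrcoef(band)                  # inter-channel correlation matrix
        iu = np.triu_indices(n_channels, k=1)
        feats.extend(corr[iu])                    # upper-triangle coefficients
        feats.extend(band.mean(axis=1))           # per-channel mean
        feats.extend(band.std(axis=1))            # per-channel spread
    return np.array(feats)

X = np.vstack([extract_features(fake_imfs(i)) for i in range(n_trials)])
y = rng.integers(0, 2, size=n_trials)             # placeholder two-task labels

clf = SVC(kernel="rbf", C=1.0, gamma="scale")
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```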
Multiple Phenotype Association Tests Using Summary Statistics in Genome-Wide Association Studies
Liu, Zhonghua; Lin, Xihong
2017-01-01
Summary: We study in this paper jointly testing the associations of a genetic variant with correlated multiple phenotypes using the summary statistics of individual phenotype analysis from Genome-Wide Association Studies (GWASs). We estimated the between-phenotype correlation matrix using the summary statistics of individual phenotype GWAS analyses, and developed genetic association tests for multiple phenotypes by accounting for between-phenotype correlation without the need to access individual-level data. Since genetic variants often affect multiple phenotypes differently across the genome and the between-phenotype correlation can be arbitrary, we proposed robust and powerful multiple phenotype testing procedures by jointly testing a common mean and a variance component in linear mixed models for summary statistics. We computed the p-values of the proposed tests analytically. This computational advantage makes our methods practically appealing in large-scale GWASs. We performed simulation studies to show that the proposed tests maintained correct type I error rates, and to compare their powers in various settings with the existing methods. We applied the proposed tests to a GWAS Global Lipids Genetics Consortium summary statistics data set and identified additional genetic variants that were missed by the original single-trait analysis. PMID:28653391
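The common-mean and variance-component tests proposed above are not reproduced here, but a minimal sketch of the general setting may help: given per-phenotype z-scores for a variant and a between-phenotype correlation matrix estimated from summary statistics of presumably null variants, a simple omnibus chi-square statistic combines the correlated scores. The data, the use of null variants to estimate the correlation matrix, and the omnibus statistic are illustrative assumptions rather than the authors' proposed tests.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
K = 3                                   # number of correlated phenotypes
true_R = np.array([[1.0, 0.4, 0.2],
                   [0.4, 1.0, 0.3],
                   [0.2, 0.3, 1.0]])

# Summary z-scores for many presumably null variants, used only to estimate R.
null_z = rng.normal(size=(5000, K)) @ np.linalg.cholesky(true_R).T
R_hat = np.corrcoef(null_z, rowvar=False)

def joint_test(z, R):
    """Omnibus chi-square test combining K correlated z-scores."""
    stat = float(z @ np.linalg.solve(R, z))
    return stat, stats.chi2.sf(stat, df=len(z))

z_variant = np.array([2.4, 1.9, 0.8])   # hypothetical z-scores for one variant
stat, p = joint_test(z_variant, R_hat)
print(f"chi-square = {stat:.2f}, p = {p:.3g}")
```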
A semiparametric separation curve approach for comparing correlated ROC data from multiple markers
Tang, Liansheng Larry; Zhou, Xiao-Hua
2012-01-01
In this article we propose a separation curve method to identify the range of false positive rates for which two ROC curves differ or one ROC curve is superior to the other. Our method is based on a general multivariate ROC curve model, including interaction terms between discrete covariates and false positive rates. It is applicable with most existing ROC curve models. Furthermore, we introduce a semiparametric least squares ROC estimator and apply the estimator to the separation curve method. We derive a sandwich estimator for the covariance matrix of the semiparametric estimator. We illustrate the application of our separation curve method through two real life examples. PMID:23074360
[Bioinorganic chemical composition of the lens and methods of its investigation].
Avetisov, S E; Novikov, I A; Pakhomova, N A; Motalov, V G
2018-01-01
The bioinorganic chemical composition of the lens of humans and experimental animals (cows, dogs, rats, rabbits) has been analyzed in various studies. In most cases, the studies employed different methods to determine the gross (total) composition of chemical elements and their concentrations in the examined samples. Less frequently, they included an assessment of the distribution of chemical elements in the lens and the correlation of their concentrations with its morphological changes. Chemical elements from all groups (series) of the periodic classification system were discovered in the lens substance. Despite similar investigation methods, different authors obtained contradictory results on the chemical composition of the lens. This article presents data suggesting a possible correlation between inorganic chemical elements in the lens substance and the development and formation of lenticular opacities. All currently employed methods analyze only a limited number of selected chemical elements in the tissues and do not consider the whole range of elements that can be analyzed with existing technology; furthermore, the majority of studies are conducted on the animal model lens. Therefore, it is feasible to continue the development of the chemical microanalysis method by increasing the sensitivity of Scanning Electron Microscopy with Energy Dispersive Spectroscopy (SEM/EDS), with the purpose of assessing the gross chemical composition and distribution of the elements in the lens substance, as well as revealing possible correlations between element concentrations and morphological changes in the lens.
The Basic Principles and Methods of the System Approach to Compression of Telemetry Data
NASA Astrophysics Data System (ADS)
Levenets, A. V.
2018-01-01
The task of compressing measurement data remains urgent for information-measurement systems. In this paper, the basic principles necessary for designing highly effective telemetry data compression systems are proposed. The basis of the proposed principles is the representation of a telemetry frame as a whole information space in which existing correlations can be found. Data transformation methods and compression algorithms realizing the proposed principles are described. The compression ratio for the proposed compression algorithm is about 1.8 times higher than for a classic algorithm. The results of this study of the methods and algorithms thus show their good prospects.
NASA Technical Reports Server (NTRS)
Murphy, Patrick C. (Technical Monitor); Klein, Vladislav
2005-01-01
The program objectives are fully defined in the original proposal entitled Program of Research in Flight Dynamics in GW at NASA Langley Research Center, which was originated March 20, 1975, and in the renewals of the research program from January 1, 2003 to September 30, 2005. The program in its present form includes three major topics: 1. the improvement of existing methods and development of new methods for wind tunnel and flight data analysis, 2. the application of these methods to wind tunnel and flight test data obtained from advanced airplanes, 3. the correlation of flight results with wind tunnel measurements, and theoretical predictions.
Midthune, Douglas; Dodd, Kevin W.; Freedman, Laurence S.; Krebs-Smith, Susan M.; Subar, Amy F.; Guenther, Patricia M.; Carroll, Raymond J.; Kipnis, Victor
2007-01-01
Objective: We propose a new statistical method that uses information from two 24-hour recalls (24HRs) to estimate usual intake of episodically-consumed foods. Statistical Analyses Performed: The method developed at the National Cancer Institute (NCI) accommodates the large number of non-consumption days that arise with foods by separating the probability of consumption from the consumption-day amount, using a two-part model. Covariates, such as sex, age, race, or information from a food frequency questionnaire (FFQ), may supplement the information from two or more 24HRs using correlated mixed model regression. The model allows for correlation between the probability of consuming a food on a single day and the consumption-day amount. Percentiles of the distribution of usual intake are computed from the estimated model parameters. Results: The Eating at America's Table Study (EATS) data are used to illustrate the method to estimate the distribution of usual intake for whole grains and dark green vegetables for men and women and the distribution of usual intakes of whole grains by educational level among men. A simulation study indicates that the NCI method leads to substantial improvement over existing methods for estimating the distribution of usual intake of foods. Applications/Conclusions: The NCI method provides distinct advantages over previously proposed methods by accounting for the correlation between probability of consumption and amount consumed and by incorporating covariate information. Researchers interested in estimating the distribution of usual intakes of foods for a population or subpopulation are advised to work with a statistician and incorporate the NCI method in analyses. PMID:17000190
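As a rough illustration of the two-part idea (not the NCI method's mixed-model estimation), the sketch below simulates correlated person-level consumption probabilities and consumption-day amounts and combines them into a usual-intake distribution; every parameter value is an assumption chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
n_people = 10000

# Person-specific, correlated latent traits: propensity to consume and typical amount.
rho = 0.5                                  # assumed correlation between the two parts
cov = np.array([[1.0, rho], [rho, 1.0]])
latent = rng.multivariate_normal([0.0, 0.0], cov, size=n_people)

# Part 1: probability of consuming the food on any given day (logistic link).
p_consume = 1.0 / (1.0 + np.exp(-(-1.0 + 0.8 * latent[:, 0])))
# Part 2: expected consumption-day amount (log-normal scale, arbitrary units).
amount = np.exp(3.0 + 0.5 * latent[:, 1])

usual_intake = p_consume * amount          # long-run average daily intake per person
for q in (10, 25, 50, 75, 90):
    print(f"{q}th percentile of usual intake: {np.percentile(usual_intake, q):.1f}")
```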
The consumer quality index anthroposophic healthcare: a construction and validation study.
Koster, Evi B; Ong, Rob R S; Heybroek, Rachel; Delnoij, Diana M J; Baars, Erik W
2014-04-02
Accounting for the patients' perspective on quality of care has become increasingly important in the development of Evidence Based Medicine as well as in governmental policies. In the Netherlands, the Consumer Quality (CQ) Index has been developed to measure the quality of care from the patients' perspective in different healthcare sectors in a standardized manner. Although the scientific accountability of anthroposophic healthcare as a form of integrative medicine is growing, patient experiences with anthroposophic healthcare have not been measured systematically. In addition, the specific anthroposophic aspects are not measured by existing CQ Indexes. To enable accountability of the quality of anthroposophic healthcare from the patients' perspective, the aim of this study is the construction and validation of a CQ Index for anthroposophic healthcare. Construction proceeded in three phases. Phase 1: determining anthroposophic quality aspects through a literature study and focus groups. Phase 2: adding new questions and validating the new questionnaire. Research population: a random sample of 7910 patients of 22 anthroposophic GPs. Data were collected via a mixed-mode survey by means of the Dillman method. Measuring instrument: an experience questionnaire, the CQ Index General Practice (56 items), extended with 27 new anthroposophic items, and an item-importance questionnaire (anthroposophic items only). Analyses: factor analysis, scale construction, internal consistency (Cronbach's alpha), inter-item correlation, discriminative ability (intraclass correlation) and inter-factor correlations. Phase 3: modification and selection of new questions based on the results. Criteria for retaining items were general (a limited number of items), statistical (part of a reliable scale and inter-item correlation <0.7), and theoretical. Phase 1 yielded 27 anthroposophic items. Phase 2 yielded two new anthroposophic scales: scale AntroposophicTreatmentGP (seven items, alpha=0.832, ICC=4.2; inter-factor correlations with existing GP scales range from r=0.24 (Accessibility) to r=0.56 (TailoredCare)) and scale InteractionalStyleGP (five items, alpha=0.810, ICC=5.8; inter-factor correlations with existing GP scales range from r=0.32 (Accessibility) to r=0.76 (TailoredCare)); the inter-factor correlation between the new scales was r=0.50. In Phase 3, both scales and four single items were added, eleven items were removed, and two items were reformulated. The CQ Index Anthroposophic Healthcare measures patient experiences with anthroposophic GPs validly and reliably. Judging from the inter-factor correlations, anthroposophic quality aspects from the patients' perspective are mostly associated with individually tailored care and patient centeredness.
Vibrational cross sections for positron scattering by nitrogen molecules
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mazon, K. T.; Tenfen, W.; Michelin, S. E.
2010-09-15
We present a systematic study of low-energy positron collision with nitrogen molecules. Vibrational elastic and excitation cross sections are calculated using the multichannel version of the continued fractions method in the close-coupling scheme for the positron incident energy up to 20 eV. The interaction potential is treated within the static-correlation-polarization approximation. The comparison of our calculated data with existing theoretical and experimental results is encouraging.
VanderKraats, Nathan D.; Hiken, Jeffrey F.; Decker, Keith F.; Edwards, John R.
2013-01-01
Methylation of the CpG-rich region (CpG island) overlapping a gene’s promoter is a generally accepted mechanism for silencing expression. While recent technological advances have enabled measurement of DNA methylation and expression changes genome-wide, only modest correlations between differential methylation at gene promoters and expression have been found. We hypothesize that stronger associations are not observed because existing analysis methods oversimplify their representation of the data and do not capture the diversity of existing methylation patterns. Recently, other patterns such as CpG island shore methylation and long partially hypomethylated domains have also been linked with gene silencing. Here, we detail a new approach for discovering differential methylation patterns associated with expression change using genome-wide high-resolution methylation data: we represent differential methylation as an interpolated curve, or signature, and then identify groups of genes with similarly shaped signatures and corresponding expression changes. Our technique uncovers a diverse set of patterns that are conserved across embryonic stem cell and cancer data sets. Overall, we find strong associations between these methylation patterns and expression. We further show that an extension of our method also outperforms other approaches by generating a longer list of genes with higher quality associations between differential methylation and expression. PMID:23748561
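A hedged sketch of the signature idea described above: each gene's differential methylation values, measured at scattered positions around the transcription start site, are interpolated onto a common grid, and the resulting curves are grouped by a generic clustering step. The grid, the k-means clustering, and the synthetic data are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(11)
grid = np.linspace(-5000, 5000, 101)      # common coordinate grid around the TSS (bp)

def signature(positions, delta_meth):
    """Interpolate scattered differential-methylation values onto the common grid."""
    order = np.argsort(positions)
    return np.interp(grid, positions[order], delta_meth[order])

# Synthetic genes: half with promoter-centred hypermethylation, half essentially flat.
signatures = []
for g in range(200):
    pos = np.sort(rng.uniform(-5000, 5000, size=40))
    if g < 100:
        delta = 0.6 * np.exp(-(pos / 1500.0) ** 2) + 0.05 * rng.normal(size=pos.size)
    else:
        delta = 0.05 * rng.normal(size=pos.size)
    signatures.append(signature(pos, delta))
X = np.vstack(signatures)

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("genes per signature cluster:", np.bincount(labels))
```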
Gibbons, Laura E; Crane, Paul K; Mehta, Kala M; Pedraza, Otto; Tang, Yuxiao; Manly, Jennifer J; Narasimhalu, Kaavya; Teresi, Jeanne; Jones, Richard N; Mungas, Dan
2011-04-28
Differential item functioning (DIF) occurs when a test item has different statistical properties in subgroups, controlling for the underlying ability measured by the test. DIF assessment is necessary when evaluating measurement bias in tests used across different language groups. However, other factors such as educational attainment can differ across language groups, and DIF due to these other factors may also exist. How to conduct DIF analyses in the presence of multiple, correlated factors remains largely unexplored. This study assessed DIF related to Spanish versus English language in a 44-item object naming test. Data come from a community-based sample of 1,755 Spanish- and English-speaking older adults. We compared simultaneous accounting, a new strategy for handling differences in educational attainment across language groups, with existing methods. Compared to other methods, simultaneously accounting for language- and education-related DIF yielded salient differences in some object naming scores, particularly for Spanish speakers with at least 9 years of education. Accounting for factors that vary across language groups can be important when assessing language DIF. The use of simultaneous accounting will be relevant to other cross-cultural studies in cognition and in other fields, including health-related quality of life.
NASA Astrophysics Data System (ADS)
Meyer, Uwe; Fries, Elke; Frei, Michaela
2016-04-01
Soil is one of the most precious resources on Earth. Preserving, using and enriching soils are highly complex processes that fundamentally require a sound regional data base. Many countries lack this sort of extensive data, or the existing data must be urgently updated when land use has recently changed in major patterns. The project "RECHARBO" (Regional Characterization of Soil Properties) aims at combining methods from remote sensing, geophysics and geopedology in order to develop a new system to map soils on a regional scale in a quick and efficient manner. First tests will be performed on existing soil monitoring districts, using newly available sensing systems as well as established techniques. In particular, hyperspectral and infrared data measured from satellites or airborne platforms shall be combined. Moreover, a systematic correlation between hyperspectral imagery and gamma-ray spectroscopy shall be established. These recordings will be compared and correlated with measurements on the ground and on soil samples to capture properties such as soil moisture, soil density and specific resistance, plus analytic properties like clay content, inorganic background, organic matter etc. The goal is to generate a system that enables users to map soil patterns on a regional scale using airborne or satellite data and to determine their characteristics with only a limited number of soil samples.
Spatial Copula Model for Imputing Traffic Flow Data from Remote Microwave Sensors.
Ma, Xiaolei; Luan, Sen; Du, Bowen; Yu, Bin
2017-09-21
Issues of missing data have become increasingly serious with the rapid increase in the usage of traffic sensors. Analyses of the Beijing ring expressway have shown that up to 50% of microwave sensors have missing values. The imputation of missing traffic data must be urgently addressed, although a precise solution cannot be easily achieved due to the significant proportion of missing data. In this study, copula-based models are proposed for the spatial interpolation of traffic flow from remote traffic microwave sensors. Most existing interpolation methods rely only on covariance functions to depict spatial correlation and are unsuitable for coping with anomalies due to the Gaussian assumption. Copula theory overcomes this issue and provides a connection between the correlation function and the marginal distribution function of traffic flow. To validate the copula-based models, a comparison with three kriging methods is conducted. Results indicate that copula-based models outperform kriging methods, especially on roads with irregular traffic patterns. Copula-based models demonstrate significant potential for imputing missing data in large-scale transportation networks.
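A minimal sketch of a Gaussian-copula interpolation in the spirit of the approach described: observed flows are mapped to normal scores through their empirical distribution, a distance-based correlation function supplies the spatial dependence, the missing score is predicted by a conditional Gaussian step, and the result is mapped back through the empirical quantile. The sensor layout, correlation model, and range parameter are assumptions for illustration, not the paper's fitted models.

```python
import numpy as np
from scipy import stats

# Synthetic sensor locations (km along a corridor) and observed flows (vehicles/h).
x_obs = np.array([0.0, 1.2, 2.5, 4.0, 5.1, 6.3])
flow_obs = np.array([820., 760., 905., 1010., 870., 940.])
x_miss = 3.2                          # location of the sensor with missing data

# Step 1: map observed flows to normal scores via the empirical CDF.
u = stats.rankdata(flow_obs) / (len(flow_obs) + 1.0)
z = stats.norm.ppf(u)

# Step 2: exponential spatial correlation model (range parameter assumed).
def corr(d, length_scale=2.0):
    return np.exp(-np.abs(d) / length_scale)

C = corr(x_obs[:, None] - x_obs[None, :]) + 1e-9 * np.eye(len(x_obs))
c0 = corr(x_miss - x_obs)

# Step 3: conditional Gaussian prediction of the missing normal score.
z_pred = float(np.linalg.solve(C, c0) @ z)

# Step 4: back-transform through the empirical quantile of the observed flows.
flow_pred = np.quantile(flow_obs, stats.norm.cdf(z_pred))
print(f"imputed flow at x = {x_miss} km: {flow_pred:.0f} vehicles/h")
```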
Tran, Hanh T M; Stephenson, Steven L; Tullis, Jason A
2015-01-01
The conventional method used to assess growth of the plasmodium of the slime mold Physarum polycephalum in solid culture is to measure the extent of plasmodial expansion from the point of inoculation using a ruler. However, plasmodial growth is usually rather irregular, so the values obtained are not especially accurate. Similar challenges exist in quantifying the growth of a fungal mycelium. In this paper, we describe a method that uses geographic information system software to obtain highly accurate estimates of plasmodial growth over time. This approach calculates plasmodial area from images obtained at particular intervals following inoculation. In addition, the correlation between plasmodial area and its dry cell weight was determined. The correlation could be used for biomass estimation without the need to terminate the cultures in question. The method described herein is simple but effective and could also be used for growth measurements of other microorganisms such as fungi on solid media.
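A minimal sketch of the area-from-pixels idea, using plain NumPy in place of GIS software: the plate image is thresholded, covered pixels are counted, the count is converted to area with an assumed pixel scale, and a hypothetical area-to-dry-weight calibration illustrates non-destructive biomass estimation. The image, threshold, scale, and calibration coefficients are placeholders.

```python
import numpy as np

rng = np.random.default_rng(9)

# Stand-in for a photograph of the culture plate: bright pixels = plasmodium.
image = 0.2 * rng.random((500, 500))
yy, xx = np.mgrid[0:500, 0:500]
image[(yy - 250) ** 2 + (xx - 250) ** 2 < 120 ** 2] += 1.0   # synthetic circular blob

plasmodium_mask = image > 0.5             # simple intensity threshold
pixel_area_mm2 = 0.1 ** 2                 # assumed scale: 0.1 mm per pixel
area_mm2 = plasmodium_mask.sum() * pixel_area_mm2
print(f"estimated plasmodial area: {area_mm2:.1f} mm^2")

# With an established area-to-dry-weight calibration, biomass can be
# estimated without terminating the culture (coefficients are hypothetical).
slope_mg_per_mm2, intercept_mg = 0.015, 0.2
print(f"estimated dry weight: {slope_mg_per_mm2 * area_mm2 + intercept_mg:.1f} mg")
```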
NASA Astrophysics Data System (ADS)
Chang, C.; Wang, J.; Liu, S.; Shao, M.; Zhang, Y.; Zhu, T.; Shiu, C.; Lai, C.
2010-12-01
Two on-site continuous measurements of ozone and its precursors in two megacities of China were carried out at an urban site in Beijing and a suburban site near Guangzhou in the Pearl River Delta (PRD) to estimate precursor consumption and to assess its relationship with the oxidant (O3+NO2) formation level. An observation-based method (OBM) with the precursor consumption concept was adopted to assess the relationship between oxidant production and the amounts of photochemically consumed non-methane hydrocarbons (NMHCs). In this approach, the ratio of ethylbenzene to m,p-xylenes was used to estimate the degree of photochemical processing, as well as the amounts of NMHCs photochemically consumed by reaction with OH. When the observed oxidant was correlated with the observed NMHC concentrations, both areas revealed little to no correlation between them. However, fair to good correlations (R2=0.68 for Beijing, 0.53 for PRD) existed between the observed oxidant level and the degree of photochemical processing (ethylbenzene/m,p-xylenes). Furthermore, after taking the consumption approach to estimate the consumed amounts of NMHCs, an interesting finding is that a definite correlation existed between the observed oxidant level and the total consumed NMHCs. The good correlations (R2=0.83 for Beijing, 0.81 for PRD) imply that the ambient oxidant level is correlated with the amount of consumed NMHCs. The results for the two megacities in China obtained by using the OBM with the precursor consumption concept provide another pathway to explore the relationship between photochemically produced oxidant and consumed precursors, and will be helpful for validating model results and reducing the uncertainty of model predictions. However, the method leaves some room for uncertainty, as the injection of fresh precursor emissions, the involvement of additional boundary ozone, and other factors could affect the estimation of consumed NMHCs and observed oxidant levels. Approaches for assessing the influence of these interfering factors would be helpful for acquiring more reliable inferences about the relationship between oxidant formation and precursor consumption.
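A hedged sketch of the consumption calculation implied above: the change of the ethylbenzene to m,p-xylene ratio gives the OH exposure (degree of photochemical processing), and each NMHC's consumed amount follows from its OH rate constant and observed mixing ratio. The initial ratio, OH concentration, mixing ratios, and rate constants below are illustrative assumptions (the rate constants are only approximate literature values), not values from the study.

```python
import numpy as np

# Approximate OH rate constants (cm^3 molecule^-1 s^-1); treated here as assumptions.
k_OH = {"ethylbenzene": 7.0e-12, "m,p-xylene": 1.9e-11,
        "toluene": 5.6e-12, "propene": 2.6e-11}

ratio_initial = 0.35          # assumed ethylbenzene/m,p-xylene ratio at emission
ratio_observed = 0.60         # observed ratio after photochemical processing
OH = 2.0e6                    # assumed mean OH concentration, molecules cm^-3

# Degree of processing (OH exposure) from the change of the E/X ratio.
oh_exposure = np.log(ratio_observed / ratio_initial) / (k_OH["m,p-xylene"] - k_OH["ethylbenzene"])
print(f"OH exposure: {oh_exposure:.2e} molecules cm^-3 s "
      f"(~{oh_exposure / OH / 3600:.1f} h at the assumed [OH])")

# Consumed amount of each NMHC inferred from its observed mixing ratio (ppbv).
observed_ppbv = {"toluene": 4.2, "propene": 1.1, "ethylbenzene": 0.9}
for species, obs in observed_ppbv.items():
    consumed = obs * (np.exp(k_OH[species] * oh_exposure) - 1.0)
    print(f"{species}: observed {obs:.2f} ppbv, estimated consumed {consumed:.2f} ppbv")
```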
A comparison of four streamflow record extension techniques
Hirsch, Robert M.
1982-01-01
One approach to developing time series of streamflow, which may be used for simulation and optimization studies of water resources development activities, is to extend an existing gage record in time by exploiting the interstation correlation between the station of interest and some nearby (long-term) base station. Four methods of extension are described, and their properties are explored. The methods are regression (REG), regression plus noise (RPN), and two new methods, maintenance of variance extension types 1 and 2 (MOVE.1, MOVE.2). MOVE.1 is equivalent to a method which is widely used in psychology, biometrics, and geomorphology and which has been called by various names, e.g., ‘line of organic correlation,’ ‘reduced major axis,’ ‘unique solution,’ and ‘equivalence line.’ The methods are examined for bias and standard error of estimate of moments and order statistics, and an empirical examination is made of the preservation of historic low-flow characteristics using 50-year-long monthly records from seven streams. The REG and RPN methods are shown to have serious deficiencies as record extension techniques. MOVE.2 is shown to be marginally better than MOVE.1, according to the various comparisons of bias and accuracy.
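As a point of reference for the variance-maintaining idea behind MOVE.1 (the line of organic correlation), a minimal sketch is given below; the synthetic station records and parameter values are assumptions for illustration, not values from the study.

    # Minimal MOVE.1 (line of organic correlation) record-extension sketch.
    import numpy as np

    def move1_extend(short_y, concurrent_x, long_x):
        """Extend the short record y using the long base record x.
        The slope is s_y/s_x (with the sign of the correlation), so the variance
        of the extended record is maintained, unlike ordinary regression."""
        slope = (np.sign(np.corrcoef(concurrent_x, short_y)[0, 1])
                 * np.std(short_y, ddof=1) / np.std(concurrent_x, ddof=1))
        return np.mean(short_y) + slope * (long_x - np.mean(concurrent_x))

    rng = np.random.default_rng(1)
    x = rng.lognormal(mean=3.0, sigma=0.5, size=600)        # long base-station record
    y_concurrent = 0.8 * x[:240] + rng.normal(0, 5, 240)    # short record at the station of interest
    y_extended = move1_extend(y_concurrent, x[:240], x[240:])
    print(y_extended[:5])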
Robust clustering of languages across Wikipedia growth
NASA Astrophysics Data System (ADS)
Ban, Kristina; Perc, Matjaž; Levnajić, Zoran
2017-10-01
Wikipedia is the largest existing knowledge repository that is growing on a genuine crowdsourcing support. While the English Wikipedia is the most extensive and the most researched one with over 5 million articles, comparatively little is known about the behaviour and growth of the remaining 283 smaller Wikipedias, the smallest of which, Afar, has only one article. Here, we use a subset of these data, consisting of 14 962 different articles, each of which exists in 26 different languages, from Arabic to Ukrainian. We study the growth of Wikipedias in these languages over a time span of 15 years. We show that, while an average article follows a random path from one language to another, there exist six well-defined clusters of Wikipedias that share common growth patterns. The make-up of these clusters is remarkably robust against the method used for their determination, as we verify via four different clustering methods. Interestingly, the identified Wikipedia clusters have little correlation with language families and groups. Rather, the growth of Wikipedia across different languages is governed by different factors, ranging from similarities in culture to information literacy.
Sanni, Steinar; Lyng, Emily; Pampanin, Daniela M
2017-06-01
Offshore oil and gas activities are required not to cause adverse environmental effects, and risk-based management has been established to meet environmental standards. In some risk assessment schemes, Risk Indicators (RIs) are parameters used to monitor the development of risk-affecting factors. RIs have not yet been established in the Environmental Risk Assessment procedures for management of oil-based discharges offshore. This paper evaluates the usefulness of biomarkers as RIs, based on their properties, existing laboratory biomarker data and assessment methods. The data show several correlations between oil concentrations and biomarker responses, and assessment principles exist that qualify biomarkers for integration into risk procedures. Different ways that these existing biomarkers and methods can be applied as RIs in a probabilistic risk assessment system, when linked with whole-organism responses, are discussed. This can be a useful approach to integrating biomarkers into probabilistic risk assessment related to oil-based discharges, representing a potential supplement to the information that biomarkers already provide about environmental impact and risk related to these kinds of discharges. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Heid, T.; Kääb, A.
2011-12-01
Automatic matching of images from two different times is a method that is often used to derive glacier surface velocity. Nearly global repeat coverage of the Earth's surface by optical satellite sensors now opens the possibility for global-scale mapping and monitoring of glacier flow, with a number of applications in, for example, glacier physics, glacier-related climate change and impact assessment, and glacier hazard management. The purpose of this study is to compare and evaluate different existing image matching methods for glacier flow determination over large scales. The study compares six different matching methods: normalized cross-correlation (NCC), the phase correlation algorithm used in the COSI-Corr software, and four other Fourier methods with different normalizations. We compare the methods over five regions of the world with different representative glacier characteristics: Karakoram, the European Alps, Alaska, Pine Island (Antarctica) and southwest Greenland. Landsat images are chosen for matching because they extend back to 1972, they cover large areas, and their spatial resolution is as good as 15 m for images after 1999 (ETM+ pan). Cross-correlation on orientation images (CCF-O) outperforms the three similar Fourier methods in areas with both high and low visual contrast. NCC experiences problems in areas with low visual contrast, areas with thin clouds or changing snow conditions between the images. CCF-O has problems on narrow outlet glaciers where small window sizes (about 16 pixels by 16 pixels or smaller) are needed, and it also obtains fewer correct matches than COSI-Corr in areas with low visual contrast. COSI-Corr has problems on narrow outlet glaciers, and it obtains fewer correct matches compared to CCF-O when thin clouds cover the surface or if one of the images contains snow dunes. In total, we consider CCF-O and COSI-Corr to be the two most robust matching methods for global-scale mapping and monitoring of glacier velocities. By combining CCF-O with locally adaptive template sizes and by automatically filtering the matching results through comparison of the displacement matrix with its low-pass filtered version, the matching process can be automated to a large degree. This allows the derivation of glacier velocities with minimal (but not without!) user interaction and hence also opens up the possibility of global-scale mapping and monitoring of glacier flow.
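For orientation, a bare-bones normalized cross-correlation (NCC) template match of the kind used as a baseline in the comparison is sketched below; it is a generic illustration with synthetic arrays, not the study's implementation (which also covers CCF-O and the COSI-Corr phase correlation).

    # Bare-bones normalized cross-correlation (NCC) displacement estimate between two image chips.
    import numpy as np

    def ncc(a, b):
        a = (a - a.mean()) / (a.std() + 1e-12)
        b = (b - b.mean()) / (b.std() + 1e-12)
        return float(np.mean(a * b))

    def match(template, search):
        """Slide the template over the search image and return the offset with the highest NCC."""
        th, tw = template.shape
        best, best_off = -2.0, (0, 0)
        for dy in range(search.shape[0] - th + 1):
            for dx in range(search.shape[1] - tw + 1):
                score = ncc(template, search[dy:dy + th, dx:dx + tw])
                if score > best:
                    best, best_off = score, (dy, dx)
        return best_off, best

    rng = np.random.default_rng(2)
    img1 = rng.random((64, 64))
    img2 = np.roll(img1, shift=(3, 5), axis=(0, 1))     # simulate a 3-by-5 pixel displacement
    # Template taken at corner (16, 16) of img1; peak expected near (19, 21) in img2.
    print(match(img1[16:32, 16:32], img2))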
Comparison of Soil Quality Index Using Three Methods
Mukherjee, Atanu; Lal, Rattan
2014-01-01
Assessment of management-induced changes in soil quality is important to sustaining high crop yield. The large diversity of cultivated soils necessitates the identification and development of an appropriate soil quality index (SQI) based on relative soil properties and crop yield. Whereas numerous attempts have been made to estimate SQI for major soils across the world, no standard method has been established, and thus a strong need exists for developing a user-friendly and credible SQI through comparison of the various available methods. Therefore, the objective of this article is to compare three widely used methods to estimate SQI using data collected from 72 soil samples from three on-farm study sites in Ohio. An additional challenge lies in establishing a correlation between crop yield and SQI calculated either depth-wise or over combined soil layers, as a standard methodology is not yet available and this issue has received little attention to date. Predominant soils of the study included one organic (Mc) and two mineral (CrB, Ko) soils. The three methods used to estimate SQI were: (i) simple additive SQI (SQI-1), (ii) weighted additive SQI (SQI-2), and (iii) statistically modeled SQI (SQI-3) based on principal component analysis (PCA). The SQI varied between treatments and soil types and ranged from 0 to 0.9 (1 being the maximum SQI). In general, SQIs did not differ significantly among depths under any method, suggesting that soil quality did not differ significantly with depth at the studied sites. Additionally, the data indicate that SQI-3 was most strongly correlated with crop yield, with correlation coefficients ranging from 0.74 to 0.78. All three SQIs were significantly correlated (r = 0.92–0.97) with each other and with crop yield (r = 0.65–0.79). Separate analyses by crop variety revealed that the correlation was low, indicating that some key aspects of soil quality related to crop response are important requirements for estimating SQI. PMID:25148036
A fully automated system for quantification of background parenchymal enhancement in breast DCE-MRI
NASA Astrophysics Data System (ADS)
Ufuk Dalmiş, Mehmet; Gubern-Mérida, Albert; Borelli, Cristina; Vreemann, Suzan; Mann, Ritse M.; Karssemeijer, Nico
2016-03-01
Background parenchymal enhancement (BPE) observed in breast dynamic contrast enhanced magnetic resonance imaging (DCE-MRI) has been identified as an important biomarker associated with risk for developing breast cancer. In this study, we present a fully automated framework for quantification of BPE. We initially segmented fibroglandular tissue (FGT) of the breasts using an improved version of an existing method. Subsequently, we computed BPEabs (volume of the enhancing tissue), BPErf (BPEabs divided by FGT volume) and BPErb (BPEabs divided by breast volume), using different relative enhancement threshold values between 1% and 100%. To evaluate and compare the previous and improved FGT segmentation methods, we used 20 breast DCE-MRI scans and we computed Dice similarity coefficient (DSC) values with respect to manual segmentations. For evaluation of the BPE quantification, we used a dataset of 95 breast DCE-MRI scans. Two radiologists, in individual reading sessions, visually analyzed the dataset and categorized each breast into minimal, mild, moderate and marked BPE. To measure the correlation between automated BPE values and the radiologists' assessments, we converted these values into ordinal categories and we used Spearman's rho as a measure of correlation. According to our results, the new segmentation method obtained an average DSC of 0.81 ± 0.09, which was significantly higher (p<0.001) compared to the previous method (0.76 ± 0.10). The highest correlation values between automated BPE categories and radiologists' assessments were obtained with the BPErf measurement (r=0.55, r=0.49, p<0.001 for both), while the correlation between the scores given by the two radiologists was 0.82 (p<0.001). The presented framework can be used to systematically investigate the correlation between BPE and risk in large screening cohorts.
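The quantities BPEabs, BPErf and BPErb defined above reduce to simple volume ratios once the enhancing voxels have been counted. A small sketch is shown below; the masks, voxel volume and the 20% relative-enhancement threshold are assumptions for illustration, not the study's settings.

    # Toy computation of BPEabs, BPErf and BPErb from pre/post-contrast volumes and masks.
    import numpy as np

    def bpe_measures(pre, post, fgt_mask, breast_mask, voxel_vol_ml, threshold=0.2):
        """threshold is the relative-enhancement cut-off (0.2 = 20%, an assumption here)."""
        rel_enh = (post - pre) / (pre + 1e-6)
        enhancing = (rel_enh > threshold) & fgt_mask
        bpe_abs = enhancing.sum() * voxel_vol_ml               # volume of enhancing tissue
        bpe_rf = bpe_abs / (fgt_mask.sum() * voxel_vol_ml)     # relative to FGT volume
        bpe_rb = bpe_abs / (breast_mask.sum() * voxel_vol_ml)  # relative to breast volume
        return bpe_abs, bpe_rf, bpe_rb

    rng = np.random.default_rng(3)
    pre = rng.uniform(100, 200, (32, 32, 32))
    post = pre * rng.uniform(1.0, 1.6, pre.shape)
    breast = np.ones(pre.shape, dtype=bool)
    fgt = rng.random(pre.shape) < 0.3
    print(bpe_measures(pre, post, fgt, breast, voxel_vol_ml=0.001))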
Novel method for fog monitoring using cellular networks infrastructures
NASA Astrophysics Data System (ADS)
David, N.; Alpert, P.; Messer, H.
2012-08-01
A major detrimental effect of fog is visibility limitation, which can result in serious transportation accidents, traffic delays and therefore economic damage. Existing monitoring techniques - including satellites, transmissometers and human observers - suffer from low spatial resolution, high cost or lack of precision when measuring near ground level. Here we show a novel technique for fog monitoring using wireless communication systems. Communication networks widely deploy commercial microwave links across the terrain at ground level. Operating at frequencies of tens of GHz, they are affected by fog and are, effectively, an existing, spatially world-wide distributed sensor network that can provide crucial information about fog concentration and visibility. The fog monitoring potential is demonstrated for a heavy fog event that took place in Israel. The correlations of transmissometer measurements and human eye observations with the visibility estimates from the nearby microwave links were found to be 0.53 and 0.61, respectively. These values indicate the high potential of the proposed method.
Yang, James J; Li, Jia; Williams, L Keoki; Buu, Anne
2016-01-05
In genome-wide association studies (GWAS) for complex diseases, the association between a SNP and each phenotype is usually weak. Combining multiple related phenotypic traits can increase the power of gene search and thus is a practically important area that requires methodological work. This study provides a comprehensive review of existing methods for conducting GWAS on complex diseases with multiple phenotypes, including multivariate analysis of variance (MANOVA), principal component analysis (PCA), generalized estimating equations (GEE), the trait-based association test involving the extended Simes procedure (TATES), and the classical Fisher combination test. We propose a new method that relaxes the unrealistic independence assumption of the classical Fisher combination test and is computationally efficient. To demonstrate applications of the proposed method, we also present the results of a statistical analysis of the Study of Addiction: Genetics and Environment (SAGE) data. Our simulation study shows that the proposed method has higher power than existing methods while controlling the type I error rate. The GEE and the classical Fisher combination test, on the other hand, do not control the type I error rate and thus are not recommended. In general, the power of the competing methods decreases as the correlation between phenotypes increases. All the methods tend to have lower power when the multivariate phenotypes come from long-tailed distributions. The real data analysis also demonstrates that the proposed method allows us to compare the marginal results with the multivariate results and to specify which SNPs are specific to a particular phenotype or contribute to the common construct. The proposed method outperforms existing methods in most settings and also has great applications in GWAS on complex diseases with multiple phenotypes such as the substance abuse disorders.
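For context, the classical Fisher combination statistic that the proposed method generalizes is sketched below; under independent p-values it follows a chi-squared distribution with 2k degrees of freedom. The dependence-adjusted version proposed in the paper is not reproduced here.

    # Classical Fisher combination of k independent p-values.
    import numpy as np
    from scipy.stats import chi2

    def fisher_combination(pvalues):
        pvalues = np.asarray(pvalues, dtype=float)
        stat = -2.0 * np.sum(np.log(pvalues))            # X^2 = -2 * sum(ln p_i)
        return stat, chi2.sf(stat, df=2 * len(pvalues))  # survival function gives the combined p-value

    print(fisher_combination([0.04, 0.20, 0.01]))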
Factors affecting quality of social interaction park in Jakarta
NASA Astrophysics Data System (ADS)
Mangunsong, N. I.
2018-01-01
The existence of social interaction parks in Jakarta is an oasis in the middle of a concrete jungle. Parks are a response to the need for open space as a place for recreation and community interaction. Often the social interaction parks built by the government do not function as expected but take on other functions, such as places for selling or dumping trash, or become unsafe, and so are rarely visited. The purpose of this study was to analyze the factors that affect the quality of social interaction parks in Jakarta by conducting descriptive analysis and correlation analysis of the assessment variables. The results of the analysis can give an idea of social interaction parks based on community needs and inform proposals for the development of social interaction city parks. The objects of study are 25 social interaction parks in the 5 municipalities of Jakarta. The methods used are descriptive analysis and correlation analysis using SPSS 19, with crosstabs and chi-square tests. The variables cover 5 aspects: design and plant composition, i.e. selection of plant types (D); beauty and harmony (Ind); maintenance and fertility (P); cleanliness and environmental health (BS); and specificity (drainage, multi-function garden, means, concern/mutual cooperation, location in dense settlements) (K). The results of the analysis show that beauty has the most significant correlation with the value of the park, followed by specificity, cleanliness and maintenance. Design was not the most significant variable affecting the quality of the park. The results of this study can be used by the Department of Parks and Cemeteries as input in managing existing parks or parks to be developed, and in improving the quality of social interaction parks in Jakarta.
Yu, Chunhong; Yi, Jinglin; Yin, Xiaolong; Deng, Yan; Liao, Yujun; Li, Xiaobing
2015-01-01
Aim: This study aimed to detect the correlation of nitric oxide synthase 3 (NOS3) gene polymorphisms (T-786C and G894T) with retinopathy of prematurity (ROP) susceptibility. The interaction between NOS3 gene polymorphisms and the duration of oxygen therapy was also explored in ROP babies. Methods: NOS3 gene polymorphisms were genotyped by the MassArray method. Hardy-Weinberg equilibrium (HWE) was used to assess the representativeness of the cases and controls. Crossover analysis was utilized to explore gene-environment interactions. The relative risk of ROP was presented as odds ratios (ORs) with corresponding 95% confidence intervals (95% CIs). Results: Among the subject characteristics, oxygen therapy differed significantly between the case and control groups (P<0.05). There was a significant association between the -786C allele and ROP susceptibility (P=0.049, OR=0.669, 95% CI=0.447-0.999). Genotypes of the T-786C polymorphism and genotypes and alleles of the G894T polymorphism were not related to ROP susceptibility. Interactions existed between NOS3 gene polymorphisms and oxygen therapy duration. When the duration of oxygen therapy was less than 17 days, both the -786CC genotype and the 894GT genotype were correlated with ROP susceptibility (P=0.020, OR=0.115, 95% CI=0.014-0.960; P=0.011, OR=0.294, 95% CI=0.100-0.784). Conclusion: The -786C allele might have a protective effect against ROP. Interactions of the -786CC and 894GT genotypes with oxygen therapy duration (less than 17 days) were both protective factors against ROP. PMID:26823875
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eng, L.A.; Metz, C.B.
1986-09-01
The meaningful detection of antisperm antibody in immunologically infertile females has been confounded by the many methods of assay that exist. With many of these methods there is poor correlation of assay results with infertility. In this report, female rabbits were rendered partially or completely infertile by immunization with sperm fractions. A filter radioassay for antisperm antibody was developed that consists of incubating 10^7 sperm with serum from immunized rabbits and 14C-Protein A, a long-lived and versatile indirect radiolabel for many antibodies of the IgG class. The spermatozoa are washed by rapid vacuum filtration on polycarbonate membrane filters instead of by time-consuming centrifugation. The filters with the collected spermatozoa are then counted in a liquid scintillation counter. Sera from female rabbits isoimmunized with sperm antigens show a highly significant correlation (r = -0.904; p < 0.001) between assay results and infertility as measured by the percentage of eggs that underwent cleavage after artificial insemination.
Design of off-statistics axial-flow fans by means of vortex law optimization
NASA Astrophysics Data System (ADS)
Lazari, Andrea; Cattanei, Andrea
2014-12-01
Off-statistics input data sets are common in axial-flow fan design and may easily result in some violation of the requirements of a good aerodynamic blade design. In order to circumvent this problem, in the present paper, a solution to the radial equilibrium equation is found which minimizes the outlet kinetic energy and fulfills the aerodynamic constraints, thus ensuring that the resulting blade has acceptable aerodynamic performance. The presented method is based on the optimization of a three-parameter vortex law and of the meridional channel size. The aerodynamic quantities to be employed as constraints are identified, and suitable ranges of variation are proposed for them. The method is validated by means of a design with critical input data values and CFD analysis. Then, by means of systematic computations with different input data sets, some correlations and charts are obtained which are analogous to classic correlations based on statistical investigations of existing machines. Such new correlations help size a fan of given characteristics as well as study the feasibility of a given design.
Shao, Ying-Hui; Gu, Gao-Feng; Jiang, Zhi-Qiang; Zhou, Wei-Xing; Sornette, Didier
2012-01-01
Notwithstanding the significant efforts to develop estimators of long-range correlations (LRC) and to compare their performance, no clear consensus exists on what is the best method and under which conditions. In addition, synthetic tests suggest that the performance of LRC estimators varies when using different generators of LRC time series. Here, we compare the performances of four estimators [Fluctuation Analysis (FA), Detrended Fluctuation Analysis (DFA), Backward Detrending Moving Average (BDMA), and Centred Detrending Moving Average (CDMA)]. We use three different generators [Fractional Gaussian Noises, and two ways of generating Fractional Brownian Motions]. We find that CDMA has the best performance and DFA is only slightly worse in some situations, while FA performs the worst. In addition, CDMA and DFA are less sensitive to the scaling range than FA. Hence, CDMA and DFA remain “The Methods of Choice” in determining the Hurst index of time series. PMID:23150785
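A compact version of the detrended fluctuation analysis (DFA) step compared above is sketched here for a single time series; the window sizes and the synthetic white-noise input (expected exponent near 0.5) are illustrative assumptions.

    # Minimal DFA: fluctuation function F(s) and a Hurst-type exponent from its log-log slope.
    import numpy as np

    def dfa(x, scales):
        y = np.cumsum(x - np.mean(x))                          # integrated profile
        F = []
        for s in scales:
            n_seg = len(y) // s
            sq_res = []
            for i in range(n_seg):
                seg = y[i * s:(i + 1) * s]
                t = np.arange(s)
                trend = np.polyval(np.polyfit(t, seg, 1), t)   # local linear detrending
                sq_res.append(np.mean((seg - trend) ** 2))
            F.append(np.sqrt(np.mean(sq_res)))
        slope = np.polyfit(np.log(scales), np.log(F), 1)[0]    # scaling exponent
        return np.array(F), slope

    rng = np.random.default_rng(4)
    print(dfa(rng.normal(size=4096), scales=[16, 32, 64, 128, 256])[1])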
Local Linear Regression for Data with AR Errors.
Li, Runze; Li, Yan
2009-07-01
In many statistical applications, data are collected over time, and they are likely correlated. In this paper, we investigate how to incorporate the correlation information into local linear regression. Under the assumption that the error process is an autoregressive process, a new estimation procedure is proposed for the nonparametric regression by using the local linear regression method and profile least squares techniques. We further propose the SCAD-penalized profile least squares method to determine the order of the autoregressive process. Extensive Monte Carlo simulation studies are conducted to examine the finite sample performance of the proposed procedure, and to compare the performance of the proposed procedures with the existing one. In our empirical studies, the newly proposed procedures dramatically improve the accuracy of naive local linear regression with a working-independence error structure. We illustrate the proposed methodology with an analysis of a real data set.
Chen, Ling; Feng, Yanqin; Sun, Jianguo
2017-10-01
This paper discusses regression analysis of clustered failure time data, which occur when the failure times of interest are collected from clusters. In particular, we consider the situation where the correlated failure times of interest may be related to cluster sizes. For inference, we present two estimation procedures, the weighted estimating equation-based method and the within-cluster resampling-based method, when the correlated failure times of interest arise from a class of additive transformation models. The former makes use of the inverse of cluster sizes as weights in the estimating equations, while the latter can be easily implemented by using the existing software packages for right-censored failure time data. An extensive simulation study is conducted and indicates that the proposed approaches work well in both the situations with and without informative cluster size. They are applied to a dental study that motivated this study.
McCartan, L.; Owens, J.P.; Blackwelder, B. W.; Szabo, B. J.; Belknap, D.F.; Kriausakul, N.; Mitterer, R.M.; Wehmiller, J.F.
1982-01-01
The results of an integrated study comprising litho- and biostratigraphic investigations, uranium-series coral dating, amino acid racemization in molluscs, and paleomagnetic measurements are compared to ascertain relative and absolute ages of Pleistocene deposits of the Atlantic Coastal Plain in North and South Carolina. Four depositional events are inferred for South Carolina and two for North Carolina by all methods. The data suggest that there are four Pleistocene units containing corals that have been dated at about 100,000 yr, 200,000 yr, 450,000 yr, and over 1,000,000 yr. Some conflicts exist between the different methods regarding the correlation of the younger of these depositional events between Charleston and Myrtle Beach. Lack of good uranium-series dates for the younger material at Myrtle Beach makes the correlation with the deposits at Charleston more difficult. © 1982.
Palten, Patricia E; Dudenhausen, Joachim W
2010-11-01
We evaluated what German-speaking women in Berlin know about umbilical cord blood banking (UCBB) and whether a correlation exists between women's knowledge about UCBB and their level of education. An anonymous questionnaire was given to German-speaking women in Berlin, Germany. A total of 300 questionnaires could be evaluated. Although three quarters of our population had heard of UCBB, most had no further knowledge about the method. Only about one-third of the interviewed women knew whether certain diseases had been treated with umbilical cord blood (UCB) by the time the survey was conducted, whereas 50-65% did not know how to answer these questions. Women in Berlin were poorly informed about the usefulness, the costs and the methods of cryopreservation. To some extent there is a correlation between women's level of education and their knowledge regarding UCB.
Class attendance, peer similarity, and academic performance in a large field study
Kassarnig, Valentin; Bjerre-Nielsen, Andreas; Mones, Enys; Lehmann, Sune; Lassen, David Dreyer
2017-01-01
Identifying the factors that determine academic performance is an essential part of educational research. Existing research indicates that class attendance is a useful predictor of subsequent course achievements. The majority of the literature is, however, based on surveys and self-reports, methods which have well-known systematic biases that lead to limitations on conclusions and generalizability as well as being costly to implement. Here we propose a novel method for measuring class attendance that overcomes these limitations by using location and bluetooth data collected from smartphone sensors. Based on measured attendance data of nearly 1,000 undergraduate students, we demonstrate that early and consistent class attendance strongly correlates with academic performance. In addition, our novel dataset allows us to determine that attendance among social peers was substantially correlated (>0.5), suggesting either an important peer effect or homophily with respect to attendance. PMID:29117190
Feature Extraction and Selection Strategies for Automated Target Recognition
NASA Technical Reports Server (NTRS)
Greene, W. Nicholas; Zhang, Yuhan; Lu, Thomas T.; Chao, Tien-Hsin
2010-01-01
Several feature extraction and selection methods for an existing automatic target recognition (ATR) system using JPL's Grayscale Optical Correlator (GOC) and Optimal Trade-Off Maximum Average Correlation Height (OT-MACH) filter were tested using MATLAB. The ATR system is composed of three stages: a cursory region-of-interest (ROI) search using the GOC and OT-MACH filter, a feature extraction and selection stage, and a final classification stage. Feature extraction and selection concern transforming potential target data into more useful forms as well as selecting important subsets of that data which may aid in detection and classification. The strategies tested were built around two popular extraction methods: Principal Component Analysis (PCA) and Independent Component Analysis (ICA). Performance was measured based on the classification accuracy and free-response receiver operating characteristic (FROC) output of a support vector machine (SVM) and a neural net (NN) classifier.
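A skeletal version of the extraction-plus-classification stages described above (PCA features feeding an SVM) can be put together with scikit-learn as below; the synthetic ROI chips, labels and parameter choices are assumptions, and the GOC/OT-MACH front end is not modeled.

    # PCA feature extraction followed by SVM classification on synthetic ROI chips.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.svm import SVC
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(5)
    n, side = 400, 16
    X = rng.normal(size=(n, side * side))          # flattened 16x16 ROI chips
    y = rng.integers(0, 2, size=n)                 # target / clutter labels
    X[y == 1, :40] += 0.8                          # give "targets" a weak, learnable signature

    clf = make_pipeline(PCA(n_components=20), SVC(kernel="rbf", C=1.0))
    print(cross_val_score(clf, X, y, cv=5).mean()) # cross-validated classification accuracy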
Damage estimation of sewer pipe using subtitles of CCTV inspection video
NASA Astrophysics Data System (ADS)
Park, Kitae; Kim, Byeongcheol; Kim, Taeheon; Seo, Dongwoo
2017-04-01
The recent frequent occurrence of urban sinkholes has provided momentum for the periodic inspection of sewer pipelines. Sewer inspection using a CCTV device requires considerable time and effort. Many previous studies aimed at reducing these laborious tasks have mainly focused on the development of image processing software and exploration hardware, and there has been no attempt to extract meaningful information from the existing CCTV images stored by sewer maintenance managers. This study adopts a cross-correlation-based image processing method and extracts the inspection device's location data from CCTV images. Analysis of the location-time relation shows a strong correlation between the device's standing time and sewer damage. Using this method to investigate sewer inspection CCTV images will save investigators' effort and improve sewer maintenance efficiency and reliability.
Cavitation in liquid cryogens. 3: Ogives
NASA Technical Reports Server (NTRS)
Hord, J.
1973-01-01
Experimental results for three, scaled, quarter-caliber ogives are given. Both desinent and developed cavity data, using liquid hydrogen and liquid nitrogen, are reported. The desinent data do not exhibit a consistent ogive size effect, but the developed cavity data were consistently influenced by ogive size; B-factor increases with increasing ogive diameter. The developed cavity data indicated that stable thermodynamic equilibrium exists throughout the vaporous cavities. These data were correlated by using the extended theory derived in NASA-CR-2156 (volume II of this report series). The new correlating parameter MTWO, improves data correlation for the ogives, hydrofoil, and venturi and appears attractive for future predictive applications. The cavitation coefficient and equipment size effects are shown to vary with specific equipment-fluid combinations. A method of estimating cavitation coefficient from knowledge of the noncavitating pressure coefficient is suggested.
Kalman Filtering for Genetic Regulatory Networks with Missing Values
Liu, Qiuhua; Lai, Tianyue; Wang, Wu
2017-01-01
The filtering problem with missing values for genetic regulatory networks (GRNs) is addressed, in which noise exists in both the state dynamics and the measurement equations; furthermore, the correlation between process noise and measurement noise is also taken into consideration. In order to deal with the filtering problem, a class of discrete-time GRNs with missing values, noise correlation, and time delays is established. Then a new observation model is proposed to decrease the adverse effect caused by the missing values and to decouple the correlation between process noise and measurement noise in theory. Finally, a Kalman filter is used to estimate the states of the GRNs. Meanwhile, a typical example is provided to verify the effectiveness of the proposed method, and it turns out that the concentrations of mRNA and protein can be estimated accurately. PMID:28814967
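A generic linear Kalman filter predict/update step of the kind applied in the final estimation stage is sketched below for reference; the GRN-specific observation model, the noise cross-correlation decoupling and the missing-value treatment from the paper are not reproduced, and the matrices are illustrative assumptions.

    # Generic linear Kalman filter predict/update step (illustrative; not the paper's GRN model).
    import numpy as np

    def kalman_step(x, P, z, A, H, Q, R):
        # Predict
        x_pred = A @ x
        P_pred = A @ P @ A.T + Q
        # Update with measurement z
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)          # Kalman gain
        x_new = x_pred + K @ (z - H @ x_pred)
        P_new = (np.eye(len(x)) - K @ H) @ P_pred
        return x_new, P_new

    A = np.array([[0.9, 0.1], [0.0, 0.8]])           # assumed state transition (e.g. mRNA/protein)
    H = np.eye(2)
    Q, R = 0.01 * np.eye(2), 0.05 * np.eye(2)
    x, P = np.zeros(2), np.eye(2)
    x, P = kalman_step(x, P, z=np.array([0.4, 0.2]), A=A, H=H, Q=Q, R=R)
    print(x)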
NASA Astrophysics Data System (ADS)
Qin, Meng; Ren, Zhong-Zhou; Zhang, Xin
2016-01-01
In this study, the global quantum correlation, monogamy relation and quantum phase transition of the Heisenberg XXZ model are investigated by the method of quantum renormalization group. We obtain, analytically, the expressions of the global negativity, the global measurement-induced disturbance and the monogamy relation for the system. The result shows that for a three-site block state, the partial transpose of an asymmetric block can get stronger entanglement than that of the symmetric one. The residual entanglement and the difference of the monogamy relation of measurement-induced disturbance show a scaling behavior with the size of the system becoming large. Moreover, the monogamy nature of entanglement measured by negativity exists in the model, while the nonclassical correlation quantified by measurement-induced disturbance violates the monogamy relation and demonstrates polygamy.
Ultrasonic ranking of toughness of tungsten carbide
NASA Technical Reports Server (NTRS)
Vary, A.; Hull, D. R.
1983-01-01
The feasibility of using ultrasonic attenuation measurements to rank tungsten carbide alloys according to their fracture toughness was demonstrated. Six samples of cobalt-cemented tungsten carbide (WC-Co) were examined. These varied in cobalt content from approximately 2 to 16 weight percent. The toughness generally increased with increasing cobalt content. Toughness was first determined by the Palmqvist and short rod fracture toughness tests. Subsequently, ultrasonic attenuation measurements were correlated with both these mechanical test methods. It is shown that there is a strong increase in ultrasonic attenuation corresponding to increased toughness of the WC-Co alloys. A correlation between attenuation and toughness exists for a wide range of ultrasonic frequencies. However, the best correlation for the WC-Co alloys occurs when the attenuation coefficient measured in the vicinity of 100 megahertz is compared with toughness as determined by the Palmqvist technique.
Spatio-temporal Reconstruction of Neural Sources Using Indirect Dominant Mode Rejection.
Jafadideh, Alireza Talesh; Asl, Babak Mohammadzadeh
2018-04-27
Adaptive minimum variance based beamformers (MVB) have been successfully applied to magnetoencephalogram (MEG) and electroencephalogram (EEG) data to localize brain activities. However, the performance of these beamformers degrades in situations where correlated or interference sources exist. To overcome this problem, we propose the application of the indirect dominant mode rejection (iDMR) beamformer to brain source localization. By modifying the measurement covariance matrix, this method makes MVB applicable to source localization in the presence of correlated and interference sources. Numerical results on both EEG and MEG data demonstrate that the presented approach accurately reconstructs the time courses of active sources and localizes those sources with high spatial resolution. In addition, results on real AEF data show the good performance of iDMR in empirical situations. Hence, iDMR can be reliably used for brain source localization, especially when there are correlated and interference sources.
Correlating Detergent Fiber Analysis and Dietary Fiber Analysis Data for Corn Stover
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wolfrum, E. J.; Lorenz, A. J.; deLeon, N.
There exist large amounts of detergent fiber analysis data [neutral detergent fiber (NDF), acid detergent fiber (ADF), acid detergent lignin (ADL)] for many different potential cellulosic ethanol feedstocks, since these techniques are widely used for the analysis of forages. Researchers working in the area of cellulosic ethanol are interested in the structural carbohydrates in a feedstock (principally glucan and xylan), which are typically determined by acid hydrolysis of the structural fraction after multiple extractions of the biomass. These so-called dietary fiber analysis methods are significantly more involved than detergent fiber analysis methods. The purpose of this study was to determine whether it is feasible to correlate detergent fiber analysis values to glucan and xylan content determined by dietary fiber analysis methods for corn stover. In the detergent fiber analysis literature cellulose is often estimated as the difference between ADF and ADL, while hemicellulose is often estimated as the difference between NDF and ADF. Examination of a corn stover dataset containing both detergent fiber analysis data and dietary fiber analysis data predicted using near infrared spectroscopy shows that correlations between structural glucan measured using dietary fiber techniques and cellulose estimated using detergent techniques, and between structural xylan measured using dietary fiber techniques and hemicellulose estimated using detergent techniques, are high, but are driven largely by the underlying correlation between total extractives measured by fiber analysis and NDF/ADF. That is, detergent analysis data is correlated to dietary fiber analysis data for structural carbohydrates, but only indirectly; the main correlation is between detergent analysis data and solvent extraction data produced during the dietary fiber analysis procedure.
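The detergent-based estimates referred to above are simple differences of the measured fractions. A small helper under that convention is shown below; it illustrates the convention only, not the study's NIR-based modeling, and the example values are made up.

    # Conventional detergent-fiber estimates: cellulose ~ ADF - ADL, hemicellulose ~ NDF - ADF.
    def detergent_estimates(ndf, adf, adl):
        """All inputs in percent of dry matter; returns (cellulose, hemicellulose) estimates."""
        cellulose = adf - adl
        hemicellulose = ndf - adf
        return cellulose, hemicellulose

    # Example with made-up corn stover values (% dry matter).
    print(detergent_estimates(ndf=65.0, adf=38.0, adl=5.0))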
DOE Office of Scientific and Technical Information (OSTI.GOV)
Millis, Andrew
Understanding the behavior of interacting electrons in molecules and solids so that one can predict new superconductors, catalysts, light harvesters, energy and battery materials and optimize existing ones is the "quantum many-body problem". This is one of the scientific grand challenges of the 21st century. A complete solution to the problem has been proven to be exponentially hard, meaning that straightforward numerical approaches fail. New insights and new methods are needed to provide accurate yet feasible approximate solutions. This CMSCN project brought together chemists and physicists to combine insights from the two disciplines to develop innovative new approaches. Outcomes included the Density Matrix Embedding method, a new, computationally inexpensive and extremely accurate approach that may enable first principles treatment of superconducting and magnetic properties of strongly correlated materials, new techniques for existing methods, including an Adaptively Truncated Hilbert Space approach that will vastly expand the capabilities of the dynamical mean field method, a self-energy embedding theory and a new memory-function based approach to the calculations of the behavior of driven systems. The methods developed under this project are now being applied to improve our understanding of superconductivity, to calculate novel topological properties of materials and to characterize and improve the properties of nanoscale devices.
A comparison of gantry-mounted x-ray-based real-time target tracking methods.
Montanaro, Tim; Nguyen, Doan Trang; Keall, Paul J; Booth, Jeremy; Caillet, Vincent; Eade, Thomas; Haddad, Carol; Shieh, Chun-Chien
2018-03-01
Most modern radiotherapy machines are built with a 2D kV imaging system. Combining this imaging system with a 2D-3D inference method would provide a ready-made option for real-time 3D tumor tracking. This work investigates and compares the accuracy of four existing 2D-3D inference methods using motion traces both inferred from external surrogates and measured internally from implanted beacons. Tumor motion data from 160 fractions (46 thoracic/abdominal patients) of Synchrony traces (inferred traces), and 28 fractions (7 lung patients) of Calypso traces (internal traces) from the LIGHT SABR trial (NCT02514512), were used in this study. The motion traces were used as the ground truth. The ground truth trajectories were used in silico to generate 2D positions projected on the kV detector. These 2D traces were then passed to the 2D-3D inference methods: interdimensional correlation, Gaussian probability density function (PDF), arbitrary-shape PDF, and the Kalman filter. The inferred 3D positions were compared with the ground truth to determine tracking errors. The relationships between tracking error and motion magnitude, interdimensional correlation, and breathing periodicity index (BPI) were also investigated. Larger tracking errors were observed for the Calypso traces, with RMS and 95th percentile 3D errors of 0.84-1.25 mm and 1.72-2.64 mm, compared to 0.45-0.68 mm and 0.74-1.13 mm for the Synchrony traces. The Gaussian PDF method was found to be the most accurate, followed by the Kalman filter, the interdimensional correlation method, and the arbitrary-shape PDF method. Tracking error was found to correlate strongly and positively with motion magnitude for both the Synchrony and Calypso traces and for all four methods. Interdimensional correlation and BPI were found to correlate negatively with tracking error only for the Synchrony traces. The Synchrony traces exhibited higher interdimensional correlation than the Calypso traces, especially in the anterior-posterior direction. Inferred traces often exhibit higher interdimensional correlation, which is not a true representation of thoracic/abdominal motion and may lead to underestimation of kV-based tracking errors. The use of internal traces acquired from systems such as Calypso is advised for future kV-based tracking studies. The Gaussian PDF method is the most accurate 2D-3D inference method for tracking thoracic/abdominal targets. Motion magnitude has a significant impact on 2D-3D inference error and should be considered when estimating kV-based tracking error. © 2018 American Association of Physicists in Medicine.
Yobbi, D.K.
2000-01-01
A nonlinear least-squares regression technique for estimating ground-water flow model parameters was applied to an existing model of the regional aquifer system underlying west-central Florida. The regression technique minimizes the differences between measured and simulated water levels. Regression statistics, including parameter sensitivities and correlations, were calculated for the reported parameter values in the existing model. Optimal parameter values for selected hydrologic variables of interest were estimated by nonlinear regression. Optimal estimates of parameter values ranged from about 0.01 times to about 140 times the reported values. Independently estimating all parameters by nonlinear regression was impossible, given the existing zonation structure and number of observations, because of parameter insensitivity and correlation. Although the model yields parameter values similar to those estimated by other methods and reproduces the measured water levels reasonably accurately, a simpler parameter structure should be considered. Some possible ways of improving model calibration are to: (1) modify the defined parameter-zonation structure by omitting and/or combining parameters to be estimated; (2) carefully eliminate observation data based on evidence that they are likely to be biased; (3) collect additional water-level data; (4) assign values to insensitive parameters; and (5) estimate the most sensitive parameters first, then, using the optimized values for these parameters, estimate the entire data set.
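The regression step described above amounts to minimizing the misfit between measured and simulated water levels over the model parameters. A generic sketch with scipy.optimize.least_squares and a stand-in forward model is given below; the toy relation, parameter names and data are assumptions, and the actual ground-water flow model is far more involved.

    # Generic nonlinear least-squares calibration against observed heads (toy forward model).
    import numpy as np
    from scipy.optimize import least_squares

    def simulate_heads(params, x):
        """Stand-in for a ground-water flow model: heads as a function of two parameters.
        Note k and r enter only through r/k, so they are perfectly correlated and not
        separately identifiable - the kind of problem the abstract describes."""
        k, r = params
        return 100.0 - r * x / k

    x_obs = np.linspace(0.0, 5.0, 20)
    h_obs = simulate_heads([2.0, 3.0], x_obs) + np.random.default_rng(6).normal(0, 0.1, 20)

    def residuals(params):
        return simulate_heads(params, x_obs) - h_obs          # simulated minus measured levels

    fit = least_squares(residuals, x0=[1.0, 1.0], bounds=([1e-6, 1e-6], [np.inf, np.inf]))
    print(fit.x)                                              # optimized parameter estimates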
Analysis of high aspect ratio jet flap wings of arbitrary geometry.
NASA Technical Reports Server (NTRS)
Lissaman, P. B. S.
1973-01-01
Paper presents a design technique for rapidly computing lift, induced drag, and spanwise loading of unswept jet flap wings of arbitrary thickness, chord, twist, blowing, and jet angle, including discontinuities. Linear theory is used, extending Spence's method for elliptically loaded jet flap wings. Curves for uniformly blown rectangular wings are presented for direct performance estimation. Arbitrary planforms require a simple computer program. Method of reducing wing to equivalent stretched, twisted, unblown planform for hand calculation is also given. Results correlate with limited existing data, and show lifting line theory is reasonable down to aspect ratios of 5.
Correlation between ocular parameters and amplitude of accommodation
Abraham, Lekha Mary; Kuriakose, Thomas; Sivanandam, Viswanathan; Venkatesan, Nithya; Thomas, Ravi; Muliyil, Jayaprakash
2010-01-01
Aim: To study the relationship between ocular parameters and amplitude of accommodation (AA) in the peri-presbyopic age group (35–50 years). Materials and Methods: Three hundred and sixteen right eyes of consecutive patients in the age group 35–50 years, who attended our outpatient clinic, were studied. Emmetropes, hypermetropes and myopes with best-corrected visual acuity of 20/20, J1 in both eyes were included. The AA was calculated by measuring the near point of accommodation. The axial length (AL), central anterior chamber depth (CACD) and lens thickness (LT) were also measured. Results: There was moderate correlation (Pearson’s correlation coefficient r = 0.56) between AL and AA as well as between CACD and AA (r = 0.53) in myopes in the age group 35–39 years. In the other age groups and the groups taken as a whole, there was no correlation. In hypermetropes and emmetropes, there was no correlation between AA and the above ocular parameters. No significant correlation existed between LT and AA across different age groups and refractive errors. Conclusion: There was no significant correlation between AA and ocular parameters like anterior chamber depth, AL and LT. PMID:20952831
Koga, D; Chian, A C-L; Hada, T; Rempel, E L
2008-02-13
Magnetohydrodynamic (MHD) turbulence is commonly observed in the solar wind. Nonlinear interactions among MHD waves are likely to produce finite correlation of the wave phases. For discussions of various transport processes of energetic particles, it is fundamentally important to determine whether the wave phases are randomly distributed (as assumed in the quasi-linear theory) or have a finite coherence. Using a method based on the surrogate data technique, we analysed the GEOTAIL magnetic field data to evaluate the phase coherence in MHD turbulence in the Earth's foreshock region. The results demonstrate the existence of finite phase correlation, indicating that nonlinear wave-wave interactions are in progress.
Correlation parameters for the study of leeside heating on a lifting body at hypersonic speeds
NASA Technical Reports Server (NTRS)
Vidal, R. J.
1974-01-01
Leeside heating was studied with the aim of gaining some insight into: (1) the magnitude of the leeside heating rates and (2) the methods to be used to extrapolate wind tunnel leeside heating rates to the full scale flight condition. This study was based on existing experimental data obtained in a hypersonic shock tunnel on lifting body configurations that are typical of shuttle orbiter vehicles. Heat transfer was first measured on the windward side to determine the boundary layer type. Then the leeside heating was investigated with the classified boundary layer. Correlation data are given on the windward turbulent boundary layer, the windward laminar boundary layer, and the leeside surfaces.
Non-linear characteristics and long-range correlations in Asian stock markets
NASA Astrophysics Data System (ADS)
Jiang, J.; Ma, K.; Cai, X.
2007-05-01
We test several non-linear characteristics of Asian stock markets, which indicate the failure of the efficient market hypothesis and reveal the fractal nature of these financial markets. In addition, by using the method of detrended fluctuation analysis (DFA) to investigate the long-range correlation of the volatility in the stock markets, we find that crossover phenomena exist in the DFA results. Further, in the region of small volatility, the scaling behavior is more complicated; in the region of large volatility, the scaling exponent is close to 0.5, which suggests the market is more efficient. All these results may indicate the possibility of characteristic multifractal scaling behaviors of the financial markets.
LES, DNS, and RANS for the Analysis of High-Speed Turbulent Reacting Flows
NASA Technical Reports Server (NTRS)
Colucci, P. J.; Jaberi, F. A.; Givi, P.
1996-01-01
A filtered density function (FDF) method suitable for chemically reactive flows is developed in the context of large eddy simulation. The advantage of the FDF methodology is its inherent ability to resolve subgrid-scale (SGS) scalar correlations that otherwise have to be modeled. Because of the lack of robust models to accurately predict these correlations in turbulent reactive flows, simulations involving turbulent combustion are often met with a degree of skepticism. The FDF methodology avoids the closure problem associated with these terms and treats the reaction in an exact manner. The scalar FDF approach is particularly attractive since it can be coupled with existing hydrodynamic computational fluid dynamics (CFD) codes.
Covariate-adjusted Spearman's rank correlation with probability-scale residuals.
Liu, Qi; Li, Chun; Wanga, Valentine; Shepherd, Bryan E
2018-06-01
It is desirable to adjust Spearman's rank correlation for covariates, yet existing approaches have limitations. For example, the traditionally defined partial Spearman's correlation does not have a sensible population parameter, and the conditional Spearman's correlation defined with copulas cannot be easily generalized to discrete variables. We define population parameters for both partial and conditional Spearman's correlation through concordance-discordance probabilities. The definitions are natural extensions of Spearman's rank correlation in the presence of covariates and are general for any orderable random variables. We show that they can be neatly expressed using probability-scale residuals (PSRs). This connection allows us to derive simple estimators. Our partial estimator for Spearman's correlation between X and Y adjusted for Z is the correlation of PSRs from models of X on Z and of Y on Z, which is analogous to the partial Pearson's correlation derived as the correlation of observed-minus-expected residuals. Our conditional estimator is the conditional correlation of PSRs. We describe estimation and inference, and highlight the use of semiparametric cumulative probability models, which allow preservation of the rank-based nature of Spearman's correlation. We conduct simulations to evaluate the performance of our estimators and compare them with other popular measures of association, demonstrating their robustness and efficiency. We illustrate our method in two applications, a biomarker study and a large survey. © 2017, The International Biometric Society.
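A stripped-down illustration of the partial estimator described above (correlating probability-scale residuals, PSRs, from models of X on Z and of Y on Z) is sketched below. Ordinary linear models with normal errors are used as the working fits and the PSR is taken as 2*F(x) - 1 with F the fitted conditional CDF; all of these modeling choices and the synthetic data are assumptions for illustration, not the paper's semiparametric cumulative probability models.

    # Partial Spearman's correlation via probability-scale residuals (illustrative sketch).
    import numpy as np
    from scipy import stats

    def psr_normal(y, z):
        """PSR = 2*F(y|z) - 1 under a working linear model with normal errors (an assumption here)."""
        Z1 = np.column_stack([np.ones(len(y)), z])
        beta, *_ = np.linalg.lstsq(Z1, y, rcond=None)
        resid = y - Z1 @ beta
        return 2.0 * stats.norm.cdf(resid / resid.std(ddof=Z1.shape[1])) - 1.0

    rng = np.random.default_rng(7)
    z = rng.normal(size=500)
    x = 0.8 * z + rng.normal(size=500)
    y = 0.8 * z + rng.normal(size=500)                 # x and y are related only through z
    r_raw = stats.spearmanr(x, y)[0]                   # unadjusted: clearly positive
    r_partial = np.corrcoef(psr_normal(x, z), psr_normal(y, z))[0, 1]  # near zero after adjustment
    print(round(r_raw, 2), round(r_partial, 2))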
FGWAS: Functional genome wide association analysis.
Huang, Chao; Thompson, Paul; Wang, Yalin; Yu, Yang; Zhang, Jingwen; Kong, Dehan; Colen, Rivka R; Knickmeyer, Rebecca C; Zhu, Hongtu
2017-10-01
Functional phenotypes (e.g., subcortical surface representation), which commonly arise in imaging genetic studies, have been used to detect putative genes for complexly inherited neuropsychiatric and neurodegenerative disorders. However, existing statistical methods largely ignore the functional features (e.g., functional smoothness and correlation). The aim of this paper is to develop a functional genome-wide association analysis (FGWAS) framework to efficiently carry out whole-genome analyses of functional phenotypes. FGWAS consists of three components: a multivariate varying coefficient model, a global sure independence screening procedure, and a test procedure. Compared with the standard multivariate regression model, the multivariate varying coefficient model explicitly models the functional features of functional phenotypes through the integration of smooth coefficient functions and functional principal component analysis. Statistically, compared with existing methods for genome-wide association studies (GWAS), FGWAS can substantially boost the detection power for discovering important genetic variants influencing brain structure and function. Simulation studies show that FGWAS outperforms existing GWAS methods for searching sparse signals in an extremely large search space, while controlling for the family-wise error rate. We have successfully applied FGWAS to large-scale analysis of data from the Alzheimer's Disease Neuroimaging Initiative for 708 subjects, 30,000 vertices on the left and right hippocampal surfaces, and 501,584 SNPs. Copyright © 2017 Elsevier Inc. All rights reserved.
2011-01-01
Background The numerous diverse metabolic pathways by which plant compounds can be produced make it difficult to predict how colour pigmentation is lost for different tissues and plants. This study employs mathematical and in silico methods to identify correlated gene targets for the loss of colour pigmentation in plants from a whole cell perspective based on the full metabolic network of Arabidopsis. This involves extracting a self-contained flavonoid subnetwork from the AraCyc database and calculating feasible metabolic routes or elementary modes (EMs) for it. Those EMs leading to anthocyanin compounds are taken to constitute the anthocyanin biosynthetic pathway (ABP) and their interplay with the rest of the EMs is used to study the minimal cut sets (MCSs), which are different combinations of reactions to block for eliminating colour pigmentation. By relating the reactions to their corresponding genes, the MCSs are used to explore the phenotypic roles of the ABP genes, their relevance to the ABP and the impact their eliminations would have on other processes in the cell. Results Simulation and prediction results of the effect of different MCSs for eliminating colour pigmentation correspond with existing experimental observations. Two examples are: i) two MCSs which require the simultaneous suppression of genes DFR and ANS to eliminate colour pigmentation, correspond to observational results of the same genes being co-regulated for eliminating floral pigmentation in Aquilegia and; ii) the impact of another MCS requiring CHS suppression, corresponds to findings where the suppression of the early gene CHS eliminated nearly all flavonoids but did not affect the production of volatile benzenoids responsible for floral scent. Conclusions From the various MCSs identified for eliminating colour pigmentation, several correlate to existing experimental observations, indicating that different MCSs are suitable for different plants, different cells, and different conditions and could also be related to regulatory genes. Being able to correlate the predictions with experimental results gives credence to the use of these mathematical and in silico analyses methods in the design of experiments. The methods could be used to prioritize target enzymes for different objectives to achieve desired outcomes, especially for less understood pathways. PMID:21849042
Effective Sensor Selection and Data Anomaly Detection for Condition Monitoring of Aircraft Engines
Liu, Liansheng; Liu, Datong; Zhang, Yujie; Peng, Yu
2016-01-01
In a complex system, condition monitoring (CM) collects the system's working status, which is mainly sensed by sensors pre-deployed in or on the system. Most existing works study how to utilize this condition information to predict upcoming anomalies, faults, or failures. Some research also focuses on faults or anomalies of the sensing elements (i.e., the sensors themselves) to enhance system reliability. However, existing approaches ignore the connection between the sensor selection strategy and data anomaly detection, which can also improve system reliability. To address this issue, we study a new scheme that combines sensor selection and data anomaly detection by utilizing information theory and Gaussian Process Regression (GPR). The sensors that are most appropriate for system CM are first selected. Then, mutual information is used to weight the correlation among different sensors, and anomaly detection is carried out using the correlation of the sensor data. The sensor data sets used for the evaluation are provided by the National Aeronautics and Space Administration (NASA) Ames Research Center and were used as the Prognostics and Health Management (PHM) challenge data in 2008. A comparison of two different sensor selection strategies demonstrates the effect of the selection method on data anomaly detection. PMID:27136561
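A minimal sketch of the general workflow described above (mutual-information weighting of a sensor pair plus a GPR residual check) is given below. The arrays are synthetic stand-ins, not the NASA PHM-2008 engine data, and the pipeline is a simplification of the authors' scheme.

```python
# Minimal sketch: mutual-information weighting between two sensors and a GPR-based
# residual check on one sensor predicted from a correlated one. Synthetic arrays
# stand in for the NASA PHM-2008 engine data; not the authors' exact pipeline.
import numpy as np
from sklearn.feature_selection import mutual_info_regression
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
t = np.linspace(0, 10, 300)
s1 = np.sin(t) + 0.05 * rng.normal(size=t.size)        # reference sensor
s2 = 0.9 * np.sin(t) + 0.05 * rng.normal(size=t.size)  # correlated sensor
s2[200:205] += 0.8                                      # injected anomaly

# Weight the sensor pair by mutual information (higher = more shared information).
mi = mutual_info_regression(s1.reshape(-1, 1), s2)[0]

# Fit GPR of s2 on s1 and flag points whose residual exceeds 3 predicted std devs.
gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(s1.reshape(-1, 1), s2)
pred, std = gpr.predict(s1.reshape(-1, 1), return_std=True)
anomalies = np.where(np.abs(s2 - pred) > 3 * std)[0]
print(f"MI weight: {mi:.3f}, flagged samples: {anomalies}")
```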
Correlates of insight into different symptom dimensions in obsessive-compulsive disorder.
Fontenelle, Jùlia M; Harrison, Ben J; Santana, Lívia; Conceição do Rosário, Maria; Versiani, Marcio; Fontenelle, Leonardo F
2013-02-01
In this study, we evaluated insight into different obsessive-compulsive disorder (OCD) symptom dimensions and their impact on clinical and sociodemographic features of patients with OCD. Sixty OCD patients were assessed with the Brown Assessment of Beliefs Scale (BABS), the Dimensional Yale-Brown Obsessive-Compulsive Scale-Short Version, the Beck Depression Inventory, and the Sheehan Disability Scale. Two methods of using BABS were employed: 1) a traditional approach, which considers a composite of the insight into existing OCD symptoms, and 2) an alternative approach, which includes assessments of insight into each OCD symptom dimension separately. Composite BABS scores correlated with global severity of OCD and depressive symptoms, and degree of interference on social life/leisure activities and family life/home responsibilities. Dimension-specific correlations between severity of symptoms and insight ranged from very high (ρ = .87, for hoarding) to moderate (ρ = .61, for miscellaneous symptoms). Greater severity of depression and concomitant generalized anxiety disorder were independently associated with lower levels of insight into aggressive/checking symptoms. While earlier-onset OCD was associated with lower insight into sexual/religious and symmetry symptoms, later-onset OCD displayed lower insight into hoarding. Assessing insight into dimension-specific OCD symptoms may challenge the existence of clear-cut OCD with fair or poor insight.
Universal and idiosyncratic characteristic lengths in bacterial genomes
NASA Astrophysics Data System (ADS)
Junier, Ivan; Frémont, Paul; Rivoire, Olivier
2018-05-01
In condensed matter physics, simplified descriptions are obtained by coarse-graining the features of a system at a certain characteristic length, defined as the typical length beyond which some properties are no longer correlated. From a physics standpoint, in vitro DNA has thus a characteristic length of 300 base pairs (bp), the Kuhn length of the molecule beyond which correlations in its orientations are typically lost. From a biology standpoint, in vivo DNA has a characteristic length of 1000 bp, the typical length of genes. Since bacteria live in very different physico-chemical conditions and since their genomes lack translational invariance, whether larger, universal characteristic lengths exist is a non-trivial question. Here, we examine this problem by leveraging the large number of fully sequenced genomes available in public databases. By analyzing GC content correlations and the evolutionary conservation of gene contexts (synteny) in hundreds of bacterial chromosomes, we conclude that a fundamental characteristic length around 10–20 kb can be defined. This characteristic length reflects elementary structures involved in the coordination of gene expression, which are present all along the genome of nearly all bacteria. Technically, reaching this conclusion required us to implement methods that are insensitive to the presence of large idiosyncratic genomic features, which may co-exist along these fundamental universal structures.
Functional connectivity change as shared signal dynamics
Cole, Michael W.; Yang, Genevieve J.; Murray, John D.; Repovš, Grega; Anticevic, Alan
2015-01-01
Background An increasing number of neuroscientific studies gain insights by focusing on differences in functional connectivity – between groups, individuals, temporal windows, or task conditions. We found using simulations that additional insights into such differences can be gained by forgoing variance normalization, a procedure used by most functional connectivity measures. Simulations indicated that these functional connectivity measures are sensitive to increases in independent fluctuations (unshared signal) in time series, consistently reducing functional connectivity estimates (e.g., correlations) even though such changes are unrelated to corresponding fluctuations (shared signal) between those time series. This is inconsistent with the common notion of functional connectivity as the amount of inter-region interaction. New Method Simulations revealed that a version of correlation without variance normalization – covariance – was able to isolate differences in shared signal, increasing interpretability of observed functional connectivity change. Simulations also revealed cases problematic for non-normalized methods, leading to a “covariance conjunction” method combining the benefits of both normalized and non-normalized approaches. Results We found that covariance and covariance conjunction methods can detect functional connectivity changes across a variety of tasks and rest in both clinical and non-clinical functional MRI datasets. Comparison with Existing Method(s) We verified using a variety of tasks and rest in both clinical and non-clinical functional MRI datasets that it matters in practice whether correlation, covariance, or covariance conjunction methods are used. Conclusions These results demonstrate the practical and theoretical utility of isolating changes in shared signal, improving the ability to interpret observed functional connectivity change. PMID:26642966
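The central observation, that unshared noise lowers correlation but leaves covariance essentially unchanged, can be illustrated with a few lines of synthetic data; this is not the authors' covariance-conjunction implementation.

```python
# Illustration: adding unshared (independent) noise to one region's time series
# reduces the correlation estimate but leaves covariance, the shared-signal
# estimate, essentially unchanged. Purely illustrative; not the authors' code.
import numpy as np

rng = np.random.default_rng(2)
n = 20000
shared = rng.normal(size=n)
region_a = shared + rng.normal(size=n)
region_b_low_noise = shared + 1.0 * rng.normal(size=n)
region_b_high_noise = shared + 3.0 * rng.normal(size=n)   # more unshared signal

for label, b in [("low unshared noise", region_b_low_noise),
                 ("high unshared noise", region_b_high_noise)]:
    cov = np.cov(region_a, b)[0, 1]
    corr = np.corrcoef(region_a, b)[0, 1]
    print(f"{label}: covariance={cov:.2f}, correlation={corr:.2f}")
# Covariance stays near 1 in both cases; correlation drops with unshared noise.
```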
Methods of Reverberation Mapping. I. Time-lag Determination by Measures of Randomness
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chelouche, Doron; Pozo-Nuñez, Francisco; Zucker, Shay, E-mail: doron@sci.haifa.ac.il, E-mail: francisco.pozon@gmail.com, E-mail: shayz@post.tau.ac.il
A class of methods for measuring time delays between astronomical time series is introduced in the context of quasar reverberation mapping, which is based on measures of randomness or complexity of the data. Several distinct statistical estimators are considered that do not rely on polynomial interpolations of the light curves nor on their stochastic modeling, and do not require binning in correlation space. Methods based on von Neumann’s mean-square successive-difference estimator are found to be superior to those using other estimators. An optimized von Neumann scheme is formulated, which better handles sparsely sampled data and outperforms current implementations of discrete correlation function methods. This scheme is applied to existing reverberation data of varying quality, and consistency with previously reported time delays is found. In particular, the size–luminosity relation of the broad-line region in quasars is recovered with a scatter comparable to that obtained by other works, yet with fewer assumptions made concerning the process underlying the variability. The proposed method for time-lag determination is particularly relevant for irregularly sampled time series, and in cases where the process underlying the variability cannot be adequately modeled.
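A schematic version of a von Neumann-type lag estimator is sketched below on synthetic light curves: for each trial lag the two series are merged in time order and the mean-square successive difference of the combined fluxes is computed, with the minimizing lag taken as the delay. The error weighting and flux normalization of the optimized scheme are omitted.

```python
# Schematic von Neumann time-lag estimator: merge the two light curves for each
# trial lag and take the lag that minimizes the mean-square successive difference
# of the combined, time-ordered series. Synthetic data; error weighting and flux
# normalization from the actual method are omitted.
import numpy as np

rng = np.random.default_rng(3)
t1 = np.sort(rng.uniform(0, 200, 150))
t2 = np.sort(rng.uniform(0, 200, 150))

def signal(t):
    return np.sin(2 * np.pi * t / 60.0) + 0.5 * np.sin(2 * np.pi * t / 23.0)

true_lag = 12.0
f1 = signal(t1) + 0.05 * rng.normal(size=t1.size)
f2 = signal(t2 - true_lag) + 0.05 * rng.normal(size=t2.size)   # delayed copy

def von_neumann(t, f):
    order = np.argsort(t)
    df = np.diff(f[order])
    return np.mean(df ** 2)   # "randomness" of the combined series

lags = np.arange(-30, 30, 0.5)
scores = [von_neumann(np.concatenate([t1, t2 - lag]),
                      np.concatenate([f1, f2])) for lag in lags]
print("estimated lag:", lags[int(np.argmin(scores))])
```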
Landi, S.; Held, H. R.
1965-01-01
Tuberculin purified protein derivative (PPD) has been prepared by seven different precipitation methods from culture filtrate of Mycobacterium tuberculosis var. hominis. It was found to contain 48 to 99% tuberculoprotein, depending on the method of precipitation. The remaining percentage is represented by nucleic acid, polysaccharide, and ash. Activation analysis on tuberculin PPD and on tubercle bacilli has revealed the presence of trace elements. The molecular weight of tuberculin PPD has been found to be of the order of 14,800 to 27,800. The biological activity of tuberculin PPD varies from lot to lot and from method to method. A correlation between its molecular weight and its biological activity seems to exist. PMID:14325869
HHV Predicting Correlations for Torrefied Biomass Using Proximate and Ultimate Analyses
Nhuchhen, Daya Ram; Afzal, Muhammad T.
2017-01-01
Many correlations are available in the literature to predict the higher heating value (HHV) of raw biomass using the proximate and ultimate analyses. Studies on biomass torrefaction are growing tremendously, which suggest that the fuel characteristics, such as HHV, proximate analysis and ultimate analysis, have changed significantly after torrefaction. Such changes may cause high estimation errors if the existing HHV correlations were to be used in predicting the HHV of torrefied biomass. No study has been carried out so far to verify this. Therefore, this study seeks answers to the question: “Can the existing correlations be used to determine the HHV of the torrefied biomass”? To answer this, the existing HHV predicting correlations were tested using torrefied biomass data points. Estimation errors were found to be significantly high for the existing HHV correlations, and thus, they are not suitable for predicting the HHV of the torrefied biomass. New correlations were then developed using data points of torrefied biomass. The ranges of reported data for HHV, volatile matter (VM), fixed carbon (FC), ash (ASH), carbon (C), hydrogen (H) and oxygen (O) contents were 14.90 MJ/kg–33.30 MJ/kg, 13.30%–88.57%, 11.25%–82.74%, 0.08%–47.62%, 35.08%–86.28%, 0.53%–7.46% and 4.31%–44.70%, respectively. Correlations with the minimum mean absolute errors and having all components of proximate and ultimate analyses were selected for future use. The selected new correlations have a good accuracy of prediction when they are validated using another set of data (26 samples). Thus, these new and more accurate correlations can be useful in modeling different thermochemical processes, including combustion, pyrolysis and gasification processes of torrefied biomass. PMID:28952487
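For reference, one widely cited existing correlation of this kind (attributed to Channiwala and Parikh) is sketched below with hypothetical input values; the refitted coefficients proposed in this paper for torrefied biomass are not reproduced here.

```python
# Example of an existing HHV correlation for raw fuels (Channiwala & Parikh),
# with inputs in wt% on a dry basis and HHV in MJ/kg. The new correlations for
# torrefied biomass in the paper use different, refitted coefficients that are
# not reproduced here.
def hhv_channiwala_parikh(C, H, S, O, N, ash):
    return (0.3491 * C + 1.1783 * H + 0.1005 * S
            - 0.1034 * O - 0.0151 * N - 0.0211 * ash)

# Illustrative torrefied-biomass composition (hypothetical values chosen within
# the ranges reported in the abstract).
print(hhv_channiwala_parikh(C=55.0, H=5.5, S=0.0, O=35.0, N=0.5, ash=4.0))
```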
Herschlag, Gregory J; Mitran, Sorin; Lin, Guang
2015-06-21
We develop a hierarchy of approximations to the master equation for systems that exhibit translational invariance and finite-range spatial correlation. Each approximation within the hierarchy is a set of ordinary differential equations that considers spatial correlations of varying lattice distance; the assumption is that the full system will have finite spatial correlations and thus the behavior of the models within the hierarchy will approach that of the full system. We provide evidence of this convergence in the context of one- and two-dimensional numerical examples. Lower levels within the hierarchy that consider shorter spatial correlations are shown to be up to three orders of magnitude faster than traditional kinetic Monte Carlo methods (KMC) for one-dimensional systems, while predicting similar system dynamics and steady states as KMC methods. We then test the hierarchy on a two-dimensional model for the oxidation of CO on RuO2(110), showing that low-order truncations of the hierarchy efficiently capture the essential system dynamics. By considering sequences of models in the hierarchy that account for longer spatial correlations, successive model predictions may be used to establish empirical error estimates. The hierarchy may be thought of as a class of generalized phenomenological kinetic models since each element of the hierarchy approximates the master equation and the lowest level in the hierarchy is identical to a simple existing phenomenological kinetic model.
NASA Technical Reports Server (NTRS)
Corrigan, J. C.; Cronkhite, J. D.; Dompka, R. V.; Perry, K. S.; Rogers, J. P.; Sadler, S. G.
1989-01-01
Under a research program designated Design Analysis Methods for VIBrationS (DAMVIBS), existing analytical methods are used for calculating coupled rotor-fuselage vibrations of the AH-1G helicopter for correlation with flight test data from an AH-1G Operational Load Survey (OLS) test program. The analytical representation of the fuselage structure is based on a NASTRAN finite element model (FEM), which has been developed, extensively documented, and correlated with ground vibration test. One procedure that was used for predicting coupled rotor-fuselage vibrations using the advanced Rotorcraft Flight Simulation Program C81 and NASTRAN is summarized. Detailed descriptions of the analytical formulation of rotor dynamics equations, fuselage dynamic equations, coupling between the rotor and fuselage, and solutions to the total system of equations in C81 are included. Analytical predictions of hub shears for main rotor harmonics 2p, 4p, and 6p generated by C81 are used in conjunction with 2p OLS measured control loads and a 2p lateral tail rotor gearbox force, representing downwash impingement on the vertical fin, to excite the NASTRAN model. NASTRAN is then used to correlate with measured OLS flight test vibrations. Comparisons of blade loads predicted by C81 with measured loads showed good agreement. In general, the fuselage vibration correlations show good agreement between analysis and test in vibration response through 15 to 20 Hz.
Measurement of breast density with digital breast tomosynthesis—a systematic review
McEntee, M F
2014-01-01
Digital breast tomosynthesis (DBT) has gained acceptance as an adjunct to digital mammography in screening. Now that breast density reporting is mandated in several states in the USA, it is increasingly important that the methods of breast density measurement be robust, reliable and consistent. Breast density assessment with DBT needs some consideration since quantitative methods are modelled for two-dimensional (2D) mammography. A review of methods used for breast density assessment with DBT was performed. Existing evidence shows Cumulus has better reproducibility than that of the breast imaging reporting and data system (BI-RADS®) but still suffers from subjective variability; MedDensity is limited by image noise, whilst Volpara and Quantra are robust and consistent. The reported BI-RADS inter-reader breast density agreement (κ) ranged from 0.65 to 0.91, with inter-reader correlation (r) ranging from 0.70 to 0.93. The correlation (r) between BI-RADS and Cumulus ranged from 0.54 to 0.94, whilst that of BI-RADS and MedDensity ranged from 0.48 to 0.78. The reported agreement (κ) between BI-RADS and Volpara is 0.953. Breast density correlation between DBT and 2D mammography ranged from 0.73 to 0.97, with agreement (κ) ranging from 0.56 to 0.96. To avoid variability and provide more reliable breast density information for clinicians, automated volumetric methods are preferred. PMID:25146640
NASA Astrophysics Data System (ADS)
Gide, Milind S.; Karam, Lina J.
2016-08-01
With the increased focus on visual attention (VA) in the last decade, a large number of computational visual saliency methods have been developed over the past few years. These models are traditionally evaluated by using performance evaluation metrics that quantify the match between predicted saliency and fixation data obtained from eye-tracking experiments on human observers. Though a considerable number of such metrics have been proposed in the literature, there are notable problems in them. In this work, we discuss shortcomings in existing metrics through illustrative examples and propose a new metric that uses local weights based on fixation density which overcomes these flaws. To compare the performance of our proposed metric at assessing the quality of saliency prediction with other existing metrics, we construct a ground-truth subjective database in which saliency maps obtained from 17 different VA models are evaluated by 16 human observers on a 5-point categorical scale in terms of their visual resemblance with corresponding ground-truth fixation density maps obtained from eye-tracking data. The metrics are evaluated by correlating metric scores with the human subjective ratings. The correlation results show that the proposed evaluation metric outperforms all other popular existing metrics. Additionally, the constructed database and corresponding subjective ratings provide an insight into which of the existing metrics and future metrics are better at estimating the quality of saliency prediction and can be used as a benchmark.
Noise Intensity-Intensity Correlations and the Fourth Cumulant of Photo-assisted Shot Noise
NASA Astrophysics Data System (ADS)
Forgues, Jean-Charles; Sane, Fatou Bintou; Blanchard, Simon; Spietz, Lafe; Lupien, Christian; Reulet, Bertrand
2013-10-01
We report the measurement of the fourth cumulant of current fluctuations in a tunnel junction under both dc and ac (microwave) excitation. This probes the non-Gaussian character of photo-assisted shot noise. Our measurement reveals the existence of correlations between noise power measured at two different frequencies, which corresponds to two-mode intensity correlations in optics. We observe positive correlations, i.e. photon bunching, which exist only for certain relations between the excitation frequency and the two detection frequencies, depending on the dc bias of the sample.
Covariance Matrix Estimation for Massive MIMO
NASA Astrophysics Data System (ADS)
Upadhya, Karthik; Vorobyov, Sergiy A.
2018-04-01
We propose a novel pilot structure for covariance matrix estimation in massive multiple-input multiple-output (MIMO) systems in which each user transmits two pilot sequences, with the second pilot sequence multiplied by a random phase-shift. The covariance matrix of a particular user is obtained by computing the sample cross-correlation of the channel estimates obtained from the two pilot sequences. This approach relaxes the requirement that all the users transmit their uplink pilots over the same set of symbols. We derive expressions for the achievable rate and the mean-squared error of the covariance matrix estimate when the proposed method is used with staggered pilots. The performance of the proposed method is compared with existing methods through simulations.
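The estimation idea can be sketched in a simplified simulation: the desired user's second pilot carries a random phase that the base station removes, while interfering channels and noise do not share that phase, so they average out of the sample cross-correlation. The model below is illustrative and omits the staggered-pilot structure analyzed in the paper.

```python
# Sketch of covariance estimation from two phase-shifted pilots: the desired
# user's phase is known and removed, interference and noise are not phase-aligned,
# so their contribution averages out of the sample cross-correlation over many
# coherence blocks. Simplified model, not the paper's exact system setup.
import numpy as np

rng = np.random.default_rng(4)
M, blocks = 32, 2000                          # antennas, coherence blocks
R_true = np.diag(np.linspace(0.2, 2.0, M))    # desired user's true covariance
L = np.linalg.cholesky(R_true)

def cn(size):
    """Circularly symmetric complex Gaussian samples with unit variance."""
    return (rng.normal(size=size) + 1j * rng.normal(size=size)) / np.sqrt(2)

R_hat = np.zeros((M, M), dtype=complex)
for _ in range(blocks):
    h = L @ cn(M)                       # desired channel, covariance R_true
    c = cn(M)                           # pilot-contaminating channel
    theta = rng.uniform(0, 2 * np.pi)   # desired user's phase-shift (known at BS)
    phi = rng.uniform(0, 2 * np.pi)     # interferer's phase-shift (independent)
    h1 = h + c + 0.1 * cn(M)                                          # pilot-1 estimate
    h2 = np.exp(1j * theta) * h + np.exp(1j * phi) * c + 0.1 * cn(M)  # pilot-2 estimate
    R_hat += np.outer(h1, np.conj(np.exp(-1j * theta) * h2))
R_hat /= blocks
print("relative error:", np.linalg.norm(R_hat - R_true) / np.linalg.norm(R_true))
```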
Mehmandoust, Babak; Sanjari, Ehsan; Vatani, Mostafa
2013-01-01
The heat of vaporization of a pure substance at its normal boiling temperature is a very important property in many chemical processes. In this work, a new empirical method was developed to predict the vaporization enthalpy of pure substances. This equation is a function of normal boiling temperature, critical temperature, and critical pressure. The presented model is simple to use and provides an improvement over the existing equations for 452 pure substances over a wide boiling range. The results showed that the proposed correlation is more accurate than the literature methods for pure substances in a wide boiling range (20.3–722 K). PMID:25685493
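As an example of the existing literature methods referred to above, Riedel's classical correlation (a function of the same three inputs) can be written in a few lines; the coefficients of the new correlation proposed here are not reproduced.

```python
# One widely used existing correlation of the same form (Riedel): enthalpy of
# vaporization at the normal boiling point from Tb, Tc (in K) and Pc (in bar).
# Shown for illustration only; the new correlation proposed in the paper uses
# different coefficients that are not reproduced here.
import math

R = 8.314  # J/(mol K)

def dhvap_riedel(Tb, Tc, Pc_bar):
    Tbr = Tb / Tc
    return 1.093 * R * Tc * Tbr * (math.log(Pc_bar) - 1.013) / (0.930 - Tbr)

# Water: Tb = 373.15 K, Tc = 647.1 K, Pc = 220.64 bar -> roughly 42 kJ/mol,
# close to the experimental value of about 40.7 kJ/mol.
print(dhvap_riedel(373.15, 647.1, 220.64) / 1000, "kJ/mol")
```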
Choi, Jin-Il; Jost-Brinkmann, Paul-Georg; Choi, Dong-Soon; Jang, In-San
2012-01-01
Objective The purpose of this study was to evaluate the validity of the 3-dimensional (3D) superimposition method of digital models in patients who received treatment with rapid maxillary expansion (RME) and maxillary protraction headgear. Methods The material consisted of pre- and post-treatment maxillary dental casts and lateral cephalograms of 30 patients, who underwent RME and maxillary protraction headgear treatment. Digital models were superimposed using the palate as a reference area. The movement of the maxillary central incisor and the first molar was measured on superimposed cephalograms and 3D digital models. To determine whether any difference existed between the 2 measuring techniques, intra-class correlation (ICC) and Bland-Altman plots were analyzed. Results The measurements on the 3D digital models and cephalograms showed a very high correlation in the antero-posterior direction (ICC, 0.956 for central incisor and 0.941 for first molar) and a moderate correlation in the vertical direction (ICC, 0.748 for central incisor and 0.717 for first molar). Conclusions The 3D model superimposition method using the palate as a reference area is as clinically reliable for assessing antero-posterior tooth movement as cephalometric superimposition, even in cases treated with orthopedic appliances, such as RME and maxillary protraction headgear. PMID:23173116
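The agreement analysis mentioned above can be illustrated with a minimal Bland-Altman computation on hypothetical paired measurements; this is not the study's data.

```python
# Minimal sketch of a Bland-Altman agreement check between two measurement
# techniques (e.g., tooth movement from superimposed cephalograms vs. 3D digital
# models). Values are hypothetical; bias and 95% limits of agreement are reported.
import numpy as np

ceph  = np.array([2.1, 1.8, 3.0, 2.6, 1.2, 2.9, 3.4, 2.2])   # mm, method 1
model = np.array([2.3, 1.7, 3.1, 2.4, 1.3, 3.0, 3.6, 2.1])   # mm, method 2

diff = model - ceph
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)
print(f"bias = {bias:.2f} mm, 95% limits of agreement = "
      f"[{bias - loa:.2f}, {bias + loa:.2f}] mm")
```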
Deep Correlated Holistic Metric Learning for Sketch-Based 3D Shape Retrieval.
Dai, Guoxian; Xie, Jin; Fang, Yi
2018-07-01
How to effectively retrieve desired 3D models with simple queries is a long-standing problem in the computer vision community. The model-based approach is quite straightforward but nontrivial, since people do not always have the desired 3D query model at hand. Recently, wide-screen electronic devices have become prevalent in our daily lives, which makes sketch-based 3D shape retrieval a promising candidate due to its simplicity and efficiency. The main challenge of the sketch-based approach is the huge modality gap between sketch and 3D shape. In this paper, we propose a novel deep correlated holistic metric learning (DCHML) method to mitigate the discrepancy between sketch and 3D shape domains. The proposed DCHML trains two distinct deep neural networks (one for each domain) jointly, learning two deep nonlinear transformations to map features from both domains into a new feature space. The proposed loss, including discriminative loss and correlation loss, aims to increase the discrimination of features within each domain as well as the correlation between different domains. In the new feature space, the discriminative loss minimizes the intra-class distance of the deep transformed features and maximizes the inter-class distance of the deep transformed features to a large margin within each domain, while the correlation loss focuses on mitigating the distribution discrepancy across different domains. Unlike existing deep metric learning methods, which apply a loss only at the output layer, the proposed DCHML is trained with losses at both the hidden layer and the output layer, further improving performance by encouraging the hidden-layer features to have the desired properties as well. The proposed method is evaluated on three benchmarks (the 3D Shape Retrieval Contest 2013, 2014, and 2016 benchmarks), and the experimental results demonstrate its superiority over state-of-the-art methods.
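A schematic sketch of the two loss ingredients, written for already-transformed feature vectors, is shown below; the exact loss functions, network architectures and training procedure of DCHML differ from this simplified version.

```python
# Schematic sketch of the two loss ingredients described above, computed on
# already-transformed features: a margin-based discriminative term within a
# domain and a simple distribution-discrepancy term between domains (here the
# distance between domain means). The paper's exact loss functions differ.
import numpy as np

def discriminative_loss(feats, labels, margin=2.0):
    loss, n = 0.0, len(feats)
    for i in range(n):
        for j in range(i + 1, n):
            d2 = float(np.sum((feats[i] - feats[j]) ** 2))
            if labels[i] == labels[j]:
                loss += d2                                    # pull same-class pairs together
            else:
                loss += max(0.0, margin - np.sqrt(d2)) ** 2   # push other classes beyond margin
    return loss / (n * (n - 1) / 2)

def correlation_loss(sketch_feats, shape_feats):
    # crude cross-domain discrepancy: squared distance between domain means
    return float(np.sum((sketch_feats.mean(axis=0) - shape_feats.mean(axis=0)) ** 2))

rng = np.random.default_rng(5)
sketch_feats = rng.normal(size=(8, 16))   # transformed sketch-domain features
shape_feats = rng.normal(size=(8, 16))    # transformed 3D-shape-domain features
labels = np.array([0, 0, 1, 1, 2, 2, 3, 3])
total = discriminative_loss(sketch_feats, labels) + correlation_loss(sketch_feats, shape_feats)
print(total)
```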
Detecting subnetwork-level dynamic correlations.
Yan, Yan; Qiu, Shangzhao; Jin, Zhuxuan; Gong, Sihong; Bai, Yun; Lu, Jianwei; Yu, Tianwei
2017-01-15
The biological regulatory system is highly dynamic. The correlations between many functionally related genes change over different biological conditions. Finding dynamic relations on the existing biological network may reveal important regulatory mechanisms. Currently no method is available to detect subnetwork-level dynamic correlations systematically on the genome-scale network. Two major issues have hampered development. The first is that gene expression profiling data usually do not contain time course measurements to facilitate the analysis of dynamic relations, which can be partially addressed by using certain genes as indicators of biological conditions. Secondly, it is unclear how to effectively delineate subnetworks, and define dynamic relations between them. Here we propose a new method named LANDD (Liquid Association for Network Dynamics Detection) to find subnetworks that show substantial dynamic correlations, defined as subnetwork A being concentrated with Liquid Association scouting genes for subnetwork B. The method produces easily interpretable results because of its focus on subnetworks that tend to comprise functionally related genes. Also, the collective behaviour of genes in a subnetwork is a much more reliable indicator of underlying biological conditions compared to using single genes as indicators. We conducted extensive simulations to validate the method's ability to detect subnetwork-level dynamic correlations. Using a real gene expression dataset and the human protein-protein interaction network, we demonstrate that the method links subnetworks of distinct biological processes, with both confirmed relations and plausible new functional implications. We also found that signal transduction pathways tend to show extensive dynamic relations with other functional groups. The R package is available at https://cran.r-project.org/web/packages/LANDD. Contacts: yunba@pcom.edu, jwlu33@hotmail.com or tianwei.yu@emory.edu. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
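The gene-level quantity underlying the method, a liquid-association-style score of how the co-expression of X and Y varies with a third variable Z, can be estimated as the mean triple product of standardized values. The sketch below is illustrative only and does not reproduce LANDD's subnetwork scouting and aggregation steps.

```python
# Minimal sketch of a liquid association score LA(X, Y | Z) ~ E[X*Y*Z] on
# standardized data (Z mapped to normal scores), the gene-level quantity that
# subnetwork-level methods such as LANDD build on. The subnetwork scouting and
# aggregation steps of the method are not reproduced here.
import numpy as np
from scipy import stats

def liquid_association(x, y, z):
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    z = stats.norm.ppf((stats.rankdata(z) - 0.5) / len(z))   # normal scores of z
    return np.mean(x * y * z)

rng = np.random.default_rng(6)
z = rng.normal(size=2000)
x = rng.normal(size=2000)
y = np.tanh(z) * x + 0.5 * rng.normal(size=2000)   # X-Y coupling strengthens with Z
print(liquid_association(x, y, z))                 # a positive score is expected
```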
The Dark Matter Crisis: Falsification of the Current Standard Model of Cosmology
NASA Astrophysics Data System (ADS)
Kroupa, P.
2012-06-01
The current standard model of cosmology (SMoC) requires The Dual Dwarf Galaxy Theorem to be true according to which two types of dwarf galaxies must exist: primordial dark-matter (DM) dominated (type A) dwarf galaxies, and tidal-dwarf and ram-pressure-dwarf (type B) galaxies void of DM. Type A dwarfs surround the host approximately spherically, while type B dwarfs are typically correlated in phase-space. Type B dwarfs must exist in any cosmological theory in which galaxies interact. Only one type of dwarf galaxy is observed to exist on the baryonic Tully-Fisher plot and in the radius-mass plane. The Milky Way satellite system forms a vast phase-space-correlated structure that includes globular clusters and stellar and gaseous streams. Other galaxies also have phase-space correlated satellite systems. Therefore, The Dual Dwarf Galaxy Theorem is falsified by observation and dynamically relevant cold or warm DM cannot exist. It is shown that the SMoC is incompatible with a large set of other extragalactic observations. Other theoretical solutions to cosmological observations exist. In particular, alone the empirical mass-discrepancy-acceleration correlation constitutes convincing evidence that galactic-scale dynamics must be Milgromian. Major problems with inflationary big bang cosmologies remain unresolved.
NASA Technical Reports Server (NTRS)
Hollis, Brian R.
2010-01-01
Recent, current, and planned NASA missions that employ blunt-body entry vehicles pose aerothermodynamic problems that challenge the state of the art of experimental and computational methods. The issues of boundary-layer transition and turbulent heating on the heat shield have become important in the designs of both the Mars Science Laboratory and Crew Exploration Vehicle. While considerable experience in these general areas exists, that experience is mainly derived from simple geometries, e.g. sharp cones and flat plates, or from lifting bodies such as the Space Shuttle Orbiter. For blunt-body vehicles, application of existing data, correlations, and comparisons is questionable because an all, or mostly, subsonic flow field is produced behind the bow shock, as compared to the supersonic (or even hypersonic) flow of other configurations. Because of the need for design and validation data for projects such as MSL and CEV, many new experimental studies have been conducted in the last decade to obtain detailed boundary-layer transition and turbulent heating data on this class of vehicle. In this paper, details of several of the test programs are reviewed. The laminar and turbulent data from these various tests are shown to correlate in terms of edge-based Stanton and Reynolds number functions. Correlations are developed from the data for transition onset and turbulent heating augmentation as functions of momentum thickness Reynolds number. These correlations can be employed as engineering-level design and analysis tools.
[Physical methods and molecular biology].
Serdiuk, I N
2009-01-01
The review is devoted to the description of the current state of physical and chemical methods used for studying the structural and functional bases of living processes. Special attention is focused on the physical methods that have opened a new page in research on the structure of biological macromolecules. They include primarily the methods of detecting and manipulating single molecules using optical and magnetic traps. New physical methods, such as two-dimensional infrared spectroscopy, fluorescence correlation spectroscopy and magnetic resonance microscopy, are also analyzed briefly in the review. The path that physics and biology have travelled over the last 55 years shows that there is no single method providing all necessary information on macromolecules and their interactions. Each method provides its own space-time view of the system. All physical methods are complementary. It is precisely this complementarity that justifies the continued practical use of all the physical methods described in this review.
Coupling detrended fluctuation analysis of Asian stock markets
NASA Astrophysics Data System (ADS)
Wang, Qizhen; Zhu, Yingming; Yang, Liansheng; Mul, Remco A. H.
2017-04-01
This paper uses the coupling detrended fluctuation analysis (CDFA) method to investigate the multifractal characteristics of four Asian stock markets using three stock indices: stock price returns, trading volumes and the composite index. The results show that coupled correlations exist among the four stock markets and the coupled correlations have multifractal characteristics. We then use the chi-square (χ²) test to identify the sources of multifractality. For the different stock indices, the contributions of a single series to multifractality are different. In other words, the contributions of each country to the coupled correlations are different. The comparative analysis shows that analysing the combined effect of stock price returns and trading volumes may be more comprehensive than analysing an individual index. By comparing the strength of multifractality for the original data with that of the residual errors of the vector autoregression (VAR) model, we find that the VAR model could not be used to describe the dynamics of the coupled correlations among the four financial time series.
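For readers unfamiliar with detrended cross-correlation machinery, a compact sketch of the q = 2 fluctuation function is given below; the full CDFA analysis (coupled series, multiple q moments, spectrum width) used in the paper is not reproduced.

```python
# Compact sketch of the q = 2 detrended cross-correlation fluctuation function
# F(s); the slope of log F(s) versus log s estimates the cross-correlation
# scaling exponent. The full CDFA / multifractal machinery of the paper
# (all q moments, coupled series, spectrum width) is not reproduced.
import numpy as np

def dcca_fluctuation(x, y, s):
    X, Y = np.cumsum(x - x.mean()), np.cumsum(y - y.mean())
    n_boxes = len(X) // s
    f2 = []
    for b in range(n_boxes):
        idx = np.arange(b * s, (b + 1) * s)
        t = np.arange(s)
        rx = X[idx] - np.polyval(np.polyfit(t, X[idx], 1), t)  # local detrending
        ry = Y[idx] - np.polyval(np.polyfit(t, Y[idx], 1), t)
        f2.append(np.mean(rx * ry))
    return np.sqrt(np.abs(np.mean(f2)))

rng = np.random.default_rng(7)
common = rng.normal(size=4096)                 # shared component
x = common + rng.normal(size=4096)
y = common + rng.normal(size=4096)
scales = [16, 32, 64, 128, 256]
F = [dcca_fluctuation(x, y, s) for s in scales]
slope = np.polyfit(np.log(scales), np.log(F), 1)[0]   # scaling exponent estimate
print(slope)
```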
Correlative cryogenic tomography of cells using light and soft x-rays
Smith, Elizabeth A.; Cinquin, Bertrand P.; Do, Myan; McDermott, Gerry; Le Gros, Mark A.; Larabell, Carolyn A.
2013-01-01
Correlated imaging is the process of imaging a specimen with two complementary modalities, and then combining the two data sets to create a highly informative, composite view. A recent implementation of this concept has been the combination of soft x-ray tomography (SXT) with fluorescence cryogenic microscopy (FCM). SXT-FCM is used to visualize cells that are held in a near-native, cryo-preserved state. The resultant images are, therefore, highly representative of both the cellular architecture and molecular organization in vivo. SXT quantitatively visualizes the cell and sub-cellular structures; FCM images the spatial distribution of fluorescently labeled molecules. Here, we review the characteristics of SXT-FCM, and briefly discuss how this method compares with existing correlative imaging techniques. We also describe how the incorporation of a cryo-rotation stage into a cryogenic fluorescence microscope allows acquisition of fluorescence cryogenic tomography (FCT) data. FCT is optimally suited to correlation with SXT, since both techniques image the specimen in 3-D, potentially with similar, isotropic spatial resolution. PMID:24355261
Metabolite profiling in retinoblastoma identifies novel clinicopathological subgroups
Kohe, Sarah; Brundler, Marie-Anne; Jenkinson, Helen; Parulekar, Manoj; Wilson, Martin; Peet, Andrew C; McConville, Carmel M
2015-01-01
Background: Tumour classification, based on histopathology or molecular pathology, is of value to predict tumour behaviour and to select appropriate treatment. In retinoblastoma, pathology information is not available at diagnosis and only exists for enucleated tumours. Alternative methods of tumour classification, using noninvasive techniques such as magnetic resonance spectroscopy, are urgently required to guide treatment decisions at the time of diagnosis. Methods: High-resolution magic-angle spinning magnetic resonance spectroscopy (HR-MAS MRS) was undertaken on enucleated retinoblastomas. Principal component analysis and cluster analysis of the HR-MAS MRS data was used to identify tumour subgroups. Individual metabolite concentrations were determined and were correlated with histopathological risk factors for each group. Results: Multivariate analysis identified three metabolic subgroups of retinoblastoma, with the most discriminatory metabolites being taurine, hypotaurine, total-choline and creatine. Metabolite concentrations correlated with specific histopathological features: taurine was correlated with differentiation, total-choline and phosphocholine with retrolaminar optic nerve invasion, and total lipids with necrosis. Conclusions: We have demonstrated that a metabolite-based classification of retinoblastoma can be obtained using ex vivo magnetic resonance spectroscopy, and that the subgroups identified correlate with histopathological features. This result justifies future studies to validate the clinical relevance of these subgroups and highlights the potential of in vivo MRS as a noninvasive diagnostic tool for retinoblastoma patient stratification. PMID:26348444
Combining individual participant and aggregated data in a meta-analysis with correlational studies.
Pigott, Terri; Williams, Ryan; Polanin, Joshua
2012-12-01
This paper presents methods for combining individual participant data (IPD) with aggregated study level data (AD) in a meta-analysis of correlational studies. Although medical researchers have employed IPD in a wide range of studies, only a single example exists in the social sciences. New policies at the National Science Foundation requiring grantees to submit data archiving plans may increase social scientists' access to individual level data that could be combined with traditional meta-analysis. The methods presented here extend prior work on IPD to meta-analyses using correlational studies. The examples presented illustrate the synthesis of publicly available national datasets in education with aggregated study data from a meta-analysis examining the correlation of socioeconomic status measures and academic achievement. The major benefit of the inclusion of the individual level is that both within-study and between-study interactions among moderators of effect size can be estimated. Given the potential growth in data archives in the social sciences, we should see a corresponding increase in the ability to synthesize IPD and AD in a single meta-analysis, leading to a more complete understanding of how within-study and between-study moderators relate to effect size. Copyright © 2012 John Wiley & Sons, Ltd.
Multifractal behavior of an air pollutant time series and the relevance to the predictability.
Dong, Qingli; Wang, Yong; Li, Peizhi
2017-03-01
Compared with the traditional method of detrended fluctuation analysis, which is used to characterize fractal scaling properties and long-range correlations, this research provides new insight into the multifractality and predictability of a nonstationary air pollutant time series using the methods of spectral analysis and multifractal detrended fluctuation analysis. First, the existence of a significant power-law behavior and long-range correlations for such series are verified. Then, by employing shuffling and surrogating procedures and estimating the scaling exponents, the major source of multifractality in these pollutant series is found to be the fat-tailed probability density function. Long-range correlations also partly contribute to the multifractal features. The relationship between the predictability of the pollutant time series and their multifractal nature is then investigated with extended analyses from the quantitative perspective, and it is found that the contribution of the multifractal strength of long-range correlations to the overall multifractal strength can affect the predictability of a pollutant series in a specific region to some extent. The findings of this comprehensive study can help to better understand the mechanisms governing the dynamics of air pollutant series and aid in performing better meteorological assessment and management. Copyright © 2016 Elsevier Ltd. All rights reserved.
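The shuffling and surrogating procedures mentioned above are standard and easy to reproduce: shuffling preserves the probability density while destroying correlations, and phase randomization preserves linear correlations while Gaussianizing the distribution. A minimal sketch follows; the MF-DFA step itself is omitted.

```python
# Generating the two diagnostic series used to locate the source of
# multifractality: a shuffled series (keeps the PDF, destroys correlations) and a
# phase-randomized surrogate (keeps linear correlations, Gaussianizes the PDF).
# The MF-DFA computation itself is not reproduced here.
import numpy as np

def shuffled(x, rng):
    return rng.permutation(x)

def phase_surrogate(x, rng):
    spectrum = np.fft.rfft(x)
    phases = rng.uniform(0, 2 * np.pi, size=spectrum.size)
    phases[0] = 0.0                       # keep the mean untouched
    return np.fft.irfft(np.abs(spectrum) * np.exp(1j * phases), n=len(x))

rng = np.random.default_rng(8)
pollutant = rng.standard_t(df=3, size=4096)   # fat-tailed stand-in for a pollutant series
print(shuffled(pollutant, rng)[:3], phase_surrogate(pollutant, rng)[:3])
```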
Robust and sparse correlation matrix estimation for the analysis of high-dimensional genomics data.
Serra, Angela; Coretto, Pietro; Fratello, Michele; Tagliaferri, Roberto; Stegle, Oliver
2018-02-15
Microarray technology can be used to study the expression of thousands of genes across a number of different experimental conditions, usually hundreds. The underlying principle is that genes sharing similar expression patterns, across different samples, can be part of the same co-expression system, or they may share the same biological functions. Groups of genes are usually identified based on cluster analysis. Clustering methods rely on the similarity matrix between genes. A common choice to measure similarity is to compute the sample correlation matrix. Dimensionality reduction is another popular data analysis task which is also based on covariance/correlation matrix estimates. Unfortunately, covariance/correlation matrix estimation suffers from the intrinsic noise present in high-dimensional data. Sources of noise are: sampling variations, the presence of outlying sample units, and the fact that in most cases the number of genes is much larger than the number of sample units. In this paper, we propose a robust correlation matrix estimator that is regularized based on adaptive thresholding. The resulting method jointly tames the effects of high dimensionality and data contamination. Computations are easy to implement and do not require hand tuning. Both simulated and real data are analyzed. A Monte Carlo experiment shows that the proposed method achieves remarkable performance. Our correlation metric is more robust to outliers compared with the existing alternatives in two gene expression datasets. It is also shown how the regularization allows spurious correlations to be automatically detected and filtered. The same regularization is also extended to other less robust correlation measures. Finally, we apply the ARACNE algorithm on the SyNTreN gene expression data. Sensitivity and specificity of the reconstructed network are compared with the gold standard. We show that ARACNE performs better when it takes the proposed correlation matrix estimator as input. The R software is available at https://github.com/angy89/RobustSparseCorrelation. Contact: aserra@unisa.it or robtag@unisa.it. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
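The general recipe, a robust pairwise correlation followed by entry-wise thresholding, can be sketched as follows; Spearman correlation and a fixed threshold stand in for the paper's robust estimator and adaptive threshold rule.

```python
# Minimal sketch of the general recipe: a robust pairwise correlation (Spearman
# here, as a stand-in for the paper's robust estimator) followed by entry-wise
# thresholding to induce sparsity. The paper's actual estimator and adaptive
# threshold rule differ from this simplified version.
import numpy as np
from scipy import stats

def sparse_robust_corr(X, threshold=0.3):
    rho, _ = stats.spearmanr(X)                    # robust to outlying samples
    sparse = np.where(np.abs(rho) >= threshold, rho, 0.0)
    np.fill_diagonal(sparse, 1.0)
    return sparse

rng = np.random.default_rng(9)
n, p = 50, 8                                       # few samples, several genes
X = rng.normal(size=(n, p))
X[:, 1] = X[:, 0] + 0.3 * rng.normal(size=n)       # one genuinely correlated pair
X[0, :] += 20                                      # an outlying sample unit
print(np.round(sparse_robust_corr(X), 2))
```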
NASA Astrophysics Data System (ADS)
Biggers, Mandy Sue
Using a framework for variations of classroom inquiry (National Research Council [NRC], 2000, p. 29), this study explored 40 inservice elementary teachers' planning, modification, and enactment of kit-based science curriculum materials. As part of the study, a new observation protocol was modified from an existing protocol (Practices of Science Observation Protocol [P-SOP]) to measure the amount of teacher direction in science inquiry lessons (Practices of Science Observation Protocol + Directedness [P-SOPd]). An embedded mixed methods design was employed to investigate four questions: 1. How valid and reliable is the P-SOPd? 2. In what ways do inservice elementary teachers adapt existing elementary science curriculum materials across the inquiry continuum? 3. What is the relationship between the overall quality of inquiry and variations of inquiry in elementary teachers' enacted science instruction? 4. How do inservice elementary teachers' ideas about the inquiry continuum influence their adaptation of elementary science curriculum materials? Each teacher chose three lessons from a science unit for video-recorded observation, and submitted lesson plans for the three lessons. Lesson plans and videos were scored using the P-SOPd. The scores were also compared between the two protocols to determine if a correlation existed between the level of inquiry (measured on the P-SOP) and the amount of teacher direction (measured on the P-SOPd). Findings indicated no significant differences between planned and enacted lessons for the amount of teacher direction, but a correlation existed between the level of inquiry and the amount of teacher direction. In effect, the elementary teachers taught their science curriculum materials with a high level of fidelity for both the features of inquiry and the amount of teacher direction. A smaller group of three case study teachers were followed for the school year to give a more in-depth explanation of the quantitative findings. Case study findings revealed that the teachers' science instruction was teacher-directed while their conceptions of inquiry were student-directed. This study contributes to existing research on preservice teachers' learning about the continuum (Biggers & Forbes, 2012) and inservice teachers' ideas about the five features of inquiry (Biggers & Forbes, in press).
Spatial Decomposition of Translational Water–Water Correlation Entropy in Binding Pockets
2015-01-01
A number of computational tools available today compute the thermodynamic properties of water at surfaces and in binding pockets by using inhomogeneous solvation theory (IST) to analyze explicit-solvent simulations. Such methods enable qualitative spatial mappings of both energy and entropy around a solute of interest and can also be applied quantitatively. However, the entropy estimates of existing methods have, to date, been almost entirely limited to the first-order terms in the IST’s entropy expansion. These first-order terms account for localization and orientation of water molecules in the field of the solute but not for the modification of water–water correlations by the solute. Here, we present an extension of the Grid Inhomogeneous Solvation Theory (GIST) approach which accounts for water–water translational correlations. The method involves rewriting the two-point density of water in terms of a conditional density and utilizes the efficient nearest-neighbor entropy estimation approach. Spatial maps of this second order term, for water in and around the synthetic host cucurbit[7]uril and in the binding pocket of the enzyme Factor Xa, reveal mainly negative contributions, indicating solute-induced water–water correlations relative to bulk water; particularly strong signals are obtained for sites at the entrances of cavities or pockets. This second-order term thus enters with the same, negative, sign as the first order translational and orientational terms. Numerical and convergence properties of the methodology are examined. PMID:26636620
The effects of common risk factors on stock returns: A detrended cross-correlation analysis
NASA Astrophysics Data System (ADS)
Ruan, Qingsong; Yang, Bingchan
2017-10-01
In this paper, we investigate the cross-correlations between the Fama-French three factors and the returns of American industries on the basis of a cross-correlation statistic test and multifractal detrended cross-correlation analysis (MF-DCCA). Qualitatively, we find that the return series of the Fama-French three factors and of American industries are, overall, significantly cross-correlated according to the test statistic. Quantitatively, we find that the cross-correlations between the three factors and the returns of American industries are strongly multifractal, and applying MF-DCCA we also investigate the cross-correlation of industry returns and residuals. We find that both industry returns and residuals exhibit multifractality. From the correlation coefficients, we can verify that factors other than the Fama-French three factors also influence industry returns.
NASA Astrophysics Data System (ADS)
Wu, Hong; Li, Peng; Li, Yulong
2016-02-01
This paper describes the calculation method for unsteady state conditions in the secondary air systems in gas turbines. The 1D-3D-Structure coupled method was applied. A 1D code was used to model the standard components that have typical geometric characteristics. Their flow and heat transfer were described by empirical correlations based on experimental data or CFD calculations. A 3D code was used to model the non-standard components that cannot be described by typical geometric languages, while a finite element analysis was carried out to compute the structural deformation and heat conduction at certain important positions. These codes were coupled through their interfaces. Thus, the changes in heat transfer and structure and their interactions caused by exterior disturbances can be reflected. The results of the coupling method in an unsteady state showed an apparent deviation from the existing data, while the results in the steady state were highly consistent with the existing data. The difference in the results in the unsteady state was caused primarily by structural deformation that cannot be predicted by the 1D method. Thus, in order to obtain the unsteady state performance of a secondary air system more accurately and efficiently, the 1D-3D-Structure coupled method should be used.
NASA Astrophysics Data System (ADS)
Sammartano, G.; Spanò, A.
2017-09-01
Delineating accurate surface water quality levels (SWQLs) always presents a great challenge to researchers. Existing methods of assessing surface water quality only provide individual concentrations at monitoring stations without providing the overall SWQLs. Therefore, the results of existing methods are usually difficult for decision-makers to understand. Conversely, the water quality index (WQI) can simplify the surface water quality assessment process and make it accessible to decision-makers. However, in most cases, the WQI reflects inaccurate SWQLs due to the lack of representative water samples. It is very challenging to provide representative water samples because this process is costly and time consuming. To solve this problem, we introduce a cost-effective method which combines Landsat-8 imagery and artificial intelligence to develop models that derive representative water samples by correlating concentrations of ground truth water samples to satellite spectral information. Our method was validated, and the correlation between concentrations of ground truth water samples and predicted concentrations from the developed models reached a high coefficient of determination (R² > 0.80), indicating trustworthy predictions. Afterwards, the predicted concentrations over each pixel of the study area were used as an input to the WQI developed by the Canadian Council of Ministers of the Environment to extract accurate SWQLs, for drinking purposes, in the Saint John River. The results indicated SWQLs of 67 (Fair) and 59 (Marginal) for the lower and middle basins of the river, respectively. These findings demonstrate the potential of using our approach in surface water quality management.
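The index used in the final step, the CCME water quality index, combines the scope, frequency and amplitude of objective exceedances into a 0 to 100 score; a minimal sketch with hypothetical objectives and measurements follows (not the study's Landsat-derived concentrations).

```python
# Minimal sketch of the CCME Water Quality Index: scope (F1), frequency (F2) and
# amplitude (F3) of objective exceedances combined into a 0-100 score. The
# objectives and measurements below are hypothetical, not the study's calibrated
# Landsat-derived concentrations.
import numpy as np

def ccme_wqi(measurements, objectives):
    """measurements: dict var -> values; objectives: dict var -> maximum allowed."""
    failed_vars, failed_tests, total_tests, excursions = 0, 0, 0, []
    for var, values in measurements.items():
        values = np.asarray(values, dtype=float)
        limit = objectives[var]
        exceed = values > limit
        total_tests += values.size
        failed_tests += exceed.sum()
        if exceed.any():
            failed_vars += 1
            excursions.extend(values[exceed] / limit - 1.0)
    f1 = 100.0 * failed_vars / len(measurements)
    f2 = 100.0 * failed_tests / total_tests
    nse = sum(excursions) / total_tests
    f3 = nse / (0.01 * nse + 0.01)
    return 100.0 - np.sqrt(f1**2 + f2**2 + f3**2) / 1.732

samples = {"turbidity": [4.0, 6.2, 5.1], "nitrate": [2.0, 2.4, 1.8], "phosphorus": [0.1, 0.2, 0.4]}
limits  = {"turbidity": 5.0, "nitrate": 3.0, "phosphorus": 0.5}
print(ccme_wqi(samples, limits))   # scores of 65-79 are rated "Fair", 45-64 "Marginal"
```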
Reiter, Rolf; Wetzel, Martin; Hamesch, Karim; Strnad, Pavel; Asbach, Patrick; Haas, Matthias; Siegmund, Britta; Trautwein, Christian; Hamm, Bernd; Klatt, Dieter; Braun, Jürgen; Sack, Ingolf; Tzschätzsch, Heiko
2018-01-01
Although it has been known for decades that patients with alpha1-antitrypsin deficiency (AATD) have an increased risk of cirrhosis and hepatocellular carcinoma, limited data exist on non-invasive imaging-based methods for assessing liver fibrosis such as magnetic resonance elastography (MRE) and acoustic radiation force impulse (ARFI) quantification, and no data exist on 2D-shear wave elastography (2D-SWE). Therefore, the purpose of this study is to evaluate and compare the applicability of different elastography methods for the assessment of AATD-related liver fibrosis. Fifteen clinically asymptomatic AATD patients (11 homozygous PiZZ, 4 heterozygous PiMZ) and 16 matched healthy volunteers were examined using MRE and ARFI quantification. Additionally, patients were examined with 2D-SWE. A high correlation is evident for the shear wave speed (SWS) determined with different elastography methods in AATD patients: 2D-SWE/MRE, ARFI quantification/2D-SWE, and ARFI quantification/MRE (R = 0.8587, 0.7425, and 0.6914, respectively; P≤0.0089). Four AATD patients with pathologically increased SWS were consistently identified with all three methods-MRE, ARFI quantification, and 2D-SWE. The high correlation and consistent identification of patients with pathologically increased SWS using MRE, ARFI quantification, and 2D-SWE suggest that elastography has the potential to become a suitable imaging tool for the assessment of AATD-related liver fibrosis. These promising results provide motivation for further investigation of non-invasive assessment of AATD-related liver fibrosis using elastography.
An approach to multiscale modelling with graph grammars
Ong, Yongzhi; Streit, Katarína; Henke, Michael; Kurth, Winfried
2014-01-01
Background and Aims Functional–structural plant models (FSPMs) simulate biological processes at different spatial scales. Methods exist for multiscale data representation and modification, but the advantages of using multiple scales in the dynamic aspects of FSPMs remain unclear. Results from multiscale models in various other areas of science that share fundamental modelling issues with FSPMs suggest that potential advantages do exist, and this study therefore aims to introduce an approach to multiscale modelling in FSPMs. Methods A three-part graph data structure and grammar is revisited, and presented with a conceptual framework for multiscale modelling. The framework is used for identifying roles, categorizing and describing scale-to-scale interactions, thus allowing alternative approaches to model development as opposed to correlation-based modelling at a single scale. Reverse information flow (from macro- to micro-scale) is catered for in the framework. The methods are implemented within the programming language XL. Key Results Three example models are implemented using the proposed multiscale graph model and framework. The first illustrates the fundamental usage of the graph data structure and grammar, the second uses probabilistic modelling for organs at the fine scale in order to derive crown growth, and the third combines multiscale plant topology with ozone trends and metabolic network simulations in order to model juvenile beech stands under exposure to a toxic trace gas. Conclusions The graph data structure supports data representation and grammar operations at multiple scales. The results demonstrate that multiscale modelling is a viable method in FSPM and an alternative to correlation-based modelling. Advantages and disadvantages of multiscale modelling are illustrated by comparisons with single-scale implementations, leading to motivations for further research in sensitivity analysis and run-time efficiency for these models. PMID:25134929
Xu, Rong; Li, Li; Wang, QuanQiu
2013-01-01
Motivation: Systems approaches to studying phenotypic relationships among diseases are emerging as an active area of research for both novel disease gene discovery and drug repurposing. Currently, systematic study of disease phenotypic relationships on a phenome-wide scale is limited because large-scale machine-understandable disease–phenotype relationship knowledge bases are often unavailable. Here, we present an automatic approach to extract disease–manifestation (D-M) pairs (one specific type of disease–phenotype relationship) from the wide body of published biomedical literature. Data and Methods: Our method leverages external knowledge and limits the amount of human effort required. For the text corpus, we used 119 085 682 MEDLINE sentences (21 354 075 citations). First, we used D-M pairs from existing biomedical ontologies as prior knowledge to automatically discover D-M–specific syntactic patterns. We then extracted additional pairs from MEDLINE using the learned patterns. Finally, we analysed correlations between disease manifestations and disease-associated genes and drugs to demonstrate the potential of this newly created knowledge base in disease gene discovery and drug repurposing. Results: In total, we extracted 121 359 unique D-M pairs with a high precision of 0.924. Among the extracted pairs, 120 419 (99.2%) have not been captured in existing structured knowledge sources. We have shown that disease manifestations correlate positively with both disease-associated genes and drug treatments. Conclusions: The main contribution of our study is the creation of a large-scale and accurate D-M phenotype relationship knowledge base. This unique knowledge base, when combined with existing phenotypic, genetic and proteomic datasets, can have profound implications in our deeper understanding of disease etiology and in rapid drug repurposing. Availability: http://nlp.case.edu/public/data/DMPatternUMLS/ Contact: rxx@case.edu PMID:23828786
A complex-lamellar description of boundary layer transition
NASA Astrophysics Data System (ADS)
Kolla, Maureen Louise
Flow transition is important, in both practical and phenomenological terms. However, there is currently no method for identifying the spatial locations associated with transition, such as the start and end of intermittency. The concept of flow stability and experimental correlations have been used; however, flow stability only identifies the location where disturbances begin to grow in the laminar flow, and experimental correlations can only give approximations, as measuring the start and end of intermittency is difficult. Therefore, the focus of this work is to construct a method to identify the start and end of intermittency, for a natural boundary layer transition and a separated flow transition. We obtain these locations by deriving a complex-lamellar description of the velocity field that exists between a fully laminar and fully turbulent boundary condition. Mathematically, this complex-lamellar decomposition, which is constructed from the classical Darwin-Lighthill-Hawthorne drift function and the transport of enstrophy, describes the flow that exists between the fully laminar Pohlhausen equations and Prandtl's fully turbulent one-seventh power law. We approximate the difference in enstrophy density between the boundary conditions using a power series. The slope of the power series is scaled by using the shape of the universal intermittency distribution within the intermittency region. We solve the complex-lamellar decomposition of the velocity field along with the slope of the difference in enstrophy density function to determine the location of the laminar and turbulent boundary conditions. Then from the difference in enstrophy density function we calculate the start and end of intermittency. We perform this calculation on a natural boundary layer transition over a flat plate for zero pressure gradient flow and for separated shear flow over a separation bubble. We compare these results to existing experimental results and verify the accuracy of our transition model.
Kell, Alison M.; Wargo, Andrew R.; Kurath, Gael
2014-01-01
Viral genotype displacement events are characterized by the replacement of a previously dominant virus genotype by a novel genotype of the same virus species in a given geographic region. We examine here the fitness of three pairs of infectious hematopoietic necrosis virus (IHNV) genotypes involved in three major genotype displacement events in Washington state over the last 30 years to determine whether increased virus fitness correlates with displacement. Fitness was assessed using in vivo assays to measure viral replication in single infection, simultaneous co-infection, and sequential superinfection in the natural host, steelhead trout. In addition, virion stability of each genotype was measured in freshwater and seawater environments at various temperatures. By these methods, we found no correlation between increased viral fitness and displacement in the field. These results suggest that other pressures likely exist in the field with important consequences for IHNV evolution.
Direct modulation of aberrant brain network connectivity through real-time NeuroFeedback.
Ramot, Michal; Kimmich, Sara; Gonzalez-Castillo, Javier; Roopchansingh, Vinai; Popal, Haroon; White, Emily; Gotts, Stephen J; Martin, Alex
2017-09-16
The existence of abnormal connectivity patterns between resting state networks in neuropsychiatric disorders, including Autism Spectrum Disorder (ASD), has been well established. Traditional treatment methods in ASD are limited, and do not address the aberrant network structure. Using real-time fMRI neurofeedback, we directly trained three brain nodes in participants with ASD, in which the aberrant connectivity has been shown to correlate with symptom severity. Desired network connectivity patterns were reinforced in real-time, without participants' awareness of the training taking place. This training regimen produced large, significant long-term changes in correlations at the network level, and whole brain analysis revealed that the greatest changes were focused on the areas being trained. These changes were not found in the control group. Moreover, changes in ASD resting state connectivity following the training were correlated to changes in behavior, suggesting that neurofeedback can be used to directly alter complex, clinically relevant network connectivity patterns.
NASA Astrophysics Data System (ADS)
Zhuang, Xiaoyang; Wei, Yu; Ma, Feng
2015-07-01
In this paper, the multifractality and efficiency degrees of ten important Chinese sectoral indices are evaluated using MF-DFA and generalized Hurst exponents. The study also scrutinizes the dynamics of the efficiency of the Chinese sectoral stock market with a rolling window approach. The overall empirical findings reveal that all the sectoral indices of the Chinese stock market exhibit different degrees of multifractality. The different efficiency measures agree that the 300 Materials index is the least efficient, although they differ slightly on the most efficient one. The 300 Information Technology, 300 Telecommunication Services and 300 Health Care indices are comparatively efficient. We also investigate the cross-correlations between the ten sectoral indices and the WTI crude oil price based on Multifractal Detrended Cross-correlation Analysis. Finally, some relevant discussions and implications of the empirical results are presented.
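As a rough illustration of the kind of analysis described above, the following sketch computes generalized Hurst exponents h(q) with a compact MF-DFA implementation and derives a multifractality width and a simple inefficiency measure (distance of h(2) from 0.5). The scales, q values and synthetic returns are illustrative placeholders, not the paper's settings, and the implementation uses only non-overlapping forward segments for brevity.

```python
import numpy as np

def mfdfa(x, scales, qs, order=1):
    """Multifractal detrended fluctuation analysis (MF-DFA).
    Returns the generalized Hurst exponents h(q), estimated from the
    slopes of log F_q(s) versus log s."""
    x = np.asarray(x, dtype=float)
    profile = np.cumsum(x - x.mean())            # integrated (profile) series
    Fq = np.zeros((len(qs), len(scales)))
    for j, s in enumerate(scales):
        n_seg = len(profile) // s
        F2 = []                                  # local detrended variances
        for v in range(n_seg):
            seg = profile[v * s:(v + 1) * s]
            t = np.arange(s)
            coeffs = np.polyfit(t, seg, order)   # local polynomial trend
            F2.append(np.mean((seg - np.polyval(coeffs, t)) ** 2))
        F2 = np.array(F2)
        for i, q in enumerate(qs):
            if abs(q) < 1e-12:                   # q -> 0 limit uses a log average
                Fq[i, j] = np.exp(0.5 * np.mean(np.log(F2)))
            else:
                Fq[i, j] = np.mean(F2 ** (q / 2.0)) ** (1.0 / q)
    hq = [np.polyfit(np.log(scales), np.log(Fq[i]), 1)[0] for i in range(len(qs))]
    return np.array(hq)

# Illustrative use on synthetic "returns".
rng = np.random.default_rng(0)
returns = rng.standard_normal(4096)
qs = np.array([-4.0, -2.0, 0.0, 2.0, 4.0])
scales = np.array([16, 32, 64, 128, 256])
hq = mfdfa(returns, scales, qs)
delta_h = hq.max() - hq.min()                    # width of h(q): degree of multifractality
inefficiency = abs(hq[list(qs).index(2.0)] - 0.5)  # deviation of h(2) from the efficient value 0.5
```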
Strong correlation between early stage atherosclerosis and electromechanical coupling of aorta
NASA Astrophysics Data System (ADS)
Liu, X. Y.; Yan, F.; Niu, L. L.; Chen, Q. N.; Zheng, H. R.; Li, J. Y.
2016-03-01
Atherosclerosis is the underlying cause of cardiovascular diseases that are responsible for many deaths in the world, and the early diagnosis of atherosclerosis is highly desirable. The existing imaging methods, however, are not capable of detecting the early stage of atherosclerosis development due to their limited spatial resolution. Using piezoresponse force microscopy (PFM), we show that the piezoelectric response of an aortic wall increases as atherosclerosis advances, while the stiffness of the aorta shows a less evident correlation with atherosclerosis. Furthermore, we show that there is strong correlation between the coercive electric field necessary to switch the polarity of the artery and the development of atherosclerosis. Thus by measuring the electromechanical coupling of the aortic wall, it is possible to probe atherosclerosis at the early stage of its development, not only improving the spatial resolution by orders of magnitude, but also providing comprehensive quantitative information on the biomechanical properties of the artery.
Modelling small-area inequality in premature mortality using years of life lost rates
NASA Astrophysics Data System (ADS)
Congdon, Peter
2013-04-01
Analysis of premature mortality variations via standardized expected years of life lost (SEYLL) measures raises questions about suitable modelling for mortality data, especially when developing SEYLL profiles for areas with small populations. Existing fixed effects estimation methods take no account of correlations in mortality levels over ages, causes, socio-ethnic groups or areas. They also do not specify an underlying data generating process, or a likelihood model that can include trends or correlations, and are likely to produce unstable estimates for small areas. An alternative strategy involves a fully specified data generation process, and a random effects model which "borrows strength" to produce stable SEYLL estimates, allowing for correlations between ages, areas and socio-ethnic groups. The resulting modelling strategy is applied to gender-specific differences in SEYLL rates in small areas in NE London, and to cause-specific mortality for leading causes of premature mortality in these areas.
NASA Astrophysics Data System (ADS)
Acquisti, Claudia; Allegrini, Paolo; Bogani, Patrizia; Buiatti, Marcello; Catanese, Elena; Fronzoni, Leone; Grigolini, Paolo; Mersi, Giuseppe; Palatella, Luigi
2004-04-01
We investigate a possible way to connect the presence of Low-Complexity Sequences (LCS) in DNA genomes and the nonstationary properties of base correlations. Under the hypothesis that these variations signal a change in the DNA function, we use a new technique, called the Non-Stationarity Entropic Index (NSEI) method, and we prove that this technique is an efficient way to detect functional changes with respect to a random baseline. The remarkable aspect is that NSEI requires no training data or fitting parameters, the only arbitrariness being the choice of a marker in the sequence. We make this choice on the basis of biological information about LCS distributions in genomes. We show that a correlation exists between the change in the amount of LCS and the ratio of long- to short-range correlations.
CORRELATION PURSUIT: FORWARD STEPWISE VARIABLE SELECTION FOR INDEX MODELS
Zhong, Wenxuan; Zhang, Tingting; Zhu, Yu; Liu, Jun S.
2012-01-01
In this article, a stepwise procedure, correlation pursuit (COP), is developed for variable selection under the sufficient dimension reduction framework, in which the response variable Y is influenced by the predictors X1, X2, …, Xp through an unknown function of a few linear combinations of them. Unlike linear stepwise regression, COP does not impose a special form of relationship (such as linear) between the response variable and the predictor variables. The COP procedure selects variables that attain the maximum correlation between the transformed response and the linear combination of the variables. Various asymptotic properties of the COP procedure are established, and in particular, its variable selection performance under a diverging number of predictors and sample size has been investigated. The excellent empirical performance of the COP procedure in comparison with existing methods is demonstrated by both extensive simulation studies and a real example in functional genomics. PMID:23243388
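The actual COP procedure works with a transformed response under sufficient dimension reduction and has formal inclusion and stopping criteria; the hedged sketch below only illustrates the underlying greedy idea of adding, at each step, the predictor that most increases the squared correlation between the response and the fitted linear combination of the selected variables. All function names, thresholds and the toy data are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def forward_correlation_pursuit(X, y, max_vars=5, tol=1e-3):
    """Greedy forward selection: at each step add the predictor that most
    increases the squared correlation between y and the least-squares fit
    on the currently selected variables. A simplified stand-in for COP,
    which in the paper uses a transformed response rather than y itself."""
    n, p = X.shape
    selected, r2_prev = [], 0.0
    for _ in range(max_vars):
        best_j, best_r2 = None, r2_prev
        for j in range(p):
            if j in selected:
                continue
            A = np.column_stack([np.ones(n), X[:, selected + [j]]])
            yhat = A @ np.linalg.lstsq(A, y, rcond=None)[0]
            r2 = np.corrcoef(y, yhat)[0, 1] ** 2
            if r2 > best_r2:
                best_j, best_r2 = j, r2
        if best_j is None or best_r2 - r2_prev < tol:
            break                              # no variable improves the correlation enough
        selected.append(best_j)
        r2_prev = best_r2
    return selected, r2_prev

# Toy example: the response depends on predictors 3 and 7 only.
rng = np.random.default_rng(1)
X = rng.standard_normal((500, 20))
y = X[:, 3] - 2.0 * X[:, 7] + 0.3 * rng.standard_normal(500)
vars_chosen, r2 = forward_correlation_pursuit(X, y)
```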
NASA Technical Reports Server (NTRS)
Kautz, Harold E.; Bhatt, Ramakrishna T.
1991-01-01
A technique for measuring ultrasonic velocity was used to monitor changes that occur during processing and heat treatment of a SiC/RBSM composite. Results indicated that correlations exist between the ultrasonic velocity data and elastic modulus and interfacial shear strength data determined from mechanical tests. The ultrasonic velocity data can differentiate strength. The advantages and potential of this nondestructive evaluation method for fiber reinforced ceramic matrix composite applications are discussed.
Quantifying Differential Privacy under Temporal Correlations.
Cao, Yang; Yoshikawa, Masatoshi; Xiao, Yonghui; Xiong, Li
2017-04-01
Differential Privacy (DP) has received increasing attention as a rigorous privacy framework. Many existing studies employ traditional DP mechanisms (e.g., the Laplace mechanism) as primitives, which assume that the data are independent, or that adversaries do not have knowledge of the data correlations. However, continuously generated data in the real world tend to be temporally correlated, and such correlations can be acquired by adversaries. In this paper, we investigate the potential privacy loss of a traditional DP mechanism under temporal correlations in the context of continuous data release. First, we model the temporal correlations using a Markov model and analyze the privacy leakage of a DP mechanism when adversaries have knowledge of such temporal correlations. Our analysis reveals that the privacy loss of a DP mechanism may accumulate and increase over time; we call this temporal privacy leakage. Second, to measure such privacy loss, we design an efficient algorithm for calculating it in polynomial time. Although the temporal privacy leakage may increase over time, we also show that its supremum may exist in some cases. Third, to bound the privacy loss, we propose mechanisms that convert any existing DP mechanism into one against temporal privacy leakage. Experiments with synthetic data confirm that our approach is efficient and effective.
Litsas, G; Ari-Demirkaya, A
2010-12-01
The purpose of this study was to predict the skeletal maturation status based on the assessment of cervical vertebrae from lateral cephalometric radiographs and to compare these findings with the skeletal maturity of the same individuals judged from the hand-wrist radiographs. Lateral cephalometric and left hand-wrist radiographs of 393 Caucasian children from 8 to 18 years old were evaluated. On the hand-wrist radiographs the classification of Bjork [1972] and Grave and Brown [1976] was used to assess skeletal maturity (HWSS). Cervical vertebral maturation was also evaluated on lateral cephalometric radiographs using the improved CVMS method described by Baccetti, Franchi, and McNamara [2002]. These methods were correlated using the chi-square test. The chi-square test showed that skeletal maturational values obtained by the CVMS method were significantly correlated with the skeletal values obtained from the hand-wrist analysis for both genders (p<0.05). However, gender differences exist in the CVMS method regarding the timing of the peak of the growth spurt. The results of this study show that the CVMS method was compatible with a commonly used hand-wrist analysis method. The lateral cephalometric radiograph belonging to the standard set of records would be sufficient to evaluate skeletal maturity.
Predicting protein contact map using evolutionary and physical constraints by integer programming.
Wang, Zhiyong; Xu, Jinbo
2013-07-01
Protein contact map describes the pairwise spatial and functional relationship of residues in a protein and contains key information for protein 3D structure prediction. Although studied extensively, it remains challenging to predict contact map using only sequence information. Most existing methods predict the contact map matrix element-by-element, ignoring correlation among contacts and physical feasibility of the whole-contact map. A couple of recent methods predict contact map by using mutual information, taking into consideration contact correlation and enforcing a sparsity restraint, but these methods demand a very large number of sequence homologs for the protein under consideration, and the resultant contact map may still be physically infeasible. This article presents a novel method PhyCMAP for contact map prediction, integrating both evolutionary and physical restraints by machine learning and integer linear programming. The evolutionary restraints are much more informative than mutual information, and the physical restraints specify more concrete relationships among contacts than the sparsity restraint. As such, our method greatly reduces the solution space of the contact map matrix and, thus, significantly improves prediction accuracy. Experimental results confirm that PhyCMAP outperforms currently popular methods no matter how many sequence homologs are available for the protein under consideration. http://raptorx.uchicago.edu.
Development of stock correlation networks using mutual information and financial big data.
Guo, Xue; Zhang, Hu; Tian, Tianhai
2018-01-01
Stock correlation networks use stock price data to explore the relationship between different stocks listed in the stock market. Currently this relationship is dominantly measured by the Pearson correlation coefficient. However, financial data suggest that nonlinear relationships may exist in the stock prices of different shares. To address this issue, this work uses mutual information to characterize the nonlinear relationship between stocks. Using 280 stocks traded at the Shanghai Stocks Exchange in China during the period of 2014-2016, we first compare the effectiveness of the correlation coefficient and mutual information for measuring stock relationships. Based on these two measures, we then develop two stock networks using the Minimum Spanning Tree method and study the topological properties of these networks, including degree, path length and the power-law distribution. The relationship network based on mutual information has a better distribution of the degree and larger value of the power-law distribution than those using the correlation coefficient. Numerical results show that mutual information is a more effective approach than the correlation coefficient to measure the stock relationship in a stock market that may undergo large fluctuations of stock prices.
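A minimal sketch of the pipeline described above, assuming a plug-in histogram estimator of mutual information and a simple monotone similarity-to-distance transform; neither of these choices is necessarily the paper's exact formulation, and the synthetic returns stand in for the Shanghai data.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def mutual_information(x, y, bins=16):
    """Plug-in mutual information estimate (in nats) from a 2D histogram."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    mask = pxy > 0
    return np.sum(pxy[mask] * np.log(pxy[mask] / (px @ py)[mask]))

def mst_from_returns(returns):
    """Build a minimum spanning tree over stocks, using a distance that
    decreases as the mutual information between return series grows."""
    n = returns.shape[1]
    mi = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            mi[i, j] = mi[j, i] = mutual_information(returns[:, i], returns[:, j])
    dist = 1.0 / (1.0 + mi)             # one simple similarity-to-distance map
    np.fill_diagonal(dist, 0.0)
    return minimum_spanning_tree(dist)  # sparse matrix holding the tree edges

# Toy usage: daily returns for a handful of synthetic stocks sharing a common factor.
rng = np.random.default_rng(2)
common = rng.standard_normal((750, 1))
returns = 0.6 * common + 0.4 * rng.standard_normal((750, 8))
tree = mst_from_returns(returns)
edges = np.transpose(tree.nonzero())    # (i, j) pairs forming the tree
```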
Anti-correlated cortical networks of intrinsic connectivity in the rat brain.
Schwarz, Adam J; Gass, Natalia; Sartorius, Alexander; Risterucci, Celine; Spedding, Michael; Schenker, Esther; Meyer-Lindenberg, Andreas; Weber-Fahr, Wolfgang
2013-01-01
In humans, resting-state blood oxygen level-dependent (BOLD) signals in the default mode network (DMN) are temporally anti-correlated with those from a lateral cortical network involving the frontal eye fields, secondary somatosensory and posterior insular cortices. Here, we demonstrate the existence of an analogous lateral cortical network in the rat brain, extending laterally from anterior secondary sensorimotor regions to the insular cortex and exhibiting low-frequency BOLD fluctuations that are temporally anti-correlated with a midline "DMN-like" network comprising posterior/anterior cingulate and prefrontal cortices. The primary nexus for this anti-correlation relationship was the anterior secondary motor cortex, close to regions that have been identified with frontal eye fields in the rat brain. The anti-correlation relationship was corroborated after global signal removal, underscoring this finding as a robust property of the functional connectivity signature in the rat brain. These anti-correlated networks demonstrate strong anatomical homology to networks identified in human and monkey connectivity studies, extend the known preserved functional connectivity relationships between rodent and primates, and support the use of resting-state functional magnetic resonance imaging as a translational imaging method between rat models and humans.
The Correlation between Angle Kappa and Ocular Biometry in Koreans
Choi, Se Rang
2013-01-01
Purpose To investigate normative angle kappa data and to examine whether correlations exist between angle kappa and ocular biometric measurements (e.g., refractive error, axial length) and demographic features in Koreans. Methods Data from 436 eyes (213 males and 223 females) were analyzed in this study. The angle kappa was measured using Orbscan II. We used ocular biometric measurements, including refractive spherical equivalent, interpupillary distance and axial length, to investigate the correlations between angle kappa and ocular biometry. The IOL Master ver. 5.02 was used to obtain axial length. Results The mean patient age was 57.5 ± 12.0 years in males and 59.4 ± 12.4 years in females (p = 0.11). Angle kappa averaged 4.70 ± 2.70 degrees in men and 4.89 ± 2.14 degrees in women (p = 0.48). Axial length and spherical equivalent were correlated with angle kappa (r = -0.342 and r = 0.197, respectively). Axial length and spherical equivalent were themselves negatively correlated (r = -0.540, p < 0.001). Conclusions Angle kappa increased with spherical equivalent and age. Thus, careful manipulation should be considered in older and hyperopic patients when planning refractive or strabismus surgery. PMID:24311927
NASA Astrophysics Data System (ADS)
Hincks, Adam D.; Hajian, Amir; Addison, Graeme E.
2013-05-01
We cross-correlate the 100 μm Improved Reprocessing of the IRAS Survey (IRIS) map and galaxy clusters at 0.1 < z < 0.3 in the maxBCG catalogue taken from the Sloan Digital Sky Survey, measuring an angular cross-power spectrum over multipole moments 150 < l < 3000 at a total significance of over 40σ. The cross-spectrum, which arises from the spatial correlation between unresolved dusty galaxies that make up the cosmic infrared background (CIB) in the IRIS map and the galaxy clusters, is well-fit by a single power law with an index of -1.28±0.12, similar to the clustering of unresolved galaxies from cross-correlating far-infrared and submillimetre maps at longer wavelengths. Using a recent, phenomenological model for the spectral and clustering properties of the IRIS galaxies, we constrain the large-scale bias of the maxBCG clusters to be 2.6±1.4, consistent with existing analyses of the real-space cluster correlation function. The success of our method suggests that future CIB-optical cross-correlations using Planck and Herschel data will significantly improve our understanding of the clustering and redshift distribution of the faint CIB sources.
Microarray missing data imputation based on a set theoretic framework and biological knowledge.
Gan, Xiangchao; Liew, Alan Wee-Chung; Yan, Hong
2006-01-01
Gene expressions measured using microarrays usually suffer from the missing value problem. However, in many data analysis methods, a complete data matrix is required. Although existing missing value imputation algorithms have shown good performance in dealing with missing values, they also have their limitations. For example, some algorithms perform well only when strong local correlation exists in the data, while some provide the best estimate when the data are dominated by global structure. In addition, these algorithms do not take into account any biological constraints in their imputation. In this paper, we propose a set theoretic framework based on projection onto convex sets (POCS) for missing data imputation. POCS allows us to incorporate different types of a priori knowledge about missing values into the estimation process. The main idea of POCS is to formulate every piece of prior knowledge into a corresponding convex set and then use a convergence-guaranteed iterative procedure to obtain a solution in the intersection of all these sets. In this work, we design several convex sets, taking into consideration the biological characteristics of the data: the first set mainly exploits the local correlation structure among genes in microarray data, while the second set captures the global correlation structure among arrays. The third set (actually a series of sets) exploits the biological phenomenon of synchronization loss in microarray experiments. In cyclic systems, synchronization loss is a common phenomenon and we construct a series of sets based on this phenomenon for our POCS imputation algorithm. Experiments show that our algorithm can achieve a significant reduction of error compared to the KNNimpute, SVDimpute and LSimpute methods.
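The published method builds its convex sets from local and global correlation structure and from synchronization loss; the minimal sketch below only illustrates the POCS iteration machinery itself, using two deliberately simple convex sets (data consistency on observed entries and a box constraint on expression values). The set choices, bounds and data here are assumptions for illustration, not the paper's sets.

```python
import numpy as np

def pocs_impute(Y, observed, lo, hi, n_iter=100):
    """Minimal POCS-style imputation: alternately project the estimate onto
    (1) the convex set of matrices agreeing with the observed entries and
    (2) the convex box of matrices whose entries lie in [lo, hi]."""
    fill = np.mean(Y[observed])                 # crude initial fill for missing entries
    X = np.where(observed, Y, fill)
    for _ in range(n_iter):
        X = np.clip(X, lo, hi)                  # projection onto the box constraint
        X[observed] = Y[observed]               # projection onto the data-consistency set
    return X

# Toy usage: a small expression matrix with roughly 20% missing values.
rng = np.random.default_rng(3)
Y = rng.normal(0.0, 1.0, size=(50, 10))
observed = rng.random(Y.shape) > 0.2
Y_missing = np.where(observed, Y, np.nan)
imputed = pocs_impute(Y_missing, observed, lo=-3.0, hi=3.0)
```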
Unconventional Tight Reservoirs Characterization with Nuclear Magnetic Resonance
NASA Astrophysics Data System (ADS)
Santiago, C. J. S.; Solatpour, R.; Kantzas, A.
2017-12-01
The increase in tight reservoir exploitation projects has produced many papers each year on new, modern, and modified methods and techniques for estimating the characteristics of these reservoirs. The most ambiguous of all basic reservoir property estimations deals with permeability. One of the logging methods that is advertised to predict permeability but is always met with skepticism is Nuclear Magnetic Resonance (NMR). The ability of NMR to differentiate between bound and movable fluids and to provide porosity has increased its capability as a permeability prediction technique. This led to a multitude of publications and motivated a review paper on this subject by Babadagli et al. (2002). The first part of this presentation is dedicated to an extensive review of the existing correlation models for NMR-based estimates of tight reservoir permeability, updating this topic. In the second part, the collected literature information is used to analyze new experimental data. The data are collected from tight reservoirs in Canada, the Middle East, and China. A case study is created to apply NMR measurements in the prediction of reservoir characterization parameters such as porosity, permeability, cut-offs, irreducible saturations, etc. Moreover, permeability correlations are utilized to predict permeability. NMR experiments were conducted on water-saturated cores. NMR T2 relaxation times were measured. NMR porosity, the geometric mean relaxation time (T2gm), Irreducible Bulk Volume (BVI), and Movable Bulk Volume (BVM) were calculated. The correlation coefficients were computed based on multiple regression analysis. Results are cross plots of NMR permeability versus the independently measured Klinkenberg-corrected permeability. More complicated equations are discussed. Error analysis of the models is presented and compared. This presentation is beneficial in understanding existing tight reservoir permeability models. The results can be used as a guide for choosing the best permeability estimation model for tight reservoir data.
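One widely used family of NMR permeability correlations in the literature is the SDR-type form k = C·phi^a·T2gm^b, with coefficients obtained by multiple regression in log space against independently measured (Klinkenberg-corrected) core permeability, which matches the regression workflow described above. The sketch below is a hedged illustration of that fitting step only; the constants, exponents and synthetic core data are placeholders, not the study's values or its final model.

```python
import numpy as np

def fit_sdr_correlation(phi, t2gm, k_ref):
    """Fit log10(k) = log10(C) + a*log10(phi) + b*log10(T2gm) by least squares,
    an SDR-type NMR permeability correlation with data-driven exponents."""
    A = np.column_stack([np.ones_like(phi), np.log10(phi), np.log10(t2gm)])
    coef, *_ = np.linalg.lstsq(A, np.log10(k_ref), rcond=None)
    logC, a, b = coef
    k_pred = 10.0 ** (A @ coef)
    r2 = np.corrcoef(np.log10(k_ref), np.log10(k_pred))[0, 1] ** 2
    return (10.0 ** logC, a, b), k_pred, r2

# Synthetic core data: porosity (fraction), T2 geometric mean (ms),
# and Klinkenberg-corrected permeability (mD) with lognormal scatter.
rng = np.random.default_rng(4)
phi = rng.uniform(0.03, 0.12, 40)
t2gm = rng.uniform(1.0, 30.0, 40)
k_ref = 0.05 * phi ** 4 * t2gm ** 2 * 10.0 ** rng.normal(0.0, 0.2, 40)
(C, a, b), k_pred, r2 = fit_sdr_correlation(phi, t2gm, k_ref)
```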
NASA Astrophysics Data System (ADS)
Rak, Rafał; Drożdż, Stanisław; Kwapień, Jarosław; Oświȩcimka, Paweł
2015-11-01
We consider a few quantities that characterize trading on a stock market in a fixed time interval: logarithmic returns, volatility, trading activity (i.e., the number of transactions), and volume traded. We search for the power-law cross-correlations among these quantities aggregated over different time units from 1 min to 10 min. Our study is based on empirical data from the American stock market consisting of tick-by-tick recordings of 31 stocks listed in Dow Jones Industrial Average during the years 2008-2011. Since all the considered quantities except the returns show strong daily patterns related to the variable trading activity in different parts of a day, which are the most evident in the autocorrelation function, we remove these patterns by detrending before we proceed further with our study. We apply the multifractal detrended cross-correlation analysis with sign preserving (MFCCA) and show that the strongest power-law cross-correlations exist between trading activity and volume traded, while the weakest ones exist (or even do not exist) between the returns and the remaining quantities. We also show that the strongest cross-correlations are carried by those parts of the signals that are characterized by large and medium variance. Our observation that the most convincing power-law cross-correlations occur between trading activity and volume traded reveals the existence of strong fractal-like coupling between these quantities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smitsmans, Monique H.P.; Bois, Josien de; Sonke, Jan-Jakob
Purpose: The objectives of this study were to quantify residual interfraction displacement of seminal vesicles (SV) and investigate the efficacy of rotation correction on SV displacement in marker-based prostate image-guided radiotherapy (IGRT). We also determined the effect of marker registration on the measured SV displacement and its impact on margin design. Methods and Materials: SV displacement was determined relative to marker registration by using 296 cone beam computed tomography scans of 13 prostate cancer patients with implanted markers. SV were individually registered in the transverse plane, based on gray-value information. The target registration error (TRE) for the SV due to marker registration inaccuracies was estimated. Correlations between prostate gland rotations and SV displacement and between individual SV displacements were determined. Results: The SV registration success rate was 99%. Displacement amounts of both SVs were comparable. Systematic and random residual SV displacements were 1.6 mm and 2.0 mm in the left-right direction, respectively, and 2.8 mm and 3.1 mm in the anteroposterior (AP) direction, respectively. Rotation correction did not reduce residual SV displacement. Prostate gland rotation around the left-right axis correlated with SV AP displacement (R² = 42%); a correlation existed between both SVs for AP displacement (R² = 62%); considerable correlation existed between random errors of SV displacement and TRE (R² = 34%). Conclusions: Considerable residual SV displacement exists in marker-based IGRT. Rotation correction barely reduced SV displacement; rather, a larger SV displacement was shown relative to the prostate gland that was not captured by the marker position. Marker registration error partly explains SV displacement when correcting for rotations. Correcting for rotations, therefore, is not advisable when SV are part of the target volume. Margin design for SVs should take these uncertainties into account.
On the evaluation of segmentation editing tools
Heckel, Frank; Moltz, Jan H.; Meine, Hans; Geisler, Benjamin; Kießling, Andreas; D’Anastasi, Melvin; dos Santos, Daniel Pinto; Theruvath, Ashok Joseph; Hahn, Horst K.
2014-01-01
Abstract. Efficient segmentation editing tools are important components in the segmentation process, as no automatic methods exist that always generate sufficient results. Evaluating segmentation editing algorithms is challenging, because their quality depends on the user’s subjective impression. So far, no established methods for an objective, comprehensive evaluation of such tools exist and, particularly, intermediate segmentation results are not taken into account. We discuss the evaluation of editing algorithms in the context of tumor segmentation in computed tomography. We propose a rating scheme to qualitatively measure the accuracy and efficiency of editing tools in user studies. In order to objectively summarize the overall quality, we propose two scores based on the subjective rating and the quantified segmentation quality over time. Finally, a simulation-based evaluation approach is discussed, which allows a more reproducible evaluation without the need for human input. This automated evaluation complements user studies, allowing a more convincing evaluation, particularly during development, where frequent user studies are not possible. The proposed methods have been used to evaluate two dedicated editing algorithms on 131 representative tumor segmentations. We show how the comparison of editing algorithms benefits from the proposed methods. Our results also show the correlation of the suggested quality score with the qualitative ratings. PMID:26158063
A Novel Method for Age Estimation in Solar-Type Stars Through GALEX FUV Magnitudes
NASA Astrophysics Data System (ADS)
Ho, Kelly; Subramonian, Arjun; Smith, Graeme; Shouru Shieh
2018-01-01
Utilizing an inverse association known to exist between Galaxy Evolution Explorer (GALEX) far ultraviolet (FUV) magnitudes and the chromospheric activity of F, G, and K dwarfs, we explored a method of age estimation in solar-type stars through GALEX FUV magnitudes. Sample solar-type star data were collected from refereed publications and filtered by B-V and absolute visual magnitude to ensure similarities in temperature and luminosity to the Sun. We determined FUV-B and calculated a residual index Q for all the stars, using the temperature-induced upper bound on FUV-B as the fiducial. Plotting current age estimates for the stars against Q, we discovered a strong and significant association between the variables. By applying a log-linear transformation to the data to produce a strong correlation between Q and loge Age, we confirmed the association between Q and age to be exponential. Thus, least-squares regression was used to generate an exponential model relating Q to age in solar-type stars, which can be used by astronomers. The Q-method of stellar age estimation is simple and more efficient than existing spectroscopic methods and has applications to galactic archaeology and stellar chemical composition analysis.
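The fitting step described above, regressing log(age) on the residual index Q and exponentiating, can be sketched as follows. The calibration values and resulting coefficients here are placeholders for illustration; the actual relation must be derived from the calibrated solar-type sample in the study.

```python
import numpy as np

def fit_q_age_relation(Q, age):
    """Fit age = A * exp(B * Q) by least-squares regression of log(age) on Q,
    following the log-linear transformation described in the abstract."""
    B, logA = np.polyfit(Q, np.log(age), 1)
    return np.exp(logA), B

def estimate_age(Q, A, B):
    """Apply the exponential Q-age relation to a new star's residual index."""
    return A * np.exp(B * Q)

# Placeholder calibration sample (Q residual index vs. adopted ages in Gyr).
Q_cal = np.array([0.2, 0.5, 0.9, 1.3, 1.8, 2.4])
age_cal = np.array([0.6, 1.1, 2.0, 3.4, 5.5, 8.9])
A, B = fit_q_age_relation(Q_cal, age_cal)
age_new = estimate_age(1.0, A, B)      # age estimate for a star with Q = 1.0
```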
Wang, Hong-Qiang; Tsai, Chung-Jui
2013-01-01
With the rapid increase of omics data, correlation analysis has become an indispensable tool for inferring meaningful associations from a large number of observations. Pearson correlation coefficient (PCC) and its variants are widely used for such purposes. However, it remains challenging to test whether an observed association is reliable both statistically and biologically. We present here a new method, CorSig, for statistical inference of correlation significance. CorSig is based on a biology-informed null hypothesis, i.e., testing whether the true PCC (ρ) between two variables is statistically larger than a user-specified PCC cutoff (τ), as opposed to the simple null hypothesis of ρ = 0 in existing methods, i.e., testing whether an association can be declared without a threshold. CorSig incorporates Fisher's Z transformation of the observed PCC (r), which facilitates use of standard techniques for p-value computation and multiple testing corrections. We compared CorSig against two methods: one uses a minimum PCC cutoff while the other (Zhu's procedure) controls correlation strength and statistical significance in two discrete steps. CorSig consistently outperformed these methods in various simulation data scenarios by balancing between false positives and false negatives. When tested on real-world Populus microarray data, CorSig effectively identified co-expressed genes in the flavonoid pathway, and discriminated between closely related gene family members for their differential association with flavonoid and lignin pathways. The p-values obtained by CorSig can be used as a stand-alone parameter for stratification of co-expressed genes according to their correlation strength in lieu of an arbitrary cutoff. CorSig requires one single tunable parameter, and can be readily extended to other correlation measures. Thus, CorSig should be useful for a wide range of applications, particularly for network analysis of high-dimensional genomic data. A web server for CorSig is provided at http://202.127.200.1:8080/probeWeb. R code for CorSig is freely available for non-commercial use at http://aspendb.uga.edu/downloads.
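The core test described above, a one-sided test of H0: rho <= tau against rho > tau based on Fisher's Z transformation of the observed PCC, can be sketched as follows. The Benjamini-Hochberg adjustment shown is one common multiple-testing correction and is not necessarily the paper's exact choice; the numbers in the example are illustrative.

```python
import numpy as np
from scipy.stats import norm

def corsig_pvalue(r, n, tau):
    """One-sided p-value for H0: rho <= tau vs. H1: rho > tau, using Fisher's
    Z transform of the observed Pearson correlation r from n samples
    (approximate standard error 1/sqrt(n - 3))."""
    z_r, z_tau = np.arctanh(r), np.arctanh(tau)
    z_stat = (z_r - z_tau) * np.sqrt(n - 3)
    return norm.sf(z_stat)

def benjamini_hochberg(pvals):
    """Benjamini-Hochberg adjusted p-values (one standard FDR correction)."""
    p = np.asarray(pvals, dtype=float)
    order = np.argsort(p)
    ranked = p[order] * len(p) / (np.arange(len(p)) + 1)
    ranked = np.minimum.accumulate(ranked[::-1])[::-1]
    adjusted = np.empty_like(ranked)
    adjusted[order] = np.clip(ranked, 0, 1)
    return adjusted

# Example: is an observed r = 0.62 from 40 arrays "stronger than tau = 0.5"?
p = corsig_pvalue(r=0.62, n=40, tau=0.5)
p_adj = benjamini_hochberg([p, 0.003, 0.2, 0.04])   # correct a small batch of tests
```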
Spatial Copula Model for Imputing Traffic Flow Data from Remote Microwave Sensors
Ma, Xiaolei; Du, Bowen; Yu, Bin
2017-01-01
Issues of missing data have become increasingly serious with the rapid increase in usage of traffic sensors. Analyses of the Beijing ring expressway have shown that up to 50% of microwave sensors have missing values. The imputation of missing traffic data urgently needs to be addressed, although a precise solution cannot be easily achieved due to the significant number of missing portions. In this study, copula-based models are proposed for the spatial interpolation of traffic flow from remote traffic microwave sensors. Most existing interpolation methods rely only on covariance functions to depict spatial correlation and are unsuitable for coping with anomalies due to the Gaussian assumption. Copula theory overcomes this issue and provides a connection between the correlation function and the marginal distribution function of traffic flow. To validate copula-based models, a comparison with three kriging methods is conducted. Results indicate that copula-based models outperform kriging methods, especially on roads with irregular traffic patterns. Copula-based models demonstrate significant potential to impute missing data in large-scale transportation networks. PMID:28934164
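A minimal bivariate Gaussian-copula sketch of the imputation idea: transform each sensor's flows to Gaussian scores through their empirical margins, estimate the latent correlation, take the conditional mean in score space, and map back through the target's empirical quantiles. This is only an illustration of the copula mechanism; the paper compares several copula families against kriging and its exact estimation procedure may differ, and all names and data below are placeholders.

```python
import numpy as np
from scipy.stats import norm

def to_gaussian_scores(x):
    """Map observations to standard-normal scores via their empirical ranks
    (the probability-integral transform used by copula models)."""
    ranks = np.argsort(np.argsort(x)) + 1.0
    return norm.ppf(ranks / (len(x) + 1.0))

def copula_impute(target_obs, neighbor_obs, neighbor_at_missing):
    """Impute missing target flows from a neighboring sensor under a
    bivariate Gaussian copula fitted on the jointly observed period."""
    zt = to_gaussian_scores(target_obs)
    zn = to_gaussian_scores(neighbor_obs)
    rho = np.corrcoef(zt, zn)[0, 1]                    # latent (copula) correlation
    # Neighbor values at the missing time stamps -> Gaussian scores via the
    # neighbor's empirical distribution from the observed period.
    u_new = np.searchsorted(np.sort(neighbor_obs), neighbor_at_missing) / (len(neighbor_obs) + 1.0)
    z_new = norm.ppf(np.clip(u_new, 1e-3, 1 - 1e-3))
    z_cond = rho * z_new                               # conditional mean of the target score
    return np.quantile(target_obs, norm.cdf(z_cond))   # back-transform to flow units

# Toy usage with two correlated "sensors" (first 400 samples observed jointly).
rng = np.random.default_rng(5)
neighbor = rng.gamma(4.0, 50.0, 500)
target = 0.8 * neighbor + rng.gamma(2.0, 20.0, 500)
imputed = copula_impute(target[:400], neighbor[:400], neighbor[400:])
```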
NASA Astrophysics Data System (ADS)
Brückner, Charlotte; Engels, Bernd
2017-01-01
Vertical and adiabatic singlet and triplet excitation energies of molecular p-type semiconductors calculated with various DFT functionals and wave-function based approaches are benchmarked against MS-CASPT2/cc-pVTZ reference values. A special focus lies on the singlet-triplet gaps that are very important in the process of singlet fission. Singlet fission has the potential to boost device efficiencies of organic solar cells, but the scope of existing singlet-fission compounds is still limited. A computational prescreening of candidate molecules could enlarge it; yet it requires efficient methods accurately predicting singlet and triplet excitation energies. Different DFT formulations (Tamm-Dancoff approximation, linear response time-dependent DFT, Δ-SCF) and spin scaling schemes along with several ab initio methods (CC2, ADC(2)/MP2, CIS(D), CIS) are evaluated. While wave-function based methods yield rather reliable singlet-triplet gaps, many DFT functionals are shown to systematically underestimate triplet excitation energies. To gain insight, the impact of exact exchange and correlation is in detail addressed.
Accurate diagnosis of thyroid follicular lesions from nuclear morphology using supervised learning.
Ozolek, John A; Tosun, Akif Burak; Wang, Wei; Chen, Cheng; Kolouri, Soheil; Basu, Saurav; Huang, Hu; Rohde, Gustavo K
2014-07-01
Follicular lesions of the thyroid remain significant diagnostic challenges in surgical pathology and cytology. The diagnosis often requires considerable resources and ancillary tests including immunohistochemistry, molecular studies, and expert consultation. Visual analyses of nuclear morphological features, generally speaking, have not been helpful in distinguishing this group of lesions. Here we describe a method for distinguishing between follicular lesions of the thyroid based on nuclear morphology. The method utilizes an optimal transport-based linear embedding for segmented nuclei, together with an adaptation of existing classification methods. We show that the method outputs assignments (classification results) that are nearly perfectly correlated with the clinical diagnosis for several lesion types, utilizing a database of 94 patients in total. Experimental comparisons also show the new method can significantly outperform standard numerical feature-type methods in terms of agreement with the clinical diagnosis gold standard. In addition, the new method could potentially be used to derive insights into biologically meaningful nuclear morphology differences in these lesions. Our methods could be incorporated into a tool for pathologists to aid in distinguishing between follicular lesions of the thyroid. In addition, these results could potentially provide nuclear morphological correlates of biological behavior and reduce health care costs by decreasing histotechnician and pathologist time and obviating the need for ancillary testing. Copyright © 2014 Elsevier B.V. All rights reserved.
Evaluation of the TEOM method for measurement of ambient particulate mass in urban areas.
Allen, G; Sioutas, C; Koutrakis, P; Reiss, R; Lurmann, F W; Roberts, P T
1997-06-01
Increased interest in the health effects of ambient particulate mass (PM) has focused attention on the evaluation of existing mass measurement methodologies and the definition of PM in ambient air. The Rupprecht and Patashnick Tapered Element Oscillating MicroBalance (TEOM) method for PM is compared with time-integrated gravimetric (manual) PM methods in large urban areas during different seasons. Comparisons are conducted for both PM10 and PM2.5 concentrations. In urban areas, a substantial fraction of ambient PM can be semi-volatile material. A larger fraction of this component of PM10 may be lost from the heated TEOM filter than from the Federal Reference Method (FRM). The observed relationship between TEOM and FRM methods varied widely among sites and seasons. In East Coast urban areas during the summer, the methods were highly correlated with good agreement. In the winter, correlation was somewhat lower, with TEOM PM concentrations generally lower than the FRM. Rubidoux, CA, and two Mexican sites (Tlalnepantla and Merced) had the highest levels of PM10 and the largest difference between TEOM and manual methods. PM2.5 data from collocation of 24-hour manual samples with the TEOM are also presented. As most of the semi-volatile PM is in the fine fraction, differences between these methods are larger for PM2.5 than for PM10.
Self-calibrated correlation imaging with k-space variant correlation functions.
Li, Yu; Edalati, Masoud; Du, Xingfu; Wang, Hui; Cao, Jie J
2018-03-01
Correlation imaging is a previously developed high-speed MRI framework that converts parallel imaging reconstruction into the estimate of correlation functions. The presented work aims to demonstrate this framework can provide a speed gain over parallel imaging by estimating k-space variant correlation functions. Because of Fourier encoding with gradients, outer k-space data contain higher spatial-frequency image components arising primarily from tissue boundaries. As a result of tissue-boundary sparsity in the human anatomy, neighboring k-space data correlation varies from the central to the outer k-space. By estimating k-space variant correlation functions with an iterative self-calibration method, correlation imaging can benefit from neighboring k-space data correlation associated with both coil sensitivity encoding and tissue-boundary sparsity, thereby providing a speed gain over parallel imaging that relies only on coil sensitivity encoding. This new approach is investigated in brain imaging and free-breathing neonatal cardiac imaging. Correlation imaging performs better than existing parallel imaging techniques in simulated brain imaging acceleration experiments. The higher speed enables real-time data acquisition for neonatal cardiac imaging in which physiological motion is fast and non-periodic. With k-space variant correlation functions, correlation imaging gives a higher speed than parallel imaging and offers the potential to image physiological motion in real-time. Magn Reson Med 79:1483-1494, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
Hierarchical detection of red lesions in retinal images by multiscale correlation filtering
NASA Astrophysics Data System (ADS)
Zhang, Bob; Wu, Xiangqian; You, Jane; Li, Qin; Karray, Fakhri
2009-02-01
This paper presents an approach to the computer-aided diagnosis (CAD) of diabetic retinopathy (DR), a common and severe complication of long-term diabetes which damages the retina and causes blindness. Since red lesions are regarded as the first signs of DR, there has been extensive research on effective detection and localization of these abnormalities in retinal images. In contrast to existing algorithms, a new approach based on Multiscale Correlation Filtering (MSCF) and dynamic thresholding is developed. It consists of two levels, Red Lesion Candidate Detection (coarse level) and True Red Lesion Detection (fine level). The approach was evaluated using data from the Retinopathy On-line Challenge (ROC) competition website, and we conclude that our method is effective and efficient.
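A hedged sketch of the coarse-level idea only: filter the image at several scales (here with Gaussian kernels, which for symmetric templates is equivalent to correlation with a Gaussian-shaped lesion model) and keep the maximum per-pixel response as a candidate map, then threshold it. The fixed quantile threshold below stands in for the paper's dynamic thresholding, and the kernels, scales and toy image are assumptions, not the published parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_correlation_map(image, sigmas=(1.0, 2.0, 3.0, 4.0)):
    """Coarse-level candidate detection: Gaussian filtering at several scales
    (a proxy for correlating with Gaussian lesion templates), keeping the
    maximum response per pixel."""
    responses = np.stack([gaussian_filter(image, s) for s in sigmas])
    return responses.max(axis=0)

def candidate_mask(image, quantile=0.995):
    """Threshold the multiscale response map to obtain red-lesion candidates;
    the quantile here is a stand-in for the paper's dynamic thresholding."""
    resp = multiscale_correlation_map(image)
    return resp >= np.quantile(resp, quantile)

# Toy usage on an inverted green-channel array (dark red lesions -> bright blobs).
rng = np.random.default_rng(6)
img = rng.random((256, 256)) * 0.1
img[100:104, 120:124] += 1.0            # a small synthetic "lesion"
mask = candidate_mask(img)
```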
Teman, Carolin J.; Wilson, Andrew R.; Perkins, Sherrie L.; Hickman, Kimberly; Prchal, Josef T.; Salama, Mohamed E.
2010-01-01
Evaluation of bone marrow fibrosis and osteosclerosis in myeloproliferative neoplasms (MPN) is subject to interobserver inconsistency. Performance data for currently utilized fibrosis grading systems are lacking, and classification scales for osteosclerosis do not exist. Digital imaging can serve as a quantification method for fibrosis and osteosclerosis. We used digital imaging techniques for trabecular area assessment and reticulin-fiber quantification. Patients with all Philadelphia negative MPN subtypes had higher trabecular volume than controls (p ≤0.0015). Results suggest that the degree of osteosclerosis helps differentiate primary myelofibrosis from other MPN. Numerical quantification of fibrosis highly correlated with subjective scores, and interobserver correlation was satisfactory. Digital imaging provides accurate quantification for osteosclerosis and fibrosis. PMID:20122729
Joint temporal density measurements for two-photon state characterization.
Kuzucu, Onur; Wong, Franco N C; Kurimura, Sunao; Tovstonog, Sergey
2008-10-10
We demonstrate a technique for characterizing two-photon quantum states based on joint temporal correlation measurements using time-resolved single-photon detection by femtosecond up-conversion. We measure for the first time the joint temporal density of a two-photon entangled state, showing clearly the time anticorrelation of the coincident-frequency entangled photon pair generated by ultrafast spontaneous parametric down-conversion under extended phase-matching conditions. The new technique enables us to manipulate the frequency entanglement by varying the down-conversion pump bandwidth to produce a nearly unentangled two-photon state that is expected to yield a heralded single-photon state with a purity of 0.88. The time-domain correlation technique complements existing frequency-domain measurement methods for a more complete characterization of photonic entanglement.
Array tomography of physiologically-characterized CNS synapses.
Valenzuela, Ricardo A; Micheva, Kristina D; Kiraly, Marianna; Li, Dong; Madison, Daniel V
2016-08-01
The ability to correlate plastic changes in synaptic physiology with changes in synaptic anatomy has been very limited in the central nervous system because of shortcomings in existing methods for recording the activity of specific CNS synapses and then identifying and studying the same individual synapses on an anatomical level. We introduce here a novel approach that combines two existing methods: paired neuron electrophysiological recording and array tomography, allowing for the detailed molecular and anatomical study of synapses with known physiological properties. The complete mapping of a neuronal pair allows determining the exact number of synapses in the pair and their location. We have found that the majority of close appositions between the presynaptic axon and the postsynaptic dendrite in the pair contain synaptic specializations. The average release probability of the synapses between the two neurons in the pair is low, below 0.2, consistent with previous studies of these connections. Other questions, such as receptor distribution within synapses, can be addressed more efficiently by identifying only a subset of synapses using targeted partial reconstructions. In addition, time sensitive events can be captured with fast chemical fixation. Compared to existing methods, the present approach is the only one that can provide detailed molecular and anatomical information of electrophysiologically-characterized individual synapses. This method will allow for addressing specific questions about the properties of identified CNS synapses, even when they are buried within a cloud of millions of other brain circuit elements. Copyright © 2016. Published by Elsevier B.V.
Matsuda, Atsushi; Schermelleh, Lothar; Hirano, Yasuhiro; Haraguchi, Tokuko; Hiraoka, Yasushi
2018-05-15
Correction of chromatic shift is necessary for precise registration of multicolor fluorescence images of biological specimens. New emerging technologies in fluorescence microscopy with increasing spatial resolution and penetration depth have prompted the need for more accurate methods to correct chromatic aberration. However, the amount of chromatic shift of the region of interest in biological samples often deviates from the theoretical prediction because of unknown dispersion in the biological samples. To measure and correct chromatic shift in biological samples, we developed a quadrisection phase correlation approach to computationally calculate translation, rotation, and magnification from reference images. Furthermore, to account for local chromatic shifts, images are split into smaller elements, for which the phase correlation between channels is measured individually and corrected accordingly. We implemented this method in an easy-to-use open-source software package, called Chromagnon, that is able to correct shifts with a 3D accuracy of approximately 15 nm. Applying this software, we quantified the level of uncertainty in chromatic shift correction, depending on the imaging modality used, and for different existing calibration methods, along with the proposed one. Finally, we provide guidelines to choose the optimal chromatic shift registration method for any given situation.
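The core measurement in this approach is phase correlation between channels. The minimal 2D sketch below estimates an integer translation from the peak of the inverse FFT of the normalized cross-power spectrum; the rotation and magnification estimation (e.g., via log-polar resampling), subpixel refinement and the quadrisection/local element-wise correction described above are omitted, and the synthetic channels are placeholders.

```python
import numpy as np

def phase_correlation_shift(ref, mov):
    """Estimate the integer (dy, dx) translation such that `mov` is
    approximately `ref` shifted by (dy, dx), from the peak of the inverse
    FFT of the normalized cross-power spectrum."""
    cross = np.conj(np.fft.fft2(ref)) * np.fft.fft2(mov)
    cross /= np.abs(cross) + 1e-12                     # keep phase information only
    corr = np.fft.ifft2(cross).real
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape), dtype=float)
    shape = np.array(corr.shape, dtype=float)
    peak[peak > shape / 2] -= shape[peak > shape / 2]  # wrap to signed shifts
    return peak

# Toy usage: a second "channel" shifted by (3, -5) pixels relative to the first.
rng = np.random.default_rng(7)
ref = rng.random((128, 128))
mov = np.roll(np.roll(ref, 3, axis=0), -5, axis=1)
dy, dx = phase_correlation_shift(ref, mov)             # approximately (3.0, -5.0)
```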
Non-Gaussian probabilistic MEG source localisation based on kernel density estimation☆
Mohseni, Hamid R.; Kringelbach, Morten L.; Woolrich, Mark W.; Baker, Adam; Aziz, Tipu Z.; Probert-Smith, Penny
2014-01-01
There is strong evidence to suggest that data recorded from magnetoencephalography (MEG) follows a non-Gaussian distribution. However, existing standard methods for source localisation model the data using only second order statistics, and therefore use the inherent assumption of a Gaussian distribution. In this paper, we present a new general method for non-Gaussian source estimation of stationary signals for localising brain activity from MEG data. By providing a Bayesian formulation for MEG source localisation, we show that the source probability density function (pdf), which is not necessarily Gaussian, can be estimated using multivariate kernel density estimators. In the case of Gaussian data, the solution of the method is equivalent to that of widely used linearly constrained minimum variance (LCMV) beamformer. The method is also extended to handle data with highly correlated sources using the marginal distribution of the estimated joint distribution, which, in the case of Gaussian measurements, corresponds to the null-beamformer. The proposed non-Gaussian source localisation approach is shown to give better spatial estimates than the LCMV beamformer, both in simulations incorporating non-Gaussian signals, and in real MEG measurements of auditory and visual evoked responses, where the highly correlated sources are known to be difficult to estimate. PMID:24055702
Measurement of the local food environment: a comparison of existing data sources.
Bader, Michael D M; Ailshire, Jennifer A; Morenoff, Jeffrey D; House, James S
2010-03-01
Studying the relation between the residential environment and health requires valid, reliable, and cost-effective methods to collect data on residential environments. This 2002 study compared the level of agreement between measures of the presence of neighborhood businesses drawn from 2 common sources of data used for research on the built environment and health: listings of businesses from commercial databases and direct observations of city blocks by raters. Kappa statistics were calculated for 6 types of businesses (drugstores, liquor stores, bars, convenience stores, restaurants, and grocers) located on 1,663 city blocks in Chicago, Illinois. Logistic regressions estimated whether disagreement between measurement methods was systematically correlated with the socioeconomic and demographic characteristics of neighborhoods. Levels of agreement between the 2 sources were relatively high, with significant (P < 0.001) kappa statistics for each business type ranging from 0.32 to 0.70. Most business types were more likely to be reported by direct observations than in the commercial database listings. Disagreement between the 2 sources was not significantly correlated with the socioeconomic and demographic characteristics of neighborhoods. Results suggest that researchers should have reasonable confidence using whichever method (or combination of methods) is most cost-effective and theoretically appropriate for their research design.
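The per-business-type agreement reported above is summarized with the kappa statistic. As a simple illustration, the sketch below computes Cohen's kappa from two binary presence/absence vectors (one block-level indicator per data source); the toy data are invented for the example.

```python
import numpy as np

def cohens_kappa(a, b):
    """Cohen's kappa for two binary sources coded as 0/1 arrays:
    observed agreement corrected for the agreement expected by chance."""
    a, b = np.asarray(a), np.asarray(b)
    po = np.mean(a == b)                              # observed agreement
    pa1, pb1 = a.mean(), b.mean()
    pe = pa1 * pb1 + (1 - pa1) * (1 - pb1)            # chance agreement
    return (po - pe) / (1 - pe)

# Example: presence of a grocer on each block according to the commercial
# database (a) and the direct observation by raters (b).
a = np.array([1, 0, 0, 1, 1, 0, 0, 0, 1, 0])
b = np.array([1, 0, 1, 1, 0, 0, 0, 0, 1, 0])
kappa = cohens_kappa(a, b)                            # about 0.58 for this toy data
```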
Impulsive noise of printers: measurement metrics and their subjective correlation
NASA Astrophysics Data System (ADS)
Baird, Terrence; Otto, Norman; Bray, Wade; Stephan, Mike
2005-09-01
In the office and home computing environments, printer impulsive noise has become a significant contributor to user perceived quality or lack thereof, and can affect the user's comfort level and ability to concentrate. Understanding and quantifying meaningful metrics for printer impulsivity is becoming an increasingly important goal for printer manufacturers. Several methods exist in international standards for measuring the impulsivity of noise. For information technology equipment (ITE), the method for detection of impulsive noise is provided in ECMA-74 and ISO 7779. However, there is a general acknowledgement that the current standard method of determining impulsivity by simply measuring A-weighted sound pressure level (SPL) with the impulsive time weighting, I, applied is inadequate to characterize impulsive noise and ultimately to predict user satisfaction and acceptance. In recent years, there has been a variety of new measurement methods evaluated for impulsive noise for both environmental and machinery noise. This paper reviews several of the available metrics, applies the metrics to several printer impulsive noise sources, and makes an initial assessment of their correlation to the subjective impressions of users. It is a review and continuation of the work presented at InterNoise 2005 (Baird, Bray, and Otto).
A diagram retrieval method with multi-label learning
NASA Astrophysics Data System (ADS)
Fu, Songping; Lu, Xiaoqing; Liu, Lu; Qu, Jingwei; Tang, Zhi
2015-01-01
In recent years, the retrieval of plane geometry figures (PGFs) has attracted increasing attention in the fields of mathematics education and computer science. However, the high cost of matching complex PGF features leads to the low efficiency of most retrieval systems. This paper proposes an indirect classification method based on multi-label learning, which improves retrieval efficiency by reducing the scope of the comparison operation from the whole database to small candidate groups. Label correlations among PGFs are taken into account for the multi-label classification task. The primitive feature selection for multi-label learning and the feature description of visual geometric elements are conducted individually to match similar PGFs. The experimental results show the competitive performance of the proposed method compared with existing PGF retrieval methods in terms of both time consumption and retrieval quality.
NASA Astrophysics Data System (ADS)
Gan, Luping; Li, Yan-Feng; Zhu, Shun-Peng; Yang, Yuan-Jian; Huang, Hong-Zhong
2014-06-01
Failure mode, effects and criticality analysis (FMECA) and fault tree analysis (FTA) are powerful tools to evaluate the reliability of systems. Although the single-failure-mode case can be efficiently addressed by traditional FMECA, multiple failure modes and component correlations in complex systems cannot be effectively evaluated. In addition, correlated variables and parameters are often assumed to be precisely known in quantitative analysis. In fact, due to the lack of information, epistemic uncertainty commonly exists in engineering design. To solve these problems, the advantages of FMECA, FTA, fuzzy theory, and Copula theory are integrated into a unified hybrid method called the fuzzy probability weighted geometric mean (FPWGM) risk priority number (RPN) method. The epistemic uncertainty of risk variables and parameters is characterized by fuzzy numbers to obtain the fuzzy weighted geometric mean (FWGM) RPN for a single failure mode. Multiple failure modes are connected using minimum cut sets (MCS), and Boolean logic is used to combine the fuzzy risk priority number (FRPN) of each MCS. Moreover, Copula theory is applied to analyze the correlation of multiple failure modes in order to derive the failure probabilities of each MCS. Compared to the case where dependency among multiple failure modes is not considered, the Copula modeling approach eliminates the error of the reliability analysis. Furthermore, for the purpose of quantitative analysis, importance weights derived from the failure probabilities are assigned to the FWGM RPN to reassess the risk priority, which generalizes the definitions of probability weight and FRPN and results in a more accurate estimation than that of the traditional models. Finally, a basic fatigue analysis case drawn from turbine and compressor blades in an aeroengine is used to demonstrate the effectiveness and robustness of the presented method. The result provides some important insights on fatigue reliability analysis and risk priority assessment of structural systems under failure correlations.
Le Pogam, Adrien; Hatt, Mathieu; Descourt, Patrice; Boussion, Nicolas; Tsoumpas, Charalampos; Turkheimer, Federico E.; Prunier-Aesch, Caroline; Baulieu, Jean-Louis; Guilloteau, Denis; Visvikis, Dimitris
2011-01-01
Purpose Partial volume effects (PVE) are consequences of the limited spatial resolution in emission tomography leading to under-estimation of uptake in tissues of size similar to the point spread function (PSF) of the scanner as well as activity spillover between adjacent structures. Among PVE correction methodologies, a voxel-wise mutual multi-resolution analysis (MMA) was recently introduced. MMA is based on the extraction and transformation of high resolution details from an anatomical image (MR/CT) and their subsequent incorporation into a low resolution PET image using wavelet decompositions. Although this method allows creating PVE corrected images, it is based on a 2D global correlation model which may introduce artefacts in regions where no significant correlation exists between anatomical and functional details. Methods A new model was designed to overcome these two issues (2D only and global correlation) using a 3D wavelet decomposition process combined with a local analysis. The algorithm was evaluated on synthetic, simulated and patient images, and its performance was compared to the original approach as well as the geometric transfer matrix (GTM) method. Results Quantitative performance was similar to the 2D global model and GTM in correlated cases. In cases where mismatches between anatomical and functional information were present the new model outperformed the 2D global approach, avoiding artefacts and significantly improving quality of the corrected images and their quantitative accuracy. Conclusions A new 3D local model was proposed for a voxel-wise PVE correction based on the original mutual multi-resolution analysis approach. Its evaluation demonstrated an improved and more robust qualitative and quantitative accuracy compared to the original MMA methodology, particularly in the absence of full correlation between anatomical and functional information. PMID:21978037
An algorithm for spatial hierarchy clustering
NASA Technical Reports Server (NTRS)
Dejesusparada, N. (Principal Investigator); Velasco, F. R. D.
1981-01-01
A method for utilizing both spectral and spatial redundancy in compacting and preclassifying images is presented. In multispectral satellite images, a high correlation exists between neighboring image points which tend to occupy dense and restricted regions of the feature space. The image is divided into windows of the same size where the clustering is made. The classes obtained in several neighboring windows are clustered, and then again successively clustered until only one region corresponding to the whole image is obtained. By employing this algorithm only a few points are considered in each clustering, thus reducing computational effort. The method is illustrated as applied to LANDSAT images.
Accurate and Efficient Approximation to the Optimized Effective Potential for Exchange
NASA Astrophysics Data System (ADS)
Ryabinkin, Ilya G.; Kananenka, Alexei A.; Staroverov, Viktor N.
2013-07-01
We devise an efficient practical method for computing the Kohn-Sham exchange-correlation potential corresponding to a Hartree-Fock electron density. This potential is almost indistinguishable from the exact-exchange optimized effective potential (OEP) and, when used as an approximation to the OEP, is vastly better than all existing models. Using our method one can obtain unambiguous, nearly exact OEPs for any reasonable finite one-electron basis set at the same low cost as the Krieger-Li-Iafrate and Becke-Johnson potentials. For all practical purposes, this solves the long-standing problem of black-box construction of OEPs in exact-exchange calculations.
Large scale exact quantum dynamics calculations: Ten thousand quantum states of acetonitrile
NASA Astrophysics Data System (ADS)
Halverson, Thomas; Poirier, Bill
2015-03-01
'Exact' quantum dynamics (EQD) calculations of the vibrational spectrum of acetonitrile (CH3CN) are performed, using two different methods: (1) phase-space-truncated momentum-symmetrized Gaussian basis and (2) correlated truncated harmonic oscillator basis. In both cases, a simple classical phase space picture is used to optimize the selection of individual basis functions, leading to drastic reductions in basis size in comparison with existing methods. Massive parallelization is also employed. Together, these tools, implemented into a single, easy-to-use computer code, enable a calculation of tens of thousands of vibrational states of CH3CN to an accuracy of 0.001-10 cm-1.
Prasad, Krishna D.; Shah, Namrata; Hegde, Chethan
2012-01-01
Purpose: To evaluate the correlation between sagittal condylar guidance obtained by protrusive interocclusal records and panoramic radiograph tracing methods in human dentulous subjects. Materials and Methods: The sagittal condylar guidance was determined in 75 dentulous subjects by protrusive interocclusal records using Aluwax through a face bow transfer (HANAU™ Spring Bow, Whip Mix Corporation, USA) to a semi-adjustable articulator (HANAU™ Wide-Vue Articulator, Whip Mix Corporation, USA). In the same subjects, the sagittal outline of the articular eminence and glenoid fossa was traced in panoramic radiographs. The sagittal condylar path inclination was constructed by joining the heights of curvature in the glenoid fossa and the corresponding articular eminence. This was then related to the constructed Frankfurt's horizontal plane to determine the radiographic angle of sagittal condylar guidance. Results: A strong positive correlation existed between right and left condylar guidance by the protrusive interocclusal method (P = 0.000) and similarly by the radiographic method (P = 0.013). The mean differences between the condylar guidance values obtained using the two methods were 1.97° for the right side and 3.18° for the left side. These differences were found to be highly significant for the right (P = 0.003) and left (P = 0.000) sides, respectively. The sagittal condylar guidance obtained from the two methods showed a significant positive correlation on the right (P = 0.000) and left (P = 0.015) sides, respectively. Conclusion: Panoramic radiographic tracings of the sagittal condylar path guidance may be made relative to the Frankfurt's horizontal reference plane and the resulting condylar guidance angles used to set the condylar guide settings of semi-adjustable articulators. PMID:23633793
Moore, Tyler M.; Reise, Steven P.; Roalf, David R.; Satterthwaite, Theodore D.; Davatzikos, Christos; Bilker, Warren B.; Port, Allison M.; Jackson, Chad T.; Ruparel, Kosha; Savitt, Adam P.; Baron, Robert B.; Gur, Raquel E.; Gur, Ruben C.
2016-01-01
Traditional “paper-and-pencil” testing is imprecise in measuring speed and hence limited in assessing performance efficiency, but computerized testing permits precision in measuring itemwise response time. We present a method of scoring performance efficiency (combining information from accuracy and speed) at the item level. Using a community sample of 9,498 youths age 8-21, we calculated item-level efficiency scores on four neurocognitive tests, and compared the concurrent, convergent, discriminant, and predictive validity of these scores to simple averaging of standardized speed and accuracy-summed scores. Concurrent validity was measured by the scores' abilities to distinguish men from women and their correlations with age; convergent and discriminant validity were measured by correlations with other scores inside and outside of their neurocognitive domains; predictive validity was measured by correlations with brain volume in regions associated with the specific neurocognitive abilities. Results provide support for the ability of itemwise efficiency scoring to detect signals as strong as those detected by standard efficiency scoring methods. We find no evidence of superior validity of the itemwise scores over traditional scores, but point out several advantages of the former. The itemwise efficiency scoring method shows promise as an alternative to standard efficiency scoring methods, with overall moderate support from tests of four different types of validity. This method allows the use of existing item analysis methods and provides the convenient ability to adjust the overall emphasis of accuracy versus speed in the efficiency score, thus adjusting the scoring to the real-world demands the test is aiming to fulfill. PMID:26866796
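One plausible way to form item-level efficiency scores from accuracy and response time, with an adjustable speed-versus-accuracy weight as the abstract describes, is sketched below; the exact formulation used by the authors may differ, and all data here are synthetic.

```python
# Minimal sketch of item-level efficiency scoring: combine accuracy and (log)
# response time per item, with an adjustable weight on speed versus accuracy.
# This is an illustrative formulation, not the authors' exact one.
import numpy as np

def itemwise_efficiency(correct, rt_seconds, speed_weight=0.5):
    """correct: (n_subjects, n_items) 0/1; rt_seconds: matching response times."""
    correct = np.asarray(correct, dtype=float)
    log_rt = np.log(np.asarray(rt_seconds, dtype=float))
    # Standardize each item across subjects; faster responses score higher.
    z_speed = -(log_rt - log_rt.mean(axis=0)) / log_rt.std(axis=0)
    z_acc = (correct - correct.mean(axis=0)) / correct.std(axis=0)
    item_eff = (1 - speed_weight) * z_acc + speed_weight * z_speed
    return item_eff.mean(axis=1)        # one efficiency score per subject

rng = np.random.default_rng(2)
acc = rng.integers(0, 2, size=(100, 20))
rt = rng.lognormal(mean=0.5, sigma=0.3, size=(100, 20))
print(itemwise_efficiency(acc, rt)[:5])
```

The speed_weight parameter illustrates the "adjust the overall emphasis of accuracy versus speed" point made in the abstract.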
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dawson, William A., E-mail: wadawson@ucdavis.edu
2013-08-01
Merging galaxy clusters have become one of the most important probes of dark matter, providing evidence for dark matter over modified gravity and even constraints on the dark matter self-interaction cross-section. To properly constrain the dark matter cross-section it is necessary to understand the dynamics of the merger, as the inferred cross-section is a function of both the velocity of the collision and the observed time since collision. While the best understanding of merging system dynamics comes from N-body simulations, these are computationally intensive and often explore only a limited volume of the merger phase space allowed by the observed parameter uncertainty. Simple analytic models exist, but their assumptions invalidate the results near the collision time, and error propagation of the highly correlated merger parameters is infeasible. To address these weaknesses I develop a Monte Carlo method to discern the properties of dissociative mergers and propagate the uncertainty of the measured cluster parameters in an accurate and Bayesian manner. I introduce this method, verify it against an existing hydrodynamic N-body simulation, and apply it to two known dissociative mergers: 1ES 0657-558 (Bullet Cluster) and DLSCL J0916.2+2951 (Musket Ball Cluster). I find that this method surpasses existing analytic models, providing accurate (10% level) dynamical parameter and uncertainty estimates throughout the merger history. This, coupled with minimal required a priori information (subcluster mass, redshift, and projected separation) and relatively fast computation (~6 CPU hours), makes this method ideal for large samples of dissociative merging clusters.
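The Monte Carlo idea, drawing the observed inputs from their measurement uncertainties and propagating them through a dynamical relation, can be illustrated as follows. The parabolic-encounter velocity used here is only a toy stand-in for the full merger model, and all input values and uncertainties are hypothetical.

```python
# Minimal sketch of Monte Carlo uncertainty propagation: sample subcluster
# masses and projected separation from assumed measurement errors, push each
# draw through a simple dynamical relation, and summarize the result.
import numpy as np

G = 4.301e-9          # gravitational constant in Mpc (km/s)^2 / Msun

rng = np.random.default_rng(3)
n = 100_000
m1 = rng.normal(2.0e14, 5.0e13, n)        # Msun, hypothetical mass +/- error
m2 = rng.normal(1.5e14, 4.0e13, n)
sep = rng.normal(0.7, 0.1, n)             # Mpc, projected separation +/- error

ok = (m1 > 0) & (m2 > 0) & (sep > 0)      # discard unphysical draws
v = np.sqrt(2 * G * (m1[ok] + m2[ok]) / sep[ok])   # km/s, parabolic-orbit speed

lo, med, hi = np.percentile(v, [16, 50, 84])
print(f"relative velocity: {med:.0f} +{hi - med:.0f} / -{med - lo:.0f} km/s")
```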
Seyedmahmoud, Rasoul; Rainer, Alberto; Mozetic, Pamela; Maria Giannitelli, Sara; Trombetta, Marcella; Traversa, Enrico; Licoccia, Silvia; Rinaldi, Antonio
2015-01-01
Tissue engineering scaffolds produced by electrospinning are of enormous interest, but still lack a true understanding about the fundamental connection between the outstanding functional properties, the architecture, the mechanical properties, and the process parameters. Fragmentary results from several parametric studies only render some partial insights that are hard to compare and generally miss the role of parameters interactions. To bridge this gap, this article (Part-1 of 2) features a case study on poly-L-lactide scaffolds to demonstrate how statistical methods such as design of experiments can quantitatively identify the correlations existing between key scaffold properties and control parameters, in a systematic, consistent, and comprehensive manner disentangling main effects from interactions. The morphological properties (i.e., fiber distribution and porosity) and mechanical properties (Young's modulus) are "charted" as a function of molecular weight (MW) and other electrospinning process parameters (the Xs), considering the single effect as well as interactions between Xs. For the first time, the major role of the MW emerges clearly in controlling all scaffold properties. The correlation between mechanical and morphological properties is also addressed. © 2014 Wiley Periodicals, Inc.
Colen, Rivka; Foster, Ian; Gatenby, Robert; Giger, Mary Ellen; Gillies, Robert; Gutman, David; Heller, Matthew; Jain, Rajan; Madabhushi, Anant; Madhavan, Subha; Napel, Sandy; Rao, Arvind; Saltz, Joel; Tatum, James; Verhaak, Roeland; Whitman, Gary
2014-10-01
The National Cancer Institute (NCI) Cancer Imaging Program organized two related workshops on June 26-27, 2013, entitled "Correlating Imaging Phenotypes with Genomics Signatures Research" and "Scalable Computational Resources as Required for Imaging-Genomics Decision Support Systems." The first workshop focused on clinical and scientific requirements, exploring our knowledge of phenotypic characteristics of cancer biological properties to determine whether the field is sufficiently advanced to correlate with imaging phenotypes that underpin genomics and clinical outcomes, and exploring new scientific methods to extract phenotypic features from medical images and relate them to genomics analyses. The second workshop focused on computational methods that explore informatics and computational requirements to extract phenotypic features from medical images and relate them to genomics analyses and improve the accessibility and speed of dissemination of existing NIH resources. These workshops linked clinical and scientific requirements of currently known phenotypic and genotypic cancer biology characteristics with imaging phenotypes that underpin genomics and clinical outcomes. The group generated a set of recommendations to NCI leadership and the research community that encourage and support development of the emerging radiogenomics research field to address short- and longer-term goals in cancer research.
Inability of the entropy vector method to certify nonclassicality in linelike causal structures
NASA Astrophysics Data System (ADS)
Weilenmann, Mirjam; Colbeck, Roger
2016-10-01
Bell's theorem shows that our intuitive understanding of causation must be overturned in light of quantum correlations. Nevertheless, quantum mechanics does not permit signaling and hence a notion of cause remains. Understanding this notion is not only important at a fundamental level, but also for technological applications such as key distribution and randomness expansion. It has recently been shown that a useful way to decide which classical causal structures could give rise to a given set of correlations is to use entropy vectors. These are vectors whose components are the entropies of all subsets of the observed variables in the causal structure. The entropy vector method employs causal relationships among the variables to restrict the set of possible entropy vectors. Here, we consider whether the same approach can lead to useful certificates of nonclassicality within a given causal structure. Surprisingly, we find that for a family of causal structures that includes the usual bipartite Bell structure they do not. For all members of this family, no function of the entropies of the observed variables gives such a certificate, in spite of the existence of nonclassical correlations. It is therefore necessary to look beyond entropy vectors to understand cause from a quantum perspective.
A new multivariate zero-adjusted Poisson model with applications to biomedicine.
Liu, Yin; Tian, Guo-Liang; Tang, Man-Lai; Yuen, Kam Chuen
2018-05-25
Although advances have recently been made in modeling multivariate count data, existing models still have several limitations: (i) The multivariate Poisson log-normal model (Aitchison and Ho) cannot be used to fit multivariate count data with excess zero-vectors; (ii) The multivariate zero-inflated Poisson (ZIP) distribution (Li et al., 1999) cannot be used to model zero-truncated/deflated count data and is difficult to apply to high-dimensional cases; (iii) The Type I multivariate zero-adjusted Poisson (ZAP) distribution (Tian et al., 2017) can only model multivariate count data with a special correlation structure for random components that are all positive or negative. In this paper, we first introduce a new multivariate ZAP distribution, based on a multivariate Poisson distribution, which allows the correlations between components to follow a more flexible dependency structure; that is, some of the correlation coefficients can be positive while others are negative. We then develop its important distributional properties and provide efficient statistical inference methods for the multivariate ZAP model with or without covariates. Two real data examples in biomedicine are used to illustrate the proposed methods. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
The relationship between physical fitness and academic achievement among adolescents in South Korea.
Han, Gun-Soo
2018-04-01
[Purpose] The purpose of this study was to identify the relationship between physical fitness level and academic achievement in middle school students. [Subjects and Methods] A total of 236 students aged 13-15 from three middle schools in D city, South Korea, were selected using a random sampling method. Academic achievement was measured by students' 2014 fall-semester final exam scores and the level of physical fitness was determined according to the PAPS (Physical Activity Promotion System) score administered by the Korean Ministry of Education. A Pearson correlation test with SPSS 20.0 was employed. [Results] The Pearson correlation test revealed a significant correlation between physical fitness and academic achievement. Specifically, students with higher levels of physical fitness tend to have higher academic performance. In addition, final exam scores of core subjects (e.g., English, mathematics, and science) were significantly related to the PAPS score. [Conclusion] Results of this study can be used to develop more effective physical education curricula. In addition, the data can also be applied to recreation and sport programs for other populations (e.g., children and adults) as well as existing national physical fitness data in various countries.
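The core analysis here is a Pearson correlation between a fitness score and an exam score; a minimal sketch with synthetic placeholder data follows.

```python
# Minimal sketch of the analysis described: Pearson correlation between a
# physical fitness score and an exam score. Data are synthetic placeholders,
# not the study's PAPS or exam data.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(4)
fitness = rng.normal(60, 10, 236)                  # hypothetical PAPS-like score
exam = 0.4 * fitness + rng.normal(0, 8, 236)       # exam score loosely tied to fitness

r, p = pearsonr(fitness, exam)
print(f"r = {r:.2f}, p = {p:.3g}")
```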
Rapid and direct screening of H:C ratio in Archean kerogen via microRaman Spectroscopy
NASA Astrophysics Data System (ADS)
Ferralis, N.; Matys, E. D.; Allwood, A.; Knoll, A. H.; Summons, R. E.
2015-12-01
Rapid evaluation of the preservation of biosignatures in ancient kerogens is essential for evaluating the usability of Earth analogues as proxies for Martian geological materials. No single, non-destructive and non-invasive technique currently exists to rapidly determine the state of preservation of the organic matter in relation to its geological and mineral environment. Due to its non-invasive nature, microRaman spectroscopy is emerging as a candidate technique for the qualitative determination of the maturity of organic matter, by correlating Raman spectral features with aromatic carbon cluster size. Here we present a novel quantitative method in which previously neglected Raman spectral features are correlated directly, and with excellent accuracy, with the H:C ratio. In addition to providing a chemical justification for this direct correlation, we show its applicability and predictive capability for evaluating H:C in Archean kerogens. This novel method opens new opportunities for the use of Raman spectroscopy and mapping, including the non-invasive determination of kerogen preservation and microscale chemical diversity within a particular Earth analogue, and could potentially be extended to evaluate Raman spectra acquired directly on Mars.
Intensity-based masking: A tool to improve functional connectivity results of resting-state fMRI.
Peer, Michael; Abboud, Sami; Hertz, Uri; Amedi, Amir; Arzy, Shahar
2016-07-01
Seed-based functional connectivity (FC) of resting-state functional MRI data is a widely used methodology, enabling the identification of functional brain networks in health and disease. Based on signal correlations across the brain, FC measures are highly sensitive to noise. A somewhat neglected source of noise is the fMRI signal attenuation found in cortical regions in close vicinity to sinuses and air cavities, mainly in the orbitofrontal, anterior frontal and inferior temporal cortices. BOLD signal recorded at these regions suffers from dropout due to susceptibility artifacts, resulting in an attenuated signal with reduced signal-to-noise ratio in as many as 10% of cortical voxels. Nevertheless, signal attenuation is largely overlooked during FC analysis. Here we first demonstrate that signal attenuation can significantly influence FC measures by introducing false functional correlations and diminishing existing correlations between brain regions. We then propose a method for the detection and removal of the attenuated signal ("intensity-based masking") by fitting a Gaussian-based model to the signal intensity distribution and calculating an intensity threshold tailored per subject. Finally, we apply our method on real-world data, showing that it diminishes false correlations caused by signal dropout, and significantly improves the ability to detect functional networks in single subjects. Furthermore, we show that our method increases inter-subject similarity in FC, enabling reliable distinction of different functional networks. We propose to include the intensity-based masking method as a common practice in the pre-processing of seed-based functional connectivity analysis, and provide software tools for the computation of intensity-based masks on fMRI data. Hum Brain Mapp 37:2407-2418, 2016. © 2016 Wiley Periodicals, Inc.
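A minimal sketch of the masking idea, fitting the bulk of the per-voxel intensity distribution and dropping voxels that fall far below it, is given below. The robust fit and the mean-minus-2-SD threshold are illustrative choices, not necessarily the authors' exact model, and the data are synthetic.

```python
# Minimal sketch of intensity-based masking: fit the bulk of the per-voxel
# mean-intensity distribution and drop voxels far below it. Threshold rule and
# data are illustrative assumptions.
import numpy as np

def intensity_mask(bold, n_sd=2.0):
    """bold: (n_voxels, n_timepoints) array; returns boolean keep-mask."""
    mean_int = bold.mean(axis=1)
    # Robust Gaussian fit: median and MAD-based sigma resist the low-intensity tail.
    mu = np.median(mean_int)
    sigma = 1.4826 * np.median(np.abs(mean_int - mu))
    threshold = mu - n_sd * sigma
    return mean_int > threshold

rng = np.random.default_rng(5)
good = rng.normal(1000, 50, size=(9000, 200))     # normal-intensity voxels
dropout = rng.normal(400, 80, size=(1000, 200))   # attenuated voxels near air cavities
data = np.vstack([good, dropout])

mask = intensity_mask(data)
print(f"kept {mask.sum()} of {mask.size // data.shape[1]} time series? no:", end=" ")
print(f"kept {mask.sum()} of {mask.shape[0]} voxels")
```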
Relationship between pore geometric characteristics and SIP/NMR parameters observed for mudstones
NASA Astrophysics Data System (ADS)
Robinson, J.; Slater, L. D.; Keating, K.; Parker, B. L.; Robinson, T.
2017-12-01
The reliable estimation of permeability remains one of the most challenging problems in hydrogeological characterization. Cost effective, non-invasive geophysical methods such as spectral induced polarization (SIP) and nuclear magnetic resonance (NMR) offer an alternative to traditional sampling methods as they are sensitive to the mineral surfaces and pore spaces that control permeability. We performed extensive physical characterization, SIP and NMR geophysical measurements on fractured rock cores extracted from a mudstone site in an effort to compare 1) the pore size characterization determined from traditional and geophysical methods and 2) the performance of permeability models based on these methods. We focus on two physical characterizations that are well-correlated with hydraulic properties: the pore volume normalized surface area (Spor) and an interconnected pore diameter (Λ). We find that the SIP polarization magnitude and relaxation time are better correlated with Spor than with Λ; the best correlation of these SIP measures for our sample dataset was found with Spor divided by the electrical formation factor (F). NMR parameters are, similarly, better correlated with Spor than with Λ. We implement previously proposed mechanistic and empirical permeability models using SIP and NMR parameters. A sandstone-calibrated SIP model using a polarization magnitude does not perform well while a SIP model using a mean relaxation time performs better in part by more sufficiently accounting for the effects of fluid chemistry. A sandstone-calibrated NMR permeability model using an average measure of the relaxation time does not perform well, presumably due to small pore sizes which are either not connected or contain water of limited mobility. An NMR model based on the laboratory determined portions of the bound versus mobile portions of the relaxation distribution performed reasonably well. While limitations exist, there are many opportunities to use geophysical data to predict permeability in mudstone formations.
Internal and External Match Loads of University-Level Soccer Players: A Comparison Between Methods.
Sparks, Martinique; Coetzee, Ben; Gabbett, Tim J
2017-04-01
Sparks, M, Coetzee, B, and Gabbett, TJ. Internal and external match loads of university-level soccer players: a comparison between methods. J Strength Cond Res 31(4): 1072-1077, 2017-The aim of this study was to use individualized intensity zones to compare the external (velocity and player load, PL) and internal loads (heart rate, HR) of a cohort of university-level soccer players. Thirteen soccer players completed a 40-m maximum speed test and the Yo-Yo intermittent recovery test 1 (Yo-Yo IR1) to determine individualized velocity and HR thresholds. Heart rate values and global positioning system (GPS) data of each player were recorded during 5 league matches. A large (r = 0.46; p ≤ 0.01) correlation was found between time spent in the low-intensity (LI) velocity zone (LIVZ) and the LI HR zone. Similarly, there were moderate (r = 0.25; p ≤ 0.01) to large (r = 0.57; p ≤ 0.01) correlations between the relative and absolute time spent in the moderate-intensity (MI) velocity zone (MIVZ) and the MI HR zone. No significant correlations (p ≤ 0.01) existed between the high-intensity (HI) velocity zones (HIVZ) and the HI HR zone. On the other hand, PL showed significant correlations with all velocity and HR (absolute and relative) variables, with the exception of a nonsignificant correlation between the HI HR variables and PL. To conclude, PL showed good correlations with both velocity and HR zones and therefore may have the potential to serve as a good indicator of both external and internal soccer match loads.
NASA Astrophysics Data System (ADS)
Bornemann, Pierrick; Jean-Philippe, Malet; André, Stumpf; Anne, Puissant; Julien, Travelletti
2016-04-01
Dense multi-temporal point clouds acquired with terrestrial laser scanning (TLS) have proved useful for the study of structure and kinematics of slope movements. Most of the existing deformation analysis methods rely on the use of interpolated data. Approaches that use multiscale image correlation provide a precise and robust estimation of the observed movements; however, for non-rigid motion patterns, these methods tend to underestimate all the components of the movement. Further, for rugged surface topography, interpolated data introduce a bias and a loss of information in some local places where the point cloud information is not sufficiently dense. These limits can be overcome by applying the deformation analysis directly to the original 3D point clouds, under some hypotheses on the deformation (e.g. the classic ICP algorithm requires an initial guess of the expected displacement patterns from the user). The objective of this work is therefore to propose a deformation analysis method applied to a series of 20 3D point clouds covering the period October 2007 - October 2015 at the Super-Sauze landslide (South East French Alps). The dense point clouds have been acquired with a terrestrial long-range Optech ILRIS-3D laser scanning device from the same base station. The time series are analyzed using two approaches: 1) a method of correlation of gradient images, and 2) a method of feature tracking in the raw 3D point clouds. The estimated surface displacements are then compared with GNSS surveys on reference targets. Preliminary results tend to show that the image correlation method provides a good estimation of the displacement fields at first order, but shows limitations such as the inability to track some deformation patterns, and the use of a perspective projection that does not maintain original angles and distances in the correlated images. Results obtained with 3D point cloud comparison algorithms (C2C, ICP, M3C2) bring additional information on the displacement fields. Displacement fields derived from both approaches are then combined and provide a better understanding of the landslide kinematics.
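Of the point-cloud comparison algorithms mentioned (C2C, ICP, M3C2), the simplest, cloud-to-cloud nearest-neighbour distance, can be sketched as follows on synthetic data; the real analysis of course runs on the TLS scans themselves, and the displacement here is hypothetical.

```python
# Minimal sketch of a cloud-to-cloud (C2C) comparison between two epochs: for
# each point in the later epoch, find the nearest neighbour in the earlier
# epoch and report the distance. Synthetic data only.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(6)
epoch1 = rng.uniform(0, 100, size=(50_000, 3))          # earlier scan (synthetic, metres)
displacement = np.array([0.5, 0.2, -0.1])               # hypothetical slide motion (m)
epoch2 = epoch1 + displacement + rng.normal(0, 0.02, epoch1.shape)

tree = cKDTree(epoch1)
dist, _ = tree.query(epoch2, k=1)                        # nearest-neighbour distances
print(f"median C2C distance: {np.median(dist):.3f} m")
```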
Cavitation in liquid cryogens. 2: Hydrofoil
NASA Technical Reports Server (NTRS)
Hord, J.
1973-01-01
Boundary layer principles, along with two-phase concepts, are used to improve existing correlative theory for developed cavity data. Details concerning cavity instrumentation, data analysis, correlative techniques, and experimental and theoretical aspects of a cavitating hydrofoil are given. Both desinent and thermodynamic data, using liquid hydrogen and liquid nitrogen, are reported. The thermodynamic data indicated that stable thermodynamic equilibrium exists throughout the vaporous cryogen cavities. The improved correlative formulas were used to evaluate these data. A new correlating parameter, based on consideration of the mass-limiting two-phase flow flux across the cavity interface, is proposed. This correlating parameter appears attractive for future correlative and predictive applications. Agreement between theory and experiment is discussed, and directions for future analysis are suggested. The front half of the cavities, developed on the hydrofoil, may be considered as parabolically shaped.
Kakati, Tulika; Kashyap, Hirak; Bhattacharyya, Dhruba K
2016-11-30
There exist many tools and methods for construction of a co-expression network from gene expression data and for extraction of densely connected gene modules. In this paper, a method is introduced to construct a co-expression network and to extract co-expressed modules having high biological significance. The proposed method has been validated on several well known microarray datasets extracted from a diverse set of species, using statistical measures, such as p and q values. The modules obtained in these studies are found to be biologically significant based on Gene Ontology enrichment analysis, pathway analysis, and KEGG enrichment analysis. Further, the method was applied to an Alzheimer's disease dataset and some interesting genes are found, which have high semantic similarity among them, but are not significantly correlated in terms of expression similarity. Some of these interesting genes, such as MAPT, CASP2, and PSEN2, are linked with important aspects of Alzheimer's disease, such as dementia, increased cell death, and deposition of amyloid-beta proteins in Alzheimer's disease brains. The biological pathways associated with Alzheimer's disease, such as Wnt signaling, apoptosis, p53 signaling, and Notch signaling, incorporate these interesting genes. The proposed method is evaluated with regard to the existing literature.
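A minimal sketch of the generic pipeline, thresholding pairwise Pearson correlations to build a co-expression network and reading off densely connected modules, is given below; the 0.8 cutoff, the connected-component module definition and the data are illustrative rather than the authors' exact procedure.

```python
# Minimal sketch: build a co-expression network from a gene expression matrix
# by thresholding |Pearson correlation|, then extract modules as connected
# components. Cutoffs and data are illustrative.
import numpy as np
import networkx as nx

rng = np.random.default_rng(7)
n_genes, n_samples = 200, 40
expr = rng.normal(size=(n_genes, n_samples))
common = rng.normal(size=n_samples)
expr[:20] += 3.0 * common                      # first 20 genes share a strong common signal

corr = np.corrcoef(expr)                       # gene-by-gene Pearson correlations
adj = (np.abs(corr) > 0.8) & ~np.eye(n_genes, dtype=bool)

g = nx.from_numpy_array(adj.astype(int))
modules = [c for c in nx.connected_components(g) if len(c) >= 5]
print(f"{len(modules)} module(s), sizes: {[len(m) for m in modules]}")
```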
Differentially Private Histogram Publication For Dynamic Datasets: An Adaptive Sampling Approach
Li, Haoran; Jiang, Xiaoqian; Xiong, Li; Liu, Jinfei
2016-01-01
Differential privacy has recently become a de facto standard for private statistical data release. Many algorithms have been proposed to generate differentially private histograms or synthetic data. However, most of them focus on “one-time” release of a static dataset and do not adequately address the increasing need to release series of dynamic datasets in real time. A straightforward application of existing histogram methods on each snapshot of such dynamic datasets will incur high accumulated error due to the composability of differential privacy and correlations or overlapping users between the snapshots. In this paper, we address the problem of releasing series of dynamic datasets in real time with differential privacy, using a novel adaptive distance-based sampling approach. Our first method, DSFT, uses a fixed distance threshold and releases a differentially private histogram only when the current snapshot is sufficiently different from the previous one, i.e., with a distance greater than a predefined threshold. Our second method, DSAT, further improves DSFT and uses a dynamic threshold adaptively adjusted by a feedback control mechanism to capture the data dynamics. Extensive experiments on real and synthetic datasets demonstrate that our approach achieves better utility than baseline methods and existing state-of-the-art methods. PMID:26973795
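The fixed-threshold idea behind DSFT can be sketched as follows: release a fresh Laplace-noised histogram only when the current snapshot's counts differ from the last released snapshot by more than a distance threshold, otherwise republish the previous release. This is a simplified illustration; among other details it omits privatizing the distance test itself and the overall privacy-budget accounting.

```python
# Minimal sketch of a distance-triggered differentially private histogram
# release (DSFT-like). Parameters are illustrative; the distance test here is
# not itself privatized, which a real mechanism must handle.
import numpy as np

rng = np.random.default_rng(8)

def laplace_histogram(counts, epsilon):
    """Add Laplace noise calibrated to a per-release budget epsilon (sensitivity 1)."""
    return counts + rng.laplace(scale=1.0 / epsilon, size=counts.shape)

def dsft_release(snapshots, epsilon=0.5, threshold=50.0):
    released, last = [], None
    for counts in snapshots:
        if last is None or np.abs(counts - last["raw"]).sum() > threshold:
            last = {"raw": counts, "noisy": laplace_histogram(counts, epsilon)}
        # otherwise re-publish the previous noisy histogram, spending no new budget
        released.append(last["noisy"])
    return released

snapshots = [rng.poisson(100, size=10) for _ in range(6)]   # synthetic dynamic counts
for t, h in enumerate(dsft_release(snapshots)):
    print(t, np.round(h, 1))
```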
Application of Fourier transforms for microwave radiometric inversions
NASA Technical Reports Server (NTRS)
Holmes, J. J.; Balanis, C. A.; Truman, W. M.
1975-01-01
Existing microwave radiometer technology now provides a suitable method for remote determination of the ocean surface's absolute brightness temperature. To extract the brightness temperature of the water from the antenna temperature, an unstable Fredholm integral equation of the first kind is solved. Fourier transform techniques are used to invert the integral after it is placed into a cross correlation form. Application and verification of the methods to a two-dimensional modeling of a laboratory wave tank system are included. The instability of the ill-posed Fredholm equation is examined and a restoration procedure is included which smooths the resulting oscillations. With the recent availability and advances of fast Fourier transform (FFT) techniques, the method presented becomes very attractive in the evaluation of large quantities of data.
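A minimal sketch of the inversion idea follows: when the antenna temperature is a convolution of the antenna pattern with the brightness temperature, the brightness profile can be recovered by division in the Fourier domain, with a small regularization term standing in for the smoothing/restoration step that controls the instability. The profile, pattern and noise level are synthetic.

```python
# Minimal sketch of FFT-based inversion of a convolution-form measurement:
# antenna_temp = pattern (*) brightness + noise, recovered by regularized
# division in the Fourier domain. All inputs are synthetic.
import numpy as np

x = np.linspace(-10, 10, 512)
pattern = np.exp(-x**2 / 2.0)                        # hypothetical antenna pattern
pattern /= pattern.sum()
brightness = np.where(np.abs(x) < 3, 150.0, 100.0)   # true brightness profile (K)

antenna_temp = np.real(np.fft.ifft(np.fft.fft(brightness) * np.fft.fft(pattern)))
antenna_temp += np.random.default_rng(9).normal(0, 0.1, x.size)   # measurement noise

P = np.fft.fft(pattern)
A = np.fft.fft(antenna_temp)
alpha = 1e-3                                         # regularization (smoothing) strength
recovered = np.real(np.fft.ifft(A * np.conj(P) / (np.abs(P)**2 + alpha)))

rms = np.sqrt(np.mean((recovered - brightness) ** 2))
print(f"rms reconstruction error: {rms:.2f} K")
```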
Ultra-High Density Holographic Memory Module with Solid-State Architecture
NASA Technical Reports Server (NTRS)
Markov, Vladimir B.
2000-01-01
NASA's terrestrial, space, and deep-space missions require technology that allows storing, retrieving, and processing a large volume of information. Holographic memory offers high-density data storage with parallel access and high throughput. Several methods exist for data multiplexing based on the fundamental principles of volume hologram selectivity. We recently demonstrated that a spatial (amplitude-phase) encoding of the reference wave (SERW) looks promising as a way to increase the storage density. The SERW hologram offers a method other than traditional methods of selectivity, such as spatial de-correlation between recorded and reconstruction fields. In this report we present the experimental results of the SERW-hologram memory module with solid-state architecture, which is of particular interest for space operations.
The validation of the visual analogue scale for patient satisfaction after total hip arthroplasty.
Brokelman, Roy B G; Haverkamp, Daniel; van Loon, Corné; Hol, Annemiek; van Kampen, Albert; Veth, Rene
2012-06-01
INTRODUCTION: Patient satisfaction is becoming more important in our modern health care system. The assessment of satisfaction is difficult because it is a multifactorial item for which no gold standard exists. One of the potential methods of measuring satisfaction is by using the well-known visual analogue scale (VAS). In this study, we validated the VAS for satisfaction. PATIENT AND METHODS: In this prospective study, we studied 147 patients (153 hips). The construct validity was measured using the Spearman correlation test that compares the satisfaction VAS with the Harris hip score, pain VAS at rest and during activity, Oxford hip score, Short Form 36 and Western Ontario McMaster Universities Osteoarthritis Index. The reliability was tested using the intra-class coefficient. RESULTS: The Pearson correlation test showed correlations in the range of 0.40-0.80. The satisfaction VAS had a high correlation with the pain VAS and Oxford hip score, which could mean that pain is one of the most important factors in patient satisfaction. The intra-class coefficient was 0.95. CONCLUSIONS: There is a moderate to marked degree of correlation between the satisfaction VAS and the currently available subjective and objective scoring systems. The intra-class coefficient of 0.95 indicates an excellent test-retest reliability. The satisfaction VAS is a simple instrument to quantify the satisfaction of a patient after total hip arthroplasty. In this study, we showed that the satisfaction VAS has a good validity and reliability.
Optimal Alignment of Structures for Finite and Periodic Systems.
Griffiths, Matthew; Niblett, Samuel P; Wales, David J
2017-10-10
Finding the optimal alignment between two structures is important for identifying the minimum root-mean-square distance (RMSD) between them and as a starting point for calculating pathways. Most current algorithms for aligning structures are stochastic, scale exponentially with the size of the structure, and can perform unreliably. We present two complementary methods for aligning structures corresponding to isolated clusters of atoms and to condensed matter described by a periodic cubic supercell. The first method (Go-PERMDIST), a branch and bound algorithm, locates the global minimum RMSD deterministically in polynomial time. The run time increases for larger RMSDs. The second method (FASTOVERLAP) is a heuristic algorithm that aligns structures by finding the global maximum kernel correlation between them using fast Fourier transforms (FFTs) and fast SO(3) transforms (SOFTs). For periodic systems, FASTOVERLAP scales with the square of the number of identical atoms in the system, reliably finds the best alignment between structures that are not too distant, and shows significantly better performance than existing algorithms. The expected run time for Go-PERMDIST is longer than FASTOVERLAP for periodic systems. For finite clusters, the FASTOVERLAP algorithm is competitive with existing algorithms. The expected run time for Go-PERMDIST to find the global RMSD between two structures deterministically is generally longer than for existing stochastic algorithms. However, with an earlier exit condition, Go-PERMDIST exhibits similar or better performance.
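The FFT idea behind the periodic-system alignment can be sketched as follows: smear each structure's atoms into a periodic density grid and find the translation maximizing their correlation via FFTs. Rotations, permutational alignment and the SO(3) transforms used for clusters are omitted, and the grid size, Gaussian width and atom positions are illustrative assumptions, not the FASTOVERLAP implementation.

```python
# Minimal sketch: translational alignment of two periodic structures by FFT
# cross-correlation of Gaussian-smeared density grids. Illustrative only.
import numpy as np

def density_grid(frac_pos, n=32, sigma=0.05):
    """Gaussian-smeared density of fractional coordinates on a periodic n^3 grid."""
    ax = np.arange(n) / n
    grid = np.zeros((n, n, n))
    for p in frac_pos:
        w = []
        for k in range(3):
            d = ax - p[k]
            d -= np.round(d)                      # periodic minimum image
            w.append(np.exp(-d**2 / (2 * sigma**2)))
        grid += np.einsum('i,j,k->ijk', w[0], w[1], w[2])
    return grid

rng = np.random.default_rng(10)
n = 32
pos1 = rng.random((20, 3))                        # structure 1, fractional coordinates
true_shift = np.array([0.21, -0.07, 0.33])
pos2 = (pos1 + true_shift) % 1.0                  # same structure, translated

g1, g2 = density_grid(pos1, n), density_grid(pos2, n)
corr = np.fft.ifftn(np.conj(np.fft.fftn(g1)) * np.fft.fftn(g2)).real
best = np.unravel_index(np.argmax(corr), corr.shape)
est_shift = np.array(best) / n                    # quantized to the grid spacing 1/n
print("estimated shift:", est_shift, " true shift (mod 1):", true_shift % 1.0)
```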
Automatic Detection and Vulnerability Analysis of Areas Endangered by Heavy Rain
NASA Astrophysics Data System (ADS)
Krauß, Thomas; Fischer, Peter
2016-08-01
In this paper we present a new method for the fully automatic detection and derivation of areas endangered by heavy rainfall, based only on digital elevation models. Tracking the news shows that the majority of occurring natural hazards are flood events, and many flood prediction systems have therefore already been developed. However, most of these existing systems for deriving areas endangered by flooding events are based only on horizontal and vertical distances to existing rivers and lakes. Such systems typically do not take into account dangers arising directly from heavy rain events. In a study we conducted together with a German insurance company, a new approach for detecting areas endangered by heavy rain was shown to give a high correlation between the derived endangered areas and the losses claimed at the insurance company. Here we describe three methods for classifying digital terrain models, analyze their usability for the automatic detection and vulnerability analysis of areas endangered by heavy rainfall, and evaluate the results using the available insurance data.
Sharma, Reena; Kashyap, Nilotpol; Prajapati, Deepesh; Kappadi, Damodar; Wadhwa, Saakshe; Gandotra, Shina; Yadav, Poonam
2016-01-01
Introduction Chewing Side Preference (CSP) is said to occur when mastication is recognized exclusively, consistently or predominantly on the same side of the jaw. It can be assessed using a direct method (visual observation) or indirect instrumental methods, such as cinematography, kinetography and computerized electromyography. Aim The present study was aimed at evaluating the prevalence of CSP in deciduous, mixed and permanent dentitions and relating its association with dental caries. Materials and Methods In a cross-sectional observational study, 240 school-going children aged 3 to 18 years were randomly allocated to three experimental groups according to the deciduous dentition, mixed dentition and permanent dentition periods. The existence of a CSP was determined using a direct method by asking the children to chew on a piece of gum (Trident sugarless). The Mann-Whitney U-test was used to compare CSP, including between boys and girls. Spearman's correlation coefficient was used to correlate CSP and dental caries within and among the three study groups. Results CSP was observed in 69%, 83% and 76% of children with primary, mixed and permanent dentition respectively (p>0.05). There was no statistically significant association between the presence of CSP and dental caries among the three study groups. Conclusion There was a weak or no correlation between gender and the distribution of CSP, and between the presence of CSP and dental caries. PMID:27790569
Langevad, Line; Madsen, Camilla Gøbel; Siebner, Hartwig; Garde, Ellen
2014-11-10
The pineal gland (CP) is located centrally in the brain and produces melatonin. Cysts and concrements are frequent findings on MRI but their significance is still unclear. The visualization of CP is difficult due to its location and surrounding structures and so far, no standardized method exists. New studies suggest a correlation between CP-morphology and melatonin secretion as well as a connection between melatonin, disturbed circadian rhythm, and the development of cancer and cardiovascular diseases, underlining the need for a standardized approach to CP on MRI.
NASA Astrophysics Data System (ADS)
Arief, I. S.; Suherman, I. H.; Wardani, A. Y.; Baidowi, A.
2017-05-01
Control and monitoring is a continuous process for securing the assets of a marine current renewable energy installation. A control and monitoring system exists for each critical component, as identified through the Failure Mode Effect Analysis (FMEA) method. On this basis, the process described in this paper is developed through a sensor matrix. The matrix relates the critical components to the monitoring system, which is supported by sensors to enable decision-making.
NASA Astrophysics Data System (ADS)
Ullah, Saif; Zhang, Wei; Hansen, Poul Erik
2010-07-01
Secondary deuterium isotope effects on 13C and 15N nuclear shieldings in a series of cyclic enamino-diesters and enamino-esters and acyclic enaminones and enamino-esters have been examined and analysed using NMR and DFT (B3LYP/6-31G(d,p)) methods. One-dimensional and two-dimensional NMR spectra of the enaminocarbonyl compounds and their deuterated analogues were recorded in CDCl3 and CD2Cl2 at variable temperatures and assigned. 1JNH coupling constants for the derivatives of Meldrum's and tetronic acids reveal that they exist in the NH-form. It was demonstrated that, for the hydrogen-bonded compounds, deuterium substitution at the nitrogen nucleus leads to large one-bond isotope effects at nitrogen, 1Δ15N(D), and two-bond isotope effects on carbon nuclei, 2ΔC(ND), respectively. A linear correlation exists between 2ΔC(ND) and 1Δ15N(D), whereas the correlation with δNH is divided into two. Good agreement between the experimentally observed 2ΔC(ND) and the calculated dσ13C/dRNH was obtained. A very good correlation between calculated NH bond lengths and observed NH chemical shifts is found. The observed isotope effects are shown to depend strongly on resonance-assisted hydrogen bonding.
Study of TEC and foF2 with the Help of GPS and Ionosonde Data over Maitri, Antarctica
NASA Astrophysics Data System (ADS)
Khatarkar, Prakash; Gwal, Ashok Kumar
The behavior of the ionosphere can be diagnosed by a number of techniques; the common techniques used are the space-based Global Positioning System (GPS) and the ground-based ionosonde. We have compared the variability of ionospheric parameters obtained with these two techniques during December 2009 to November 2010 at the Indian base station Maitri (11.45E, 70.45S). The comparison between the measurements of the two techniques was realized through the Total Electron Content (TEC) parameters derived using different methods. The comparison was made for diurnal, seasonal, polar day and polar night, and annual variations. From our analysis we found that a strong correlation exists between the GPS-derived TEC and the ionosonde-derived foF2 during the day, while during the night the correlation is insignificant. At the same time, a strong correlation exists between the ionosonde- and GPS-derived TEC. The patterns of variation of the ionospheric parameters derived from the two techniques are strikingly similar, indicating a high degree of synchronization between them. This has practical applicability, since the error in one technique can be estimated by comparison with the other. Keywords: Ionosphere, Ionosonde, GPS, foF2, TEC.
Correlating off-axis tension tests to shear modulus of wood-based panels
Edmond P. Saliklis; Robert H. Falk
2000-01-01
The weakness of existing relationships correlating the off-axis modulus of elasticity E_q to the shear modulus G_12 for wood composite panels is demonstrated through presentation of extensive experimental data. A new relationship is proposed that performs better than existing equations found in the literature. This relationship can be manipulated to calculate the shear modulus...
Aerodynamics and performance verifications of test methods for laboratory fume cupboards.
Tseng, Li-Ching; Huang, Rong Fung; Chen, Chih-Chieh; Chang, Cheng-Ping
2007-03-01
The laser-light-sheet-assisted smoke flow visualization technique is performed on a full-size, transparent, commercial grade chemical fume cupboard to diagnose the flow characteristics and to verify the validity of several current containment test methods. The visualized flow patterns identify the recirculation areas that would inevitably exist in conventional fume cupboards because of their fundamental configurations and structures. The large-scale vortex structures exist around the side walls, the doorsill of the cupboard and in the vicinity of the near-wake region of the manikin. The identified recirculation areas are taken as the 'dangerous' regions where the risk of turbulent dispersion of contaminants may be high. Several existing tracer gas containment test methods (BS 7258:1994, prEN 14175-3:2003 and ANSI/ASHRAE 110:1995) are conducted to verify their effectiveness in detecting contaminant leakage. By comparing the results of the flow visualization and the tracer gas tests, it is found that the local recirculation regions are more prone to contaminant leakage because of the complex interaction between the shear layers and the smoke movement through the mechanism of turbulent dispersion. From the point of view of aerodynamics, the present study verifies that the methodology of the prEN 14175-3:2003 protocol can produce more reliable and consistent results because it is based on region-by-region measurement and encompasses most of the area of the recirculation zone of the cupboard. A modified test method combining the region-by-region approach with the presence of the manikin shows substantially different containment results. A better performance test method, which can describe an operator's exposure and the correlation between flow characteristics and contaminant leakage properties, is therefore suggested.
Total body water and lean body mass estimated by ethanol dilution
NASA Technical Reports Server (NTRS)
Loeppky, J. A.; Myhre, L. G.; Venters, M. D.; Luft, U. C.
1977-01-01
A method for estimating total body water (TBW) using breath analyses of blood ethanol content is described. Regression analysis of ethanol concentration curves permits determination of a theoretical concentration that would have existed if complete equilibration had taken place immediately upon ingestion of the ethanol; the water fraction of normal blood may then be used to calculate TBW. The ethanol dilution method is applied to 35 subjects, and comparison with a tritium dilution method of determining TBW indicates that the correlation between the two procedures is highly significant. Lean body mass and fat fraction were determined by hydrostatic weighing, and these data also prove compatible with results obtained from the ethanol dilution method. In contrast to the radioactive tritium dilution method, the ethanol dilution method can be repeated daily with its applicability ranging from diseased individuals to individuals subjected to thermal stress, strenuous exercise, water immersion, or the weightless conditions of space flights.
Swanson, David M; Blacker, Deborah; Alchawa, Taofik; Ludwig, Kerstin U; Mangold, Elisabeth; Lange, Christoph
2013-11-07
The advent of genome-wide association studies has led to many novel disease-SNP associations, opening the door to focused study on their biological underpinnings. Because of the importance of analyzing these associations, numerous statistical methods have been devoted to them. However, fewer methods have attempted to associate entire genes or genomic regions with outcomes, which is potentially more useful knowledge from a biological perspective and those methods currently implemented are often permutation-based. One property of some permutation-based tests is that their power varies as a function of whether significant markers are in regions of linkage disequilibrium (LD) or not, which we show from a theoretical perspective. We therefore develop two methods for quantifying the degree of association between a genomic region and outcome, both of whose power does not vary as a function of LD structure. One method uses dimension reduction to "filter" redundant information when significant LD exists in the region, while the other, called the summary-statistic test, controls for LD by scaling marker Z-statistics using knowledge of the correlation matrix of markers. An advantage of this latter test is that it does not require the original data, but only their Z-statistics from univariate regressions and an estimate of the correlation structure of markers, and we show how to modify the test to protect the type 1 error rate when the correlation structure of markers is misspecified. We apply these methods to sequence data of oral cleft and compare our results to previously proposed gene tests, in particular permutation-based ones. We evaluate the versatility of the modification of the summary-statistic test since the specification of correlation structure between markers can be inaccurate. We find a significant association in the sequence data between the 8q24 region and oral cleft using our dimension reduction approach and a borderline significant association using the summary-statistic based approach. We also implement the summary-statistic test using Z-statistics from an already-published GWAS of Chronic Obstructive Pulmonary Disorder (COPD) and correlation structure obtained from HapMap. We experiment with the modification of this test because the correlation structure is assumed imperfectly known.
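A generic sketch of combining marker-level Z-statistics into a region-level test while accounting for LD through the marker correlation matrix R is given below; the quadratic form z'R^{-1}z compared against a chi-square distribution is one standard construction and may differ in detail from the summary-statistic test proposed here. Data and the LD structure are synthetic.

```python
# Minimal sketch: region-level association test from univariate marker
# Z-statistics plus a marker correlation (LD) matrix. One generic quadratic-
# form construction; not necessarily the published summary-statistic test.
import numpy as np
from scipy import stats

def region_test(z, R, ridge=1e-6):
    """z: vector of marker Z-statistics; R: marker correlation (LD) matrix."""
    m = len(z)
    R_reg = R + ridge * np.eye(m)          # guard against a near-singular LD matrix
    q = z @ np.linalg.solve(R_reg, z)      # under the null, q ~ chi^2 with m d.o.f.
    return q, stats.chi2.sf(q, df=m)

# Synthetic example: 10 correlated markers with a modest signal at two of them.
rng = np.random.default_rng(11)
m = 10
R = 0.3 ** np.abs(np.subtract.outer(np.arange(m), np.arange(m)))   # AR(1)-like LD
L = np.linalg.cholesky(R)
z = L @ rng.normal(size=m)
z[:2] += 2.5

q, p = region_test(z, R)
print(f"Q = {q:.2f}, p = {p:.3g}")
```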
An estimation of distribution method for infrared target detection based on Copulas
NASA Astrophysics Data System (ADS)
Wang, Shuo; Zhang, Yiqun
2015-10-01
Track-before-detect (TBD) based target detection involves a hypothesis test of merit functions which measure each track as a possible target track. Its accuracy depends on the precision of the distribution of merit functions, which determines the threshold for the test. Generally, merit functions are regarded as Gaussian, and the distribution is estimated on this basis, which holds for most methods such as multiple hypothesis tracking (MHT). However, merit functions for some other methods, such as the dynamic programming algorithm (DPA), are non-Gaussian and cross-correlated. Since existing methods cannot reasonably measure this correlation, the exact distribution can hardly be estimated. If merit functions are assumed Gaussian and independent, the error between the actual distribution and its approximation may occasionally exceed 30 percent, and it grows as it propagates. Hence, in this paper, we propose a novel estimation of distribution method based on copulas, by which the distribution can be estimated precisely, with an error of less than 1 percent and no error propagation. Moreover, the estimation depends only on the form of the merit functions and the structure of the tracking algorithm, and is invariant to the measurements. Thus, the distribution can be estimated in advance, greatly reducing the demand for real-time calculation of distribution functions.
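As a loose illustration of copula-based distribution estimation (the abstract does not state which copula family is used; a Gaussian copula and the function names below are assumptions made purely for the sketch), correlated non-Gaussian merit-function samples can be mapped to normal scores, their correlation estimated, and the joint distribution then sampled:

    import numpy as np
    from scipy import stats

    def fit_gaussian_copula(samples):
        # samples: (n, d) array of correlated, possibly non-Gaussian merit values
        n, _ = samples.shape
        u = stats.rankdata(samples, axis=0) / (n + 1.0)   # empirical CDF values
        z = stats.norm.ppf(u)                             # normal scores
        return np.corrcoef(z, rowvar=False)               # copula correlation matrix

    def sample_joint(corr, marginal_ppfs, size=10000, seed=0):
        # marginal_ppfs: one inverse-CDF callable per merit function (fitted separately)
        rng = np.random.default_rng(seed)
        z = rng.multivariate_normal(np.zeros(corr.shape[0]), corr, size=size)
        u = stats.norm.cdf(z)
        return np.column_stack([ppf(u[:, j]) for j, ppf in enumerate(marginal_ppfs)])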
[Immunodiagnostic methods in lupus erythematosus disseminatus].
Storch, H; Schwenke, H; Helbig, W
1975-12-01
In 27 patients with lupus erythematosus disseminatus, determination of LE cells by the macromethod (Zimmer and Hargraves) and the micromethod (Mudrik and co-workers) was compared with the demonstration of antinuclear factors by the indirect immunofluorescence and immune enzyme techniques. The sensitivity of the two last-mentioned immunomorphological methods is somewhat higher. In these cases the titre of the antinuclear factor almost always correlates positively with the number of LE cells. For initial diagnosis and for judging the course of the disease, a morphological method cannot be dispensed with, since during an acute episode, when consumption of antinuclear factor is high, the immunological methods correlate negatively with the number of LE cells. The immune enzyme technique is recommended as an alternative to the immunofluorescence technique on account of its lower cost, the permanence of the preparations and its high sensitivity. In the micromethod, the large variation stands against the advantages of the small quantity of blood required and consistent evaluability. Investigations of the lymphocytes of patients with lupus erythematosus disseminatus by means of the lymphocyte transformation test and the determination of B cells with the direct immune peroxidase technique point to close pathogenetic connections between cellular and humoral immune reactions in this disease.
Properties of coupled-cluster equations originating in excitation sub-algebras
NASA Astrophysics Data System (ADS)
Kowalski, Karol
2018-03-01
In this paper, we discuss properties of single-reference coupled cluster (CC) equations associated with the existence of sub-algebras of excitations that allow one to represent CC equations in a hybrid fashion where the cluster amplitudes associated with these sub-algebras can be obtained by solving the corresponding eigenvalue problem. For closed-shell formulations analyzed in this paper, the hybrid representation of CC equations provides a natural way for extending active-space and seniority number concepts to provide an accurate description of electron correlation effects. Moreover, a new representation can be utilized to re-define iterative algorithms used to solve CC equations, especially for tough cases defined by the presence of strong static and dynamical correlation effects. We will also explore invariance properties associated with excitation sub-algebras to define a new class of CC approximations referred to in this paper as the sub-algebra-flow-based CC methods. We illustrate the performance of these methods on the example of ground- and excited-state calculations for commonly used small benchmark systems.
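For orientation, the standard single-reference CC ansatz and one way of writing the splitting implied by an excitation sub-algebra are sketched below in LaTeX; the precise effective operator used in the paper may differ.

    |\Psi\rangle = e^{T}|\Phi\rangle, \qquad T = T_{\mathrm{int}} + T_{\mathrm{ext}},

    \bar{H}_{\mathrm{eff}} = (P + Q_{\mathrm{int}})\, e^{-T_{\mathrm{ext}}} H\, e^{T_{\mathrm{ext}}} (P + Q_{\mathrm{int}}),
    \qquad
    \bar{H}_{\mathrm{eff}}\, e^{T_{\mathrm{int}}}|\Phi\rangle = E\, e^{T_{\mathrm{int}}}|\Phi\rangle,

where T_int collects the cluster amplitudes generated by the sub-algebra and T_ext the remaining amplitudes, so that the internal part can be obtained from an eigenvalue problem in the corresponding subspace.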
Empirical analysis of online human dynamics
NASA Astrophysics Data System (ADS)
Zhao, Zhi-Dan; Zhou, Tao
2012-06-01
Patterns of human activity have attracted increasing academic interest, since a quantitative understanding of human behavior helps to uncover the origins of many socioeconomic phenomena. This paper focuses on the behavior of Internet users. Six large-scale systems are studied in our experiments, including movie-watching in Netflix and MovieLens, transactions in eBay, bookmark-collecting in Delicious, and posting in FriendFeed and Twitter. Empirical analysis reveals some common statistical features of online human behavior: (1) the total number of a user's actions, the user's activity, and the interevent time all follow heavy-tailed distributions; (2) there is a strong positive correlation between a user's activity and the total number of the user's actions, and a significant negative correlation between a user's activity and the width of the interevent time distribution. We further study the rescaling method and show that it can to some extent eliminate the differences in statistics among users caused by their different activity levels, although its effectiveness depends on the data set.
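A minimal sketch of the rescaling idea (function name and binning are illustrative): each user's interevent times are divided by that user's mean interevent time, so that users with different activity levels can be compared on a common curve.

    import numpy as np

    def rescaled_interevent_distribution(event_times, bins=50):
        # event_times: 1-D array of one user's action timestamps
        tau = np.diff(np.sort(np.asarray(event_times, dtype=float)))
        tau = tau[tau > 0]
        # rescale by the user's mean interevent time (the inverse of the activity)
        hist, edges = np.histogram(tau / tau.mean(), bins=bins, density=True)
        return 0.5 * (edges[1:] + edges[:-1]), hist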
Spectral Properties, Generation Order Parameters, and Luminosities for Spin-powered X-Ray Pulsars
NASA Astrophysics Data System (ADS)
Wang, Wei; Zhao, Yongheng
2004-02-01
We show the spectral properties of 15 spin-powered X-ray pulsars, and the correlation between the average power-law photon index and spin-down rate. Generation order parameters (GOPs) based on polar cap models are introduced to characterize the X-ray pulsars. We calculate three definitions of generation order parameters arising from the different effects of magnetic and electric fields on photon absorption during cascade processes, and study the relations between the GOPs and spectral properties of X-ray pulsars. There exists a possible correlation between the photon index and GOP in our pulsar sample. Furthermore, we present a method stemming from the concept of GOPs to estimate the nonthermal X-ray luminosity for spin-powered pulsars. Then X-ray luminosity is calculated in the context of our polar cap accelerator model, which is consistent with most observed X-ray pulsar data. The ratio between the X-ray luminosity estimated by our method and the pulsar's spin-down power is consistent with the L_X ~ 10^-3 L_sd feature.
QSdpR: Viral quasispecies reconstruction via correlation clustering.
Barik, Somsubhra; Das, Shreepriya; Vikalo, Haris
2017-12-19
RNA viruses are characterized by high mutation rates that give rise to populations of closely related genomes, known as viral quasispecies. The underlying heterogeneity enables the quasispecies to adapt to changing conditions and proliferate over the course of an infection. Determining the genetic diversity of a virus (i.e., inferring haplotypes and their proportions in the population) is essential for understanding its mutation patterns, and for effective drug development. Here, we present QSdpR, a method and software for the reconstruction of quasispecies from short sequencing reads. The reconstruction is achieved by solving a correlation clustering problem on a read-similarity graph, and the results of the clustering are used to estimate frequencies of sub-species; the number of sub-species is determined using the pseudo-F index. Extensive tests on both synthetic datasets and experimental HIV-1 and Zika virus data demonstrate that QSdpR compares favorably to existing methods in terms of various performance metrics. Copyright © 2018 Elsevier Inc. All rights reserved.
Wideband Direction of Arrival Estimation in the Presence of Unknown Mutual Coupling
Li, Weixing; Zhang, Yue; Lin, Jianzhi; Guo, Rui; Chen, Zengping
2017-01-01
This paper investigates a subarray-based algorithm for direction of arrival (DOA) estimation with a wideband uniform linear array (ULA) in the presence of frequency-dependent mutual coupling effects. Based on the Toeplitz structure of the mutual coupling matrices, the whole array is divided into a middle subarray and an auxiliary subarray. A two-sided correlation transformation is then applied to the correlation matrix of the middle subarray instead of the whole array. In this way, the mutual coupling effects can be eliminated. Finally, the multiple signal classification (MUSIC) method is used to derive the DOAs. For the case in which blind angles exist, we refine the DOA estimation using a simple approach based on the frequency-dependent mutual coupling matrices (MCMs). The proposed method achieves high estimation accuracy without any calibration sources and has low computational complexity because no iterative processing is required. Simulation results validate the effectiveness and feasibility of the proposed algorithm. PMID:28178177
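For reference, a simplified narrowband MUSIC pseudospectrum for a ULA is sketched below; it omits the subarray division, the two-sided correlation transformation and the mutual-coupling handling described above, and the array geometry and names are illustrative.

    import numpy as np

    def music_spectrum(R, n_sources, d_over_lambda=0.5, angles=np.linspace(-90, 90, 361)):
        # R: M x M sample covariance matrix of a ULA; n_sources: assumed number of sources
        M = R.shape[0]
        _, V = np.linalg.eigh(R)              # eigenvectors, eigenvalues in ascending order
        En = V[:, :M - n_sources]             # noise subspace
        m = np.arange(M)
        p = np.empty(len(angles))
        for i, ang in enumerate(angles):
            a = np.exp(1j * 2 * np.pi * d_over_lambda * m * np.sin(np.deg2rad(ang)))
            p[i] = 1.0 / np.real(a.conj() @ En @ En.conj().T @ a)
        return angles, p                      # peaks of p indicate candidate DOAs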
de la Rosa, Laura A; Alvarez-Parrilla, Emilio; Shahidi, Fereidoon
2011-01-12
The phenolic composition and antioxidant activity of pecan kernels and shells cultivated in three regions of the state of Chihuahua, Mexico, were analyzed. High concentrations of total extractable phenolics, flavonoids, and proanthocyanidins were found in kernels, and 5-20-fold higher concentrations were found in shells. Their concentrations were significantly affected by the growing region. Antioxidant activity was evaluated by ORAC, DPPH•, HO•, and ABTS radical scavenging (TAC) methods. Antioxidant activity was strongly correlated with the concentrations of phenolic compounds. A strong correlation existed among the results obtained using these four methods. Five individual phenolic compounds were positively identified and quantified in kernels: ellagic, gallic, protocatechuic, and p-hydroxybenzoic acids and catechin. Only ellagic and gallic acids could be identified in shells. Seven phenolic compounds were tentatively identified in kernels by means of MS and UV spectral comparison, namely, protocatechuic aldehyde, (epi)gallocatechin, one gallic acid-glucose conjugate, three ellagic acid derivatives, and valoneic acid dilactone.
NASA Astrophysics Data System (ADS)
Weber, Juliane; Zachow, Christopher; Witthaut, Dirk
2018-03-01
Wind power generation exhibits a strong temporal variability, which is crucial for system integration in highly renewable power systems. Different methods exist to simulate wind power generation but they often cannot represent the crucial temporal fluctuations properly. We apply the concept of additive binary Markov chains to model a wind generation time series consisting of two states: periods of high and low wind generation. The only input parameter for this model is the empirical autocorrelation function. The two-state model is readily extended to stochastically reproduce the actual generation per period. To evaluate the additive binary Markov chain method, we introduce a coarse model of the electric power system to derive backup and storage needs. We find that the temporal correlations of wind power generation, the backup need as a function of the storage capacity, and the resting time distribution of high and low wind events for different shares of wind generation can be reconstructed.
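A rough sketch of the two-state construction, assuming a Yule-Walker-like relation between the memory function and the empirical autocovariance (one common parameterization of additive binary Markov chains; the memory length and names are illustrative):

    import numpy as np
    from scipy.linalg import solve_toeplitz

    def fit_additive_binary_markov(x, memory=48):
        # x: binary (0/1) series, e.g. hours of high/low wind generation
        x = np.asarray(x, dtype=float)
        p = x.mean()
        c = np.array([np.mean((x[:len(x) - r] - p) * (x[r:] - p)) for r in range(memory + 1)])
        # memory function from a Toeplitz (Yule-Walker-like) system on the autocovariance
        F = solve_toeplitz(c[:memory], c[1:memory + 1])
        return p, F

    def simulate(p, F, length, seed=0):
        rng = np.random.default_rng(seed)
        N = len(F)
        out = list((rng.random(N) < p).astype(float))   # arbitrary start-up history
        for _ in range(length):
            hist = np.array(out[-N:][::-1])             # most recent state first
            prob = np.clip(p + F @ (hist - p), 0.0, 1.0)
            out.append(float(rng.random() < prob))
        return np.array(out[N:])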
Patel, Chirag J
2017-01-01
Mixtures, or combinations and interactions between multiple environmental exposures, are hypothesized to be causally linked with disease and health-related phenotypes. Established and emerging molecular measurement technologies to assay the exposome, the comprehensive battery of exposures encountered from birth to death, promise a new way of identifying mixtures in disease in the epidemiological setting. In this opinion, we describe the analytic complexity and challenges in identifying mixtures associated with phenotype and disease. Existing and emerging machine-learning methods and data analytic approaches (e.g., "environment-wide association studies" [EWASs]), as well as large cohorts, may enhance possibilities to identify mixtures of correlated exposures associated with phenotypes; however, the analytic complexity of identifying mixtures is immense. If the exposome concept is realized, new analytical methods and large sample sizes will be required to ascertain how mixtures are associated with disease. The author recommends documenting prevalent correlated exposures and replicated main effects prior to identifying mixtures.
Reliability and Validity of the Perspectives of Support From God Scale
Hamilton, Jill B.; Crandell, Jamie L.; Carter, J. Kameron; Lynn, Mary R.
2010-01-01
Background Existing spiritual support scales for use with cancer survivors focus on the support believed to come from a religious community, clergy, or health care providers. Objective The objective of this study was to evaluate the reliability and validity of a new measure of spiritual support believed to come from God in older Christian African American cancer survivors. Methods The Perceived Support From God Scale was administered to 317 African American cancer survivors aged 55–89 years. Psychometric evaluation involved identifying underlying factors, conducting item analysis and estimating reliability, and obtaining evidence on the relationship to other variables or the extent to which the Perceived Support From God Scale correlates with religious involvement and depression. Results The Perceived Support From God Scale consists of 15 items in two subscales (Support From God and God’s Purpose for Me). The two subscales explained 59% of the variance. Cronbach’s α coefficients were .94 and .86 for the Support From God and God’s Purpose for Me subscales, respectively. Test–retest correlations were strong, supporting the temporal stability of the instrument. Pearson’s correlations to an existing religious involvement and beliefs scale were moderate to strong. Subscale scores on Support From God were negatively correlated to depression. Discussion Initial support for reliability and validity was demonstrated for the Perceived Support From God Scale. The scale captures a facet of spirituality not emphasized in other measures. Further research is needed to evaluate the scale with persons of other racial/ethnic groups and to explore the relationship of spirituality to other outcome measures. PMID:20216012
Rokszin, Alice; Gombköto, Péter; Berényi, Antal; Márkus, Zita; Braunitzer, Gábor; Benedek, György; Nagy, Attila
2011-10-18
Recent morphological and physiological studies have suggested a strong relationship between the suprageniculate nucleus (Sg) of the posterior thalamus and the input structure of the basal ganglia, the caudate nucleus (CN), of the feline brain. Accordingly, to clarify whether there is a real functional relationship between Sg and CN during visual information processing, we investigated the temporal relations of simultaneously recorded neuronal spike trains of these two structures, looking for significant cross-correlations between the spiking of the simultaneously recorded neurons. For the purposes of statistical analysis, we used the shuffle and jittering resampling methods. Of the 288 recorded Sg-CN neuron pairs, 26 (9.2%) showed significantly correlated spontaneous activity. Nineteen pairs (6.7%) showed correlated activity during stationary visual stimulation, and 21 pairs (7.4%) during stimulus movement. There was no overlap between the neuron pairs that showed cross-correlated spontaneous activity and the pairs that synchronized their activity during visual stimulation. Thus visual stimulation appears able to synchronize the activity of CN and Sg in some neuron pairs and to desynchronize it in others. In about half of the cases, the activation of Sg preceded the activation of CN by a few milliseconds, while in the other half, CN was activated earlier. Our results provide the first piece of evidence for the existence of a functional cooperation between Sg and CN. We argue that either a direct, bidirectional monosynaptic connection exists between these structures, or a common input composed of parallel pathways synchronizes them. Copyright © 2011 Elsevier B.V. All rights reserved.
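A bare-bones sketch of a spike-train cross-correlogram with jitter surrogates (bin and jitter widths are illustrative; the shuffle resampling also used in the study is not shown):

    import numpy as np

    def crosscorrelogram(spikes_a, spikes_b, bin_ms=1.0, window_ms=50.0):
        # spike times in ms; counts spikes of B at each lag relative to every A spike
        edges = np.arange(-window_ms, window_ms + bin_ms, bin_ms)
        lags = []
        for t in spikes_a:
            d = spikes_b - t
            lags.extend(d[(d >= -window_ms) & (d <= window_ms)])
        counts, _ = np.histogram(lags, bins=edges)
        return edges[:-1] + bin_ms / 2, counts

    def jitter_bands(spikes_a, spikes_b, n_surr=1000, jitter_ms=20.0, seed=0, **kw):
        # surrogates destroy fine timing while keeping slow rate covariations
        rng = np.random.default_rng(seed)
        surr = []
        for _ in range(n_surr):
            jb = np.sort(spikes_b + rng.uniform(-jitter_ms, jitter_ms, size=len(spikes_b)))
            surr.append(crosscorrelogram(spikes_a, jb, **kw)[1])
        return np.percentile(np.array(surr), [0.5, 99.5], axis=0)   # pointwise bands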
A neural network based reputation bootstrapping approach for service selection
NASA Astrophysics Data System (ADS)
Wu, Quanwang; Zhu, Qingsheng; Li, Peng
2015-10-01
With the concept of service-oriented computing becoming widely accepted in enterprise application integration, more and more computing resources are encapsulated as services and published online. Reputation mechanisms have been studied to establish trust in previously unknown services. One limitation of current reputation mechanisms is that they cannot assess the reputation of newly deployed services, as no record of their previous behaviour exists. Most current bootstrapping approaches merely assign default reputation values to newcomers; however, with such methods either newcomers or existing services are favoured. In this paper, we present a novel reputation bootstrapping approach in which correlations between the features and the performance of existing services are learned through an artificial neural network (ANN) and then generalised to establish a tentative reputation when evaluating new and unknown services. Reputations of services previously published by the same provider are also incorporated into the bootstrapping when available. The proposed reputation bootstrapping approach is seamlessly embedded into an existing reputation model and implemented in the extended service-oriented architecture. Finally, empirical studies of the proposed approach are presented.
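A minimal sketch of the learning step, assuming scikit-learn and hypothetical arrays X_existing (feature vectors of services with established reputations), y_existing (their reputation scores) and X_new (features of newly published services):

    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # X_existing, y_existing and X_new are hypothetical placeholders.
    # Learn the correlation between service features and observed reputation,
    # then generalise it to bootstrap a tentative reputation for newcomers.
    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0))
    model.fit(X_existing, y_existing)
    tentative_reputation = model.predict(X_new)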
Quantifying Differential Privacy under Temporal Correlations
Cao, Yang; Yoshikawa, Masatoshi; Xiao, Yonghui; Xiong, Li
2017-01-01
Differential Privacy (DP) has received increasing attention as a rigorous privacy framework. Many existing studies employ traditional DP mechanisms (e.g., the Laplace mechanism) as primitives, which assume that the data are independent, or that adversaries do not have knowledge of the data correlations. However, continuously generated data in the real world tend to be temporally correlated, and such correlations can be acquired by adversaries. In this paper, we investigate the potential privacy loss of a traditional DP mechanism under temporal correlations in the context of continuous data release. First, we model the temporal correlations using a Markov model and analyze the privacy leakage of a DP mechanism when adversaries have knowledge of such temporal correlations. Our analysis reveals that the privacy loss of a DP mechanism may accumulate and increase over time. We call this temporal privacy leakage. Second, to measure such privacy loss, we design an efficient algorithm for calculating it in polynomial time. Although the temporal privacy leakage may increase over time, we also show that its supremum may exist in some cases. Third, to bound the privacy loss, we propose mechanisms that convert any existing DP mechanism into one that protects against temporal privacy leakage. Experiments with synthetic data confirm that our approach is efficient and effective. PMID:28883711
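For reference, the traditional (temporally unaware) Laplace mechanism that serves as the primitive here can be sketched as follows; under temporal correlations the cumulative leakage of repeated releases can exceed the per-release epsilon, which is the effect the paper quantifies.

    import numpy as np

    def laplace_mechanism(true_value, sensitivity, epsilon, seed=None):
        # Standard Laplace mechanism: add noise with scale = sensitivity / epsilon.
        # Its per-release epsilon-DP guarantee relies on the independence assumption
        # discussed above; temporal correlations can inflate the effective leakage.
        rng = np.random.default_rng(seed)
        return true_value + rng.laplace(scale=sensitivity / epsilon)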
NASA Astrophysics Data System (ADS)
Rohrlich, Daniel
Y. Aharonov and A. Shimony both conjectured that two axioms - relativistic causality ("no superluminal signalling") and nonlocality - so nearly contradict each other that only quantum mechanics reconciles them. Can we indeed derive quantum mechanics, at least in part, from these two axioms? No: "PR-box" correlations show that quantum correlations are not the most nonlocal correlations consistent with relativistic causality. Here we replace "nonlocality" with "retrocausality" and supplement the axioms of relativistic causality and retrocausality with a natural and minimal third axiom: the existence of a classical limit, in which macroscopic observables commute. That is, just as quantum mechanics has a classical limit, so must any generalization of quantum mechanics. In this limit, PR-box correlations violate relativistic causality. Generalized to all stronger-than-quantum bipartite correlations, this result is a derivation of Tsirelson's bound (a theorem of quantum mechanics) from the three axioms of relativistic causality, retrocausality and the existence of a classical limit. Although the derivation does not assume quantum mechanics, it points to the Hilbert space structure that underlies quantum correlations. I thank the John Templeton Foundation (Project ID 43297) and the Israel Science Foundation (Grant No. 1190/13) for support.
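For orientation, the bounds being contrasted can be written in LaTeX in terms of the CHSH combination of bipartite correlations:

    S = E(a,b) + E(a,b') + E(a',b) - E(a',b'), \qquad
    |S| \le 2 \ \text{(local)}, \quad |S| \le 2\sqrt{2} \ \text{(Tsirelson, quantum)}, \quad |S| = 4 \ \text{(PR box)}.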
Gaffron, Philine; Niemeier, Deb
2015-01-01
It has been shown that the location of schools near heavily trafficked roads can have detrimental effects on the health of children attending those schools. It is therefore desirable to screen both existing school locations and potential new school sites to assess either the need for remedial measures or suitability for the intended use. Current screening tools and public guidance on school siting are either too coarse in their spatial resolution for assessing individual sites or are highly resource intensive in their execution (e.g., through dispersion modeling). We propose a new method to help bridge the gap between these two approaches. Using this method, we also examine the public K-12 schools in the Sacramento Area Council of Governments Region, California (USA) from an environmental justice perspective. We find that PM2.5 emissions from road traffic affecting a school site are significantly positively correlated with the following metrics: percent share of Black, Hispanic and multi-ethnic students, percent share of students eligible for subsidized meals. The emissions metric correlates negatively with the schools’ Academic Performance Index, the share of White students and average parental education levels. Our PM2.5 metric also correlates with the traffic related, census tract level screening indicators from the California Communities Environmental Health Screening Tool and the tool’s tract level rate of asthma related emergency department visits. PMID:25679341
Interrelationship and limitations of conventional radiographic assessments of skeletal maturation.
Shim, Jocelyne; Bogowicz, Paul; Heo, Giseon; Lagravère, Manuel O
2012-06-01
Assessments of skeletal maturation (ASM) are used by clinicians to optimize treatments for each patient. This study examines the interrelationship between and limitations of hand-wrist and cervical vertebrae maturation methods in adolescent patients. Radiographs (hand-wrist and lateral cephalometric) were obtained from patients (n=62, 11 to 17 years of age) at two time periods (T1/T2) with time intervals of 9.75 to 16.50 months. Radiographs were scored using cervical vertebral maturation staging (CS) and Fishman's skeletal maturation indices (SMI). Functional data analysis was used to visually assess maturational changes of the cervical vertebrae. Both SMI and CS increased over the period of observation. Age was moderately correlated with SMI (0.707/0.651 at T1/T2) and mildly correlated with CS (0.431/0.314 at T1/T2). There was some evidence of gender variability in SMI. The correlations between SMI and CS were 0.513 and 0.372 at T1 and T2, respectively. Functional data analysis illustrated the difficulty in differentiating contiguous cervical stages. Discrepancies exist between the two scoring methods. Further studies are needed to overcome the difficulties encountered with CS. Clinicians are advised to use ASM methods with caution in adolescent patients given the aforementioned discrepancies. Separate references for boys and girls are not required. Copyright © 2012. Published by Elsevier Masson SAS.
Tomlinson, Mathew James; Pooley, Karen; Simpson, Tracey; Newton, Thomas; Hopkisson, James; Jayaprakasan, Kannamanadias; Jayaprakasan, Rajisha; Naeem, Asad; Pridmore, Tony
2010-04-01
To determine the accuracy and precision of a novel computer-assisted sperm analysis (CASA) system by comparison with existing recommended manual methods. Prospective study using comparative measurements of sperm concentration and motility on latex beads and immotile and motile sperm. Tertiary referral fertility center with strong academic links. Sperm donors and male partners of couples attending for fertility investigations. None. Achievement of Accubead target value for high and low concentration suspensions. Repeatability as demonstrated by coefficients of variation and intraclass correlation coefficients. Correlation and limits of agreement between CASA and manual methods. The CASA measurements of latex beads and sperm concentrations demonstrated a high level of accuracy and repeatability. Repeated Accubead measurements attained the required target value (mean difference from target of 2.61% and 3.71% for high- and low-concentration suspensions, respectively) and were highly reproducible. Limits of agreement analysis suggested that manual and CASA counts compared directly could be deemed to be interchangeable. Manual and CASA motility measurements were highly correlated for grades a, b, and d but could not be deemed to be interchangeable, and manual motility estimates were consistently higher for motile sperm. The novel CASA system was able to provide semen quality measurements for sperm concentration and motility measurements which were at least as reliable as current manual methods. Copyright 2010 American Society for Reproductive Medicine. Published by Elsevier Inc. All rights reserved.
Nayak, Ullal Anand; Sharma, Reena; Kashyap, Nilotpol; Prajapati, Deepesh; Kappadi, Damodar; Wadhwa, Saakshe; Gandotra, Shina; Yadav, Poonam
2016-09-01
Chewing side preference (CSP) is said to occur when mastication occurs exclusively, consistently or predominantly on the same side of the jaw. It can be assessed by a direct method (visual observation) or by indirect, instrument-based methods such as cinematography, kinetography and computerized electromyography. The present study aimed to evaluate the prevalence of CSP in the deciduous, mixed and permanent dentitions and its association with dental caries. In a cross-sectional observational study, 240 school-going children aged 3 to 18 years were randomly allocated to three experimental groups according to the deciduous, mixed and permanent dentition periods. The existence of a CSP was determined by a direct method, asking the children to chew on a piece of gum (Trident sugarless). The Mann-Whitney U-test was used to compare CSP, including between boys and girls. Spearman's correlation coefficient was used to correlate CSP and dental caries within and across the three study groups. CSP was observed in 69%, 83% and 76% of children with primary, mixed and permanent dentition, respectively (p>0.05). There was no statistically significant association between the presence of CSP and dental caries in the three study groups, and there was weak or no correlation between gender and the distribution of CSP or between the presence of CSP and dental caries.
Peyrard, N; Dieckmann, U; Franc, A
2008-05-01
Models of infectious diseases are characterized by a phase transition between extinction and persistence. A challenge in contemporary epidemiology is to understand how the geometry of a host's interaction network influences disease dynamics close to the critical point of such a transition. Here we address this challenge with the help of moment closures. Traditional moment closures, however, do not provide satisfactory predictions close to such critical points. We therefore introduce a new method for incorporating longer-range correlations into existing closures. Our method is technically simple, remains computationally tractable and significantly improves the approximation's performance. Our extended closures thus provide an innovative tool for quantifying the influence of interaction networks on spatially or socially structured disease dynamics. In particular, we examine the effects of a network's clustering coefficient, as well as of new geometrical measures, such as a network's square clustering coefficients. We compare the relative performance of different closures from the literature, with or without our long-range extension. In this way, we demonstrate that the normalized version of the Bethe approximation-extended to incorporate long-range correlations according to our method-is an especially good candidate for studying influences of network structure. Our numerical results highlight the importance of the clustering coefficient and the square clustering coefficient for predicting disease dynamics at low and intermediate values of transmission rate, and demonstrate the significance of path redundancy for disease persistence.
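For orientation, the ordinary (unclustered) pairwise moment closure reads, in LaTeX,

    [ABC] \approx \frac{[AB]\,[BC]}{[B]},

where [X], [XY] and [XYZ] denote the densities of single sites, pairs and triples in the given states; the long-range extension proposed in the paper modifies such closures so that they carry correlation information beyond nearest-neighbour pairs.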
Structural features based genome-wide characterization and prediction of nucleosome organization
2012-01-01
Background Nucleosome distribution along chromatin dictates genomic DNA accessibility and thus profoundly influences gene expression. However, the underlying mechanism of nucleosome formation remains elusive. Here, taking a structural perspective, we systematically explored nucleosome formation potential of genomic sequences and the effect on chromatin organization and gene expression in S. cerevisiae. Results We analyzed twelve structural features related to flexibility, curvature and energy of DNA sequences. The results showed that some structural features such as DNA denaturation, DNA-bending stiffness, Stacking energy, Z-DNA, Propeller twist and free energy, were highly correlated with in vitro and in vivo nucleosome occupancy. Specifically, they can be classified into two classes, one positively and the other negatively correlated with nucleosome occupancy. These two kinds of structural features facilitated nucleosome binding in centromere regions and repressed nucleosome formation in the promoter regions of protein-coding genes to mediate transcriptional regulation. Based on these analyses, we integrated all twelve structural features in a model to predict more accurately nucleosome occupancy in vivo than the existing methods that mainly depend on sequence compositional features. Furthermore, we developed a novel approach, named DLaNe, that located nucleosomes by detecting peaks of structural profiles, and built a meta predictor to integrate information from different structural features. As a comparison, we also constructed a hidden Markov model (HMM) to locate nucleosomes based on the profiles of these structural features. The result showed that the meta DLaNe and HMM-based method performed better than the existing methods, demonstrating the power of these structural features in predicting nucleosome positions. Conclusions Our analysis revealed that DNA structures significantly contribute to nucleosome organization and influence chromatin structure and gene expression regulation. The results indicated that our proposed methods are effective in predicting nucleosome occupancy and positions and that these structural features are highly predictive of nucleosome organization. The implementation of our DLaNe method based on structural features is available online. PMID:22449207
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ford, Sean R.; Walter, William R.
Seismic waveform correlation offers the prospect of greatly reducing event detection thresholds when compared with more conventional processing methods. Correlation is applicable for seismic events that in some sense repeat, that is they have very similar waveforms. A number of recent studies have shown that correlated seismic signals may form a significant fraction of seismicity at regional distances. For the particular case of multiple nuclear explosions at the same test site, regional distance correlation also allows very precise relative location measurements and could offer the potential to lower thresholds when multiple events exist. Using the Comprehensive Nuclear-Test-Ban Treaty (CTBT) International Monitoring System (IMS) seismic array at Matsushiro, Japan (MJAR), Gibbons and Ringdal (2012) were able to create a multichannel correlation detector with a very low false alarm rate and a threshold below magnitude 3.0. They did this using the 2006 or 2009 Democratic People’s Republic of Korea (DPRK) nuclear explosion as a template to search through a data stream from the same station to find a match via waveform correlation. In this paper, we extend the work of Gibbons and Ringdal (2012) and measure the correlation detection threshold at several other IMS arrays. We use this to address three main points. First, we show the IMS array station at Mina, Nevada (NVAR), which is closest to the Nevada National Security Site (NNSS), is able to detect a chemical explosion that is well under 1 ton with the right template. Second, we examine the two IMS arrays closest to the North Korean (DPRK) test site (at Ussuriysk, Russian Federation [USRK] and Wonju, Republic of Korea [KSRS]) to show that similarly low thresholds are possible when the right templates exist. We also extend the work of Schaff et al. (2012) and measure the correlation detection threshold at the nearest Global Seismic Network (GSN) three-component station (MDJ) at Mudanjiang, Heilongjiang Province, China, from the New China Digital Seismograph Network (IC). To conclude, we use these results to explore the recent claim by Zhang and Wen (2015) that the DPRK conducted “…a low-yield nuclear test…” on 12 May 2010.
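A single-channel sketch of the underlying normalized correlation detector (a multichannel detector such as the one used on MJAR would stack the correlation traces across array elements before thresholding; the names and threshold are illustrative):

    import numpy as np

    def correlation_detector(template, stream, threshold=0.7):
        # normalized cross-correlation of a template event against continuous data
        t = (template - template.mean()) / (template.std() * len(template))
        n = len(template)
        cc = np.empty(len(stream) - n + 1)
        for i in range(len(cc)):
            w = stream[i:i + n]
            cc[i] = np.sum(t * (w - w.mean())) / (w.std() + 1e-12)
        return np.flatnonzero(cc >= threshold), cc   # candidate detections and the CC trace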
Correlated Event-by-Event Fluctuations of Flow Harmonics in Pb-Pb Collisions at √s_NN = 2.76 TeV
NASA Astrophysics Data System (ADS)
Adam, J.; Adamová, D.; Aggarwal, M. M.; Aglieri Rinella, G.; Agnello, M.; Agrawal, N.; Ahammed, Z.; Ahmad, S.; Ahn, S. U.; Aiola, S.; Akindinov, A.; Alam, S. N.; Albuquerque, D. S. D.; Aleksandrov, D.; Alessandro, B.; Alexandre, D.; Alfaro Molina, R.; Alici, A.; Alkin, A.; Almaraz, J. R. M.; Alme, J.; Alt, T.; Altinpinar, S.; Altsybeev, I.; Alves Garcia Prado, C.; Andrei, C.; Andronic, A.; Anguelov, V.; Antičić, T.; Antinori, F.; Antonioli, P.; Aphecetche, L.; Appelshäuser, H.; Arcelli, S.; Arnaldi, R.; Arnold, O. W.; Arsene, I. C.; Arslandok, M.; Audurier, B.; Augustinus, A.; Averbeck, R.; Azmi, M. D.; Badalà, A.; Baek, Y. W.; Bagnasco, S.; Bailhache, R.; Bala, R.; Balasubramanian, S.; Baldisseri, A.; Baral, R. C.; Barbano, A. M.; Barbera, R.; Barile, F.; Barnaföldi, G. G.; Barnby, L. S.; Barret, V.; Bartalini, P.; Barth, K.; Bartke, J.; Bartsch, E.; Basile, M.; Bastid, N.; Basu, S.; Bathen, B.; Batigne, G.; Batista Camejo, A.; Batyunya, B.; Batzing, P. C.; Bearden, I. G.; Beck, H.; Bedda, C.; Behera, N. K.; Belikov, I.; Bellini, F.; Bello Martinez, H.; Bellwied, R.; Belmont, R.; Belmont-Moreno, E.; Beltran, L. G. E.; Belyaev, V.; Bencedi, G.; Beole, S.; Berceanu, I.; Bercuci, A.; Berdnikov, Y.; Berenyi, D.; Bertens, R. A.; Berzano, D.; Betev, L.; Bhasin, A.; Bhat, I. R.; Bhati, A. K.; Bhattacharjee, B.; Bhom, J.; Bianchi, L.; Bianchi, N.; Bianchin, C.; Bielčík, J.; Bielčíková, J.; Bilandzic, A.; Biro, G.; Biswas, R.; Biswas, S.; Bjelogrlic, S.; Blair, J. T.; Blau, D.; Blume, C.; Bock, F.; Bogdanov, A.; Bøggild, H.; Boldizsár, L.; Bombara, M.; Book, J.; Borel, H.; Borissov, A.; Borri, M.; Bossú, F.; Botta, E.; Bourjau, C.; Braun-Munzinger, P.; Bregant, M.; Breitner, T.; Broker, T. A.; Browning, T. A.; Broz, M.; Brucken, E. J.; Bruna, E.; Bruno, G. E.; Budnikov, D.; Buesching, H.; Bufalino, S.; Buncic, P.; Busch, O.; Buthelezi, Z.; Butt, J. B.; Buxton, J. T.; Cabala, J.; Caffarri, D.; Cai, X.; Caines, H.; Calero Diaz, L.; Caliva, A.; Calvo Villar, E.; Camerini, P.; Carena, F.; Carena, W.; Carnesecchi, F.; Castillo Castellanos, J.; Castro, A. J.; Casula, E. A. R.; Ceballos Sanchez, C.; Cepila, J.; Cerello, P.; Cerkala, J.; Chang, B.; Chapeland, S.; Chartier, M.; Charvet, J. L.; Chattopadhyay, S.; Chattopadhyay, S.; Chauvin, A.; Chelnokov, V.; Cherney, M.; Cheshkov, C.; Cheynis, B.; Chibante Barroso, V.; Chinellato, D. D.; Cho, S.; Chochula, P.; Choi, K.; Chojnacki, M.; Choudhury, S.; Christakoglou, P.; Christensen, C. H.; Christiansen, P.; Chujo, T.; Chung, S. U.; Cicalo, C.; Cifarelli, L.; Cindolo, F.; Cleymans, J.; Colamaria, F.; Colella, D.; Collu, A.; Colocci, M.; Conesa Balbastre, G.; Conesa Del Valle, Z.; Connors, M. E.; Contreras, J. G.; Cormier, T. M.; Corrales Morales, Y.; Cortés Maldonado, I.; Cortese, P.; Cosentino, M. R.; Costa, F.; Crochet, P.; Cruz Albino, R.; Cuautle, E.; Cunqueiro, L.; Dahms, T.; Dainese, A.; Danisch, M. C.; Danu, A.; Das, D.; Das, I.; Das, S.; Dash, A.; Dash, S.; de, S.; de Caro, A.; de Cataldo, G.; de Conti, C.; de Cuveland, J.; de Falco, A.; de Gruttola, D.; De Marco, N.; de Pasquale, S.; de Souza, R. D.; Deisting, A.; Deloff, A.; Dénes, E.; Deplano, C.; Dhankher, P.; di Bari, D.; di Mauro, A.; di Nezza, P.; di Ruzza, B.; Diaz Corchero, M. A.; Dietel, T.; Dillenseger, P.; Divià, R.; Djuvsland, Ø.; Dobrin, A.; Domenicis Gimenez, D.; Dönigus, B.; Dordic, O.; Drozhzhova, T.; Dubey, A. K.; Dubla, A.; Ducroux, L.; Dupieux, P.; Ehlers, R. 
J.; Elia, D.; Endress, E.; Engel, H.; Epple, E.; Erazmus, B.; Erdemir, I.; Erhardt, F.; Espagnon, B.; Estienne, M.; Esumi, S.; Eum, J.; Evans, D.; Evdokimov, S.; Eyyubova, G.; Fabbietti, L.; Fabris, D.; Faivre, J.; Fantoni, A.; Fasel, M.; Feldkamp, L.; Feliciello, A.; Feofilov, G.; Ferencei, J.; Fernández Téllez, A.; Ferreiro, E. G.; Ferretti, A.; Festanti, A.; Feuillard, V. J. G.; Figiel, J.; Figueredo, M. A. S.; Filchagin, S.; Finogeev, D.; Fionda, F. M.; Fiore, E. M.; Fleck, M. G.; Floris, M.; Foertsch, S.; Foka, P.; Fokin, S.; Fragiacomo, E.; Francescon, A.; Francisco, A.; Frankenfeld, U.; Fronze, G. G.; Fuchs, U.; Furget, C.; Furs, A.; Fusco Girard, M.; Gaardhøje, J. J.; Gagliardi, M.; Gago, A. M.; Gajdosova, K.; Gallio, M.; Galvan, C. D.; Gangadharan, D. R.; Ganoti, P.; Gao, C.; Garabatos, C.; Garcia-Solis, E.; Gargiulo, C.; Gasik, P.; Gauger, E. F.; Germain, M.; Gheata, M.; Ghosh, P.; Ghosh, S. K.; Gianotti, P.; Giubellino, P.; Giubilato, P.; Gladysz-Dziadus, E.; Glässel, P.; Goméz Coral, D. M.; Gomez Ramirez, A.; Gonzalez, A. S.; Gonzalez, V.; González-Zamora, P.; Gorbunov, S.; Görlich, L.; Gotovac, S.; Grabski, V.; Grachov, O. A.; Graczykowski, L. K.; Graham, K. L.; Grelli, A.; Grigoras, A.; Grigoras, C.; Grigoriev, V.; Grigoryan, A.; Grigoryan, S.; Grinyov, B.; Grion, N.; Gronefeld, J. M.; Grosse-Oetringhaus, J. F.; Grosso, R.; Gruber, L.; Guber, F.; Guernane, R.; Guerzoni, B.; Gulbrandsen, K.; Gunji, T.; Gupta, A.; Gupta, R.; Haake, R.; Haaland, Ø.; Hadjidakis, C.; Haiduc, M.; Hamagaki, H.; Hamar, G.; Hamon, J. C.; Harris, J. W.; Harton, A.; Hatzifotiadou, D.; Hayashi, S.; Heckel, S. T.; Hellbär, E.; Helstrup, H.; Herghelegiu, A.; Herrera Corral, G.; Hess, B. A.; Hetland, K. F.; Hillemanns, H.; Hippolyte, B.; Horak, D.; Hosokawa, R.; Hristov, P.; Hughes, C.; Humanic, T. J.; Hussain, N.; Hussain, T.; Hutter, D.; Hwang, D. S.; Ilkaev, R.; Inaba, M.; Incani, E.; Ippolitov, M.; Irfan, M.; Ivanov, M.; Ivanov, V.; Izucheev, V.; Jacak, B.; Jacazio, N.; Jacobs, P. M.; Jadhav, M. B.; Jadlovska, S.; Jadlovsky, J.; Jahnke, C.; Jakubowska, M. J.; Jang, H. J.; Janik, M. A.; Jayarathna, P. H. S. Y.; Jena, C.; Jena, S.; Jimenez Bustamante, R. T.; Jones, P. G.; Jusko, A.; Kalinak, P.; Kalweit, A.; Kamin, J.; Kang, J. H.; Kaplin, V.; Kar, S.; Karasu Uysal, A.; Karavichev, O.; Karavicheva, T.; Karayan, L.; Karpechev, E.; Kebschull, U.; Keidel, R.; Keijdener, D. L. D.; Keil, M.; Mohisin Khan, M.; Khan, P.; Khan, S. A.; Khanzadeev, A.; Kharlov, Y.; Kileng, B.; Kim, D. W.; Kim, D. J.; Kim, D.; Kim, H.; Kim, J. S.; Kim, J.; Kim, M.; Kim, S.; Kim, T.; Kirsch, S.; Kisel, I.; Kiselev, S.; Kisiel, A.; Kiss, G.; Klay, J. L.; Klein, C.; Klein, J.; Klein-Bösing, C.; Klewin, S.; Kluge, A.; Knichel, M. L.; Knospe, A. G.; Kobdaj, C.; Kofarago, M.; Kollegger, T.; Kolojvari, A.; Kondratiev, V.; Kondratyeva, N.; Kondratyuk, E.; Konevskikh, A.; Kopcik, M.; Kour, M.; Kouzinopoulos, C.; Kovalenko, O.; Kovalenko, V.; Kowalski, M.; Koyithatta Meethaleveedu, G.; Králik, I.; Kravčáková, A.; Krivda, M.; Krizek, F.; Kryshen, E.; Krzewicki, M.; Kubera, A. M.; Kučera, V.; Kuhn, C.; Kuijer, P. G.; Kumar, A.; Kumar, J.; Kumar, L.; Kumar, S.; Kurashvili, P.; Kurepin, A.; Kurepin, A. B.; Kuryakin, A.; Kweon, M. J.; Kwon, Y.; La Pointe, S. L.; La Rocca, P.; Ladron de Guevara, P.; Lagana Fernandes, C.; Lakomov, I.; Langoy, R.; Lapidus, K.; Lara, C.; Lardeux, A.; Lattuca, A.; Laudi, E.; Lea, R.; Leardini, L.; Lee, G. R.; Lee, S.; Lehas, F.; Lehner, S.; Lemmon, R. 
C.; Lenti, V.; Leogrande, E.; León Monzón, I.; León Vargas, H.; Leoncino, M.; Lévai, P.; Li, S.; Li, X.; Lien, J.; Lietava, R.; Lindal, S.; Lindenstruth, V.; Lippmann, C.; Lisa, M. A.; Ljunggren, H. M.; Lodato, D. F.; Loenne, P. I.; Loginov, V.; Loizides, C.; Lopez, X.; López Torres, E.; Lowe, A.; Luettig, P.; Lunardon, M.; Luparello, G.; Lutz, T. H.; Maevskaya, A.; Mager, M.; Mahajan, S.; Mahmood, S. M.; Maire, A.; Majka, R. D.; Malaev, M.; Maldonado Cervantes, I.; Malinina, L.; Mal'Kevich, D.; Malzacher, P.; Mamonov, A.; Manko, V.; Manso, F.; Manzari, V.; Marchisone, M.; Mareš, J.; Margagliotti, G. V.; Margotti, A.; Margutti, J.; Marín, A.; Markert, C.; Marquard, M.; Martin, N. A.; Martin Blanco, J.; Martinengo, P.; Martínez, M. I.; Martínez García, G.; Martinez Pedreira, M.; Mas, A.; Masciocchi, S.; Masera, M.; Masoni, A.; Mastroserio, A.; Matyja, A.; Mayer, C.; Mazer, J.; Mazzoni, M. A.; McDonald, D.; Meddi, F.; Melikyan, Y.; Menchaca-Rocha, A.; Meninno, E.; Mercado Pérez, J.; Meres, M.; Miake, Y.; Mieskolainen, M. M.; Mikhaylov, K.; Milano, L.; Milosevic, J.; Mischke, A.; Mishra, A. N.; Miśkowiec, D.; Mitra, J.; Mitu, C. M.; Mohammadi, N.; Mohanty, B.; Molnar, L.; Montaño Zetina, L.; Montes, E.; Moreira de Godoy, D. A.; Moreno, L. A. P.; Moretto, S.; Morreale, A.; Morsch, A.; Muccifora, V.; Mudnic, E.; Mühlheim, D.; Muhuri, S.; Mukherjee, M.; Mulligan, J. D.; Munhoz, M. G.; Münning, K.; Munzer, R. H.; Murakami, H.; Murray, S.; Musa, L.; Musinsky, J.; Naik, B.; Nair, R.; Nandi, B. K.; Nania, R.; Nappi, E.; Naru, M. U.; Natal da Luz, H.; Nattrass, C.; Navarro, S. R.; Nayak, K.; Nayak, R.; Nayak, T. K.; Nazarenko, S.; Nedosekin, A.; Nellen, L.; Ng, F.; Nicassio, M.; Niculescu, M.; Niedziela, J.; Nielsen, B. S.; Nikolaev, S.; Nikulin, S.; Nikulin, V.; Noferini, F.; Nomokonov, P.; Nooren, G.; Noris, J. C. C.; Norman, J.; Nyanin, A.; Nystrand, J.; Oeschler, H.; Oh, S.; Oh, S. K.; Ohlson, A.; Okatan, A.; Okubo, T.; Oleniacz, J.; Oliveira da Silva, A. C.; Oliver, M. H.; Onderwaater, J.; Oppedisano, C.; Orava, R.; Oravec, M.; Ortiz Velasquez, A.; Oskarsson, A.; Otwinowski, J.; Oyama, K.; Ozdemir, M.; Pachmayer, Y.; Pagano, D.; Pagano, P.; Paić, G.; Pal, S. K.; Pan, J.; Pandey, A. K.; Papikyan, V.; Pappalardo, G. S.; Pareek, P.; Park, W. J.; Parmar, S.; Passfeld, A.; Paticchio, V.; Patra, R. N.; Paul, B.; Pei, H.; Peitzmann, T.; Peng, X.; Pereira da Costa, H.; Peresunko, D.; Perez Lezama, E.; Peskov, V.; Pestov, Y.; Petráček, V.; Petrov, V.; Petrovici, M.; Petta, C.; Piano, S.; Pikna, M.; Pillot, P.; Pimentel, L. O. D. L.; Pinazza, O.; Pinsky, L.; Piyarathna, D. B.; Płoskoń, M.; Planinic, M.; Pluta, J.; Pochybova, S.; Podesta-Lerma, P. L. M.; Poghosyan, M. G.; Polichtchouk, B.; Poljak, N.; Poonsawat, W.; Pop, A.; Poppenborg, H.; Porteboeuf-Houssais, S.; Porter, J.; Pospisil, J.; Prasad, S. K.; Preghenella, R.; Prino, F.; Pruneau, C. A.; Pshenichnov, I.; Puccio, M.; Puddu, G.; Pujahari, P.; Punin, V.; Putschke, J.; Qvigstad, H.; Rachevski, A.; Raha, S.; Rajput, S.; Rak, J.; Rakotozafindrabe, A.; Ramello, L.; Rami, F.; Raniwala, R.; Raniwala, S.; Räsänen, S. S.; Rascanu, B. T.; Rathee, D.; Read, K. F.; Redlich, K.; Reed, R. J.; Rehman, A.; Reichelt, P.; Reidt, F.; Ren, X.; Renfordt, R.; Reolon, A. R.; Reshetin, A.; Reygers, K.; Riabov, V.; Ricci, R. 
A.; Richert, T.; Richter, M.; Riedler, P.; Riegler, W.; Riggi, F.; Ristea, C.; Rocco, E.; Rodríguez Cahuantzi, M.; Rodriguez Manso, A.; Røed, K.; Rogochaya, E.; Rohr, D.; Röhrich, D.; Ronchetti, F.; Ronflette, L.; Rosnet, P.; Rossi, A.; Roukoutakis, F.; Roy, A.; Roy, C.; Roy, P.; Rubio Montero, A. J.; Rui, R.; Russo, R.; Ryabinkin, E.; Ryabov, Y.; Rybicki, A.; Saarinen, S.; Sadhu, S.; Sadovsky, S.; Šafařík, K.; Sahlmuller, B.; Sahoo, P.; Sahoo, R.; Sahoo, S.; Sahu, P. K.; Saini, J.; Sakai, S.; Saleh, M. A.; Salzwedel, J.; Sambyal, S.; Samsonov, V.; Šándor, L.; Sandoval, A.; Sano, M.; Sarkar, D.; Sarkar, N.; Sarma, P.; Scapparone, E.; Scarlassara, F.; Schiaua, C.; Schicker, R.; Schmidt, C.; Schmidt, H. R.; Schmidt, M.; Schuchmann, S.; Schukraft, J.; Schulc, M.; Schutz, Y.; Schwarz, K.; Schweda, K.; Scioli, G.; Scomparin, E.; Scott, R.; Šefčík, M.; Seger, J. E.; Sekiguchi, Y.; Sekihata, D.; Selyuzhenkov, I.; Senosi, K.; Senyukov, S.; Serradilla, E.; Sevcenco, A.; Shabanov, A.; Shabetai, A.; Shadura, O.; Shahoyan, R.; Shahzad, M. I.; Shangaraev, A.; Sharma, A.; Sharma, M.; Sharma, M.; Sharma, N.; Sheikh, A. I.; Shigaki, K.; Shou, Q.; Shtejer, K.; Sibiriak, Y.; Siddhanta, S.; Sielewicz, K. M.; Siemiarczuk, T.; Silvermyr, D.; Silvestre, C.; Simatovic, G.; Simonetti, G.; Singaraju, R.; Singh, R.; Singhal, V.; Sinha, T.; Sitar, B.; Sitta, M.; Skaali, T. B.; Slupecki, M.; Smirnov, N.; Snellings, R. J. M.; Snellman, T. W.; Song, J.; Song, M.; Song, Z.; Soramel, F.; Sorensen, S.; Sozzi, F.; Spacek, M.; Spiriti, E.; Sputowska, I.; Spyropoulou-Stassinaki, M.; Stachel, J.; Stan, I.; Stankus, P.; Stenlund, E.; Steyn, G.; Stiller, J. H.; Stocco, D.; Strmen, P.; Suaide, A. A. P.; Sugitate, T.; Suire, C.; Suleymanov, M.; Suljic, M.; Sultanov, R.; Šumbera, M.; Sumowidagdo, S.; Szabo, A.; Szarka, I.; Szczepankiewicz, A.; Szymanski, M.; Tabassam, U.; Takahashi, J.; Tambave, G. J.; Tanaka, N.; Tarhini, M.; Tariq, M.; Tarzila, M. G.; Tauro, A.; Tejeda Muñoz, G.; Telesca, A.; Terasaki, K.; Terrevoli, C.; Teyssier, B.; Thäder, J.; Thakur, D.; Thomas, D.; Tieulent, R.; Tikhonov, A.; Timmins, A. R.; Toia, A.; Trogolo, S.; Trombetta, G.; Trubnikov, V.; Trzaska, W. H.; Tsuji, T.; Tumkin, A.; Turrisi, R.; Tveter, T. S.; Ullaland, K.; Uras, A.; Usai, G. L.; Utrobicic, A.; Vala, M.; Valencia Palomo, L.; Vallero, S.; van der Maarel, J.; van Hoorne, J. W.; van Leeuwen, M.; Vanat, T.; Vande Vyvre, P.; Varga, D.; Vargas, A.; Vargyas, M.; Varma, R.; Vasileiou, M.; Vasiliev, A.; Vauthier, A.; Vázquez Doce, O.; Vechernin, V.; Veen, A. M.; Veldhoen, M.; Velure, A.; Vercellin, E.; Vergara Limón, S.; Vernet, R.; Verweij, M.; Vickovic, L.; Viinikainen, J.; Vilakazi, Z.; Villalobos Baillie, O.; Villatoro Tello, A.; Vinogradov, A.; Vinogradov, L.; Vinogradov, Y.; Virgili, T.; Vislavicius, V.; Viyogi, Y. P.; Vodopyanov, A.; Völkl, M. A.; Voloshin, K.; Voloshin, S. A.; Volpe, G.; von Haller, B.; Vorobyev, I.; Vranic, D.; Vrláková, J.; Vulpescu, B.; Wagner, B.; Wagner, J.; Wang, H.; Wang, M.; Watanabe, D.; Watanabe, Y.; Weber, M.; Weber, S. G.; Weiser, D. F.; Wessels, J. P.; Westerhoff, U.; Whitehead, A. M.; Wiechula, J.; Wikne, J.; Wilk, G.; Wilkinson, J.; Williams, M. C. S.; Windelband, B.; Winn, M.; Yang, P.; Yano, S.; Yasin, Z.; Yin, Z.; Yokoyama, H.; Yoo, I.-K.; Yoon, J. H.; Yurchenko, V.; Zaborowska, A.; Zaccolo, V.; Zaman, A.; Zampolli, C.; Zanoli, H. J. C.; Zaporozhets, S.; Zardoshti, N.; Zarochentsev, A.; Závada, P.; Zaviyalov, N.; Zbroszczyk, H.; Zgura, I. 
S.; Zhalov, M.; Zhang, H.; Zhang, X.; Zhang, Y.; Zhang, C.; Zhang, Z.; Zhao, C.; Zhigareva, N.; Zhou, D.; Zhou, Y.; Zhou, Z.; Zhu, H.; Zhu, J.; Zichichi, A.; Zimmermann, A.; Zimmermann, M. B.; Zinovjev, G.; Zyzak, M.; Alice Collaboration
2016-10-01
We report the measurements of correlations between event-by-event fluctuations of amplitudes of anisotropic flow harmonics in nucleus-nucleus collisions, obtained for the first time using a new analysis method based on multiparticle cumulants in mixed harmonics. This novel method is robust against systematic biases originating from nonflow effects and by construction any dependence on symmetry planes is eliminated. We demonstrate that correlations of flow harmonics exhibit a better sensitivity to medium properties than the individual flow harmonics. The new measurements are performed in Pb-Pb collisions at the center-of-mass energy per nucleon pair of √s_NN = 2.76 TeV by the ALICE experiment at the Large Hadron Collider. The centrality dependence of the correlation between event-by-event fluctuations of the elliptic v2 and quadrangular v4 flow harmonics, as well as of the anticorrelation between v2 and triangular v3 flow harmonics, is presented. The results cover two different regimes of the initial state configurations: geometry dominated (in midcentral collisions) and fluctuation dominated (in the most central collisions). Comparisons are made to predictions from Monte Carlo Glauber, viscous hydrodynamics, AMPT, and HIJING models. Together with the existing measurements of the individual flow harmonics the presented results provide further constraints on the initial conditions and the transport properties of the system produced in heavy-ion collisions.
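For reference, the mixed-harmonic four-particle (symmetric) cumulant commonly used to quantify such correlated fluctuations can be written in LaTeX as

    SC(m,n) = \langle\langle \cos(m\varphi_1 + n\varphi_2 - m\varphi_3 - n\varphi_4) \rangle\rangle
            - \langle\langle \cos m(\varphi_1 - \varphi_2) \rangle\rangle\,
              \langle\langle \cos n(\varphi_1 - \varphi_2) \rangle\rangle
            = \langle v_m^2 v_n^2 \rangle - \langle v_m^2 \rangle \langle v_n^2 \rangle,

so that a positive (negative) value signals correlation (anticorrelation) between the event-by-event fluctuations of v_m and v_n, with the symmetry-plane dependence dropping out by construction.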
Revealing the microstructure of the giant component in random graph ensembles
NASA Astrophysics Data System (ADS)
Tishby, Ido; Biham, Ofer; Katzav, Eytan; Kühn, Reimer
2018-04-01
The microstructure of the giant component of the Erdős-Rényi network and other configuration model networks is analyzed using generating function methods. While configuration model networks are uncorrelated, the giant component exhibits a degree distribution which is different from the overall degree distribution of the network and includes degree-degree correlations of all orders. We present exact analytical results for the degree distributions as well as higher-order degree-degree correlations on the giant components of configuration model networks. We show that the degree-degree correlations are essential for the integrity of the giant component, in the sense that the degree distribution alone cannot guarantee that it will consist of a single connected component. To demonstrate the importance and broad applicability of these results, we apply them to the study of the distribution of shortest path lengths on the giant component, percolation on the giant component, and spectra of sparse matrices defined on the giant component. We show that by using the degree distribution on the giant component one obtains high quality results for these properties, which can be further improved by taking the degree-degree correlations into account. This suggests that many existing methods, currently used for the analysis of the whole network, can be adapted in a straightforward fashion to yield results conditioned on the giant component.
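As a concrete instance of the generating-function results referred to above: for a configuration model with degree distribution P(k), let G_0(x) = Σ_k P(k) x^k and G_1(x) = G_0'(x)/G_0'(1), and let ũ be the smallest non-negative solution of ũ = G_1(ũ). The degree distribution conditioned on the giant component then takes the standard form (in LaTeX)

    \widehat{P}(k) = \frac{P(k)\left(1 - \tilde{u}^{\,k}\right)}{1 - G_0(\tilde{u})},

since a node of degree k lies outside the giant component only if all k of its edges fail to reach it; the degree-degree correlations discussed in the paper appear at the level of the joint degree distribution of neighbouring nodes.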
NASA Astrophysics Data System (ADS)
Zunger, Alex; Trimarchi, Giancarlo
The existence of large band gaps both in the antiferromagnetic (AFM) and the paramagnetic (PM) phases of the classic Mott insulators MnO, FeO, CoO, and NiO has traditionally been discussed in terms of theoretical methods requiring both (i) simple (often primitive) unit cells and (ii) correlated-electron methodologies. We show that if condition (i) is avoided (by using supercells, such as PM special quasi-random structures, in which chemically identical atoms can have different local environments), then even without condition (ii) one can describe the gaps and moments within a single-determinant DFT band structure approach. In this approach gapping is caused by basic structure, magnetism, and bonding effects underlying DFT, not by dynamic correlation (absent from DFT). As long as correlation is simplistically considered as "anything that DFT does not get right", gap formation in the AFM and PM phases is not due to correlation. This result defines the minimal theoretical methods needed to explain gapping and points to the possibility that some transition-metal oxides, generally considered to have localized electrons detrimental to transport, could in fact rejoin the family of electronic semiconductors, to the benefit of carrier transport technologies. A. Z. supported by DOE-OS-BES-MSE, Grant DE-FG02-13ER46959.
Adam, J; Adamová, D; Aggarwal, M M; Aglieri Rinella, G; Agnello, M; Agrawal, N; Ahammed, Z; Ahmad, S; Ahn, S U; Aiola, S; Akindinov, A; Alam, S N; Albuquerque, D S D; Aleksandrov, D; Alessandro, B; Alexandre, D; Alfaro Molina, R; Alici, A; Alkin, A; Almaraz, J R M; Alme, J; Alt, T; Altinpinar, S; Altsybeev, I; Alves Garcia Prado, C; Andrei, C; Andronic, A; Anguelov, V; Antičić, T; Antinori, F; Antonioli, P; Aphecetche, L; Appelshäuser, H; Arcelli, S; Arnaldi, R; Arnold, O W; Arsene, I C; Arslandok, M; Audurier, B; Augustinus, A; Averbeck, R; Azmi, M D; Badalà, A; Baek, Y W; Bagnasco, S; Bailhache, R; Bala, R; Balasubramanian, S; Baldisseri, A; Baral, R C; Barbano, A M; Barbera, R; Barile, F; Barnaföldi, G G; Barnby, L S; Barret, V; Bartalini, P; Barth, K; Bartke, J; Bartsch, E; Basile, M; Bastid, N; Basu, S; Bathen, B; Batigne, G; Batista Camejo, A; Batyunya, B; Batzing, P C; Bearden, I G; Beck, H; Bedda, C; Behera, N K; Belikov, I; Bellini, F; Bello Martinez, H; Bellwied, R; Belmont, R; Belmont-Moreno, E; Beltran, L G E; Belyaev, V; Bencedi, G; Beole, S; Berceanu, I; Bercuci, A; Berdnikov, Y; Berenyi, D; Bertens, R A; Berzano, D; Betev, L; Bhasin, A; Bhat, I R; Bhati, A K; Bhattacharjee, B; Bhom, J; Bianchi, L; Bianchi, N; Bianchin, C; Bielčík, J; Bielčíková, J; Bilandzic, A; Biro, G; Biswas, R; Biswas, S; Bjelogrlic, S; Blair, J T; Blau, D; Blume, C; Bock, F; Bogdanov, A; Bøggild, H; Boldizsár, L; Bombara, M; Book, J; Borel, H; Borissov, A; Borri, M; Bossú, F; Botta, E; Bourjau, C; Braun-Munzinger, P; Bregant, M; Breitner, T; Broker, T A; Browning, T A; Broz, M; Brucken, E J; Bruna, E; Bruno, G E; Budnikov, D; Buesching, H; Bufalino, S; Buncic, P; Busch, O; Buthelezi, Z; Butt, J B; Buxton, J T; Cabala, J; Caffarri, D; Cai, X; Caines, H; Calero Diaz, L; Caliva, A; Calvo Villar, E; Camerini, P; Carena, F; Carena, W; Carnesecchi, F; Castillo Castellanos, J; Castro, A J; Casula, E A R; Ceballos Sanchez, C; Cepila, J; Cerello, P; Cerkala, J; Chang, B; Chapeland, S; Chartier, M; Charvet, J L; Chattopadhyay, S; Chattopadhyay, S; Chauvin, A; Chelnokov, V; Cherney, M; Cheshkov, C; Cheynis, B; Chibante Barroso, V; Chinellato, D D; Cho, S; Chochula, P; Choi, K; Chojnacki, M; Choudhury, S; Christakoglou, P; Christensen, C H; Christiansen, P; Chujo, T; Chung, S U; Cicalo, C; Cifarelli, L; Cindolo, F; Cleymans, J; Colamaria, F; Colella, D; Collu, A; Colocci, M; Conesa Balbastre, G; Conesa Del Valle, Z; Connors, M E; Contreras, J G; Cormier, T M; Corrales Morales, Y; Cortés Maldonado, I; Cortese, P; Cosentino, M R; Costa, F; Crochet, P; Cruz Albino, R; Cuautle, E; Cunqueiro, L; Dahms, T; Dainese, A; Danisch, M C; Danu, A; Das, D; Das, I; Das, S; Dash, A; Dash, S; De, S; De Caro, A; de Cataldo, G; de Conti, C; de Cuveland, J; De Falco, A; De Gruttola, D; De Marco, N; De Pasquale, S; De Souza, R D; Deisting, A; Deloff, A; Dénes, E; Deplano, C; Dhankher, P; Di Bari, D; Di Mauro, A; Di Nezza, P; Di Ruzza, B; Diaz Corchero, M A; Dietel, T; Dillenseger, P; Divià, R; Djuvsland, Ø; Dobrin, A; Domenicis Gimenez, D; Dönigus, B; Dordic, O; Drozhzhova, T; Dubey, A K; Dubla, A; Ducroux, L; Dupieux, P; Ehlers, R J; Elia, D; Endress, E; Engel, H; Epple, E; Erazmus, B; Erdemir, I; Erhardt, F; Espagnon, B; Estienne, M; Esumi, S; Eum, J; Evans, D; Evdokimov, S; Eyyubova, G; Fabbietti, L; Fabris, D; Faivre, J; Fantoni, A; Fasel, M; Feldkamp, L; Feliciello, A; Feofilov, G; Ferencei, J; Fernández Téllez, A; Ferreiro, E G; Ferretti, A; Festanti, A; Feuillard, V J G; Figiel, J; Figueredo, M A S; Filchagin, S; Finogeev, D; 
Fionda, F M; Fiore, E M; Fleck, M G; Floris, M; Foertsch, S; Foka, P; Fokin, S; Fragiacomo, E; Francescon, A; Francisco, A; Frankenfeld, U; Fronze, G G; Fuchs, U; Furget, C; Furs, A; Fusco Girard, M; Gaardhøje, J J; Gagliardi, M; Gago, A M; Gajdosova, K; Gallio, M; Galvan, C D; Gangadharan, D R; Ganoti, P; Gao, C; Garabatos, C; Garcia-Solis, E; Gargiulo, C; Gasik, P; Gauger, E F; Germain, M; Gheata, M; Ghosh, P; Ghosh, S K; Gianotti, P; Giubellino, P; Giubilato, P; Gladysz-Dziadus, E; Glässel, P; Goméz Coral, D M; Gomez Ramirez, A; Gonzalez, A S; Gonzalez, V; González-Zamora, P; Gorbunov, S; Görlich, L; Gotovac, S; Grabski, V; Grachov, O A; Graczykowski, L K; Graham, K L; Grelli, A; Grigoras, A; Grigoras, C; Grigoriev, V; Grigoryan, A; Grigoryan, S; Grinyov, B; Grion, N; Gronefeld, J M; Grosse-Oetringhaus, J F; Grosso, R; Gruber, L; Guber, F; Guernane, R; Guerzoni, B; Gulbrandsen, K; Gunji, T; Gupta, A; Gupta, R; Haake, R; Haaland, Ø; Hadjidakis, C; Haiduc, M; Hamagaki, H; Hamar, G; Hamon, J C; Harris, J W; Harton, A; Hatzifotiadou, D; Hayashi, S; Heckel, S T; Hellbär, E; Helstrup, H; Herghelegiu, A; Herrera Corral, G; Hess, B A; Hetland, K F; Hillemanns, H; Hippolyte, B; Horak, D; Hosokawa, R; Hristov, P; Hughes, C; Humanic, T J; Hussain, N; Hussain, T; Hutter, D; Hwang, D S; Ilkaev, R; Inaba, M; Incani, E; Ippolitov, M; Irfan, M; Ivanov, M; Ivanov, V; Izucheev, V; Jacak, B; Jacazio, N; Jacobs, P M; Jadhav, M B; Jadlovska, S; Jadlovsky, J; Jahnke, C; Jakubowska, M J; Jang, H J; Janik, M A; Jayarathna, P H S Y; Jena, C; Jena, S; Jimenez Bustamante, R T; Jones, P G; Jusko, A; Kalinak, P; Kalweit, A; Kamin, J; Kang, J H; Kaplin, V; Kar, S; Karasu Uysal, A; Karavichev, O; Karavicheva, T; Karayan, L; Karpechev, E; Kebschull, U; Keidel, R; Keijdener, D L D; Keil, M; Mohisin Khan, M; Khan, P; Khan, S A; Khanzadeev, A; Kharlov, Y; Kileng, B; Kim, D W; Kim, D J; Kim, D; Kim, H; Kim, J S; Kim, J; Kim, M; Kim, S; Kim, T; Kirsch, S; Kisel, I; Kiselev, S; Kisiel, A; Kiss, G; Klay, J L; Klein, C; Klein, J; Klein-Bösing, C; Klewin, S; Kluge, A; Knichel, M L; Knospe, A G; Kobdaj, C; Kofarago, M; Kollegger, T; Kolojvari, A; Kondratiev, V; Kondratyeva, N; Kondratyuk, E; Konevskikh, A; Kopcik, M; Kour, M; Kouzinopoulos, C; Kovalenko, O; Kovalenko, V; Kowalski, M; Koyithatta Meethaleveedu, G; Králik, I; Kravčáková, A; Krivda, M; Krizek, F; Kryshen, E; Krzewicki, M; Kubera, A M; Kučera, V; Kuhn, C; Kuijer, P G; Kumar, A; Kumar, J; Kumar, L; Kumar, S; Kurashvili, P; Kurepin, A; Kurepin, A B; Kuryakin, A; Kweon, M J; Kwon, Y; La Pointe, S L; La Rocca, P; Ladron de Guevara, P; Lagana Fernandes, C; Lakomov, I; Langoy, R; Lapidus, K; Lara, C; Lardeux, A; Lattuca, A; Laudi, E; Lea, R; Leardini, L; Lee, G R; Lee, S; Lehas, F; Lehner, S; Lemmon, R C; Lenti, V; Leogrande, E; León Monzón, I; León Vargas, H; Leoncino, M; Lévai, P; Li, S; Li, X; Lien, J; Lietava, R; Lindal, S; Lindenstruth, V; Lippmann, C; Lisa, M A; Ljunggren, H M; Lodato, D F; Loenne, P I; Loginov, V; Loizides, C; Lopez, X; López Torres, E; Lowe, A; Luettig, P; Lunardon, M; Luparello, G; Lutz, T H; Maevskaya, A; Mager, M; Mahajan, S; Mahmood, S M; Maire, A; Majka, R D; Malaev, M; Maldonado Cervantes, I; Malinina, L; Mal'Kevich, D; Malzacher, P; Mamonov, A; Manko, V; Manso, F; Manzari, V; Marchisone, M; Mareš, J; Margagliotti, G V; Margotti, A; Margutti, J; Marín, A; Markert, C; Marquard, M; Martin, N A; Martin Blanco, J; Martinengo, P; Martínez, M I; Martínez García, G; Martinez Pedreira, M; Mas, A; Masciocchi, S; Masera, M; Masoni, A; Mastroserio, A; 
Matyja, A; Mayer, C; Mazer, J; Mazzoni, M A; Mcdonald, D; Meddi, F; Melikyan, Y; Menchaca-Rocha, A; Meninno, E; Mercado Pérez, J; Meres, M; Miake, Y; Mieskolainen, M M; Mikhaylov, K; Milano, L; Milosevic, J; Mischke, A; Mishra, A N; Miśkowiec, D; Mitra, J; Mitu, C M; Mohammadi, N; Mohanty, B; Molnar, L; Montaño Zetina, L; Montes, E; Moreira De Godoy, D A; Moreno, L A P; Moretto, S; Morreale, A; Morsch, A; Muccifora, V; Mudnic, E; Mühlheim, D; Muhuri, S; Mukherjee, M; Mulligan, J D; Munhoz, M G; Münning, K; Munzer, R H; Murakami, H; Murray, S; Musa, L; Musinsky, J; Naik, B; Nair, R; Nandi, B K; Nania, R; Nappi, E; Naru, M U; Natal da Luz, H; Nattrass, C; Navarro, S R; Nayak, K; Nayak, R; Nayak, T K; Nazarenko, S; Nedosekin, A; Nellen, L; Ng, F; Nicassio, M; Niculescu, M; Niedziela, J; Nielsen, B S; Nikolaev, S; Nikulin, S; Nikulin, V; Noferini, F; Nomokonov, P; Nooren, G; Noris, J C C; Norman, J; Nyanin, A; Nystrand, J; Oeschler, H; Oh, S; Oh, S K; Ohlson, A; Okatan, A; Okubo, T; Oleniacz, J; Oliveira Da Silva, A C; Oliver, M H; Onderwaater, J; Oppedisano, C; Orava, R; Oravec, M; Ortiz Velasquez, A; Oskarsson, A; Otwinowski, J; Oyama, K; Ozdemir, M; Pachmayer, Y; Pagano, D; Pagano, P; Paić, G; Pal, S K; Pan, J; Pandey, A K; Papikyan, V; Pappalardo, G S; Pareek, P; Park, W J; Parmar, S; Passfeld, A; Paticchio, V; Patra, R N; Paul, B; Pei, H; Peitzmann, T; Peng, X; Pereira Da Costa, H; Peresunko, D; Perez Lezama, E; Peskov, V; Pestov, Y; Petráček, V; Petrov, V; Petrovici, M; Petta, C; Piano, S; Pikna, M; Pillot, P; Pimentel, L O D L; Pinazza, O; Pinsky, L; Piyarathna, D B; Płoskoń, M; Planinic, M; Pluta, J; Pochybova, S; Podesta-Lerma, P L M; Poghosyan, M G; Polichtchouk, B; Poljak, N; Poonsawat, W; Pop, A; Poppenborg, H; Porteboeuf-Houssais, S; Porter, J; Pospisil, J; Prasad, S K; Preghenella, R; Prino, F; Pruneau, C A; Pshenichnov, I; Puccio, M; Puddu, G; Pujahari, P; Punin, V; Putschke, J; Qvigstad, H; Rachevski, A; Raha, S; Rajput, S; Rak, J; Rakotozafindrabe, A; Ramello, L; Rami, F; Raniwala, R; Raniwala, S; Räsänen, S S; Rascanu, B T; Rathee, D; Read, K F; Redlich, K; Reed, R J; Rehman, A; Reichelt, P; Reidt, F; Ren, X; Renfordt, R; Reolon, A R; Reshetin, A; Reygers, K; Riabov, V; Ricci, R A; Richert, T; Richter, M; Riedler, P; Riegler, W; Riggi, F; Ristea, C; Rocco, E; Rodríguez Cahuantzi, M; Rodriguez Manso, A; Røed, K; Rogochaya, E; Rohr, D; Röhrich, D; Ronchetti, F; Ronflette, L; Rosnet, P; Rossi, A; Roukoutakis, F; Roy, A; Roy, C; Roy, P; Rubio Montero, A J; Rui, R; Russo, R; Ryabinkin, E; Ryabov, Y; Rybicki, A; Saarinen, S; Sadhu, S; Sadovsky, S; Šafařík, K; Sahlmuller, B; Sahoo, P; Sahoo, R; Sahoo, S; Sahu, P K; Saini, J; Sakai, S; Saleh, M A; Salzwedel, J; Sambyal, S; Samsonov, V; Šándor, L; Sandoval, A; Sano, M; Sarkar, D; Sarkar, N; Sarma, P; Scapparone, E; Scarlassara, F; Schiaua, C; Schicker, R; Schmidt, C; Schmidt, H R; Schmidt, M; Schuchmann, S; Schukraft, J; Schulc, M; Schutz, Y; Schwarz, K; Schweda, K; Scioli, G; Scomparin, E; Scott, R; Šefčík, M; Seger, J E; Sekiguchi, Y; Sekihata, D; Selyuzhenkov, I; Senosi, K; Senyukov, S; Serradilla, E; Sevcenco, A; Shabanov, A; Shabetai, A; Shadura, O; Shahoyan, R; Shahzad, M I; Shangaraev, A; Sharma, A; Sharma, M; Sharma, M; Sharma, N; Sheikh, A I; Shigaki, K; Shou, Q; Shtejer, K; Sibiriak, Y; Siddhanta, S; Sielewicz, K M; Siemiarczuk, T; Silvermyr, D; Silvestre, C; Simatovic, G; Simonetti, G; Singaraju, R; Singh, R; Singhal, V; Sinha, T; Sitar, B; Sitta, M; Skaali, T B; Slupecki, M; Smirnov, N; Snellings, R J M; Snellman, T W; 
Song, J; Song, M; Song, Z; Soramel, F; Sorensen, S; Sozzi, F; Spacek, M; Spiriti, E; Sputowska, I; Spyropoulou-Stassinaki, M; Stachel, J; Stan, I; Stankus, P; Stenlund, E; Steyn, G; Stiller, J H; Stocco, D; Strmen, P; Suaide, A A P; Sugitate, T; Suire, C; Suleymanov, M; Suljic, M; Sultanov, R; Šumbera, M; Sumowidagdo, S; Szabo, A; Szarka, I; Szczepankiewicz, A; Szymanski, M; Tabassam, U; Takahashi, J; Tambave, G J; Tanaka, N; Tarhini, M; Tariq, M; Tarzila, M G; Tauro, A; Tejeda Muñoz, G; Telesca, A; Terasaki, K; Terrevoli, C; Teyssier, B; Thäder, J; Thakur, D; Thomas, D; Tieulent, R; Tikhonov, A; Timmins, A R; Toia, A; Trogolo, S; Trombetta, G; Trubnikov, V; Trzaska, W H; Tsuji, T; Tumkin, A; Turrisi, R; Tveter, T S; Ullaland, K; Uras, A; Usai, G L; Utrobicic, A; Vala, M; Valencia Palomo, L; Vallero, S; Van Der Maarel, J; Van Hoorne, J W; van Leeuwen, M; Vanat, T; Vande Vyvre, P; Varga, D; Vargas, A; Vargyas, M; Varma, R; Vasileiou, M; Vasiliev, A; Vauthier, A; Vázquez Doce, O; Vechernin, V; Veen, A M; Veldhoen, M; Velure, A; Vercellin, E; Vergara Limón, S; Vernet, R; Verweij, M; Vickovic, L; Viinikainen, J; Vilakazi, Z; Villalobos Baillie, O; Villatoro Tello, A; Vinogradov, A; Vinogradov, L; Vinogradov, Y; Virgili, T; Vislavicius, V; Viyogi, Y P; Vodopyanov, A; Völkl, M A; Voloshin, K; Voloshin, S A; Volpe, G; von Haller, B; Vorobyev, I; Vranic, D; Vrláková, J; Vulpescu, B; Wagner, B; Wagner, J; Wang, H; Wang, M; Watanabe, D; Watanabe, Y; Weber, M; Weber, S G; Weiser, D F; Wessels, J P; Westerhoff, U; Whitehead, A M; Wiechula, J; Wikne, J; Wilk, G; Wilkinson, J; Williams, M C S; Windelband, B; Winn, M; Yang, P; Yano, S; Yasin, Z; Yin, Z; Yokoyama, H; Yoo, I-K; Yoon, J H; Yurchenko, V; Zaborowska, A; Zaccolo, V; Zaman, A; Zampolli, C; Zanoli, H J C; Zaporozhets, S; Zardoshti, N; Zarochentsev, A; Závada, P; Zaviyalov, N; Zbroszczyk, H; Zgura, I S; Zhalov, M; Zhang, H; Zhang, X; Zhang, Y; Zhang, C; Zhang, Z; Zhao, C; Zhigareva, N; Zhou, D; Zhou, Y; Zhou, Z; Zhu, H; Zhu, J; Zichichi, A; Zimmermann, A; Zimmermann, M B; Zinovjev, G; Zyzak, M
2016-10-28
We report measurements of correlations between event-by-event fluctuations of the amplitudes of anisotropic flow harmonics in nucleus-nucleus collisions, obtained for the first time using a new analysis method based on multiparticle cumulants in mixed harmonics. This method is robust against systematic biases originating from nonflow effects, and by construction any dependence on symmetry planes is eliminated. We demonstrate that correlations of flow harmonics exhibit better sensitivity to medium properties than the individual flow harmonics. The new measurements are performed in Pb-Pb collisions at the center-of-mass energy per nucleon pair of sqrt[s_{NN}]=2.76 TeV by the ALICE experiment at the Large Hadron Collider. The centrality dependence of the correlation between event-by-event fluctuations of the elliptic v_{2} and quadrangular v_{4} flow harmonics, as well as of the anticorrelation between the v_{2} and triangular v_{3} flow harmonics, is presented. The results cover two different regimes of initial-state configurations: geometry dominated (in midcentral collisions) and fluctuation dominated (in the most central collisions). Comparisons are made to predictions from Monte Carlo Glauber, viscous hydrodynamics, AMPT, and HIJING models. Together with the existing measurements of the individual flow harmonics, the presented results provide further constraints on the initial conditions and the transport properties of the system produced in heavy-ion collisions.
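The quantities described above correlate event-by-event fluctuations of two flow amplitudes. Below is a minimal, illustrative sketch of a symmetric-cumulant-style observable built directly from per-event squared flow amplitudes; the actual ALICE analysis instead constructs it from multiparticle azimuthal correlators to suppress nonflow, so this is a simplified stand-in with hypothetical variable names.

```python
import numpy as np

def symmetric_cumulant(vm_sq, vn_sq):
    """Illustrative SC(m, n)-like quantity from per-event squared flow
    amplitudes: <v_m^2 v_n^2> - <v_m^2><v_n^2> (event averages)."""
    vm_sq = np.asarray(vm_sq, dtype=float)
    vn_sq = np.asarray(vn_sq, dtype=float)
    return np.mean(vm_sq * vn_sq) - np.mean(vm_sq) * np.mean(vn_sq)

def normalized_sc(vm_sq, vn_sq):
    """Normalized version, less sensitive to the magnitudes of v_m and v_n."""
    return symmetric_cumulant(vm_sq, vn_sq) / (np.mean(vm_sq) * np.mean(vn_sq))

# toy example: anticorrelated v2 and v3 fluctuations give a negative value
rng = np.random.default_rng(0)
v2 = 0.08 + 0.02 * rng.standard_normal(10000)
v3 = 0.03 - 0.005 * (v2 - 0.08) / 0.02 + 0.005 * rng.standard_normal(10000)
print(normalized_sc(v2**2, v3**2))
```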
Quantifying the range of cross-correlated fluctuations using a q-L dependent AHXA coefficient
NASA Astrophysics Data System (ADS)
Wang, Fang; Wang, Lin; Chen, Yuming
2018-03-01
Recently, based on analogous height cross-correlation analysis (AHXA), a cross-correlation coefficient ρ×(L) has been proposed to quantify the level of cross-correlation on different temporal scales for bivariate series. A limitation of this coefficient is that it cannot capture the full information on the amplitudes of cross-correlated fluctuations: it detects the cross-correlation only at a specific fluctuation order, which may neglect important information carried by fluctuations of other orders. To overcome this disadvantage, in this work, based on the scaling of the qth-order covariance with the time delay L, we define a two-parameter cross-correlation coefficient ρq(L) to detect and quantify the range and level of cross-correlations. The ρq(L) coefficient traces out a ρq(L) surface, which not only quantifies the level of cross-correlations but also identifies the range of fluctuation amplitudes that are correlated in two given signals. Applications to the classical ARFIMA models and the binomial multifractal series illustrate the feasibility of the new coefficient ρq(L). In addition, a statistical test is proposed to assess the existence of cross-correlations between two given series. Applying our method to real-life empirical data from the 1999-2000 California electricity market, we find that the California power crisis in 2000 destroyed the cross-correlation between the price and load series but did not affect the correlation of the load series during and before the crisis.
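As a rough illustration of a two-parameter coefficient of this kind, the sketch below computes a q- and L-dependent cross-correlation coefficient from lag-L height increments. This is a simplified stand-in rather than the paper's exact AHXA-based definition, and the function name rho_q is hypothetical.

```python
import numpy as np

def rho_q(x, y, L, q=2.0):
    """Sketch of a q- and L-dependent cross-correlation coefficient based on
    lag-L increments and a signed qth-order covariance (not the exact AHXA
    definition). Reduces to a Pearson-like coefficient of increments at q=2."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    dx = x[L:] - x[:-L]          # height differences over time delay L
    dy = y[L:] - y[:-L]
    cxy = np.mean(np.sign(dx * dy) * np.abs(dx * dy) ** (q / 2))
    cxx = np.mean(np.abs(dx) ** q)
    cyy = np.mean(np.abs(dy) ** q)
    return cxy / np.sqrt(cxx * cyy)

# evaluating rho_q on a grid of (q, L) values yields the rho_q(L) surface
rng = np.random.default_rng(1)
common = np.cumsum(rng.standard_normal(5000))
x = common + np.cumsum(rng.standard_normal(5000))
y = common + np.cumsum(rng.standard_normal(5000))
surface = [[rho_q(x, y, L, q) for L in (1, 5, 20, 100)] for q in (1, 2, 4)]
```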
Sampri, Alexia; Sypsa, Karla; Tsagarakis, Konstantinos P
2018-01-01
Background With the internet's penetration and use constantly expanding, this vast amount of information can be employed to better assess issues in the US health care system. Google Trends, a popular tool in big data analytics, has been widely used in the past to examine interest in various medical and health-related topics and has shown great potential in forecasting, prediction, and nowcasting. As empirical relationships between online queries and human behavior have been shown to exist, a new opportunity arises to explore behavior toward asthma, a common respiratory disease. Objective This study aimed at forecasting the online behavior toward asthma and examined the correlations between queries and reported cases in order to explore the possibility of nowcasting asthma prevalence in the United States using online search traffic data. Methods Applying Holt-Winters exponential smoothing to Google Trends time series from 2004 to 2015 for the term "asthma," forecasts for online queries at state and national levels are estimated from 2016 to 2020 and validated against available Google query data from January 2016 to June 2017. Correlations among yearly Google queries and between Google queries and reported asthma cases are examined. Results Our analysis shows that search queries exhibit seasonality within each year and that the correlations between the queries of any two years are statistically significant (P<.05). Estimated forecasting models for a 5-year period (2016 through 2020) for Google queries are robust and validated against available data from January 2016 to June 2017. Significant correlations were found (1) between online queries and National Health Interview Survey lifetime asthma (r=–.82, P=.001) and current asthma (r=–.77, P=.004) rates from 2004 to 2015 and (2) between online queries and Behavioral Risk Factor Surveillance System lifetime (r=–.78, P=.003) and current asthma (r=–.79, P=.002) rates from 2004 to 2014. The correlations are negative, but lag analysis to identify the period of response cannot be employed until short-interval data on asthma prevalence are made available. Conclusions Online behavior toward asthma can be accurately predicted, and significant correlations between online queries and reported cases exist. This method of forecasting Google queries can be used by health care officials to nowcast asthma prevalence at the city, state, or national level, subject to future availability of daily, weekly, or monthly data on reported cases. This method could therefore be used for improved monitoring and assessment of the needs of the current population of patients with asthma. PMID:29530839
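A minimal sketch of the forecasting step described in the Methods, assuming a monthly Google Trends export for the term "asthma": Holt-Winters (triple) exponential smoothing with a 12-month season, fit to the 2004-2015 series and projected five years ahead. The file name, column name, and additive seasonal specification are assumptions, not details taken from the paper.

```python
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# "asthma_trends.csv" is a hypothetical export of monthly Google Trends
# interest for the term "asthma" (2004-2015), one column named "hits".
series = pd.read_csv("asthma_trends.csv", index_col=0, parse_dates=True)["hits"]

# Additive Holt-Winters with a 12-month season, standing in for the seasonal
# exponential smoothing described in the abstract.
model = ExponentialSmoothing(series, trend="add", seasonal="add",
                             seasonal_periods=12)
fit = model.fit()
forecast = fit.forecast(60)   # 5-year-ahead monthly forecast (2016-2020)
print(forecast.head())
```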
Compare diagnostic tests using transformation-invariant smoothed ROC curves
Tang, Liansheng; Du, Pang; Wu, Chengqing
2012-01-01
The receiver operating characteristic (ROC) curve, which plots the true positive rate against the false positive rate as the decision threshold varies, is an important tool for evaluating biomarkers in diagnostic medicine studies. By definition, the ROC curve is monotone increasing from 0 to 1 and is invariant to any monotone transformation of test results. It is also typically smooth when the test results of diseased and non-diseased subjects follow continuous distributions. Most existing ROC curve estimation methods do not guarantee all of these properties. One exception is Du and Tang (2009), which applies a monotone spline regression procedure to empirical ROC estimates. However, their method does not account for the inherent correlations between empirical ROC estimates, which makes the derivation of asymptotic properties very difficult. In this paper we propose a penalized weighted least-squares estimation method, which incorporates the covariance between empirical ROC estimates as a weight matrix. The resulting estimator satisfies all of the aforementioned properties, and we show that it is also consistent. A resampling approach is then used to extend our method to comparisons of two or more diagnostic tests. Our simulations show a significantly improved performance over the existing method, especially for steep ROC curves. We then apply the proposed method to a cancer diagnostic study that compares several newly developed diagnostic biomarkers to a traditional one. PMID:22639484
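For concreteness, here is a small sketch of the empirical ROC estimates that serve as the raw, correlated input to the smoothing described above; the penalized weighted least-squares fit itself, with its covariance-based weight matrix and shape constraints, is not reproduced. Function and variable names are illustrative.

```python
import numpy as np

def empirical_roc(diseased, healthy, grid):
    """Empirical ROC estimates R(t) = 1 - F_D(F_H^{-1}(1 - t)) on a grid of
    false-positive rates t; these are the noisy, correlated points a smoother
    such as the paper's penalized weighted least-squares method is fit to."""
    diseased = np.asarray(diseased, dtype=float)
    healthy = np.asarray(healthy, dtype=float)
    roc = []
    for t in grid:
        threshold = np.quantile(healthy, 1.0 - t)   # F_H^{-1}(1 - t)
        roc.append(np.mean(diseased > threshold))   # sensitivity at that cut
    return np.array(roc)

t_grid = np.linspace(0.01, 0.99, 50)
rng = np.random.default_rng(3)
curve = empirical_roc(rng.normal(1.0, 1.0, 200), rng.normal(0.0, 1.0, 200), t_grid)
```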
Tudur Smith, Catrin; Gueyffier, François; Kolamunnage‐Dona, Ruwanthi
2017-01-01
Background Joint modelling of longitudinal and time-to-event data is often preferred over separate longitudinal or time-to-event analyses, as it can account for study dropout, error in longitudinally measured covariates, and correlation between longitudinal and time-to-event outcomes. The joint modelling literature focuses mainly on the analysis of single studies, with no methods currently available for the meta-analysis of joint model estimates from multiple studies. Methods We propose a 2-stage method for meta-analysis of joint model estimates. The method is applied to the INDANA dataset to combine joint model estimates of systolic blood pressure with time to death, time to myocardial infarction, and time to stroke. Results are compared to meta-analyses of separate longitudinal or time-to-event models. A simulation study is conducted to contrast separate versus joint analyses over a range of scenarios. Results Using the real dataset, similar results were obtained from the separate and joint analyses. However, the simulation study indicated a benefit of joint rather than separate methods in a meta-analytic setting when association exists between the longitudinal and time-to-event outcomes. Conclusions Where evidence of association between longitudinal and time-to-event outcomes exists, estimates from joint models, rather than from standalone analyses, should be pooled in 2-stage meta-analyses. PMID:29250814
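The second stage of a 2-stage meta-analysis is a standard pooling of study-specific estimates. The sketch below shows fixed-effect inverse-variance pooling applied to hypothetical stage-1 joint-model estimates; the paper's own implementation and any random-effects extension may differ, and the numbers are invented for illustration.

```python
import numpy as np

def pool_fixed_effect(estimates, std_errors):
    """Stage 2 of a two-stage meta-analysis: inverse-variance weighted pooling
    of study-specific estimates (e.g. joint-model association parameters),
    returning the pooled estimate and its standard error."""
    est = np.asarray(estimates, dtype=float)
    se = np.asarray(std_errors, dtype=float)
    w = 1.0 / se**2
    pooled = np.sum(w * est) / np.sum(w)
    pooled_se = np.sqrt(1.0 / np.sum(w))
    return pooled, pooled_se

# hypothetical stage-1 log hazard ratios and standard errors from three trials
beta, se = pool_fixed_effect([0.12, 0.20, 0.15], [0.05, 0.08, 0.06])
print(beta, beta - 1.96 * se, beta + 1.96 * se)   # pooled estimate and 95% CI
```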
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dolen, James; Harris, Philip; Marzani, Simone
Here, we explore the scale-dependence and correlations of jet substructure observables to improve upon existing techniques in the identification of highly Lorentz-boosted objects. Modified observables are designed to remove correlations from existing theoretically well-understood observables, providing practical advantages for experimental measurements and searches for new phenomena. We study such observables in W jet tagging and provide recommendations for observables based on considerations beyond signal and background efficiencies.
Diametrical clustering for identifying anti-correlated gene clusters.
Dhillon, Inderjit S; Marcotte, Edward M; Roshan, Usman
2003-09-01
Clustering genes based upon their expression patterns allows us to predict gene function. Most existing clustering algorithms cluster genes together when their expression patterns show high positive correlation. However, it has been observed that genes whose expression patterns are strongly anti-correlated can also be functionally similar; biologically, this is not unintuitive, since genes responding to the same stimuli, regardless of the nature of the response, are more likely to operate in the same pathways. We present a new diametrical clustering algorithm that explicitly identifies anti-correlated clusters of genes. Our algorithm proceeds by iteratively (i) re-partitioning the genes and (ii) computing the dominant singular vector of each gene cluster, with each singular vector serving as the prototype of a 'diametric' cluster. We empirically show the effectiveness of the algorithm in identifying diametrical or anti-correlated clusters. Testing the algorithm on yeast cell cycle data, fibroblast gene expression data, and DNA microarray data from yeast mutants reveals that opposed cellular pathways can be discovered with this method. We present systems whose mRNA expression patterns, and likely their functions, oppose the yeast ribosome and proteasome, along with evidence for the inverse transcriptional regulation of a number of cellular systems.
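A minimal sketch of the iterative procedure described in the abstract: re-partition the genes, then recompute each cluster's dominant singular vector as its prototype, assigning genes by absolute similarity so that strongly anti-correlated profiles land in the same cluster. Initialization, normalization, and stopping details are assumptions; the paper's exact algorithm may differ.

```python
import numpy as np

def diametrical_clustering(X, k, n_iter=50, seed=0):
    """Sketch of diametrical clustering: rows of X are gene expression
    profiles; each gene joins the prototype with the largest *absolute*
    cosine similarity, so correlated and anti-correlated genes co-cluster."""
    rng = np.random.default_rng(seed)
    X = X / np.linalg.norm(X, axis=1, keepdims=True)   # unit-norm profiles
    n = X.shape[0]
    labels = rng.integers(k, size=n)
    for _ in range(n_iter):
        prototypes = []
        for j in range(k):
            members = X[labels == j]
            if len(members) == 0:                      # re-seed empty cluster
                members = X[rng.integers(n, size=1)]
            # dominant right singular vector of the cluster = its prototype
            _, _, vt = np.linalg.svd(members, full_matrices=False)
            prototypes.append(vt[0])
        P = np.array(prototypes)                       # shape (k, n_features)
        new_labels = np.argmax(np.abs(X @ P.T), axis=1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    return labels, P
```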
Temporal evolution of total ozone and circulation patterns over European mid-latitudes
NASA Astrophysics Data System (ADS)
Monge Sanz, B. M.; Casale, G. R.; Palmieri, S.; Siani, A. M.
2003-04-01
Linear correlation analysis and the running correlation technique are used to investigate the interannual and interdecadal variations of total ozone (TO) over several mid-latitude European locations. The study includes the longest series of ozone data, that of the Swiss station of Arosa. TO series have been related to time series of two circulation indices, the North Atlantic Oscillation Index (NAOI) and the Arctic Oscillation Index (AOI). The analysis has been performed with monthly data, using both series containing all months of the year and winter (DJFM) series. Special attention has been given to the winter series, which exhibit very high correlation coefficients with NAOI and AOI; interannual variations of this relationship are studied by applying the running correlation technique. The TO and circulation index series have also been partitioned into their different time-scale components with the Kolmogorov-Zurbenko method. The long-term components indicate a strong inverse relationship between total ozone and circulation patterns over the studied region during the last three decades. However, this relationship has not always held; in earlier periods, differences in both the amplitude and the sign of the correlation are detected.
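As an illustration of the running correlation technique, the sketch below computes Pearson correlations in a sliding 21-year window between synthetic winter-mean total ozone and NAOI series. The window length and the synthetic data are assumptions for demonstration only; the study uses the Arosa record and the observed indices.

```python
import numpy as np
import pandas as pd

# hypothetical yearly winter (DJFM) means standing in for observed series
years = np.arange(1932, 2001)
rng = np.random.default_rng(2)
naoi = pd.Series(rng.standard_normal(len(years)), index=years)
ozone = pd.Series(-0.6 * naoi + 0.8 * rng.standard_normal(len(years)), index=years)

# running correlation: Pearson r within a centered sliding 21-year window
window = 21
running_r = ozone.rolling(window, center=True).corr(naoi)
print(running_r.dropna().tail())
```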
A geometric approach to non-linear correlations with intrinsic scatter
NASA Astrophysics Data System (ADS)
Pihajoki, Pauli
2017-12-01
We propose a new mathematical model for (n - k)-dimensional non-linear correlations with intrinsic scatter in n-dimensional data. The model is based on Riemannian geometry, is naturally symmetric with respect to the measured variables, and is invariant under coordinate transformations. We combine the model with a Bayesian approach for estimating the parameters of the correlation relation and the intrinsic scatter. A side benefit of the approach is that censored and truncated data sets and independent, arbitrary measurement errors can be incorporated. We also derive analytic likelihoods for the typical astrophysical use case of linear relations in n-dimensional Euclidean space. We pay particular attention to the case of linear regression in two dimensions and compare our results to existing methods. Finally, we apply our methodology to the well-known M_BH-σ correlation between the mass of a supermassive black hole in the centre of a galactic bulge and the corresponding bulge velocity dispersion. The main result of our analysis is that the most likely slope of this correlation is ∼6 for the data sets used, rather than the values in the range of ∼4-5 typically quoted in the literature for these data.
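For the two-dimensional linear case mentioned above, here is a sketch of a standard scatter-aware likelihood: a straight line with Gaussian intrinsic scatter measured perpendicular to the line and independent measurement errors on both coordinates. This is the conventional formulation that the paper's Riemannian model generalizes, not the paper's own construction; parameter names are illustrative.

```python
import numpy as np

def log_likelihood(theta, x, y, sx, sy):
    """Log-likelihood for a line y = m*x + b with Gaussian intrinsic scatter
    s_perp perpendicular to the line and per-point measurement errors sx, sy.
    Standard 2-D special case; not the paper's Riemannian construction."""
    m, b, log_s_perp = theta
    s_perp2 = np.exp(2.0 * log_s_perp)
    ang = np.arctan(m)
    sin_t, cos_t = np.sin(ang), np.cos(ang)
    # orthogonal displacement of each point from the line
    delta = -x * sin_t + y * cos_t - b * cos_t
    # measurement variance projected onto the normal direction, plus scatter
    var = (sx * sin_t) ** 2 + (sy * cos_t) ** 2 + s_perp2
    return -0.5 * np.sum(delta**2 / var + np.log(2.0 * np.pi * var))

# this function can be handed to any optimizer or MCMC sampler for the
# Bayesian parameter estimation described in the abstract
```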
Correlative cryogenic tomography of cells using light and soft x-rays.
Smith, Elizabeth A; Cinquin, Bertrand P; Do, Myan; McDermott, Gerry; Le Gros, Mark A; Larabell, Carolyn A
2014-08-01
Correlated imaging is the process of imaging a specimen with two complementary modalities and then combining the two data sets to create a highly informative, composite view. A recent implementation of this concept is the combination of soft x-ray tomography (SXT) with fluorescence cryogenic microscopy (FCM). SXT-FCM is used to visualize cells that are held in a near-native, cryopreserved state. The resultant images are, therefore, highly representative of both the cellular architecture and the molecular organization in vivo. SXT quantitatively visualizes the cell and sub-cellular structures; FCM images the spatial distribution of fluorescently labeled molecules. Here, we review the characteristics of SXT-FCM and briefly discuss how this method compares with existing correlative imaging techniques. We also describe how the incorporation of a cryo-rotation stage into a cryogenic fluorescence microscope allows acquisition of fluorescence cryogenic tomography (FCT) data. FCT is optimally suited for correlation with SXT, since both techniques image the specimen in 3-D, potentially with similar, isotropic spatial resolution. © 2013 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Sivaraj, Kumarasamy; Elango, Kuppanagounder P.
2008-08-01
The photo- and electro-reduction of a series of cobalt(III) complexes of the type cis-β-[Co(trien)(RC6H4NH2)Cl]Cl2 with R = H, p-OMe, p-OEt, p-Me, p-Et, p-F, and m-Me has been studied in binary propan-2-ol/water mixtures. The redox potential (E1/2) and photo-reduction quantum yield (ΦCo(II)) data were correlated with solvent and structural parameters with the aim of shedding some light on the mechanism of these reactions. The correlation of E1/2 and ΦCo(II) with macroscopic solvent parameters, viz. relative permittivity, indicated that the reactivity is influenced by both specific and non-specific solute-solvent interactions. The Kamlet-Taft solvatochromic comparison method was used to separate and quantify these effects: an increase in the percentage of organic cosolvent in the medium enhances both reduction processes, and there is a good linear correlation between E1/2 and ΦCo(II), suggesting similar solvation of the participants in these redox processes.
Vertical multiphase flow correlations for high production rates and large tubulars
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aggour, M.A.; Al-Yousef, H.Y.; Al-Muraikhi, A.J.
1996-02-01
Numerous correlations exist for predicting pressure drop in vertical multiphase flow. These correlations, however, were all developed and tested under limited operating conditions that do not match the high production rates and large tubulars normally found in the Middle East fields. This paper presents a comprehensive evaluation of existing correlations and modifications of some correlations to determine and recommend the best correlation or correlations for various field conditions. More than 400 field data sets covering tubing sizes from 2 3/8 to 7 inches, oil rates up to 23,200 B/D, water cuts up to 95%, and gas/oil ratios (GOR) up to 927 scf/STB were used in this study. Considering all data combined, the Beggs and Brill correlation provided the best pressure predictions. However, the Hagedorn and Brown correlation was better for water cuts above 80%, while the Hasan and Kabir model was better for total liquid rates above 20,000 B/D. The Aziz correlation was significantly improved when the Orkiszewski flow-pattern transition criteria were used.
NASA Technical Reports Server (NTRS)
Kim, Sang-Wook
1987-01-01
Various experimental, analytical, and numerical analysis methods for the flow-solid interaction of a nest of cylinders subjected to cross flow are reviewed. A nest of cylinders subjected to cross flow can be found in numerous engineering applications, including the Space Shuttle Main Engine Main Injector Assembly (SSME-MIA) and nuclear reactor heat exchangers. Despite its importance in engineering applications, understanding of the flow-solid interaction process is quite limited, and the design of tube banks is mostly dependent on experiments and/or experimental correlation equations. To support future development of numerical analysis methods for the flow-solid interaction of a nest of cylinders subjected to cross flow, various turbulence models, nonlinear structural dynamics, and existing laminar flow-solid interaction analysis methods are also reviewed.
A note on the kappa statistic for clustered dichotomous data.
Zhou, Ming; Yang, Zhao
2014-06-30
The kappa statistic is widely used to assess the agreement between two raters. Motivated by a simulation-based cluster bootstrap method for calculating the variance of the kappa statistic for clustered physician-patient dichotomous data, we investigate its special correlation structure and develop a new, simple, and efficient data generation algorithm. For clustered physician-patient dichotomous data, based on the delta method and the special covariance structure, we propose a semi-parametric variance estimator for the kappa statistic. An extensive Monte Carlo simulation study is performed to evaluate the performance of the new proposal and five existing methods with respect to the empirical coverage probability, root-mean-square error, and average width of the 95% confidence interval for the kappa statistic. The variance estimator ignoring the dependence within a cluster is generally inappropriate, and the variance estimators from the new proposal, the bootstrap-based methods, and the sampling-based delta method perform reasonably well for at least a moderately large number of clusters (e.g., K ⩾ 50). The new proposal and the sampling-based delta method provide convenient tools for efficient computation and non-simulation-based alternatives to the existing bootstrap-based methods. Moreover, the new proposal has acceptable performance even when the number of clusters is as small as K = 25. To illustrate the practical application of all the methods, one psychiatric research dataset and two simulated clustered physician-patient dichotomous datasets are analyzed. Copyright © 2014 John Wiley & Sons, Ltd.
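To make the setting concrete, here is a sketch of the kappa statistic for dichotomous ratings and of the cluster bootstrap variance that motivated the paper; the semi-parametric delta-method estimator proposed in the paper is not reproduced, and the data layout (one pair of rating vectors per physician) is an assumption for illustration.

```python
import numpy as np

def cohen_kappa(r1, r2):
    """Cohen's kappa for two dichotomous (0/1) ratings of the same subjects."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    po = np.mean(r1 == r2)                      # observed agreement
    p1, p2 = np.mean(r1), np.mean(r2)
    pe = p1 * p2 + (1 - p1) * (1 - p2)          # chance agreement
    return (po - pe) / (1 - pe)

def cluster_bootstrap_var(clusters, n_boot=2000, seed=0):
    """Cluster bootstrap variance of kappa: resample whole clusters (e.g.
    physicians) with replacement, recompute kappa on the pooled ratings."""
    rng = np.random.default_rng(seed)
    clusters = list(clusters)                   # each item: (ratings1, ratings2)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(len(clusters), size=len(clusters))
        r1 = np.concatenate([clusters[i][0] for i in idx])
        r2 = np.concatenate([clusters[i][1] for i in idx])
        stats.append(cohen_kappa(r1, r2))
    return np.var(stats, ddof=1)
```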
Walter, Stephen D.; Riddell, Corinne A.; Rabachini, Tatiana; Villa, Luisa L.; Franco, Eduardo L.
2013-01-01
Introduction Studies on the association of a polymorphism in codon 72 of the p53 tumour suppressor gene (rs1042522) with cervical neoplasia have inconsistent results. While several methods for genotyping p53 exist, they vary in accuracy and are often discrepant. Methods We used latent class models (LCM) to examine the accuracy of six methods for p53 determination, all conducted by the same laboratory. We also examined the association of p53 with cytological cervical abnormalities, recognising potential test inaccuracy. Results Pairwise disagreement between laboratory methods occurred approximately 10% of the time. Given the estimated true p53 status of each woman, we found that each laboratory method is most likely to classify a woman to her correct status. Arg/Arg women had the highest risk of squamous intraepithelial lesions (SIL). Test accuracy was independent of cytology. There was no strong evidence for correlations of test errors. Discussion Empirical analyses ignore possible laboratory errors, and so are inherently biased, but test accuracy estimated by the LCM approach is unbiased when model assumptions are met. LCM analysis avoids ambiguities arising from empirical test discrepancies, obviating the need to regard any of the methods as a “gold” standard measurement. The methods we presented here to analyse the p53 data can be applied in many other situations where multiple tests exist, but where none of them is a gold standard. PMID:23441193
Ulloa, Armando; Rodríguez, Mario H; Arredondo-Jimenez, Juan I; Fernandez-Salas, Ildefonso
2005-12-01
The length of the gonotrophic cycle, the time required for egg development, and the survival rate were studied in Anopheles vestitipennis collected in horse- and human-baited traps in southern Mexico. The duration of the gonotrophic cycle was estimated using cross-correlation analysis, whereas the survival rate was assessed using a vertical method. Daily changes in parity rates gave significant correlation indices at 3 and 4 days in the zoophilic and anthropophilic populations, respectively. The minimum time required to develop mature eggs after blood feeding was 54 and 60 h, and the survival rate was 0.93 and 0.88 in the zoophilic and anthropophilic female mosquito populations, respectively. These biological differences provide additional support for the existence of subpopulations with distinctive feeding preferences within An. vestitipennis in southern Mexico.
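A sketch of the kind of cross-correlation analysis used to estimate the gonotrophic cycle length, assuming one common formulation: correlate the daily count of parous females with the total catch a fixed number of days earlier and take the lag with the strongest significant correlation as the cycle length. The exact estimator used in the study may differ; function and variable names are illustrative.

```python
import numpy as np

def lagged_correlations(parous, total, max_lag=7):
    """Correlate the daily count of parous females with the total catch `lag`
    days earlier; the lag with the strongest (significant) correlation is
    taken as the gonotrophic cycle length in days."""
    parous = np.asarray(parous, dtype=float)
    total = np.asarray(total, dtype=float)
    out = {}
    for lag in range(1, max_lag + 1):
        out[lag] = np.corrcoef(parous[lag:], total[:-lag])[0, 1]
    return out

# usage: pass daily trap counts; inspect which lag maximizes the correlation
```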
Image annotation based on positive-negative instances learning
NASA Astrophysics Data System (ADS)
Zhang, Kai; Hu, Jiwei; Liu, Quan; Lou, Ping
2017-07-01
Automatic image annotation remains a challenging task in computer vision; its main purpose is to help manage the massive number of images on the Internet and to assist intelligent retrieval. This paper designs a new image annotation model based on a visual bag of words, using low-level features such as color and texture as well as mid-level features such as SIFT, and combining pic2pic, label2pic, and label2label correlations to measure the degree of correlation between labels and images. We aim to prune the specific features for each label and formalize the annotation task as a learning process based on positive-negative instance learning. Experiments are performed on the Corel5K dataset and provide quite promising results when compared with other existing methods.
The use of a behavioral response system in the USF/NASA toxicity screening test method
NASA Technical Reports Server (NTRS)
Hilado, C. J.; Cumming, H. J.; Packham, S. C.
1977-01-01
Relative toxicity data on the pyrolysis effluents from bisphenol A polycarbonate and wool fabric were obtained, based on visual observations of the behavior of free-moving mice and on an avoidance response behavioral paradigm of restrained rats monitored by an instrumented behavioral system. The initial experiments show an essentially 1:1 correlation between the two systems with regard to first signs of incapacitation, collapse, and death from pyrolysis effluents from polycarbonate. It is hypothesized that similarly good correlations between these two systems might exist for other materials exhibiting predominantly carbon monoxide mechanisms of intoxication. This hypothesis needs to be confirmed, however, by additional experiments. Data with wool fabric exhibited greater variability with both procedures, indicating possibly different mechanisms of intoxication for wool as compared with bisphenol A polycarbonate.