Factorial Structure of the French Version of the Rosenberg Self-Esteem Scale among the Elderly
ERIC Educational Resources Information Center
Gana, Kamel; Alaphilippe, Daniel; Bailly, Nathalie
2005-01-01
Ten different confirmatory factor analysis models, including correlated trait-correlated method, correlated trait-correlated uniqueness, and correlated trait-uncorrelated method models, were proposed to examine the factorial structure of the French version of the Rosenberg Self-Esteem Scale (Rosenberg, 1965). In line with previous studies…
Risch, John S. (Kennewick, WA); Dowson, Scott T. (West Richland, WA)
2012-03-06
A method of displaying correlations among information objects includes receiving a query against a database; obtaining a query result set; and generating a visualization representing the components of the result set, the visualization including one of a plane and line to represent a data field, nodes representing data values, and links showing correlations among fields and values. Other visualization methods and apparatus are disclosed.
Differential correlation for sequencing data.
Siska, Charlotte; Kechris, Katerina
2017-01-19
Several methods have been developed to identify differential correlation (DC) between pairs of molecular features from -omics studies. Most DC methods have only been tested with microarrays and other platforms producing continuous and Gaussian-like data. Sequencing data come in the form of counts, often modeled with a negative binomial distribution, making it difficult to apply standard correlation metrics. We have developed an R package for identifying DC called Discordant, which uses mixture models for correlations between features and the Expectation Maximization (EM) algorithm for fitting the parameters of the mixture model. Several correlation metrics for sequencing data are provided and tested using simulations. Other extensions in the Discordant package include additional modeling for different types of differential correlation and a faster implementation that uses a subsampling routine to reduce run-time and address the assumption of independence between molecular feature pairs. With simulations and breast cancer miRNA-Seq and RNA-Seq data, we find that Spearman's correlation has the best performance among the tested correlation methods for identifying differential correlation. Application of Spearman's correlation in the Discordant method demonstrated the most power in ROC curves and sensitivity/specificity plots, and an improved ability to identify experimentally validated breast cancer miRNAs. We also considered including additional types of differential correlation, which showed a slight reduction in power due to the additional parameters that need to be estimated, but more versatility in applications. Finally, subsampling within the EM algorithm considerably decreased run-time with negligible effect on performance. A new method and R package called Discordant is presented for identifying differential correlation with sequencing data.
Based on comparisons with different correlation metrics, this study suggests Spearman's correlation is appropriate for sequencing data, but other correlation metrics are available to the user depending on the application and data type. The Discordant method can also be extended to investigate additional DC types and subsampling with the EM algorithm is now available for reduced run-time. These extensions to the R package make Discordant more robust and versatile for multiple -omics studies.
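The record above argues that rank-based (Spearman) correlation behaves better than Pearson correlation on negative-binomial counts. A minimal numpy-only sketch of both metrics on Gamma-Poisson (i.e., negative binomial) counts, not the Discordant package itself:

```python
import numpy as np

def rankdata(x):
    """Average ranks (ties share their mean rank), 1-based."""
    x = np.asarray(x)
    order = np.argsort(x, kind="mergesort")
    ranks = np.empty(len(x), dtype=float)
    ranks[order] = np.arange(1, len(x) + 1)
    vals, inv, counts = np.unique(x, return_inverse=True, return_counts=True)
    sums = np.zeros(len(vals))
    np.add.at(sums, inv, ranks)          # total rank per distinct value
    return sums[inv] / counts[inv]       # average rank per distinct value

def pearson(x, y):
    xc = x - np.mean(x); yc = y - np.mean(y)
    return float(xc @ yc / np.sqrt((xc @ xc) * (yc @ yc)))

def spearman(x, y):
    return pearson(rankdata(x), rankdata(y))

rng = np.random.default_rng(0)
lam = rng.gamma(shape=2.0, scale=5.0, size=500)  # shared latent rate
x = rng.poisson(lam)        # Gamma-Poisson mixture = negative binomial
y = rng.poisson(3.0 * lam)  # second feature, monotonically related rate
r_s, r_p = spearman(x, y), pearson(x, y)
```

Spearman reduces to Pearson on ranks, so monotone but nonlinear relationships between counts are captured without assuming Gaussian data.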
Fitting a function to time-dependent ensemble averaged data.
Fogelmark, Karl; Lomholt, Michael A; Irbäck, Anders; Ambjörnsson, Tobias
2018-05-03
Time-dependent ensemble averages, i.e., trajectory-based averages of some observable, are of importance in many fields of science. A crucial objective when interpreting such data is to fit these averages (for instance, squared displacements) with a function and extract parameters (such as diffusion constants). A commonly overlooked challenge in such function fitting procedures is that fluctuations around mean values, by construction, exhibit temporal correlations. We show that the only available general purpose function fitting methods, the correlated chi-square method and the weighted least-squares method (which neglects correlations), fail at either robust parameter estimation or accurate error estimation. We remedy this by deriving a new closed-form error estimation formula for weighted least-squares fitting. The new formula uses the full covariance matrix, i.e., it rigorously includes temporal correlations, but is free of the robustness issues inherent to the correlated chi-square method. We demonstrate its accuracy in four examples of importance in many fields: Brownian motion, damped harmonic oscillation, fractional Brownian motion, and continuous-time random walks. We also successfully apply our method, weighted least squares including correlation in error estimation (WLS-ICE), to particle tracking data. The WLS-ICE method is applicable to arbitrary fit functions, and we provide a publicly available WLS-ICE software.
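The closed-form error propagation described above can be sketched as ordinary weighted least squares whose parameter covariance keeps the full noise covariance, a sandwich-style formula in the spirit of WLS-ICE; the kernel and all values below are assumptions for illustration, not from the paper:

```python
import numpy as np

def wls_ice_sketch(X, y, C):
    """Weighted least-squares point estimate with an error formula that
    keeps the full covariance C (a sketch in the spirit of WLS-ICE,
    not the authors' published code)."""
    W = np.diag(1.0 / np.diag(C))            # weights ignore correlations...
    A = np.linalg.inv(X.T @ W @ X)
    beta = A @ (X.T @ W @ y)
    cov_beta = A @ X.T @ W @ C @ W @ X @ A   # ...but the errors do not
    return beta, cov_beta

# Toy ensemble-averaged MSD(t) = 2*D*t with an assumed exponential
# noise-correlation kernel (illustrative values throughout)
t = np.linspace(0.1, 1.0, 10)
X = t[:, None]                               # single parameter: 2*D
C = 0.01 * np.exp(-np.abs(t[:, None] - t[None, :]) / 0.3)
beta, cov_beta = wls_ice_sketch(X, 2.0 * 0.5 * t, C)  # noiseless, D = 0.5
```

On noiseless data the fit recovers 2*D exactly; with correlated noise the off-diagonal terms of `C` enlarge `cov_beta` relative to the naive diagonal formula.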
ERIC Educational Resources Information Center
Prevost, A. Toby; Mason, Dan; Griffin, Simon; Kinmonth, Ann-Louise; Sutton, Stephen; Spiegelhalter, David
2007-01-01
Practical meta-analysis of correlation matrices generally ignores covariances (and hence correlations) between correlation estimates. The authors consider various methods for allowing for covariances, including generalized least squares, maximum marginal likelihood, and Bayesian approaches, illustrated using a 6-dimensional response in a series of…
Correlated uncertainties in Monte Carlo reaction rate calculations
NASA Astrophysics Data System (ADS)
Longland, Richard
2017-07-01
Context. Monte Carlo methods have enabled nuclear reaction rates from uncertain inputs to be presented in a statistically meaningful manner. However, these uncertainties are currently computed assuming no correlations between the physical quantities that enter those calculations. This is not always an appropriate assumption. Astrophysically important reactions are often dominated by resonances, whose properties are normalized to a well-known reference resonance. This insight provides a basis from which to develop a flexible framework for including correlations in Monte Carlo reaction rate calculations. Aims: The aim of this work is to develop and test a method for including correlations in Monte Carlo reaction rate calculations when the input has been normalized to a common reference. Methods: A mathematical framework is developed for including correlations between input parameters in Monte Carlo reaction rate calculations. The magnitude of those correlations is calculated from the uncertainties typically reported in experimental papers, where full correlation information is not available. The method is applied to four illustrative examples: a fictional 3-resonance reaction, 27Al(p, γ)28Si, 23Na(p, α)20Ne, and 23Na(α, p)26Mg. Results: Reaction rates at low temperatures that are dominated by a few isolated resonances are found to be minimally impacted by correlation effects. However, reaction rates determined from many overlapping resonances can be significantly affected. Uncertainties in the 23Na(α, p)26Mg reaction, for example, increase by up to a factor of 5. This highlights the need to take correlation effects into account in reaction rate calculations, and provides insight into which cases are expected to be most affected by them. The impact of correlation effects on nucleosynthesis is also investigated.
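The correlated-sampling idea described in this record, drawing physical inputs that share a common reference normalization, can be sketched with a Cholesky factor of an assumed covariance; all numbers below are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
# Fractional uncertainties of three resonance strengths that were all
# normalized to the same reference measurement (illustrative values)
sigma = np.array([0.10, 0.15, 0.20])
rho = 0.7                                    # assumed induced correlation
R = np.full((3, 3), rho) + (1.0 - rho) * np.eye(3)
cov = np.outer(sigma, sigma) * R             # covariance of the log-factors
L = np.linalg.cholesky(cov)
z = rng.standard_normal((100_000, 3)) @ L.T  # correlated normal draws
strengths = np.exp(z)    # lognormal variation factors about nominal 1.0
```

Uncorrelated sampling would simply drop the off-diagonal terms of `cov`; the record's point is that doing so can misstate the resulting rate uncertainty.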
Theoretical development and first-principles analysis of strongly correlated systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Chen
A variety of quantum many-body methods have been developed for studying strongly correlated electron systems. We have also proposed a computationally efficient and accurate approach, named the correlation matrix renormalization (CMR) method, to address these challenges. The initial implementation of the CMR method is designed for molecules, which offer theoretical advantages including small system size, clear mechanisms, and strong correlation effects such as the bond-breaking process. The theoretical development and benchmark tests of the CMR method are included in this thesis. Meanwhile, the ground-state total energy is the most important property in electronic structure calculations. We also investigated an alternative approach to calculate the total energy, and extended this method to the magnetic anisotropy energy (MAE) of ferromagnetic materials. In addition, another theoretical tool, dynamical mean-field theory (DMFT) on top of DFT, has also been used in electronic structure calculations for an iridium oxide to study the phase transition, which results from an interplay of the d electrons' internal degrees of freedom.
DECOY: Documenting Experiences with Cigarettes and Other Tobacco in Young Adults
Berg, Carla J.; Haardörfer, Regine; Lewis, Michael; Getachew, Betelihem; Lloyd, Steven A.; Thomas, Sarah Fretti; Lanier, Angela; Trepanier, Kelleigh; Johnston, Teresa; Grimsley, Linda; Foster, Bruce; Benson, Stephanie; Smith, Alicia; Barr, Dana Boyd; Windle, Michael
2016-01-01
Objectives: We examined psychographic characteristics associated with tobacco use among Project DECOY participants. Methods: Project DECOY is a 2-year longitudinal mixed-methods study examining risk for tobacco use among 3418 young adults across 7 Georgia colleges/universities. Baseline measures included sociodemographics, tobacco use, and psychographics using the Values, Attitudes, and Lifestyle Scale. Bivariate and multivariable analyses were conducted to identify correlates of tobacco use. Results: Past 30-day use prevalence was: 13.3% cigarettes; 11.3% little cigars/cigarillos (LCCs); 3.6% smokeless tobacco; 10.9% e-cigarettes; and 12.2% hookah. Controlling for sociodemographics, correlates of cigarette use included greater novelty seeking (p < .001) and intellectual curiosity (p = .010) and less interest in tangible creation (p = .002) and social conservatism (p < .001). Correlates of LCC use included greater novelty seeking (p < .001) and greater fashion orientation (p = .007). Correlates of smokeless tobacco use included greater novelty seeking (p = .006) and less intellectual curiosity (p < .001). Correlates of e-cigarette use included greater novelty seeking (p < .001) and less social conservatism (p = .002). Correlates of hookah use included greater novelty seeking (p < .001), fashion orientation (p = .044), and self-focused thinking (p = .002), and less social conservatism (p < .001). Conclusions: Psychographic characteristics distinguish users of different tobacco products. PMID:27103410
Dai, Huanping; Micheyl, Christophe
2012-11-01
Psychophysical "reverse-correlation" methods allow researchers to gain insight into the perceptual representations and decision weighting strategies of individual subjects in perceptual tasks. Although these methods have gained momentum, until recently their development was limited to experiments involving only two response categories. Recently, two approaches for estimating decision weights in m-alternative experiments have been put forward. One approach extends the two-category correlation method to m > 2 alternatives; the second uses multinomial logistic regression (MLR). In this article, the relative merits of the two methods are discussed, and the issues of convergence and statistical efficiency of the methods are evaluated quantitatively using Monte Carlo simulations. The results indicate that, for a range of values of the number of trials, the estimated weighting patterns are closer to their asymptotic values for the correlation method than for the MLR method. Moreover, for the MLR method, weight estimates for different stimulus components can exhibit strong correlations, making the analysis and interpretation of measured weighting patterns less straightforward than for the correlation method. These and other advantages of the correlation method, which include computational simplicity and a close relationship to other well-established psychophysical reverse-correlation methods, make it an attractive tool to uncover decision strategies in m-alternative experiments.
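A hedged sketch of the two-category correlation method that the record above extends: simulate a hypothetical linear observer, then estimate decision weights from the trial-by-trial correlation between stimulus component perturbations and responses (the weights, trial count, and noise level are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
n_trials, n_comp = 5000, 6
true_w = np.array([0.6, 0.3, 0.1, 0.0, 0.0, 0.0])  # hypothetical observer
stim = rng.standard_normal((n_trials, n_comp))     # component perturbations
decision = (stim @ true_w + 0.5 * rng.standard_normal(n_trials)) > 0

# Correlation method: the weight of component i is proportional to the
# covariance between its perturbation and the (binary) response.
resp_c = decision.astype(float) - decision.mean()
stim_c = stim - stim.mean(axis=0)
w_hat = (stim_c * resp_c[:, None]).mean(axis=0)
w_hat = w_hat / np.abs(w_hat).max()                # normalize to peak 1
```

For Gaussian perturbations and a linear decision variable, the covariance pattern is proportional to the true weighting pattern, which is why the recovered profile tracks `true_w`.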
Functional Multiple-Set Canonical Correlation Analysis
ERIC Educational Resources Information Center
Hwang, Heungsun; Jung, Kwanghee; Takane, Yoshio; Woodward, Todd S.
2012-01-01
We propose functional multiple-set canonical correlation analysis for exploring associations among multiple sets of functions. The proposed method includes functional canonical correlation analysis as a special case when only two sets of functions are considered. As in classical multiple-set canonical correlation analysis, computationally, the…
Topics on Test Methods for Space Systems and Operations Safety: Applicability of Experimental Data
NASA Technical Reports Server (NTRS)
Hirsch, David B.
2009-01-01
This viewgraph presentation reviews topics on test methods for space systems and operations safety through experimentation and analysis. The contents include: 1) Perception of reality through experimentation and analysis; 2) Measurements, methods, and correlations with real life; and 3) Correlating laboratory aerospace materials flammability data with data in spacecraft environments.
Core-core and core-valence correlation
NASA Technical Reports Server (NTRS)
Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.
1988-01-01
The effect of (1s) core correlation on properties and energy separations was analyzed using full configuration-interaction (FCI) calculations. The Be ¹S-¹P, C ³P-⁵S, and CH⁺ ¹Σ⁺-¹Π separations, as well as the CH⁺ spectroscopic constants, dipole moment, and ¹Σ⁺-¹Π transition dipole moment, were studied. The results of the FCI calculations are compared to those obtained using approximate methods. In addition, the generation of atomic natural orbital (ANO) basis sets, as a method for contracting a primitive basis set for both valence and core correlation, is discussed. When both core-core and core-valence correlation are included in the calculation, no suitable truncated CI approach consistently reproduces the FCI, and contraction of the basis set is very difficult. If the (nearly constant) core-core correlation is eliminated and only the core-valence correlation is included, CASSCF/MRCI approaches reproduce the FCI results and basis set contraction is significantly easier.
Thresholding Based on Maximum Weighted Object Correlation for Rail Defect Detection
NASA Astrophysics Data System (ADS)
Li, Qingyong; Huang, Yaping; Liang, Zhengping; Luo, Siwei
Automatic thresholding is an important technique for rail defect detection, but traditional methods are not well suited to the characteristics of this application. This paper proposes the Maximum Weighted Object Correlation (MWOC) thresholding method, which exploits the facts that rail images are unimodal and that the defect proportion is small. MWOC selects a threshold by optimizing the product of the object correlation and a weight term that expresses the proportion of thresholded defects. Our experimental results demonstrate that MWOC achieves a misclassification error of 0.85% and outperforms other well-established thresholding methods, including the Otsu, maximum correlation thresholding, maximum entropy thresholding, and valley-emphasis methods, for the application of rail defect detection.
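The search structure of such a thresholding method can be sketched as an exhaustive scan over gray levels; the criterion below is a simplified stand-in for MWOC's product of object correlation and proportion weight, not the paper's exact formulas:

```python
import numpy as np

def weighted_object_criterion(img, mask):
    # Hypothetical stand-in for MWOC's product of object correlation and
    # a defect-proportion weight; NOT the paper's exact definitions.
    p = mask.mean()                        # fraction flagged as defect
    sep = abs(img[mask].mean() - img[~mask].mean())
    return (1.0 - p) * sep                 # favors small, well-separated objects

def threshold_search(img, criterion):
    """Scan all gray levels, keep the maximizer of criterion(img, mask)."""
    best_t, best_v = 0, -np.inf
    for t in range(1, 256):
        mask = img < t                     # defects assumed darker than rail
        if mask.any() and (~mask).any():
            v = criterion(img, mask)
            if v > best_v:
                best_t, best_v = t, v
    return best_t

# Synthetic rail patch: bright surface with one small dark defect
img = np.full((50, 50), 200, dtype=np.uint8)
img[10:15, 10:15] = 50
t_star = threshold_search(img, weighted_object_criterion)
```

Any of the compared methods (Otsu, maximum correlation, maximum entropy, valley emphasis) fits the same scan-and-maximize skeleton with a different criterion function.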
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marshall, William BJ J; Rearden, Bradley T
The validation of neutron transport methods used in nuclear criticality safety analyses is required by consensus American National Standards Institute/American Nuclear Society (ANSI/ANS) standards. In the last decade, there has been increased interest in correlations among critical experiments used in validation that share physical attributes and which impact the independence of each measurement. The statistical methods included in many of the frequently cited guidance documents on performing validation calculations incorporate the assumption that all individual measurements are independent, so little guidance is available to practitioners on the topic. Typical guidance includes recommendations to select experiments from multiple facilities and experiment series in an attempt to minimize the impact of correlations or common-cause errors in experiments. Recent efforts have been made both to determine the magnitude of such correlations between experiments and to develop and apply methods for adjusting the bias and bias uncertainty to account for the correlations. This paper describes recent work performed at Oak Ridge National Laboratory using the Sampler sequence from the SCALE code system to develop experimental correlations using a Monte Carlo sampling technique. Sampler will be available for the first time with the release of SCALE 6.2, and a brief introduction to the methods used to calculate experiment correlations within this new sequence is presented in this paper. Techniques to utilize these correlations in the establishment of upper subcritical limits are the subject of a companion paper and will not be discussed here. Example experimental uncertainties and correlation coefficients are presented for a variety of low-enriched uranium water-moderated lattice experiments selected for use in a benchmark exercise by the Working Party on Nuclear Criticality Safety Subgroup on Uncertainty Analysis in Criticality Safety Analyses. The results include studies on the effect of fuel rod pitch on the correlations, and some observations are also made regarding difficulties in determining experimental correlations using the Monte Carlo sampling technique.
Ma, Chuang; Wang, Xiangfeng
2012-09-01
One of the computational challenges in plant systems biology is to accurately infer transcriptional regulation relationships based on correlation analyses of gene expression patterns. Despite several correlation methods that are applied in biology to analyze microarray data, concerns regarding the compatibility of these methods with the gene expression data profiled by high-throughput RNA transcriptome sequencing (RNA-Seq) technology have been raised. These concerns are mainly due to the fact that the distribution of read counts in RNA-Seq experiments is different from that of fluorescence intensities in microarray experiments. Therefore, a comprehensive evaluation of the existing correlation methods and, if necessary, introduction of novel methods into biology is appropriate. In this study, we compared four existing correlation methods used in microarray analysis and one novel method called the Gini correlation coefficient on previously published microarray-based and sequencing-based gene expression data in Arabidopsis (Arabidopsis thaliana) and maize (Zea mays). The comparisons were performed on more than 11,000 regulatory relationships in Arabidopsis, including 8,929 pairs of transcription factors and target genes. Our analyses pinpointed the strengths and weaknesses of each method and indicated that the Gini correlation can compensate for the shortcomings of the Pearson correlation, the Spearman correlation, the Kendall correlation, and the Tukey's biweight correlation. The Gini correlation method, along with the other four methods evaluated in this study, was implemented as an R package named rsgcc that can be utilized as an alternative option for biologists to perform clustering analyses of gene expression patterns or transcriptional network analyses.
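One common definition of the Gini correlation combines the values of one variable with the ranks of the other; a numpy sketch (ordinal ranks, no tie handling; the rsgcc package is the reference implementation):

```python
import numpy as np

def _ranks(v):
    r = np.empty(len(v))
    r[np.argsort(v, kind="mergesort")] = np.arange(len(v), dtype=float)
    return r

def gini_corr(x, y):
    """gcc(x, y) = cov(x, rank(y)) / cov(x, rank(x)).
    Value information comes from x and rank information from y, so
    gcc(x, y) != gcc(y, x) in general."""
    x = np.asarray(x, float); y = np.asarray(y, float)
    return np.cov(x, _ranks(y))[0, 1] / np.cov(x, _ranks(x))[0, 1]

x = np.array([3.0, 1.0, 4.0, 1.5, 9.0, 2.6])
g = gini_corr(x, x ** 3)   # a monotone transform preserves ranks
```

Because only the ranks of the second argument enter the numerator, monotone transformations of `y` leave the coefficient unchanged, which is the robustness property exploited in the study.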
ERIC Educational Resources Information Center
Williams, Kathryn R.; King, Roy W.
1990-01-01
Examined are some of the types of two-dimensional spectra. Their application to nuclear magnetic resonance for the elucidation of molecular structure is discussed. Included are J spectroscopy, H-H correlation spectroscopy, heteronuclear correlation spectroscopy, carbon-carbon correlation, nuclear Overhauser effect correlation, experimental…
H4: A challenging system for natural orbital functional approximations
NASA Astrophysics Data System (ADS)
Ramos-Cordoba, Eloy; Lopez, Xabier; Piris, Mario; Matito, Eduard
2015-10-01
The correct description of nondynamic correlation by electronic structure methods not belonging to the multireference family is a challenging issue. The transition from D2h to D4h symmetry in the H4 molecule is among the simplest archetypal examples illustrating the consequences of missing nondynamic correlation effects. The resurgence of interest in density matrix functional methods has brought several new methods, including the family of Piris Natural Orbital Functionals (PNOF). In this work, we compare PNOF5 and PNOF6, which include nondynamic electron correlation effects to some extent, with other standard ab initio methods on the H4 D4h/D2h potential energy surface (PES). Thus far, the erroneous behavior of single-reference methods at the D2h-D4h transition of H4 has been attributed to an incorrect account of nondynamic correlation effects, whereas in geminal-based approaches it has been assigned to a wrong coupling of spins and the localized nature of the orbitals. We show that interpair nondynamic correlation is actually the key to a cusp-free, qualitatively correct description of the H4 PES. By introducing interpair nondynamic correlation, PNOF6 is shown to avoid cusps and provide the correct smooth PES features at distances close to equilibrium, along with correct total and local spin properties and electron delocalization, as reflected by natural orbitals and multicenter delocalization indices.
Two-dimensional correlation spectroscopy — Biannual survey 2007-2009
NASA Astrophysics Data System (ADS)
Noda, Isao
2010-06-01
The publication activities in the field of 2D correlation spectroscopy are surveyed with the emphasis on papers published during the last two years. Pertinent review articles and conference proceedings are discussed first, followed by the examination of noteworthy developments in the theory and applications of 2D correlation spectroscopy. Specific topics of interest include Pareto scaling, analysis of randomly sampled spectra, 2D analysis of data obtained under multiple perturbations, evolution of 2D spectra along additional variables, comparison and quantitative analysis of multiple 2D spectra, orthogonal sample design to eliminate interfering cross peaks, quadrature orthogonal signal correction and other data transformation techniques, data pretreatment methods, moving window analysis, extension of kernel and global phase angle analysis, covariance and correlation coefficient mapping, variant forms of sample-sample correlation, and different display methods. Various static and dynamic perturbation methods used in 2D correlation spectroscopy, e.g., temperature, composition, chemical reactions, H/D exchange, physical phenomena like sorption, diffusion and phase transitions, optical and biological processes, are reviewed. Analytical probes used in 2D correlation spectroscopy include IR, Raman, NIR, NMR, X-ray, mass spectrometry, chromatography, and others. Application areas of 2D correlation spectroscopy are diverse, encompassing synthetic and natural polymers, liquid crystals, proteins and peptides, biomaterials, pharmaceuticals, food and agricultural products, solutions, colloids, surfaces, and the like.
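The synchronous and asynchronous 2D correlation spectra at the core of this field follow Noda's standard construction, which can be sketched as follows (a generic implementation, not tied to any package in the survey; the toy data are an assumption):

```python
import numpy as np

def two_d_correlation(Y):
    """Noda's synchronous (Phi) and asynchronous (Psi) 2D correlation
    spectra from m perturbation-ordered spectra of n channels (rows of Y)."""
    m, _ = Y.shape
    Yd = Y - Y.mean(axis=0)                   # dynamic spectra
    phi = Yd.T @ Yd / (m - 1)                 # synchronous spectrum
    j, k = np.indices((m, m))
    with np.errstate(divide="ignore"):
        N = np.where(j == k, 0.0, 1.0 / (np.pi * (k - j)))  # Hilbert-Noda
    psi = Yd.T @ N @ Yd / (m - 1)             # asynchronous spectrum
    return phi, psi

rng = np.random.default_rng(4)
Y = rng.standard_normal((8, 5))   # 8 spectra, 5 spectral channels (toy data)
phi, psi = two_d_correlation(Y)
```

The synchronous map is symmetric with channel variances on its diagonal, while the asynchronous map is antisymmetric, which is what makes cross-peak sign analysis possible.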
Personalized Medicine in Veterans with Traumatic Brain Injuries
2013-05-01
Pair-Group Method using Arithmetic averages (UPGMA) based on cosine correlation of row mean-centered log2 signal values; this was the top 50%-tile... clustering was performed by the UPGMA method using cosine correlation as the similarity metric. For comparative purposes, clustered heat maps included... non-mTBI cases were subjected to unsupervised hierarchical clustering analysis using the UPGMA algorithm with cosine correlation as the similarity metric
NASA Astrophysics Data System (ADS)
Rodionov, A. A.; Turchin, V. I.
2017-06-01
We propose a new method of signal processing in antenna arrays, called Maximum-Likelihood Signal Classification. The proposed method is based on a model in which the interference includes a component with a rank-deficient correlation matrix. Using numerical simulation, we show that the proposed method yields a variance of the estimated arrival angle of the plane wave that is close to the Cramér-Rao lower bound, and that it is more efficient than the well-known MUSIC method. It is also shown that the proposed technique can be used efficiently for estimating the time dependence of the useful signal.
Risch, John S. (Kennewick, WA); Dowson, Scott T. (West Richland, WA); Hart, Michelle L. (Richland, WA); Hatley, Wes L. (Kennewick, WA)
2008-05-13
A method of displaying correlations among information objects comprises receiving a query against a database; obtaining a query result set; and generating a visualization representing the components of the result set, the visualization including one of a plane and line to represent a data field, nodes representing data values, and links showing correlations among fields and values. Other visualization methods and apparatus are disclosed.
Feasibility study consisting of a review of contour generation methods from stereograms
NASA Technical Reports Server (NTRS)
Kim, C. J.; Wyant, J. C.
1980-01-01
A review of techniques for obtaining contour information from stereo pairs is given. Photogrammetric principles, including a description of stereoscopic vision, are presented. The use of conventional contour generation methods, such as the photogrammetric plotting technique, the electronic correlator, and the digital correlator, is described. Coherent optical techniques for contour generation are discussed and compared to the electronic correlator. The optical techniques are divided into two categories: (1) image plane operation and (2) frequency plane operation. The image plane correlators are further divided into three categories: (1) image-to-image correlators, (2) interferometric correlators, and (3) positive-negative transparencies. The frequency plane correlators are divided into two categories: (1) correlation of Fourier transforms and (2) filtering techniques.
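The digital correlator idea reviewed above reduces, in miniature, to sliding a window along a scanline and maximizing normalized cross-correlation; a sketch under the assumption of a purely horizontal shift (the signals are synthetic):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equal-length windows."""
    a = a - a.mean(); b = b - b.mean()
    d = np.sqrt((a @ a) * (b @ b))
    return float(a @ b / d) if d > 0 else 0.0

def disparity(left, right, x, win=5, search=10):
    """Horizontal shift maximizing window correlation along one scanline;
    in a stereo pair, elevation follows from this disparity."""
    ref = left[x : x + win]
    scores = [ncc(ref, right[x + d : x + d + win]) for d in range(search)]
    return int(np.argmax(scores))

rng = np.random.default_rng(5)
left = rng.standard_normal(100)
right = np.concatenate([np.zeros(3), left[:-3]])   # known 3-pixel shift
d_hat = disparity(left, right, x=20)
```

A real plotter repeats this per pixel and converts disparity to height through the camera geometry; the optical correlators in the review compute essentially the same inner products in hardware.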
Protein structure recognition: From eigenvector analysis to structural threading method
NASA Astrophysics Data System (ADS)
Cao, Haibo
In this work, we try to understand the protein folding problem using the pair-wise hydrophobic interaction as the dominant interaction in the protein folding process. We found a strong correlation between the amino acid sequence and the corresponding native structure of a protein. Applications of this correlation discussed in this dissertation include domain partition and a new structural threading method, as well as the performance of this method in the CASP5 competition. In the first part, we give a brief introduction to the protein folding problem. Some essential knowledge and progress from other research groups are discussed. This part includes discussions of interactions among amino acid residues, the lattice HP model, and the designability principle. In the second part, we establish the correlation between the amino acid sequence and the corresponding native structure of a protein. This correlation was observed in our eigenvector study of the protein contact matrix. We believe the correlation is universal, and thus it can be used for automatic partition of protein structures into folding domains. In the third part, we discuss a threading method based on the correlation between the amino acid sequence and the dominant eigenvector of the structure contact matrix. A mathematically straightforward iteration scheme provides a self-consistent optimum global sequence-structure alignment. The computational efficiency of this method makes it possible to search whole protein structure databases for structural homology without relying on sequence similarity. The sensitivity and specificity of this method are discussed, along with a case of blind-test prediction. In the appendix, we list the overall performance of this threading method in the CASP5 blind test in comparison with other existing approaches.
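The dominant eigenvector of a contact matrix, central to the threading method described, can be obtained by simple power iteration; the 6-residue contact matrix below is a toy illustration, not a real protein:

```python
import numpy as np

def dominant_eigvec(M, iters=500):
    """Power iteration for the dominant eigenvector of a symmetric
    nonnegative matrix (the Perron vector of a contact matrix)."""
    v = np.ones(M.shape[0]) / np.sqrt(M.shape[0])
    for _ in range(iters):
        w = M @ v
        v = w / np.linalg.norm(w)
    return v

# Toy 6-residue contact matrix: two bonded triangles (illustrative only)
C = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)
v = dominant_eigvec(C)
```

By the Perron-Frobenius theorem the resulting vector is entrywise positive, so each residue gets a positive "centrality" score that can be matched against a hydrophobicity profile.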
Method for spatially distributing a population
Bright, Edward A [Knoxville, TN; Bhaduri, Budhendra L [Knoxville, TN; Coleman, Phillip R [Knoxville, TN; Dobson, Jerome E [Lawrence, KS
2007-07-24
A process for spatially distributing a population count within a geographically defined area can include the steps of logically correlating land usages apparent from a geographically defined area to geospatial features in the geographically defined area and allocating portions of the population count to regions of the geographically defined area having the land usages, according to the logical correlation. The process can also include weighing the logical correlation for determining the allocation of portions of the population count and storing the allocated portions within a searchable data store. The logically correlating step can include the step of logically correlating time-based land usages to geospatial features of the geographically defined area. The process can also include obtaining a population count for the geographically defined area, organizing the geographically defined area into a plurality of sectors, and verifying the allocated portions according to direct observation.
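The allocation step of such a process can be sketched as proportional (dasymetric) weighting of the population count by land use and area; the sector classes and density weights below are hypothetical, not from the patent:

```python
import numpy as np

# Hypothetical sectors of a geographically defined area, each with a
# land-use class and an area; weights are assumed relative densities.
land_use = ["residential", "commercial", "forest", "residential"]
area = np.array([2.0, 1.0, 5.0, 3.0])                 # km^2 per sector
weight = {"residential": 10.0, "commercial": 2.0, "forest": 0.1}

total_pop = 12000
w = np.array([weight[u] for u in land_use]) * area    # usage x area score
alloc = total_pop * w / w.sum()                       # proportional share
```

The allocations sum exactly to the input count, and sparsely inhabited land uses (forest) receive correspondingly small shares, mirroring the "logical correlation" weighting in the claims.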
Geophysical methods in Geology. Second edition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharma, P.V.
This book presents an introduction to the methods of geophysics and their application to geological problems. The text emphasizes the broader aspects of geophysics, including the way in which geophysical methods help solve structural, correlational, and geochronological problems. Stress is laid on the principles and applications of methods rather than on instrumental techniques. This edition includes coverage of recent developments in geophysics and geology. New topics are introduced, including paleomagnetic methods, electromagnetic methods, microplate tectonics, and the use of multiple geophysical techniques.
Zhao, Yu Xi; Xie, Ping; Sang, Yan Fang; Wu, Zi Yi
2018-04-01
Hydrological process evaluation is time-dependent. Hydrological time series that include dependence components do not meet the data-consistency assumption of hydrological computation. Both factors cause great difficulty for water research. Given the existence of hydrological dependence variability, we proposed a correlation-coefficient-based method for significance evaluation of hydrological dependence based on an auto-regression model. By calculating the correlation coefficient between the original series and its dependence component and selecting reasonable thresholds for the correlation coefficient, the method divides the significance of dependence into five grades: no variability, weak variability, mid variability, strong variability, and drastic variability. By deducing the relationship between the correlation coefficient and the auto-correlation coefficients of the series at each order, we found that the correlation coefficient is mainly determined by the magnitudes of the auto-correlation coefficients from order 1 to order p, which clarifies the theoretical basis of the method. With first-order and second-order auto-regression models as examples, the reasonability of the deduced formula was verified through Monte Carlo experiments classifying the relationship between the correlation coefficient and the auto-correlation coefficients. The method was used to analyze three observed hydrological time series. The results indicated the coexistence of stochastic and dependence characteristics in hydrological processes.
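The core of the approach above can be sketched as follows: fit an AR(1) dependence component via the lag-1 autocorrelation, correlate it with the original series, and grade the result against thresholds. This is our simplified construction; the thresholds and the toy series are assumptions, not the paper's calibrated values.

```python
import math

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def dependence_grade(x, thresholds=(0.2, 0.4, 0.6, 0.8)):
    """Grade dependence by the correlation between the series and its
    AR(1) dependence component (thresholds are illustrative)."""
    phi = pearson(x[:-1], x[1:])            # lag-1 autocorrelation ~ AR(1) fit
    component = [phi * v for v in x[:-1]]   # AR(1) dependence component
    r = abs(pearson(x[1:], component))      # for AR(1) this reduces to |phi|
    labels = ["no", "weak", "mid", "strong", "drastic"]
    return labels[sum(r >= t for t in thresholds)], r

# Deterministic toy series with strong persistence.
x = [1.0]
for i in range(40):
    x.append(0.9 * x[-1] + 0.05 * math.sin(1.7 * i))
label, r = dependence_grade(x)
```

Higher-order AR(p) fits would replace the single lag-1 term with p lagged terms, matching the paper's order-1-to-p analysis.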
Statistical image reconstruction from correlated data with applications to PET
Alessio, Adam; Sauer, Ken; Kinahan, Paul
2008-01-01
Most statistical reconstruction methods for emission tomography are designed for data modeled as conditionally independent Poisson variates. In reality, due to scanner detectors, electronics and data processing, correlations are introduced into the data resulting in dependent variates. In general, these correlations are ignored because they are difficult to measure and lead to computationally challenging statistical reconstruction algorithms. This work addresses the second concern, seeking to simplify the reconstruction of correlated data and provide a more precise image estimate than the conventional independent methods. In general, correlated variates have a large non-diagonal covariance matrix that is computationally challenging to use as a weighting term in a reconstruction algorithm. This work proposes two methods to simplify the use of a non-diagonal covariance matrix as the weighting term by (a) limiting the number of dimensions in which the correlations are modeled and (b) adopting flexible, yet computationally tractable, models for correlation structure. We apply and test these methods with simple simulated PET data and data processed with the Fourier rebinning algorithm which include the one-dimensional correlations in the axial direction and the two-dimensional correlations in the transaxial directions. The methods are incorporated into a penalized weighted least-squares 2D reconstruction and compared with a conventional maximum a posteriori approach. PMID:17921576
Shin, Saemi; Moon, Hyung-Il; Lee, Kwon Seob; Hong, Mun Ki; Byeon, Sang-Hoon
2014-11-20
This study aimed to devise a method for prioritizing hazardous chemicals for further regulatory action. To accomplish this objective, we chose appropriate indicators and algorithms. Nine indicators from the Globally Harmonized System of Classification and Labeling of Chemicals were used to identify categories to which the authors assigned numerical scores. Exposure indicators included handling volume, distribution, and exposure level. To test the method devised by this study, sixty-two harmful substances controlled by the Occupational Safety and Health Act in Korea, including acrylamide, acrylonitrile, and styrene, were ranked using the proposed method. The correlation coefficients between total score and each indicator ranged from 0.160 to 0.641, and those between total score and hazard indicators ranged from 0.603 to 0.641. The latter were higher than the correlation coefficients between total score and exposure indicators, which ranged from 0.160 to 0.421. Correlations between individual indicators were low (-0.240 to 0.376), except for that between handling volume and distribution (0.613), suggesting that the indicators were not strongly correlated with one another. These low pairwise correlations indicate that the indicators are independent and were well chosen for prioritizing harmful chemicals. The method proposed by this study can improve the cost efficiency of chemical management as utilized in occupational regulatory systems.
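The independence check described above amounts to computing pairwise Pearson correlations among indicator score lists. A minimal sketch follows; the indicator names and score values are invented for illustration and are not the study's data.

```python
# Pairwise Pearson correlations among indicator scores
# (indicator names and values are made up for illustration).
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

indicators = {
    "handling_volume": [3, 1, 4, 2, 5],
    "exposure_level":  [2, 5, 1, 4, 3],
    "hazard_score":    [5, 4, 3, 2, 1],
}
names = list(indicators)
corr = {(a, b): pearson(indicators[a], indicators[b])
        for a in names for b in names}
```

Low off-diagonal values in such a matrix support the claim that the chosen indicators carry largely independent information.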
Parallel image logical operations using cross correlation
NASA Technical Reports Server (NTRS)
Strong, J. P., III
1972-01-01
Methods are presented for counting areas in an image in a parallel manner using noncoherent optical techniques. The techniques presented include the Levialdi algorithm for counting, optical techniques for binary operations, and cross-correlation.
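A digital analogue of the optical binary operation above is discrete cross-correlation of two binary images, where each term counts coincident 1-pixels at a given shift. This sketch and its tiny test image are our own illustration, not the paper's optical implementation.

```python
# Discrete cross-correlation of binary images: each entry counts
# overlapping 1-pixels at a shift (dy, dx) -- the digital analogue
# of the optical AND described in the abstract.
def overlap_count(a, b, dy, dx):
    """Number of coincident 1-pixels when b is shifted by (dy, dx)."""
    h, w = len(a), len(a[0])
    total = 0
    for y in range(h):
        for x in range(w):
            yy, xx = y + dy, x + dx
            if 0 <= yy < h and 0 <= xx < w:
                total += a[y][x] & b[yy][xx]
    return total

# Invented 3x3 binary image; the zero-shift term counts all its 1s.
img = [[1, 0, 0],
       [1, 1, 0],
       [0, 1, 1]]
peak = overlap_count(img, img, 0, 0)
```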
NASA Technical Reports Server (NTRS)
Bhatia, A. K.; Temkin, A.; Fisher, Richard R. (Technical Monitor)
2001-01-01
We report on the first part of a study of electron-hydrogen scattering, using a method which allows for the ab initio calculation of total and elastic cross sections at higher energies. In its general form the method uses complex 'radial' correlation functions in a (Kohn) T-matrix formalism. The titled method, abbreviated Complex Correlation Kohn T (CCKT) method, is reviewed in the context of electron-hydrogen scattering, including the derivation of the equation for the (complex) scattering function and the extraction of the scattering information from the latter. The calculation reported here is restricted to S-waves in the elastic region, where the correlation functions can be taken, without loss of generality, to be real. Phase shifts are calculated using Hylleraas-type correlation functions with up to 95 terms. Results are rigorous lower bounds; they are in general agreement with those of Schwartz, but they are more accurate and lie outside his error bounds at a couple of energies.
Pickering, Ethan M; Hossain, Mohammad A; Mousseau, Jack P; Swanson, Rachel A; French, Roger H; Abramson, Alexis R
2017-01-01
Current approaches to building efficiency diagnoses include conventional energy audit techniques that can be expensive and time consuming. In contrast, virtual energy audits of readily available 15-minute-interval building electricity consumption are being explored to provide quick, inexpensive, and useful insights into building operation characteristics. A cross-sectional analysis of six buildings in two different climate zones provides methods for data cleaning, population-based building comparisons, and relationships (correlations) of weather and electricity consumption. Data cleaning methods have been developed to categorize and appropriately filter or correct anomalous data including outliers, missing data, and erroneous values (resulting in < 0.5% anomalies). The utility of a cross-sectional analysis of a sample set of buildings' electricity consumption is found through comparisons of baseload, daily consumption variance, and energy use intensity. Correlations of weather and electricity consumption 15-minute-interval datasets show important relationships for the heating and cooling seasons using computed correlations of a Time-Specific-Averaged-Ordered Variable (exterior temperature) and corresponding averaged variables (electricity consumption) (the TSAOV method). The TSAOV method is unique in that it introduces time of day as a third variable while also minimizing randomness in both correlated variables through averaging. This study found that many of the pair-wise linear correlation analyses lacked strong relationships, prompting the development of the new TSAOV method to uncover the causal relationship between electricity and weather.
We conclude that a combination of varied HVAC system operations, building thermal mass, plug load use, and building set point temperatures is likely responsible for the poor correlations in the prior studies, while the correlation of time-specific-averaged-ordered temperature and corresponding averaged variables developed herein adequately accounts for these issues and enables discovery of strong linear pair-wise correlation R values. TSAOV correlations lay the foundation for a new approach to building studies that mitigates plug-load interference and yields more accurate insights into the weather-energy relationship for all building types. Over all six buildings analyzed, the TSAOV method reported very significant average correlations per building of 0.94 to 0.82 in magnitude. Our rigorous statistics-based methods applied to 15-minute-interval electricity data further enable virtual energy audits of buildings to quickly and inexpensively inform energy savings measures. PMID:29088269
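The TSAOV idea above can be sketched as order-then-average-then-correlate within one time-of-day slot. This is a simplified construction under our own assumptions; the bin count and the toy cooling-season readings are invented, not values from the study.

```python
# Simplified TSAOV sketch: within one time-of-day slot, order
# observations by exterior temperature, average them into bins,
# then correlate the binned means.
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def tsaov_correlation(temps, kwh, bins=4):
    pairs = sorted(zip(temps, kwh))               # order by temperature
    size = len(pairs) // bins
    t_means, k_means = [], []
    for i in range(bins):
        chunk = pairs[i * size:(i + 1) * size]
        t_means.append(sum(t for t, _ in chunk) / len(chunk))
        k_means.append(sum(k for _, k in chunk) / len(chunk))
    return pearson(t_means, k_means)

# Invented cooling-season readings from one time-of-day slot:
# consumption rises with exterior temperature, with scatter.
temps = [18, 30, 22, 27, 20, 33, 25, 29, 19, 31, 24, 28]
kwh = [40, 70, 52, 63, 45, 78, 57, 66, 43, 72, 55, 64]
r = tsaov_correlation(temps, kwh)
```

Averaging within bins suppresses the scatter that weakens raw pair-wise correlations, which is the effect the TSAOV method exploits.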
Tong, Mingsi; Song, John; Chu, Wei; Thompson, Robert M
2014-01-01
The Congruent Matching Cells (CMC) method for ballistics identification was invented at the National Institute of Standards and Technology (NIST). The CMC method is based on the correlation of pairs of small correlation cells instead of the correlation of entire images. Four identification parameters – TCCF, Tθ, Tx and Ty – are proposed for identifying correlated cell pairs originating from the same firearm. The correlation conclusion (matching or non-matching) is determined by whether the number of CMCs is ≥ 6. This method has been previously validated using a set of 780 pair-wise 3D topography images. However, most ballistic images stored in current local and national databases are in an optical intensity (grayscale) format. As a result, the reliability of applying the CMC method to optical intensity images is an important issue. In this paper, optical intensity images of breech face impressions captured on the same set of 40 cartridge cases are correlated and analyzed as a validation test of the CMC method using optical images. This includes correlations of 63 pairs of matching images and 717 pairs of non-matching images under top ring lighting. Tests of the method do not produce any false identification (false positive) or false exclusion (false negative) results, which supports the CMC method and the proposed identification criterion, C = 6, for firearm breech face identifications using optical intensity images. PMID:26601045
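The cell-based correlation idea can be sketched digitally: split two images into cells, compute each aligned cell pair's normalized cross-correlation (CCF), and declare identification when at least C = 6 cells pass a CCF threshold. This hypothetical sketch omits the registration search over (x, y, θ) that the real CMC method performs, and the image and threshold values are assumptions.

```python
# Hypothetical digital sketch of the CMC idea (no registration search).
def ncc(a, b):
    """Normalized cross-correlation of two equal-size flattened cells."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b)) ** 0.5
    return num / den if den else 0.0

def cmc_count(img1, img2, cell=2, t_ccf=0.9):
    """Count aligned cells whose CCF exceeds the threshold."""
    h, w = len(img1), len(img1[0])
    count = 0
    for y in range(0, h, cell):
        for x in range(0, w, cell):
            c1 = [img1[j][i] for j in range(y, y + cell) for i in range(x, x + cell)]
            c2 = [img2[j][i] for j in range(y, y + cell) for i in range(x, x + cell)]
            if ncc(c1, c2) >= t_ccf:
                count += 1
    return count

# Two identical 6x6 toy "topographies": every cell correlates perfectly.
img = [[(i * 7 + j * 3) % 5 for i in range(6)] for j in range(6)]
matched = cmc_count(img, img) >= 6       # the C = 6 criterion
```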
Umay, Ebru Karaca; Unlu, Ece; Saylam, Guleser Kılıc; Cakci, Aytul; Korkmaz, Hakan
2013-09-01
In this study, we aimed to evaluate dysphagia in early stroke patients using a bedside screening test, flexible fiberoptic endoscopic evaluation of swallowing (FFEES), and electrophysiological evaluation (EE), and to compare the effectiveness of these methods. Twenty-four patients who were hospitalized in our clinic within the first 3 months after stroke were included in this study. Patients were evaluated using a bedside screening test [including bedside dysphagia score (BDS), neurological examination dysphagia score (NEDS), and total dysphagia score (TDS)] and the FFEES and EE methods. Patients were divided into normal-swallowing and dysphagia groups according to the results of the evaluation methods. Patients with dysphagia as determined by any of these methods were compared to the patients with normal swallowing based on the results of the other two methods. Based on the results of our study, a high BDS was positively correlated with dysphagia identified by the FFEES and EE methods. Moreover, the FFEES and EE results were positively correlated with each other. There was no significant correlation between NEDS and TDS levels and either the EE or FFEES method. Bedside screening tests should be used mainly as an initial screen; the FFEES and EE methods should then be combined in patients who show risk. This diagnostic algorithm may provide a practical and fast solution for selected stroke patients.
NASA Technical Reports Server (NTRS)
Park, Han G. (Inventor); Zak, Michail (Inventor); James, Mark L. (Inventor); Mackey, Ryan M. E. (Inventor)
2003-01-01
A general method of anomaly detection from time-correlated sensor data is disclosed. Multiple time-correlated signals are received. Their cross-signal behavior is compared against a fixed library of invariants. The library is constructed during a training process, which is itself data-driven using the same time-correlated signals. The method is applicable to a broad class of problems and is designed to respond to any departure from normal operation, including faults or events that lie outside the training envelope.
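One way to picture the invariant-library idea above is a learned linear relation between two correlated signals, with a residual threshold set from the training envelope. This is our own conceptual sketch, not the patented method; the signals, the linear form of the invariant, and the 3x threshold are all assumptions.

```python
# Conceptual sketch: learn a linear cross-signal invariant from
# training data, flag samples whose residual leaves the envelope.
def fit_invariant(x, y):
    """Least-squares slope/intercept relating two correlated signals."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

# Invented training signals: y tracks roughly 2x.
train_x = [1, 2, 3, 4, 5]
train_y = [2.1, 3.9, 6.0, 8.1, 9.9]
slope, intercept = fit_invariant(train_x, train_y)
residuals = [abs(b - (slope * a + intercept)) for a, b in zip(train_x, train_y)]
threshold = 3 * max(residuals)          # assumed training envelope

def is_anomalous(x, y):
    """Departure from the learned cross-signal relation."""
    return abs(y - (slope * x + intercept)) > threshold

flag = is_anomalous(6, 25.0)            # far off the learned relation
```

Because the envelope is derived from the training data themselves, any departure from normal cross-signal behavior is flagged, whether or not that failure mode was seen during training.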
Dual linear structured support vector machine tracking method via scale correlation filter
NASA Astrophysics Data System (ADS)
Li, Weisheng; Chen, Yanquan; Xiao, Bin; Feng, Chen
2018-01-01
Adaptive tracking-by-detection methods based on structured support vector machines (SVMs) have performed well on recent visual tracking benchmarks. However, these methods did not adopt an effective strategy for object scale estimation, which limits overall tracking performance. We present a tracking method based on a dual linear structured support vector machine (DLSSVM) with a discriminative scale correlation filter. The collaborative tracker, comprising a DLSSVM model and a scale correlation filter, obtains good results in tracking target position and scale estimation. The fast Fourier transform is applied for detection. Extensive experiments show that our tracking approach outperforms many popular top-ranking trackers. On a benchmark of 100 challenging video sequences, the average precision of the proposed method is 82.8%.
Cavitation in liquid cryogens. 4: Combined correlations for venturi, hydrofoil, ogives, and pumps
NASA Technical Reports Server (NTRS)
Hord, J.
1974-01-01
The results of a series of experimental and analytical cavitation studies are presented. Cross-correlation of the developed cavity data is performed for a venturi, a hydrofoil, and three scaled ogives. The new correlating parameter, MTWO, improves data correlation for these stationary bodies and for pumping equipment. Existing techniques for predicting the cavitating performance of pumping machinery were extended to include variations in flow coefficient, cavitation parameter, and equipment geometry. The new predictive formulations hold promise as a design tool and a universal method for correlating pumping machinery performance. Application of these predictive formulas requires prescribed cavitation test data or an independent method of estimating the cavitation parameter for each pump. The latter would permit prediction of performance without testing; potential methods for evaluating the cavitation parameter prior to testing are suggested.
Giger, Maryellen L.; Chen, Chin-Tu; Armato, Samuel; Doi, Kunio
1999-10-26
A method and system for the computerized registration of radionuclide images with radiographic images, including generating image data from radiographic and radionuclide images of the thorax. Techniques include contouring the lung regions in each type of chest image, scaling and registration of the contours based on location of lung apices, and superimposition after appropriate shifting of the images. Specific applications are given for the automated registration of radionuclide lung scans with chest radiographs. The method in the example given yields a system that spatially registers and correlates digitized chest radiographs with V/Q scans in order to correlate V/Q functional information with the greater structural detail of chest radiographs. Final output could be the computer-determined contours from each type of image superimposed on any of the original images, or superimposition of the radionuclide image data, which contains high activity, onto the radiographic chest image.
Cervical vertebral maturation as a biologic indicator of skeletal maturity.
Santiago, Rodrigo César; de Miranda Costa, Luiz Felipe; Vitral, Robert Willer Farinazzo; Fraga, Marcelo Reis; Bolognese, Ana Maria; Maia, Lucianne Cople
2012-11-01
To identify and review the literature regarding the reliability of cervical vertebrae maturation (CVM) staging to predict the pubertal spurt. The selection criteria included cross-sectional and longitudinal descriptive studies in humans that evaluated qualitatively or quantitatively the accuracy and reproducibility of the CVM method on lateral cephalometric radiographs, as well as the correlation with a standard method established by hand-wrist radiographs. The searches retrieved 343 unique citations. Twenty-three studies met the inclusion criteria. Six articles had moderate to high scores, while 17 of 23 had low scores. Analysis also showed a moderate to high statistically significant correlation between CVM and hand-wrist maturation methods. There was a moderate to high reproducibility of the CVM method, and only one specific study investigated the accuracy of the CVM index in detecting peak pubertal growth. This systematic review has shown that the studies on CVM method for radiographic assessment of skeletal maturation stages suffer from serious methodological failures. Better-designed studies with adequate accuracy, reproducibility, and correlation analysis, including studies with appropriate sensitivity-specificity analysis, should be performed.
An ab initio study of the C3(+) cation using multireference methods
NASA Technical Reports Server (NTRS)
Taylor, Peter R.; Martin, J. M. L.; Francois, J. P.; Gijbels, R.
1991-01-01
The energy difference between the linear 2 sigma(sup +, sub u) and cyclic 2B(sub 2) structures of C3(+) has been investigated using large (5s3p2d1f) basis sets and multireference electron correlation treatments, including complete active space self-consistent field (CASSCF), multireference configuration interaction (MRCI), and averaged coupled-pair functional (ACPF) methods, as well as the single-reference quadratic configuration interaction (QCISD(T)) method. Our best estimate, including a correction for basis set incompleteness, is that the linear form lies above the cyclic form by 5.2 (+1.5 to -1.0) kcal/mol. The 2 sigma(sup +, sub u) state is probably not a transition state but a local minimum. Reliable computation of the cyclic/linear energy difference in C3(+) is extremely demanding of the electron correlation treatment used: of the single-reference methods previously considered, CCSD(T) and QCISD(T) perform best. The MRCI + Q(0.01)/(4s2p1d) energy separation of 1.68 kcal/mol should provide a comparison standard for other electron correlation methods applied to this system.
Minenkov, Yury; Bistoni, Giovanni; Riplinger, Christoph; Auer, Alexander A; Neese, Frank; Cavallo, Luigi
2017-04-05
In this work, we tested canonical and domain-based pair natural orbital coupled cluster methods (CCSD(T) and DLPNO-CCSD(T), respectively) for a set of 32 ligand exchange and association/dissociation reaction enthalpies involving ionic complexes of Li, Be, Na, Mg, Ca, Sr, Ba and Pb(II). Two strategies were investigated: in the former, only valence electrons were included in the correlation treatment, giving rise to the computationally very efficient FC (frozen core) approach; in the latter, all non-ECP electrons were included in the correlation treatment, giving rise to the AE (all electron) approach. Apart from reactions involving Li and Be, the FC approach resulted in non-homogeneous performance. The FC approach leads to very small errors (<2 kcal mol-1) for some reactions of Na, Mg, Ca, Sr, Ba and Pb, while for a few reactions of Ca and Ba deviations up to 40 kcal mol-1 have been obtained. The large errors are due both to artificial mixing of the core (sub-valence) orbitals of the metals with the valence orbitals of oxygen and halogens in the molecular orbitals treated as core, and to neglecting core-core and core-valence correlation effects. These large errors are reduced to a few kcal mol-1 if the AE approach is used or if the sub-valence orbitals of the metals are included in the correlation treatment. On the technical side, the CCSD(T) and DLPNO-CCSD(T) results differ by a fraction of a kcal mol-1, indicating the latter method as the perfect choice when CPU efficiency is essential. For completely black-box applications, as requested in catalysis or thermochemical calculations, we recommend the DLPNO-CCSD(T) method with all electrons not covered by effective core potentials included in the correlation treatment, and correlation-consistent polarized core-valence basis sets of cc-pwCVQZ(-PP) quality.
Pulse transmission receiver with higher-order time derivative pulse correlator
Dress, Jr., William B.; Smith, Stephen F.
2003-09-16
Systems and methods for pulse-transmission low-power communication modes are disclosed. A pulse transmission receiver includes: a higher-order time derivative pulse correlator; a demodulation decoder coupled to the higher-order time derivative pulse correlator; a clock coupled to the demodulation decoder; and a pseudorandom polynomial generator coupled to both the higher-order time derivative pulse correlator and the clock. The systems and methods significantly reduce lower-frequency emissions from pulse transmission spread-spectrum communication modes, which reduces potentially harmful interference to existing radio frequency services and users and also simultaneously permit transmission of multiple data bits by utilizing specific pulse shapes.
Delay correlation analysis and representation for vital complaint VHDL models
Rich, Marvin J.; Misra, Ashutosh
2004-11-09
A method and system unbind a rise/fall tuple of a VHDL generic variable and create rise time and fall time generics of each generic variable that are independent of each other. Then, according to a predetermined correlation policy, the method and system collect delay values in a VHDL standard delay file, sort the delay values, remove duplicate delay values, group the delay values into correlation sets, and output an analysis file. The correlation policy may include collecting all generic variables in a VHDL standard delay file, selecting each generic variable, and performing reductions on the set of delay values associated with each selected generic variable.
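The collect-sort-deduplicate-group steps of the correlation policy can be sketched schematically. The delay values, generic names, and set-naming scheme below are invented for illustration; the patent's actual policy operates on VHDL standard delay files.

```python
# Schematic sketch of the correlation-policy steps: collect delay
# values per generic, sort them, drop duplicates, and assign each
# unique value to a correlation set (names are hypothetical).
def build_correlation_sets(delays_by_generic):
    collected = [d for delays in delays_by_generic.values() for d in delays]
    unique_sorted = sorted(set(collected))       # sort, remove duplicates
    return {f"SET_{i}": value for i, value in enumerate(unique_sorted)}

# Invented rise/fall delay values from an SDF file, per generic.
sdf_delays = {
    "tpd_rise": [0.12, 0.15, 0.12],
    "tpd_fall": [0.15, 0.18],
}
sets = build_correlation_sets(sdf_delays)
```

Grouping equal delay values into one set is what lets downstream analysis treat them as correlated rather than independent.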
Yuan, Naiming; Fu, Zuntao; Zhang, Huan; Piao, Lin; Xoplaki, Elena; Luterbacher, Juerg
2015-01-01
In this paper, a new method, detrended partial-cross-correlation analysis (DPCCA), is proposed. Based on detrended cross-correlation analysis (DCCA), the method is improved by incorporating the partial-correlation technique, so it can quantify the relations between two non-stationary signals (with the influences of other signals removed) on different time scales. We illustrate the advantages of this method by performing two numerical tests. Test I shows the advantages of DPCCA in handling non-stationary signals, while Test II reveals the “intrinsic” relations between two considered time series with potential influences of other unconsidered signals removed. To further show the utility of DPCCA in natural complex systems, we provide new evidence on the winter-time Pacific Decadal Oscillation (PDO) and the winter-time Nino3 Sea Surface Temperature Anomaly (Nino3-SSTA) affecting the Summer Rainfall over the middle-lower reaches of the Yangtze River (SRYR). By applying DPCCA, significant correlations between SRYR and Nino3-SSTA on time scales of 6-8 years are found over the period 1951-2012, while significant correlations between SRYR and PDO arise on time scales of about 35 years. With these physically explainable results, we are confident that DPCCA is a useful method for addressing complex systems. PMID:25634341
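The partial-correlation step that distinguishes DPCCA from plain DCCA can be sketched in a few lines. The sketch below is a simplification: it uses ordinary Pearson correlations in place of the scale-dependent DCCA coefficients, and the signal names are illustrative; the formula for removing a third signal's influence is the same in both settings.

```python
import numpy as np

def partial_corr(x, y, z):
    """Correlation between x and y with the influence of z removed,
    via the standard partial-correlation formula."""
    rxy = np.corrcoef(x, y)[0, 1]
    rxz = np.corrcoef(x, z)[0, 1]
    ryz = np.corrcoef(y, z)[0, 1]
    return (rxy - rxz * ryz) / np.sqrt((1 - rxz**2) * (1 - ryz**2))

rng = np.random.default_rng(0)
z = rng.normal(size=5000)            # common driver of both signals
x = z + 0.3 * rng.normal(size=5000)
y = z + 0.3 * rng.normal(size=5000)

r_raw = np.corrcoef(x, y)[0, 1]      # inflated by the shared driver z
r_part = partial_corr(x, y, z)       # near zero once z is removed
```

Here x and y are unrelated except through z, so the raw correlation is large while the partial correlation is close to zero; DPCCA applies the same removal on each time scale of the detrended analysis.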
Kernel-aligned multi-view canonical correlation analysis for image recognition
NASA Astrophysics Data System (ADS)
Su, Shuzhi; Ge, Hongwei; Yuan, Yun-Hao
2016-09-01
Existing kernel-based correlation analysis methods mainly adopt a single kernel in each view. However, a single kernel is usually insufficient to characterize the nonlinear distribution information of a view. To solve this problem, we transform each original feature vector into a 2-dimensional feature matrix by means of kernel alignment, and then propose a novel kernel-aligned multi-view canonical correlation analysis (KAMCCA) method on the basis of the feature matrices. Our proposed method can simultaneously employ multiple kernels to better capture the nonlinear distribution information of each view, so that correlation features learned by KAMCCA have good discriminating power in real-world image recognition. Extensive experiments are designed on five real-world image datasets, including NIR face images, thermal face images, visible face images, handwritten digit images, and object images. Promising experimental results on these datasets demonstrate the effectiveness of our proposed method.
NASA Astrophysics Data System (ADS)
Kazantsev, Daniil; Jørgensen, Jakob S.; Andersen, Martin S.; Lionheart, William R. B.; Lee, Peter D.; Withers, Philip J.
2018-06-01
Rapid developments in photon-counting and energy-discriminating detectors have the potential to provide an additional spectral dimension to conventional x-ray grayscale imaging. Reconstructed spectroscopic tomographic data can be used to distinguish individual materials by characteristic absorption peaks. The acquired energy-binned data, however, suffer from low signal-to-noise ratio, acquisition artifacts, and frequently angularly undersampled conditions. New regularized iterative reconstruction methods have the potential to produce higher-quality images, and since energy channels are mutually correlated, it can be advantageous to exploit this additional knowledge. In this paper, we propose a novel method which jointly reconstructs all energy channels while imposing a strong structural correlation. The core of the proposed algorithm is to employ a variational framework of parallel level sets to encourage joint smoothing directions. In particular, the method selects reference channels from which to propagate structure in an adaptive and stochastic way while preferring channels with a high data signal-to-noise ratio. The method is compared with current state-of-the-art multi-channel reconstruction techniques including channel-wise total variation and correlative total nuclear variation regularization. Realistic simulation experiments demonstrate the performance improvements achievable by using correlative regularization methods.
Correlation analysis of 1 to 30 MeV celestial gamma rays
DOE Office of Scientific and Technical Information (OSTI.GOV)
Long, J.L.
1984-01-01
This paper outlines the development of a method of producing celestial sky maps from the data generated by the University of California, Riverside's double Compton scatter gamma ray telescope. The method makes use of a correlation between the telescope's data and theoretically calculated response functions. The results of applying this technique to northern hemisphere data obtained from a 1978 balloon flight from Palestine, Texas are included.
Stuart, Elizabeth A.; Lee, Brian K.; Leacy, Finbarr P.
2013-01-01
Objective: Examining covariate balance is the prescribed method for determining when propensity score methods are successful at reducing bias. This study assessed the performance of various balance measures, including a proposed balance measure based on the prognostic score (also known as the disease-risk score), to determine which balance measures best correlate with bias in the treatment effect estimate. Study Design and Setting: The correlations of multiple common balance measures with bias in the treatment effect estimate produced by weighting by the odds, subclassification on the propensity score, and full matching on the propensity score were calculated. Simulated data were used, based on realistic data settings. Settings included both continuous and binary covariates and continuous covariates only. Results: The standardized mean difference in prognostic scores, the mean standardized mean difference, and the mean t-statistic all had high correlations with bias in the effect estimate. Overall, prognostic scores displayed the highest correlations of all the balance measures considered. Prognostic score measure performance was generally not affected by model misspecification and performed well under a variety of scenarios. Conclusion: Researchers should consider using prognostic score–based balance measures for assessing the performance of propensity score methods for reducing bias in non-experimental studies. PMID:23849158
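As a rough illustration of the balance-measure idea (not the authors' simulation design), the sketch below computes the standardized mean difference of a single covariate before and after weighting by the odds; all data and variable names are synthetic.

```python
import numpy as np

def std_mean_diff(x, treat, w=None):
    """Standardized mean difference of covariate x between treated and
    control groups, optionally with observation weights w."""
    if w is None:
        w = np.ones_like(x, dtype=float)
    t, c = treat == 1, treat == 0
    m_t = np.average(x[t], weights=w[t])
    m_c = np.average(x[c], weights=w[c])
    # pool the unweighted group variances, as is common for balance checks
    s = np.sqrt((x[t].var(ddof=1) + x[c].var(ddof=1)) / 2)
    return (m_t - m_c) / s

rng = np.random.default_rng(1)
n = 4000
x = rng.normal(size=n)
p = 1 / (1 + np.exp(-x))                      # true propensity depends on x
treat = (rng.uniform(size=n) < p).astype(int)

smd_raw = std_mean_diff(x, treat)              # imbalanced before weighting
w = np.where(treat == 1, 1.0, p / (1 - p))     # weighting by the odds (ATT)
smd_wtd = std_mean_diff(x, treat, w)           # much closer to zero
```

A small weighted SMD indicates the weighting has balanced the covariate; the paper's point is that some balance measures track the remaining bias better than others.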
Statistical Analysis of Big Data on Pharmacogenomics
Fan, Jianqing; Liu, Han
2013-01-01
This paper discusses statistical methods for estimating complex correlation structure from large pharmacogenomic datasets. We selectively review several prominent statistical methods for estimating large covariance matrices for understanding correlation structure, inverse covariance matrices for network modeling, large-scale simultaneous tests for selecting significantly differentially expressed genes and proteins and genetic markers for complex diseases, and high dimensional variable selection for identifying important molecules for understanding molecular mechanisms in pharmacogenomics. Their applications to gene network estimation and biomarker selection are used to illustrate the methodological power. Several new challenges of Big Data analysis, including complex data distribution, missing data, measurement error, spurious correlation, endogeneity, and the need for robust statistical methods, are also discussed. PMID:23602905
Multi-ball and one-ball geolocation and location verification
NASA Astrophysics Data System (ADS)
Nelson, D. J.; Townsend, J. L.
2017-05-01
We present analysis methods that may be used to geolocate emitters using one or more moving receivers. While some of the methods we present may apply to a broader class of signals, our primary interest is locating and tracking ships from short pulsed transmissions, such as the maritime Automatic Identification System (AIS). The AIS signal is difficult to process and track since the pulse duration is only 25 milliseconds, and the pulses may only be transmitted every six to ten seconds. Several fundamental problems are addressed, including demodulation of AIS/GMSK signals, verification of the emitter location, accurate frequency and delay estimation, and identification of pulse trains from the same emitter. In particular, we present several new correlation methods, including a cross-cross correlation that greatly improves correlation accuracy over conventional methods, and cross-TDOA and cross-FDOA functions that make it possible to estimate time and frequency delay without computing a two-dimensional cross-ambiguity surface. By isolating pulses from the same emitter and accurately tracking the received signal frequency, we are able to accurately estimate the emitter location from the received Doppler characteristics.
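Conventional time-delay estimation between two receivers locates the peak of a cross-correlation; the sketch below shows this baseline technique (not the paper's cross-cross correlation) on a synthetic pulse, using an FFT-based circular correlation.

```python
import numpy as np

def tdoa_samples(r1, r2):
    """Delay (in samples) of r2 relative to r1, from the peak of their
    circular cross-correlation computed with FFTs."""
    n = len(r1)
    X = np.fft.rfft(r1)
    Y = np.fft.rfft(r2)
    cc = np.fft.irfft(X.conj() * Y, n)
    lag = int(np.argmax(cc))
    return lag if lag <= n // 2 else lag - n   # map to a signed lag

rng = np.random.default_rng(2)
pulse = rng.normal(size=4096)        # stand-in for a received burst
delayed = np.roll(pulse, 37)         # second receiver sees it 37 samples later
```

The peak index recovers the inter-receiver delay; the paper's methods refine this basic idea to gain accuracy on short AIS bursts.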
Kim, Dahan; Curthoys, Nikki M.; Parent, Matthew T.; Hess, Samuel T.
2015-01-01
Multi-colour localization microscopy has enabled sub-diffraction studies of colocalization between multiple biological species and quantification of their correlation at length scales previously inaccessible with conventional fluorescence microscopy. However, bleed-through, or misidentification of probe species, creates false colocalization and artificially increases certain types of correlation between two imaged species, affecting the reliability of information provided by colocalization and quantified correlation. Despite the potential risk of these artefacts of bleed-through, neither the effect of bleed-through on correlation nor methods of its correction in correlation analyses has been systematically studied at typical rates of bleed-through reported to affect multi-colour imaging. Here, we present a reliable method of bleed-through correction applicable to image rendering and correlation analysis of multi-colour localization microscopy. Application of our bleed-through correction shows our method accurately corrects the artificial increase in both types of correlations studied (Pearson coefficient and pair correlation), at all rates of bleed-through tested, in all types of correlations examined. In particular, anti-correlation could not be quantified without our bleed-through correction, even at rates of bleed-through as low as 2%. Demonstrated with dichroic-based multi-colour FPALM here, our presented method of bleed-through correction can be applied to all types of localization microscopy (PALM, STORM, dSTORM, GSDIM, etc.), including both simultaneous and sequential multi-colour modalities, provided the rate of bleed-through can be reliably determined. PMID:26185614
Analysis of digital communication signals and extraction of parameters
NASA Astrophysics Data System (ADS)
Al-Jowder, Anwar
1994-12-01
The signal classification performance of four types of electronics support measure (ESM) communications detection systems is compared from the standpoint of the unintended receiver (interceptor). Typical digital communication signals considered include binary phase shift keying (BPSK), quadrature phase shift keying (QPSK), frequency shift keying (FSK), and on-off keying (OOK). The analysis emphasizes the use of available signal processing software. Detection methods compared include broadband energy detection, FFT-based narrowband energy detection, and two correlation methods which employ the fast Fourier transform (FFT). The correlation methods utilize modified time-frequency distributions, where one of these is based on the Wigner-Ville distribution (WVD). Gaussian white noise is added to the signal to simulate various signal-to-noise ratios (SNRs).
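A minimal sketch of FFT-based narrowband energy detection, one of the detection methods compared, might look as follows; the sampling rate, threshold, and test tone are illustrative assumptions, not values from the thesis.

```python
import numpy as np

def narrowband_detect(sig, fs, thresh_db=15.0):
    """FFT-based narrowband energy detection: flag frequency bins whose
    periodogram power exceeds the median noise floor by thresh_db."""
    spec = np.abs(np.fft.rfft(sig)) ** 2
    floor = np.median(spec)                       # robust noise-floor estimate
    hits = np.where(10 * np.log10(spec / floor) > thresh_db)[0]
    freqs = np.fft.rfftfreq(len(sig), 1 / fs)
    return freqs[hits]

fs = 8000.0
t = np.arange(4096) / fs
rng = np.random.default_rng(6)
sig = np.cos(2 * np.pi * 1000 * t) + 0.5 * rng.normal(size=t.size)  # tone in noise
detected = narrowband_detect(sig, fs)
```

The median is used as the floor so that a strong narrowband component does not bias its own detection threshold.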
NASA Astrophysics Data System (ADS)
Teramae, Tatsuya; Kushida, Daisuke; Takemori, Fumiaki; Kitamura, Akira
The authors previously proposed an estimation method combining the k-means algorithm and a neural network (NN) for evaluating massage. However, this method suffers from a reduced discrimination ratio for new users, for two reasons: the NN generalizes poorly, and the clusters produced by the k-means algorithm do not have high within-class correlation coefficients. This research therefore proposes a k-means algorithm guided by the correlation coefficient, together with incremental learning for the NN. The proposed k-means algorithm includes an evaluation function based on the correlation coefficient, while incremental learning trains the NN on new data with weights initialized from the existing data. The effectiveness of the proposed methods is verified by estimation results on EEG data recorded while subjects receive massage.
NASA Astrophysics Data System (ADS)
Wan, Renzhi; Zu, Yunxiao; Shao, Lin
2018-04-01
The blood echo signal acquired by medical ultrasound Doppler devices always includes a vascular wall pulsation signal. The traditional method of removing the wall signal is a high-pass filter, which also removes the low-frequency part of the blood flow signal. Some scholars have put forward a method based on region-selective reduction, which first estimates the wall pulsation signal and then removes it from the mixed signal. Ostensibly, this method uses the correlation between wavelet coefficients to distinguish the blood signal from the wall signal, but in fact it is a kind of wavelet threshold de-noising method, whose effect is not ideal. To obtain a better result, this paper proposes an improved method based on wavelet coefficient correlation to separate the blood and wall signals, and verifies its validity by computer simulation.
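The wavelet threshold de-noising idea the paper builds on can be sketched with a single-level Haar transform; this is a generic illustration of soft thresholding, not the authors' coefficient-correlation algorithm, and the signal is synthetic.

```python
import numpy as np

def haar_denoise(sig, thresh):
    """One-level Haar wavelet shrinkage: soft-threshold the detail
    coefficients, then reconstruct (signal length must be even)."""
    a = (sig[0::2] + sig[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (sig[0::2] - sig[1::2]) / np.sqrt(2)   # detail coefficients
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)  # soft threshold
    out = np.empty_like(sig)
    out[0::2] = (a + d) / np.sqrt(2)           # inverse Haar transform
    out[1::2] = (a - d) / np.sqrt(2)
    return out

rng = np.random.default_rng(3)
t = np.linspace(0, 1, 1024)
clean = np.sin(2 * np.pi * 5 * t)              # slowly varying component
noisy = clean + 0.3 * rng.normal(size=1024)
denoised = haar_denoise(noisy, thresh=0.4)
```

Thresholding suppresses noise concentrated in the detail band while leaving the slow component largely intact, which is why plain thresholding struggles when the wall signal and blood signal overlap in scale.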
A new gas dilution method for measuring body volume.
Nagao, N; Tamaki, K; Kuchiki, T; Nagao, M
1995-01-01
This study was designed to examine the validity of a new gas dilution method (GD) for measuring human body volume and to compare its accuracy with results obtained by the underwater weighing method (UW). We measured the volume of plastic bottles and of 16 subjects (including two females) aged 18-42 years with each method. For the bottles, the volume measured by hydrostatic weighing was correlated highly (r = 1.000) with that measured by the new gas dilution method. For the subjects, the body volume determined by the two methods was significantly correlated (r = 0.998). However, the subjects' volume measured by the gas dilution method was significantly larger than that by the underwater weighing method. There was a significant correlation (r = 0.806) between GD volume minus UW volume and the body mass index (BMI), so that UW volume could be predicted from GD volume and BMI. It can be concluded that the new gas dilution method offers promising possibilities for future research in populations who cannot submerge underwater. PMID:7551760
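The final prediction step, estimating UW volume from GD volume and BMI, amounts to a small least-squares fit. The sketch below uses entirely synthetic numbers (the study's raw data are not given) to show the mechanics of such a correction model.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 16
bmi = rng.uniform(18, 30, n)             # synthetic subjects
uw = rng.uniform(45, 80, n)              # "true" underwater-weighing volume (L)
# Suppose GD overestimates UW by an amount that grows with BMI (hypothetical)
gd = uw + 0.15 * bmi + 0.2 * rng.normal(size=n)

# Fit UW ~ a*GD + b*BMI + c by ordinary least squares
A = np.column_stack([gd, bmi, np.ones(n)])
coef, *_ = np.linalg.lstsq(A, uw, rcond=None)
uw_hat = A @ coef
r = np.corrcoef(uw_hat, uw)[0, 1]        # quality of the corrected prediction
```

In the synthetic setup the fitted GD coefficient is close to 1 and the BMI term absorbs the systematic overestimate, mirroring the correction the authors propose.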
A correlated ab initio study of the A²Π ← X²Σ⁺ transition in MgCCH
NASA Technical Reports Server (NTRS)
Woon, D. E.
1997-01-01
The A²Π ← X²Σ⁺ transition in MgCCH was studied with correlation consistent basis sets and single- and multireference correlation methods. The A²Π excited state was characterized in detail; the X²Σ⁺ ground state has been described elsewhere recently. The estimated complete basis set (CBS) limits for valence correlation, including zero-point energy corrections, are 22668, 23191, and 22795 cm⁻¹ for the RCCSD(T), MRCI, and MRCI + Q methods, respectively. A core-valence correction of +162 cm⁻¹ shifts the RCCSD(T) value to 22830 cm⁻¹, in good agreement with the experimental result of 22807 cm⁻¹.
Ghosh, Soumen; Cramer, Christopher J; Truhlar, Donald G; Gagliardi, Laura
2017-04-01
Predicting ground- and excited-state properties of open-shell organic molecules by electronic structure theory can be challenging because an accurate treatment has to correctly describe both static and dynamic electron correlation. Strongly correlated systems, i.e., systems with near-degeneracy correlation effects, are particularly troublesome. Multiconfigurational wave function methods based on an active space are adequate in principle, but it is impractical to capture most of the dynamic correlation in these methods for systems characterized by many active electrons. We recently developed a new method called multiconfiguration pair-density functional theory (MC-PDFT), which combines the advantages of wave function theory and density functional theory to provide a more practical treatment of strongly correlated systems. Here we present calculations of the singlet-triplet gaps in oligoacenes ranging from naphthalene to dodecacene. Calculations were performed for unprecedentedly large orbitally optimized active spaces of 50 electrons in 50 orbitals, and we test a range of active spaces and active space partitions, including four kinds of frontier orbital partitions. We show that MC-PDFT can predict the singlet-triplet splittings for oligoacenes consistent with the best available and much more expensive methods, and indeed MC-PDFT may constitute the benchmark against which those other models should be compared, given the absence of experimental data.
NASA Technical Reports Server (NTRS)
Bhatia, A. K.
2012-01-01
The P-wave hybrid theory of electron-hydrogen elastic scattering [Phys. Rev. A 85, 052708 (2012)] is applied to P-wave scattering from the He ion. In this method, both short-range and long-range correlations are included in the Schroedinger equation at the same time, by using a combination of a modified method of polarized orbitals and the optical potential formalism. The short-range correlation functions are of Hylleraas type. It is found that the phase shifts are not significantly affected by the modification of the target function by a method similar to the method of polarized orbitals, and they are close to the phase shifts calculated earlier by Bhatia [Phys. Rev. A 69, 032714 (2004)]. This indicates that the correlation function is general enough to include the target distortion (polarization) in the presence of the incident electron. Importantly, the present calculation requires only a 20-term correlation function in the wave function to obtain similar results, compared to the 220-term wave function required in the above-mentioned calculation. Results for the phase shifts, obtained in the present hybrid formalism, are rigorous lower bounds to the exact phase shifts. The lowest P-wave resonances in the He atom and the hydrogen ion have been calculated and compared with the results obtained using the Feshbach projection operator formalism [Phys. Rev. A 11, 2018 (1975)]. It is concluded that accurate resonance parameters can be obtained by the present method, which has the advantage of including corrections due to neighboring resonances, bound states, and the continuum in which these resonances are embedded.
NASA Astrophysics Data System (ADS)
Noda, Isao
2014-07-01
A comprehensive survey review of new and noteworthy developments, which are advancing the frontiers in the field of 2D correlation spectroscopy during the last four years, is compiled. This review covers books, proceedings, and review articles published on 2D correlation spectroscopy, a number of significant conceptual developments in the field, data pretreatment methods and other pertinent topics, as well as patent and publication trends and citation activities. Developments discussed include projection 2D correlation analysis, concatenated 2D correlation, and correlation under multiple perturbation effects, as well as orthogonal sample design, predicting 2D correlation spectra, manipulating and comparing 2D spectra, correlation strategies based on segmented data blocks, such as moving-window analysis, features like determination of sequential order and enhanced spectral resolution, statistical 2D spectroscopy using covariance and other statistical metrics, hetero-correlation analysis, and the sample-sample correlation technique. Data pretreatment operations prior to 2D correlation analysis are discussed, including the correction for physical effects, background and baseline subtraction, selection of the reference spectrum, normalization and scaling of data, derivative spectra and deconvolution techniques, and smoothing and noise reduction. Other pertinent topics include chemometrics and statistical considerations, peak position shift phenomena, variable sampling increments, computation and software, and display schemes, such as color-coded formats, slice and power spectra, tabulation, and other schemes.
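At its core, the synchronous 2D correlation spectrum in Noda's formalism is the covariance of the mean-centered dynamic spectra. A minimal sketch with a toy three-band data set (the bands and perturbation variable are illustrative):

```python
import numpy as np

def synchronous_2d(spectra):
    """Synchronous 2D correlation spectrum from an (m_perturbations x
    n_wavenumbers) array of spectra, following Noda's formalism."""
    dyn = spectra - spectra.mean(axis=0)   # dynamic (mean-centered) spectra
    m = spectra.shape[0]
    return dyn.T @ dyn / (m - 1)

# Bands A and B respond in phase with the perturbation; band C is constant
x = np.linspace(0, 1, 11)[:, None]         # perturbation variable
spectra = np.hstack([2 * x, 3 * x, np.ones_like(x)])
phi = synchronous_2d(spectra)              # 3x3 synchronous spectrum
```

A positive off-diagonal element (a cross peak) indicates two bands varying in phase under the perturbation; a band that does not respond produces zero auto- and cross-peaks. The asynchronous spectrum, not shown, additionally applies the Hilbert-Noda transform to capture out-of-phase variation.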
Marginalized zero-altered models for longitudinal count data.
Tabb, Loni Philip; Tchetgen, Eric J Tchetgen; Wellenius, Greg A; Coull, Brent A
2016-10-01
Count data often exhibit more zeros than predicted by common count distributions like the Poisson or negative binomial. In recent years, there has been considerable interest in methods for analyzing zero-inflated count data in longitudinal or other correlated data settings. A common approach has been to extend zero-inflated Poisson models to include random effects that account for correlation among observations. However, these models have been shown to have a few drawbacks, including interpretability of regression coefficients and numerical instability of fitting algorithms even when the data arise from the assumed model. To address these issues, we propose a model that parameterizes the marginal associations between the count outcome and the covariates as easily interpretable log relative rates, while including random effects to account for correlation among observations. One of the main advantages of this marginal model is that it allows a basis upon which we can directly compare the performance of standard methods that ignore zero inflation with that of a method that explicitly takes zero inflation into account. We present simulations of these various model formulations in terms of bias and variance estimation. Finally, we apply the proposed approach to analyze toxicological data of the effect of emissions on cardiac arrhythmias.
Correlation and prediction of gaseous diffusion coefficients.
NASA Technical Reports Server (NTRS)
Marrero, T. R.; Mason, E. A.
1973-01-01
A new correlation method for binary gaseous diffusion coefficients from very low temperatures to 10,000 K is proposed, based on an extended principle of corresponding states, with greater range and accuracy than previous correlations. There are two correlation parameters that are related to other physical quantities and that are predictable in the absence of diffusion measurements. Quantum effects and composition dependence are included, but high-pressure effects are not. The results are directly applicable to multicomponent mixtures.
Parameterizing correlations between hydrometeor species in mixed-phase Arctic clouds
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larson, Vincent E.; Nielsen, Brandon J.; Fan, Jiwen
2011-08-16
Mixed-phase Arctic clouds, like other clouds, contain small-scale variability in hydrometeor fields, such as cloud water or snow mixing ratio. This variability may be worth parameterizing in coarse-resolution numerical models. In particular, for modeling processes such as accretion and aggregation, it would be useful to parameterize subgrid correlations among hydrometeor species. However, one difficulty is that there exist many hydrometeor species and many microphysical processes, leading to complexity and computational expense. Existing lower and upper bounds (inequalities) on linear correlation coefficients provide useful guidance, but these bounds are too loose to serve directly as a method to predict subgrid correlations. Therefore, this paper proposes an alternative method that is based on a blend of theory and empiricism. The method begins with the spherical parameterization framework of Pinheiro and Bates (1996), which expresses the correlation matrix in terms of its Cholesky factorization. The values of the elements of the Cholesky matrix are parameterized here using a cosine row-wise formula that is inspired by the aforementioned bounds on correlations. The method has three advantages: 1) the computational expense is tolerable; 2) the correlations are, by construction, guaranteed to be consistent with each other; and 3) the methodology is fairly general and hence may be applicable to other problems. The method is tested non-interactively using simulations of three Arctic mixed-phase cloud cases from two different field experiments: the Indirect and Semi-Direct Aerosol Campaign (ISDAC) and the Mixed-Phase Arctic Cloud Experiment (M-PACE). Benchmark simulations are performed using a large-eddy simulation (LES) model that includes a bin microphysical scheme. The correlations estimated by the new method satisfactorily approximate the correlations produced by the LES.
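The spherical (Cholesky) parameterization of Pinheiro and Bates that the method builds on can be illustrated for a 3x3 case; the angle values below are arbitrary, and this sketch omits the paper's cosine row-wise formula for choosing them.

```python
import numpy as np

def corr_from_angles(theta):
    """Build a valid 3x3 correlation matrix from three angles via the
    spherical parameterization of Pinheiro and Bates (1996): each row of
    the Cholesky factor L is a unit vector, so R = L L^T automatically
    has ones on the diagonal and is positive semidefinite."""
    t21, t31, t32 = theta
    L = np.array([
        [1.0, 0.0, 0.0],
        [np.cos(t21), np.sin(t21), 0.0],
        [np.cos(t31), np.sin(t31) * np.cos(t32), np.sin(t31) * np.sin(t32)],
    ])
    return L @ L.T

R = corr_from_angles([0.4, 0.9, 1.2])   # a mutually consistent correlation matrix
```

Because any choice of angles yields a legitimate correlation matrix, the parameterization guarantees the second advantage listed above: the subgrid correlations can never contradict one another.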
Parameterizing correlations between hydrometeor species in mixed-phase Arctic clouds
NASA Astrophysics Data System (ADS)
Larson, Vincent E.; Nielsen, Brandon J.; Fan, Jiwen; Ovchinnikov, Mikhail
2011-01-01
Mixed-phase Arctic clouds, like other clouds, contain small-scale variability in hydrometeor fields, such as cloud water or snow mixing ratio. This variability may be worth parameterizing in coarse-resolution numerical models. In particular, for modeling multispecies processes such as accretion and aggregation, it would be useful to parameterize subgrid correlations among hydrometeor species. However, one difficulty is that there exist many hydrometeor species and many microphysical processes, leading to complexity and computational expense. Existing lower and upper bounds on linear correlation coefficients are too loose to serve directly as a method to predict subgrid correlations. Therefore, this paper proposes an alternative method that begins with the spherical parameterization framework of Pinheiro and Bates (1996), which expresses the correlation matrix in terms of its Cholesky factorization. The values of the elements of the Cholesky matrix are populated here using a "cSigma" parameterization that we introduce based on the aforementioned bounds on correlations. The method has three advantages: (1) the computational expense is tolerable; (2) the correlations are, by construction, guaranteed to be consistent with each other; and (3) the methodology is fairly general and hence may be applicable to other problems. The method is tested noninteractively using simulations of three Arctic mixed-phase cloud cases from two field experiments: the Indirect and Semi-Direct Aerosol Campaign and the Mixed-Phase Arctic Cloud Experiment. Benchmark simulations are performed using a large-eddy simulation (LES) model that includes a bin microphysical scheme. The correlations estimated by the new method satisfactorily approximate the correlations produced by the LES.
Correlation between external and internal respiratory motion: a validation study.
Ernst, Floris; Bruder, Ralf; Schlaefer, Alexander; Schweikard, Achim
2012-05-01
In motion-compensated image-guided radiotherapy, accurate tracking of the target region is required. This tracking process includes building a correlation model between external surrogate motion and the motion of the target region. A novel correlation method is presented and compared with the commonly used polynomial model. The CyberKnife system (Accuray, Inc., Sunnyvale/CA) uses a polynomial correlation model to relate externally measured surrogate data (optical fibres on the patient's chest emitting red light) to infrequently acquired internal measurements (X-ray data). A new correlation algorithm based on ε-Support Vector Regression (SVR) was developed. Validation and comparison testing were done with human volunteers using live 3D ultrasound and externally measured infrared light-emitting diodes (IR LEDs). Seven data sets (5:03-6:27 min long) were recorded from six volunteers. Polynomial correlation algorithms were compared to the SVR-based algorithm demonstrating an average increase in root mean square (RMS) accuracy of 21.3% (0.4 mm). For three signals, the increase was more than 29% and for one signal as much as 45.6% (corresponding to more than 1.5 mm RMS). Further analysis showed the improvement to be statistically significant. The new SVR-based correlation method outperforms traditional polynomial correlation methods for motion tracking. This method is suitable for clinical implementation and may improve the overall accuracy of targeted radiotherapy.
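The baseline polynomial correlation model (the comparator here, not the proposed SVR method or Accuray's actual implementation) can be sketched as a polynomial fit of internal target position against the external surrogate; the breathing traces below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(5)
# Synthetic traces: external marker amplitude (surrogate) and a
# hypothetical internal target position depending on it nonlinearly
ext = np.sin(np.linspace(0, 20, 400))
internal = 5.0 * ext + 1.5 * ext**2 + 0.05 * rng.normal(size=ext.size)

# Quadratic correlation model: internal ~ c2*ext^2 + c1*ext + c0
c = np.polyfit(ext, internal, deg=2)        # highest-degree coefficient first
pred = np.polyval(c, ext)
rmse = np.sqrt(np.mean((pred - internal) ** 2))
```

An SVR-based model replaces the fixed polynomial form with a kernel regression tolerant of small deviations (the ε-tube), which is where the reported accuracy gains come from.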
Friend suggestion in social network based on user log
NASA Astrophysics Data System (ADS)
Kaviya, R.; Vanitha, M.; Sumaiya Thaseen, I.; Mangaiyarkarasi, R.
2017-11-01
Simple friend recommendation algorithms based on similarity, popularity, and social aspects are the basic building blocks of high-performance social friend recommendation. In the proposed system, we use an algorithm for network correlation-based social friend recommendation (NC-based SFR), which takes into account user activities such as where one lives and works. This new friend recommendation method, based on network correlation, considers the effect of different social roles. To model the correlation between different networks, we develop a method that aligns these networks through important feature selection, and we preserve the network structure so as to significantly improve recommendation accuracy.
Multivariate analysis: A statistical approach for computations
NASA Astrophysics Data System (ADS)
Michu, Sachin; Kaushik, Vandana
2014-10-01
Multivariate analysis is a statistical approach commonly used in automotive diagnosis, education, cluster evaluation in finance, and, more recently, the health-related professions. The objective of this paper is to provide a detailed exploratory discussion of factor analysis (FA) in image retrieval and correlation analysis (CA) of network traffic. Image retrieval methods aim to retrieve relevant images from a collected database based on their content; the problem is made more difficult by the high dimension of the variable space in which the images are represented. Multivariate correlation analysis provides an anomaly detection and analysis method based on the correlation coefficient matrix. Anomalous behaviors in the network include various attacks such as DDoS attacks and network scanning.
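A correlation-matrix anomaly detector of the kind described can be sketched in a few lines: learn a baseline correlation matrix over traffic features, then score new windows by how far their correlation matrix drifts from it. The feature set, window size, and Frobenius-norm score below are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Baseline window of network-traffic features (e.g. packets, bytes, flows);
# feature count and window length are illustrative assumptions.
baseline = rng.standard_normal((200, 3))
R0 = np.corrcoef(baseline, rowvar=False)

def anomaly_score(window):
    """Deviation of a window's correlation matrix from the baseline (Frobenius norm)."""
    return np.linalg.norm(np.corrcoef(window, rowvar=False) - R0)

normal = rng.standard_normal((200, 3))

# Flood-like anomaly: two features suddenly become almost perfectly coupled
t = rng.standard_normal(200)
anomalous = np.column_stack([t, t + 0.1 * rng.standard_normal(200),
                             rng.standard_normal(200)])

score_normal = anomaly_score(normal)
score_anomalous = anomaly_score(anomalous)
```

A window whose features correlate in a new way (as in a DDoS flood, where volume features move together) scores far above an in-profile window.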
Unbiased estimates of galaxy scaling relations from photometric redshift surveys
NASA Astrophysics Data System (ADS)
Rossi, Graziano; Sheth, Ravi K.
2008-06-01
Many physical properties of galaxies correlate with one another, and these correlations are often used to constrain galaxy formation models. Such correlations include the colour-magnitude relation, the luminosity-size relation, the fundamental plane, etc. However, the transformation from observable (e.g. angular size, apparent brightness) to physical quantity (physical size, luminosity) is often distance dependent. Noise in the distance estimate will lead to biased estimates of these correlations, thus compromising the ability of photometric redshift surveys to constrain galaxy formation models. We describe two methods which can remove this bias. One is a generalization of the Vmax method, and the other is a maximum-likelihood approach. We illustrate their effectiveness by studying the size-luminosity relation in a mock catalogue, although both methods can be applied to other scaling relations as well. We show that if one simply uses photometric redshifts one obtains a biased relation; our methods correct for this bias and recover the true relation.
Ensemble Space-Time Correlation of Plasma Turbulence in the Solar Wind.
Matthaeus, W H; Weygand, J M; Dasso, S
2016-06-17
Single-point turbulence measurements cannot distinguish variations in space from variations in time. We employ an ensemble of one- and two-point measurements in the solar wind to estimate the space-time correlation function in the comoving plasma frame. The method is illustrated using near-Earth spacecraft observations, employing ACE, Geotail, IMP-8, and Wind data sets. New results include an evaluation of both correlation time and correlation length from a single method, and a new assessment of the accuracy of the familiar frozen-in flow approximation. This novel view of the space-time structure of turbulence may prove essential in exploratory space missions such as Solar Probe Plus and Solar Orbiter for which the frozen-in flow hypothesis may not be a useful approximation.
Analyzing Association Mapping in Pedigree-Based GWAS Using a Penalized Multitrait Mixed Model
Liu, Jin; Yang, Can; Shi, Xingjie; Li, Cong; Huang, Jian; Zhao, Hongyu; Ma, Shuangge
2017-01-01
Genome-wide association studies (GWAS) have led to the identification of many genetic variants associated with complex diseases in the past 10 years. Penalization methods, with significant numerical and statistical advantages, have been extensively adopted in analyzing GWAS. This study has been partly motivated by the analysis of Genetic Analysis Workshop (GAW) 18 data, which have two notable characteristics. First, the subjects are from a small number of pedigrees and hence related. Second, for each subject, multiple correlated traits have been measured. Most of the existing penalization methods assume independence between subjects and traits and can be suboptimal. There are a few methods in the literature based on mixed modeling that can accommodate correlations. However, they cannot fully accommodate the two types of correlations while conducting effective marker selection. In this study, we develop a penalized multitrait mixed modeling approach. It accommodates the two different types of correlations and includes several existing methods as special cases. Effective penalization is adopted for marker selection. Simulation demonstrates its satisfactory performance. The GAW 18 data are analyzed using the proposed method. PMID:27247027
NASA Astrophysics Data System (ADS)
Kong, Jing
This thesis comprises four pieces of work. In Chapter 1, we present a method for examining mortality as it is seen to run in families, together with lifestyle factors that also run in families, in a subpopulation of the Beaver Dam Eye Study that had died by 2011. We find significant distance correlations between death ages, lifestyle factors, and family relationships. Considering only sib pairs compared to unrelated persons, the distance correlation between siblings and mortality is, not surprisingly, stronger than that between more distantly related family members and mortality. Chapter 2 introduces a feature screening procedure based on distance correlation and covariance. We demonstrate a property of distance covariance, which is incorporated in a novel feature screening procedure that uses distance correlation as a stopping criterion. The approach is further applied to two real examples, namely the well-known small round blue cell tumors data and the Cancer Genome Atlas ovarian cancer data. Chapter 3 turns to right-censored human longevity data and the estimation of life expectancy. We propose a general framework of backward multiple imputation for estimating the conditional life expectancy function and the variance of the estimator in the right-censoring setting, and we prove the properties of the estimator. In addition, we apply the method to the Beaver Dam Eye Study data to study human longevity, where expected human lifetime is modeled with smoothing spline ANOVA based on covariates including baseline age, gender, lifestyle factors, and disease variables. Chapter 4 compares two imputation methods for right-censored data, namely the well-known Buckley-James estimator and the backward imputation method proposed in Chapter 3, and shows that the backward imputation method is less biased and more robust under heterogeneity.
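The distance correlation statistic that runs through this work (Székely-style, via double-centered distance matrices) is short enough to sketch directly. The sample sizes and the `x` versus `x**2` example below are illustrative, not the thesis data; the point is that distance correlation flags nonlinear dependence that Pearson correlation misses.

```python
import numpy as np

def dist_corr(x, y):
    """Sample distance correlation between two 1-D samples,
    computed from double-centered pairwise distance matrices."""
    def centered(v):
        d = np.abs(v[:, None] - v[None, :])
        return d - d.mean(axis=0) - d.mean(axis=1)[:, None] + d.mean()
    A = centered(np.asarray(x, dtype=float))
    B = centered(np.asarray(y, dtype=float))
    dcov2 = (A * B).mean()                       # squared distance covariance
    dvar_x, dvar_y = (A * A).mean(), (B * B).mean()
    return np.sqrt(dcov2 / np.sqrt(dvar_x * dvar_y))

rng = np.random.default_rng(2)
x = rng.standard_normal(500)
dc_nonlinear = dist_corr(x, x**2)                # Pearson corr would be near 0 here
dc_independent = dist_corr(x, rng.standard_normal(500))
dc_self = dist_corr(x, x)
```

In a screening procedure, features would be ranked by `dist_corr` with the response and screening stopped once the statistic falls to the level seen for independent noise.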
Surveillance of industrial processes with correlated parameters
White, Andrew M.; Gross, Kenny C.; Kubic, William L.; Wigeland, Roald A.
1996-01-01
A system and method for surveillance of an industrial process. The system and method includes a plurality of sensors monitoring industrial process parameters, devices to convert the sensed data to computer compatible information and a computer which executes computer software directed to analyzing the sensor data to discern statistically reliable alarm conditions. The computer software is executed to remove serial correlation information and then calculate Mahalanobis distribution data to carry out a probability ratio test to determine alarm conditions.
Federico, Alejandro; Kaufmann, Guillermo H
2005-05-10
We evaluate the use of smoothing splines with a weighted roughness measure for local denoising of the correlation fringes produced in digital speckle pattern interferometry. In particular, we also evaluate the performance of the multiplicative correlation operation between two speckle patterns that is proposed as an alternative procedure to generate the correlation fringes. It is shown that the application of a normalization algorithm to the smoothed correlation fringes reduces the excessive bias generated in the previous filtering stage. The evaluation is carried out by use of computer-simulated fringes that are generated for different average speckle sizes and intensities of the reference beam, including decorrelation effects. A comparison with filtering methods based on the continuous wavelet transform is also presented. Finally, the performance of the smoothing method in processing experimental data is illustrated.
Improved Discrete Approximation of Laplacian of Gaussian
NASA Technical Reports Server (NTRS)
Shuler, Robert L., Jr.
2004-01-01
An improved method of computing a discrete approximation of the Laplacian of a Gaussian convolution of an image has been devised. The primary advantage of the method is that, without substantially degrading the accuracy of the end result, it reduces the amount of information that must be processed and thus reduces the amount of circuitry needed to perform the Laplacian-of-Gaussian (LOG) operation. Some background information is necessary to place the method in context. The method is intended for application to the LOG part of a process of real-time digital filtering of digitized video data that represent brightnesses in pixels in a square array. The particular filtering process of interest is one that converts pixel brightnesses to binary form, thereby reducing the amount of processing that must be performed in subsequent correlation operations (e.g., correlations between images in a stereoscopic pair for determining distances or correlations between successive frames of the same image for detecting motions). The Laplacian is often included in the filtering process because it emphasizes edges and textures, while the Gaussian is often included because it smooths out noise that might not be consistent between left and right images or between successive frames of the same image.
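The baseline operation being approximated, a LOG filter followed by binarization, can be sketched in numpy. The kernel size, toy image, and FFT-based circular convolution below are illustrative choices, not the record's hardware-oriented approximation.

```python
import numpy as np

def log_kernel(sigma, size):
    """Discrete Laplacian-of-Gaussian kernel (zero-mean, so flat areas give no response)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    k = (r2 - 2.0 * sigma**2) / sigma**4 * np.exp(-r2 / (2.0 * sigma**2))
    return k - k.mean()

# Toy image: bright square on a dark background
img = np.zeros((64, 64))
img[20:32, 20:32] = 1.0

# Dependency-free circular convolution via the FFT
kernel = np.zeros_like(img)
kernel[:13, :13] = log_kernel(sigma=2.0, size=13)
response = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kernel)))

# Binarization step: keep only the sign of the LOG response,
# which is what feeds the cheap downstream correlation stage
binary = response > 0
```

The response is concentrated around the square's edges and near zero on flat regions, and the 1-bit `binary` map is the reduced-information representation the subsequent correlation stage consumes.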
NASA Astrophysics Data System (ADS)
Pinnington, Ewan; Casella, Eric; Dance, Sarah; Lawless, Amos; Morison, James; Nichols, Nancy; Wilkinson, Matthew; Quaife, Tristan
2016-04-01
Forest ecosystems play an important role in sequestering human-emitted carbon dioxide from the atmosphere and therefore greatly reduce the effect of anthropogenically induced climate change. For that reason, understanding their response to climate change is of great importance. Efforts to implement variational data assimilation routines with functional ecology models and land surface models have been limited, with sequential and Markov chain Monte Carlo data assimilation methods being prevalent. When data assimilation has been used with models of carbon balance, background "prior" errors and observation errors have largely been treated as independent and uncorrelated. Correlations between background errors have long been known to be a key aspect of data assimilation in numerical weather prediction. More recently, it has been shown that accounting for correlated observation errors in the assimilation algorithm can considerably improve data assimilation results and forecasts. In this paper we implement a 4D-Var scheme with a simple model of forest carbon balance, for joint parameter and state estimation, and assimilate daily observations of Net Ecosystem CO2 Exchange (NEE) taken at the Alice Holt forest CO2 flux site in Hampshire, UK. We then investigate the effect of specifying correlations between parameter and state variables in background error statistics and the effect of specifying correlations in time between observation error statistics. The idea of including these correlations in time is new and has not been previously explored in carbon balance model data assimilation. In data assimilation, background and observation error statistics are often described by the background error covariance matrix and the observation error covariance matrix.
We outline novel methods for creating correlated versions of these matrices, using a set of previously postulated dynamical constraints to include correlations in the background error statistics and a Gaussian correlation function to include time correlations in the observation error statistics. The methods used in this paper will allow the inclusion of time correlations between many different observation types in the assimilation algorithm, meaning that previously neglected information can be accounted for. In our experiments we compared the results using our new correlated background and observation error covariance matrices and those using diagonal covariance matrices. We found that using the new correlated matrices reduced the root mean square error in the 14 year forecast of daily NEE by 44 % decreasing from 4.22 g C m-2 day-1 to 2.38 g C m-2 day-1.
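The Gaussian correlation function used to build time correlations into the observation error covariance matrix R is easy to write down explicitly. The observation count, correlation length scale, and observation error standard deviation below are illustrative values, not those of the Alice Holt experiments.

```python
import numpy as np

def gaussian_time_covariance(n_obs, length_scale, sigma_o):
    """Observation-error covariance with Gaussian time correlations:
    R_ij = sigma_o^2 * exp(-(t_i - t_j)^2 / (2 L^2)) for equally spaced
    (e.g. daily) observations."""
    t = np.arange(n_obs, dtype=float)
    dt = t[:, None] - t[None, :]
    return sigma_o**2 * np.exp(-dt**2 / (2.0 * length_scale**2))

# Ten daily NEE observations, 2-day correlation length, 0.5-unit error (assumed)
R = gaussian_time_covariance(10, length_scale=2.0, sigma_o=0.5)
```

Setting `length_scale` toward zero recovers the diagonal R used in the uncorrelated baseline, so the comparison in the paper corresponds to varying this single parameter.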
[Correlation between soil-transmitted nematode infections and children's growth].
Wang, Xiao-Bing; Wang, Guo-Fei; Zhang, Lin-Xiu; Luo, Ren-Fu; Wang, Ju-Jun; Medina, Alexis; Eggleston, Karen; Rozelle, Scott; Smith, Scott
2013-06-01
To understand the infection status of soil-transmitted nematodes in southwest China and the correlation between soil-transmitted nematode infections and children's growth. The prevalence of soil-transmitted nematode infections was determined by the Kato-Katz technique, and in a subset of the children, examination for Enterobius vermicularis eggs was performed using the cellophane swab method. The influencing factors were surveyed using a standardized questionnaire. The relationship between soil-transmitted nematode infections and children's growth was analyzed by the ordinary least squares (OLS) method. A total of 1 707 children were examined, with a soil-transmitted nematode infection rate of 22.2%. The OLS analysis showed a negative correlation between soil-transmitted nematode infections and indexes of children's growth, including BMI, the weight-for-age Z score, and the height-for-age Z score. Other correlated variables included age, gender, the mother's educational level, and raising livestock and poultry. Children's growth retardation is still a serious issue in the poor areas of southwest China and is correlated with soil-transmitted nematode infections. To improve children's growth, it is of great significance to strengthen deworming and health education about parasitic diseases among mothers.
Relationship of The Tropical Cyclogenesis With Solar and Magnetospheric Activities
NASA Astrophysics Data System (ADS)
Vishnevsky, O. V.; Pankov, V. M.; Erokhine, N. S.
The formation of tropical cyclones is a poorly studied period in their life cycle, even though many papers analyze the influence of different parameters on cyclone occurrence frequency (see, e.g., Gray W.M.). The present paper studies the correlation of solar and magnetospheric activity with the appearance of tropical cyclones in the northwest region of the Pacific Ocean. The correlation study was performed using both classical statistical methods (including the maximum entropy method) and more modern ones, for example multifractal analysis. Data on Wolf numbers and cyclogenesis intensity over the period 1944-2000 were obtained from various Internet databases. It was shown that the power spectra of the Wolf numbers and of tropical cyclone occurrences both peak at the 11-year period; solar activity and cyclogenesis intensity are in antiphase; and the maximum of the mutual correlation coefficient (~0.8) between Wolf numbers and cyclogenesis intensity occurs in the South China Sea. The multifractal characteristics calculated for both time series are related to the mutual correlation function, which is another indicator of correlation between tropical cyclogenesis and solar-magnetospheric activity. Thus, there is a correlation between solar-magnetospheric activity and tropical cyclone intensity in this region. Possible physical mechanisms of this correlation, including anomalous precipitation of charged particles from the Earth's radiation belts and wind intensity amplification in the troposphere, are discussed.
Ghosh, Soumen; Cramer, Christopher J.; Truhlar, Donald G.; ...
2017-01-19
Predicting ground- and excited-state properties of open-shell organic molecules by electronic structure theory can be challenging because an accurate treatment has to correctly describe both static and dynamic electron correlation. Strongly correlated systems, i.e., systems with near-degeneracy correlation effects, are particularly troublesome. Multiconfigurational wave function methods based on an active space are adequate in principle, but it is impractical to capture most of the dynamic correlation in these methods for systems characterized by many active electrons. We recently developed a new method, multiconfiguration pair-density functional theory (MC-PDFT), that combines the advantages of wave function theory and density functional theory to provide a more practical treatment of strongly correlated systems. Here we present calculations of the singlet–triplet gaps in oligoacenes ranging from naphthalene to dodecacene. Calculations were performed for unprecedentedly large orbitally optimized active spaces of 50 electrons in 50 orbitals, and we test a range of active spaces and active space partitions, including four kinds of frontier orbital partitions. We show that MC-PDFT can predict the singlet–triplet splittings for oligoacenes consistent with the best available and much more expensive methods, and indeed MC-PDFT may constitute the benchmark against which those other models should be compared, given the absence of experimental data.
BONNSAI: correlated stellar observables in Bayesian methods
NASA Astrophysics Data System (ADS)
Schneider, F. R. N.; Castro, N.; Fossati, L.; Langer, N.; de Koter, A.
2017-02-01
In an era of large spectroscopic surveys of stars and big data, sophisticated statistical methods become more and more important in order to infer fundamental stellar parameters such as mass and age. Bayesian techniques are powerful because they can match all available observables simultaneously to stellar models while taking prior knowledge properly into account. However, in most cases it is assumed that observables are uncorrelated, which is generally not the case. Here, we include correlations in the Bayesian code Bonnsai by incorporating the covariance matrix in the likelihood function. We derive a parametrisation of the covariance matrix that, in addition to classical uncertainties, only requires the specification of a correlation parameter that describes how observables co-vary. Our correlation parameter depends purely on the method with which observables have been determined and can be analytically derived in some cases. This approach therefore has the advantage that correlations can be accounted for even if information on them is not available in specific cases but is known in general. Because the new likelihood model is a better approximation of the data, the reliability and robustness of the inferred parameters are improved. We find that neglecting correlations biases the most likely values of inferred stellar parameters and affects the precision with which these parameters can be determined. The importance of these biases depends on the strength of the correlations and the uncertainties. As an example, we apply our technique to massive OB stars, but emphasise that the approach is valid for any type of star. For effective temperatures and surface gravities determined from atmosphere modelling, we find that masses can be underestimated on average by 0.5σ and mass uncertainties overestimated by a factor of about 2 when neglecting correlations. At the same time, the age precisions are underestimated over a wide range of stellar parameters.
We conclude that accounting for correlations is essential in order to derive reliable stellar parameters including robust uncertainties and will be vital when entering an era of precision stellar astrophysics thanks to the Gaia satellite.
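The core change, a full covariance matrix in the Gaussian likelihood instead of a diagonal one, can be sketched directly. The observables, uncertainties, and correlation parameter below are invented for illustration and are not Bonnsai's actual parametrisation.

```python
import numpy as np

def log_likelihood(obs, model, cov):
    """Gaussian log-likelihood of observed stellar parameters given model predictions."""
    d = obs - model
    _, logdet = np.linalg.slogdet(2.0 * np.pi * cov)
    return -0.5 * (d @ np.linalg.solve(cov, d) + logdet)

# Hypothetical observables (log Teff, log g) with assumed uncertainties and a
# single correlation parameter rho, as in the parametrisation described above
sigma = np.array([0.02, 0.10])
rho = 0.7
cov_corr = np.diag(sigma) @ np.array([[1.0, rho], [rho, 1.0]]) @ np.diag(sigma)
cov_diag = np.diag(sigma**2)

obs = np.array([4.45, 3.90])
model = np.array([4.47, 4.00])
ll_corr = log_likelihood(obs, model, cov_corr)
ll_diag = log_likelihood(obs, model, cov_diag)
```

With positively correlated errors and residuals of the same sign, the correlated likelihood rates this model higher than the diagonal one, illustrating how neglecting correlations shifts which stellar models appear most likely.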
Extracting physical quantities from BES data
NASA Astrophysics Data System (ADS)
Fox, Michael; Field, Anthony; Schekochihin, Alexander; van Wyk, Ferdinand; MAST Team
2015-11-01
We propose a method to extract the underlying physical properties of turbulence from measurements, thereby facilitating quantitative comparisons between theory and experiment. Beam Emission Spectroscopy (BES) diagnostics record fluctuating intensity time series, which are related to the density field in the plasma through Point-Spread Functions (PSFs). Assuming a suitable form for the correlation function of the underlying turbulence, analytical expressions are derived that relate the correlation parameters of the intensity field: the radial and poloidal correlation lengths and wavenumbers, the correlation time and the fluctuation amplitude, to the equivalent correlation properties of the density field. In many cases, the modification caused by the PSFs is substantial enough to change conclusions about physics. Our method is tested by applying PSFs to the ``real'' density field, generated by non-linear gyrokinetic simulations of MAST, to create synthetic turbulence data, from which the method successfully recovers the correlation function of the ``real'' density field. This method is applied to BES data from MAST to determine the scaling of the 2D structure of the ion-scale turbulence with equilibrium parameters, including the ExB flow shear. Work funded by the Euratom research and training programme 2014-2018 under grant agreement No 633053 and from the RCUK Energy Programme [grant number EP/I501045].
NASA Astrophysics Data System (ADS)
Llusar, Rosa; Casarrubios, Marcos; Barandiarán, Zoila; Seijo, Luis
1996-10-01
An ab initio theoretical study of the optical absorption spectrum of Ni2+-doped MgO has been conducted by means of calculations in a MgO-embedded (NiO6)10-cluster. The calculations include long- and short-range embedding effects of electrostatic and quantum nature brought about by the MgO crystalline lattice, as well as electron correlation and spin-orbit effects within the (NiO6)10- cluster. The spin-orbit calculations have been performed using the spin-orbit-CI WB-AIMP method [Chem. Phys. Lett. 147, 597 (1988); J. Chem. Phys. 102, 8078 (1995)] which has been recently proposed and is applied here for the first time to the field of impurities in crystals. The WB-AIMP method is extended in order to handle correlation effects which, being necessary to produce accurate energy differences between spin-free states, are not needed for the proper calculation of spin-orbit couplings. The extension of the WB-AIMP method, which is also aimed at keeping the size of the spin-orbit-CI within reasonable limits, is based on the use of spin-free-state shifting operators. It is shown that the unreasonable spin-orbit splittings obtained for MgO:Ni2+ in spin-orbit-CI calculations correlating only 8 electrons become correct when the proposed extension is applied, so that the same CI space is used but energy corrections due to correlating up to 26 electrons are included. The results of the ligand field spectrum of MgO:Ni2+ show good overall agreement with the experimental measurements and a reassignment of the observed Eg(b3T1g) excited state is proposed and discussed.
Murakami, H; Yoneyama, T; Nakajima, K; Kobayashi, M
2001-03-23
The objectives of this study were to prepare lactose granules by various granulation methods using polyethylene glycol 6000 (PEG 6000) as a binder and to evaluate the effects of the granulation methods on the compressibility and compactibility of granules in tabletting. Lactose was granulated by seven granulation methods: four wet granulations (wet massing granulation, wet high-speed mixer granulation, wet fluidized bed granulation, and wet tumbling fluidized bed granulation) and three melt granulations (melt high-speed mixer granulation, melt fluidized bed granulation, and melt tumbling fluidized bed granulation). The loose density, angle of repose, granule size distribution, mean granule diameter, and the tensile strength and porosity of tablets were evaluated. The compactibility of the granules varied with the granulation method. However, the difference in compactibility could not be explained by differences in compressibility, since the Heckel plots did not differ between granulation methods. Among the granule properties, the loose density of the granules appeared to correlate with tablet strength regardless of the granulation method.
NASA Astrophysics Data System (ADS)
Constantoudis, Vassilios; Papavieros, George; Lorusso, Gian; Rutigliani, Vito; Van Roey, Frieda; Gogolides, Evangelos
2018-03-01
The aim of this paper is to investigate the role of etch transfer in two challenges of LER metrology raised by recent evolutions in lithography: the effects of SEM noise, and cross-line and cross-edge correlations. The first comes from the ongoing scaling down of linewidths, which dictates SEM imaging with fewer scanning frames to reduce specimen damage and hence with more noise. During the last decade, it has been shown that image noise can be an important component of the measured LER budget and systematically alters the PSD curve of LER at high frequencies. A recent method for unbiased LER measurement is based on systematic Fourier or correlation analysis to decompose the effects of noise from true LER (the Fourier-correlation filtering method). The success of the method depends on the PSD and HHCF curves. Previous experimental and modeling works have revealed that etch transfer affects the PSD of LER, reducing its high-frequency values. In this work, we estimate the noise contribution to the biased LER through the PSD flat floor at high frequencies and relate it to the differences between the PSDs of lithography and etched LER. Based on this comparison, we propose an improvement of the PSD/HHCF-based method for noise-free LER measurement to include the missed high-frequency real LER. The second issue is related to the increased density of lithographic patterns and the special characteristics that DSA and multiple-patterning (MP) lithography patterns exhibit. In a previous work, we presented an enlarged LER characterization methodology for such patterns, which includes updated versions of the old metrics along with new metrics defined and developed to capture cross-edge and cross-line correlations. The fundamental concepts have been the Line Center Roughness (LCR), the edge c-factor and line c-factor correlation functions, and the correlation length quantifying the line fluctuations and the extent of cross-edge and cross-line correlations.
In this work, we focus on the role of etch steps on cross-edge and line correlation metrics in SAQP data. We find that the spacer etch steps reduce edge correlations while etch steps with pattern transfer increase these. Furthermore, the density doubling and quadrupling increase edge correlations as well as cross-line correlations.
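The PSD noise-floor idea, white SEM noise adds a flat plateau at high frequencies whose level can be estimated and subtracted from the biased roughness, can be sketched numerically. The AR(1) roughness model, noise level, and the frequency band used for the floor are assumptions for illustration, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 512

# Hypothetical line edge: correlated roughness (AR(1)) plus white SEM noise
edge = np.empty(N)
edge[0] = 0.0
for i in range(1, N):
    edge[i] = 0.95 * edge[i - 1] + rng.standard_normal()
edge -= edge.mean()
noise_sd = 2.0
measured = edge + noise_sd * rng.standard_normal(N)
measured -= measured.mean()

# Periodogram normalized so the bins sum to the signal variance
P = np.abs(np.fft.fft(measured)) ** 2 / N**2

# White noise spreads evenly over all N bins: estimate the flat floor from the
# highest |frequencies| (middle indices in fft ordering) and scale back up
high = np.arange(N // 4, 3 * N // 4)
noise_var_est = P[high].mean() * N

sigma2_biased = measured.var()            # biased LER^2 (roughness + noise)
sigma2_unbiased = sigma2_biased - noise_var_est
```

Subtracting the floor recovers an estimate much closer to the true edge variance than the raw (biased) value, which is the unbiased-LER measurement the method aims at.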
Surveillance of industrial processes with correlated parameters
White, A.M.; Gross, K.C.; Kubic, W.L.; Wigeland, R.A.
1996-12-17
A system and method for surveillance of an industrial process are disclosed. The system and method includes a plurality of sensors monitoring industrial process parameters, devices to convert the sensed data to computer compatible information and a computer which executes computer software directed to analyzing the sensor data to discern statistically reliable alarm conditions. The computer software is executed to remove serial correlation information and then calculate Mahalanobis distribution data to carry out a probability ratio test to determine alarm conditions. 10 figs.
A Discrete Probability Function Method for the Equation of Radiative Transfer
NASA Technical Reports Server (NTRS)
Sivathanu, Y. R.; Gore, J. P.
1993-01-01
A discrete probability function (DPF) method for the equation of radiative transfer is derived. The DPF is defined as the integral of the probability density function (PDF) over a discrete interval. The derivation allows the evaluation of the PDF of intensities leaving desired radiation paths including turbulence-radiation interactions without the use of computer intensive stochastic methods. The DPF method has a distinct advantage over conventional PDF methods since the creation of a partial differential equation from the equation of transfer is avoided. Further, convergence of all moments of intensity is guaranteed at the basic level of simulation unlike the stochastic method where the number of realizations for convergence of higher order moments increases rapidly. The DPF method is described for a representative path with approximately integral-length scale-sized spatial discretization. The results show good agreement with measurements in a propylene/air flame except for the effects of intermittency resulting from highly correlated realizations. The method can be extended to the treatment of spatial correlations as described in the Appendix. However, information regarding spatial correlations in turbulent flames is needed prior to the execution of this extension.
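The defining step, the DPF as the integral of the PDF over each discrete interval, can be illustrated with a histogram-based sketch. The lognormal intensity sample below is an invented stand-in for turbulence-induced intensity fluctuations, not data from the flame measurements.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical intensities leaving a radiation path
intensity = rng.lognormal(mean=0.0, sigma=0.4, size=10_000)

edges = np.linspace(0.0, 6.0, 25)
pdf, _ = np.histogram(intensity, bins=edges, density=True)
dpf = pdf * np.diff(edges)               # DPF: integral of the PDF over each interval
centers = 0.5 * (edges[:-1] + edges[1:])
mean_from_dpf = (centers * dpf).sum()    # moments follow directly from the DPF
```

Because the DPF sums to one and carries all the distributional information on the discrete grid, moments of intensity are obtained by direct summation rather than by accumulating stochastic realizations.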
Prevalence and Correlates of Sibling Victimization Types
ERIC Educational Resources Information Center
Tucker, Corinna Jenkins; Finkelhor, David; Shattuck, Anne M.; Turner, Heather
2013-01-01
Objective: The goal of this study was to document the prevalence and correlates of any past year sibling victimization, including physical, property, and psychological victimization, by a co-residing juvenile sibling across the spectrum of childhood from one month to 17 years of age. Methods: The National Survey of Children's Exposure to Violence…
Prevalence of Insomnia and Its Psychosocial Correlates among College Students in Hong Kong
ERIC Educational Resources Information Center
Sing, C. Y.; Wong, W. S.
2010-01-01
Objective: This study examined the prevalence of insomnia and its psychosocial correlates among college students in Hong Kong. Participants: A total of 529 Hong Kong college students participated in the study. Methods: Participants completed a self-reported questionnaire that included the Pittsburgh Sleep Quality Index (PSQI), the Revised Life…
ERIC Educational Resources Information Center
Haebig, Eileen; Leonard, Laurence; Usler, Evan; Deevy, Patricia; Weber, Christine
2018-01-01
Purpose: Previous behavioral studies have found deficits in lexical--semantic abilities in children with specific language impairment (SLI), including reduced depth and breadth of word knowledge. This study explored the neural correlates of early emerging familiar word processing in preschoolers with SLI and typical development. Method: Fifteen…
A Reliability Simulator for Radiation-Hard Microelectronics Development
1991-07-01
The correlation experimental details include the devices utilized, the hot-carrier stressing, and the wafer-level radiation correlation procedure; a new lifetime extrapolation method is demonstrated for p-channel devices.
Prevalence and Correlates of ADHD Symptoms in the National Health Interview Survey
ERIC Educational Resources Information Center
Cuffe, Steven P.; Moore, Charity G.; McKeown, Robert E.
2005-01-01
Objective: Study the prevalence and correlates of ADHD symptoms in the National Health Interview Survey (NHIS). Methods: NHIS includes 10,367 children ages 4 to 17. Parents report lifetime diagnosis of ADHD and complete the Strengths and Difficulties Questionnaire (SDQ). Prevalences of clinically significant ADHD and comorbid symptoms by race and…
Minghelli, Beatriz; Nunes, Carla; Oliveira, Raul
2013-01-01
Background: The recommended anthropometric methods for assessing weight status include body mass index (BMI), skinfold thickness, and waist circumference; each has advantages and disadvantages for classifying overweight and obesity in adolescents. Aims: The aim of the study was to analyze the correlation between BMI, skinfold thickness, and waist circumference measurements in assessing overweight and obesity in Portuguese adolescents. Materials and Methods: A sample of 966 Portuguese students was used: 437 (45.2%) males and 529 (54.8%) females aged between 10 and 16 years. The evaluations included BMI calculation and skinfold thickness and waist circumference measurements. Results: This study revealed a high prevalence of overweight and obesity, with values of 31.6%, 61.4%, and 41.1% according to BMI, skinfold thickness, and waist circumference, respectively. The results showed a high level of correlation between BMI and skinfold thickness (P < 0.001, r = 0.712), between BMI and waist circumference (P < 0.001, r = 0.884), and between waist circumference and skinfold thickness (P < 0.001, r = 0.701). Conclusions: This study revealed a high prevalence of overweight and obesity in Portuguese adolescents using three different anthropometric methods; BMI showed the lowest prevalence values and skinfold thickness the highest. The three anthropometric methods were highly correlated. PMID:24404544
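The pairwise r values above are plain Pearson product-moment coefficients. A minimal sketch of the computation, using fabricated anthropometric-style numbers (not the study's data):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm * ym).sum() / np.sqrt((xm**2).sum() * (ym**2).sum()))

# Illustrative (fabricated) values only:
bmi = np.array([18.2, 21.5, 24.9, 27.3, 30.1, 33.4])    # kg/m^2
waist = np.array([62.0, 70.5, 78.0, 86.5, 95.0, 104.0])  # cm
print(round(pearson_r(bmi, waist), 3))  # close to 1 for near-linear data
```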
Multicomponent density functional theory embedding formulation.
Culpitt, Tanner; Brorsen, Kurt R; Pak, Michael V; Hammes-Schiffer, Sharon
2016-07-28
Multicomponent density functional theory (DFT) methods have been developed to treat two types of particles, such as electrons and nuclei, quantum mechanically at the same level. In the nuclear-electronic orbital (NEO) approach, all electrons and select nuclei, typically key protons, are treated quantum mechanically. For multicomponent DFT methods developed within the NEO framework, electron-proton correlation functionals based on explicitly correlated wavefunctions have been designed and used in conjunction with well-established electronic exchange-correlation functionals. Herein a general theory for multicomponent embedded DFT is developed to enable the accurate treatment of larger systems. In the general theory, the total electronic density is separated into two subsystem densities, denoted as regular and special, and different electron-proton correlation functionals are used for these two electronic densities. In the specific implementation, the special electron density is defined in terms of spatially localized Kohn-Sham electronic orbitals, and electron-proton correlation is included only for the special electron density. The electron-proton correlation functional depends on only the special electron density and the proton density, whereas the electronic exchange-correlation functional depends on the total electronic density. This scheme includes the essential electron-proton correlation, which is a relatively local effect, as well as the electronic exchange-correlation for the entire system. This multicomponent DFT-in-DFT embedding theory is applied to the HCN and FHF(-) molecules in conjunction with two different electron-proton correlation functionals and three different electronic exchange-correlation functionals. The results illustrate that this approach provides qualitatively accurate nuclear densities in a computationally tractable manner. The general theory is also easily extended to other types of partitioning schemes for multicomponent systems.
Liquid-cooling technology for gas turbines - Review and status
NASA Technical Reports Server (NTRS)
Van Fossen, G. J., Jr.; Stepka, F. S.
1978-01-01
After a brief review of past efforts involving the forced-convection cooling of gas turbines, the paper surveys the state of the art of the liquid cooling of gas turbines. Emphasis is placed on thermosyphon methods of cooling, including those utilizing closed, open, and closed-loop thermosyphons; other methods, including sweat, spray and stator cooling, are also discussed. The more significant research efforts, design data, correlations, and analytical methods are mentioned and voids in technology are summarized.
Suicide by shooting is correlated to rate of gun licenses in Austrian counties.
Etzersdorfer, Elmar; Kapusta, Nestor D; Sonneck, Gernot
2006-08-01
Shooting as a method of suicide has increased considerably in Austria over recent decades and represented 23.5% of all suicides among men during the period 1990-2000. Because the availability of guns could lead to their use in acts of suicide, we investigated the number of gun licenses (which constitute ownership of guns and permission to carry a gun) in the nine Austrian counties and their correlation with suicides by shooting and by other methods. We studied registered suicides, including the method used, between 1990 and 2000 in Austria and the numbers of gun licenses held in the nine counties over the same period. We found a strong correlation between the average gun license rate for the period 1990-2000 and suicides by shooting (r = 0.967), but only a very weak correlation, negative in some of the years under investigation, with other methods of suicide (r = 0.117) and with the overall suicide rate (r = 0.383). As shooting is a highly lethal method of suicide that has increased in Austria in recent decades, the finding that the shooting suicide rate is related to the extent of gun ownership deserves attention, especially as there is evidence that restricting gun ownership is an important factor in suicide prevention.
Wang, Luman; Mo, Qiaochu; Wang, Jianxin
2015-01-01
Most current gene coexpression databases support analysis of the linear correlation of gene pairs but not of their nonlinear correlation, which hinders precise evaluation of gene-gene coexpression strengths. Here, we report a new database, MIrExpress, which takes advantage of information theory, as well as the Pearson linear correlation method, to measure the linear correlation, the nonlinear correlation, and a hybrid of the two for cell-specific gene coexpressions in immune cells. For a given gene pair or probe set pair input by web users, both mutual information (MI) and the Pearson correlation coefficient (r) are calculated, and several corresponding values are reported to reflect the nature of the coexpression correlation, including the MI and r values, their respective rank orderings, their rank comparison, and their hybrid correlation value. Furthermore, for a given gene, the top 10 genes most relevant to it are displayed from the MI, r, or hybrid perspective, respectively. The database currently includes 16 human cell groups, involving 20,283 human genes. The expression data and the calculated correlation results are interactively accessible on the web page and can be used in other related applications and research. PMID:26881263
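The linear-versus-nonlinear contrast that motivates combining Pearson r with MI can be sketched as follows. The simple histogram MI estimator and the synthetic data are illustrative assumptions, not MIrExpress's implementation:

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Histogram-based plug-in estimate of mutual information, in nats."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px * py)[nz])).sum())

rng = np.random.default_rng(1)
x = rng.standard_normal(2000)
linear = x + 0.1 * rng.standard_normal(2000)          # linear dependence
nonlinear = x**2 + 0.1 * rng.standard_normal(2000)    # nonlinear dependence

r_lin = np.corrcoef(x, linear)[0, 1]       # high: Pearson r detects it
r_non = np.corrcoef(x, nonlinear)[0, 1]    # near zero: Pearson r misses it
mi_non = mutual_information(x, nonlinear)  # clearly positive: MI detects it
```

A quadratic relation is essentially invisible to r but carries substantial MI, which is exactly why a hybrid report of both values is more informative than either alone.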
Holman, Dawn M.; Watson, Meg
2015-01-01
Purpose Exposure to ultraviolet radiation and a history of sunburn in childhood contribute to risk of skin cancer in adolescence and in adulthood, but many adolescents continue to seek a tan, either from the sun or from tanning beds (i.e., intentional tanning). To understand tanning behavior among adolescents, we conducted a systematic review of the literature to identify correlates of intentional tanning in the United States. Methods We included articles on original research published in English between January 1, 2001, and October 31, 2011, that used self-reported data on intentional tanning by U.S. adolescents aged 8 to 18 years and examined potential correlates of tanning behaviors. Thirteen articles met our criteria; all used cross-sectional survey data and quantitative methods to assess correlates of intentional tanning. Results Results indicate that multiple factors influence tanning among adolescents. Individual factors that correlated with intentional tanning include demographic factors (female sex, older age), attitudes (preferring tanned skin), and behaviors (participating in other risky or appearance-focused behaviors such as dieting). Social factors correlated with intentional tanning include parental influence (having a parent who tans or permits tanning) and peer influence (having friends who tan). Only four studies examined broad contextual factors such as indoor tanning laws and geographic characteristics; they found that proximity to tanning facilities and geographic characteristics (living in the Midwest or South, living in a low ultraviolet area, and attending a rural high school) are associated with intentional tanning. Conclusions These findings inform future public health research and intervention efforts to reduce intentional tanning. PMID:23601612
Correlation between odour concentration and odour intensity from exposure to environmental odour
NASA Astrophysics Data System (ADS)
Yusoff, Syafinah; Qamaruz Zaman, Nastaein
2017-08-01
The encroachment of industries, agricultural activities and husbandries on community areas has been a major concern of late, especially given the escalating number of reports of odour nuisance. A study was performed with the objective of establishing a correlation between odour concentration and odour intensity, as an improved method for determining odour nuisance in the community. The Universiti Sains Malaysia Engineering Campus was chosen as the study location due to its proximity to several odour sources, including a paper mill, a palm oil mill and a poultry farm. The odour survey was based on VDI 3940, determining the level of odour intensity together with the corresponding odour concentration measured using an in-field olfactometer. The two methods were significantly correlated (Pearson correlation, 99.9 percent confidence level). The plot of intensity against concentration gave an R2 value of 0.40, indicating a good correlation between the methods despite high variance and low consistency. This study therefore concludes that the determination of odour concentration should be complemented with odour intensity in order to recognize the true impact of odour nuisance on a community.
NASA Astrophysics Data System (ADS)
Nelson, D. J.
2007-09-01
In the basic correlation process, a sequence of time-lag-indexed correlation coefficients is computed as the inner (dot) product of segments of two signals. The time lag for which the magnitude of the correlation coefficient sequence is maximized is the estimated relative time delay of the two signals. For discrete sampled signals, the delay estimated in this manner is quantized with the same relative accuracy as the clock used in sampling the signals. In addition, the correlation coefficients are real if the input signals are real. Many methods have been proposed, with some success, to estimate signal delay to better accuracy than the sample interval of the digitizer clock. These methods include interpolation of the correlation coefficients, estimation of the signal delay from the group delay function, and beam-forming techniques such as the MUSIC algorithm. For spectral estimation, techniques based on phase differentiation have been popular, but these techniques have apparently not been applied to the correlation problem. We propose a phase-based delay estimation method (PBDEM) based on the phase of the correlation function that provides a significant improvement in the accuracy of time delay estimation. In the process, the standard correlation function is first calculated. A time-lag error function is then calculated from the correlation phase and is used to interpolate the correlation function. The signal delay is shown to be accurately estimated as the zero crossing of the correlation phase near the index of the peak correlation magnitude. This process is nearly as fast as the conventional correlation function on which it is based. For real-valued signals, a simple modification is provided, which yields the same correlation accuracy as is obtained for complex-valued signals.
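A related, standard phase-based estimator (not Nelson's exact PBDEM, which interpolates the correlation phase near its peak) recovers a fractional-sample delay from the slope of the cross-spectrum phase. A sketch with a synthetic signal delayed by a non-integer number of samples:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1024
true_delay = 3.4  # fractional samples: invisible to integer-lag correlation

# Random signal and a fractionally delayed copy built via a Fourier shift.
freqs = np.fft.rfftfreq(n)  # cycles/sample
spec = np.fft.rfft(rng.standard_normal(n))
x = np.fft.irfft(spec, n)
y = np.fft.irfft(spec * np.exp(-2j * np.pi * freqs * true_delay), n)

# Cross-spectrum phase is 2*pi*f*delay; fit its slope at low frequencies,
# kept below the wrapping limit |phase| < pi.
cross = np.fft.rfft(x) * np.conj(np.fft.rfft(y))
keep = freqs < 0.1
delay_hat = np.polyfit(2 * np.pi * freqs[keep], np.angle(cross)[keep], 1)[0]
print(round(delay_hat, 2))  # recovers 3.4
```

Like PBDEM, this uses the phase of a correlation-domain quantity rather than interpolating correlation magnitudes, so the estimate is not quantized to the sample clock.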
ERIC Educational Resources Information Center
Ramchandani, Dilip
2011-01-01
Background/Objective: The author analyzed and compared various assessment methods for assessment of medical students; these methods included clinical assessment and the standardized National Board of Medical Education (NBME) subject examination. Method: Students were evaluated on their 6-week clerkship in psychiatry by both their clinical…
Methods for converging correlation energies within the dielectric matrix formalism
NASA Astrophysics Data System (ADS)
Dixit, Anant; Claudot, Julien; Gould, Tim; Lebègue, Sébastien; Rocca, Dario
2018-03-01
Within the dielectric matrix formalism, the random-phase approximation (RPA) and analogous methods that include exchange effects are promising approaches to overcome some of the limitations of traditional density functional theory approximations. The RPA-type methods however have a significantly higher computational cost, and, similarly to correlated quantum-chemical methods, are characterized by a slow basis set convergence. In this work we analyzed two different schemes to converge the correlation energy, one based on a more traditional complete basis set extrapolation and one that converges energy differences by accounting for the size-consistency property. These two approaches have been systematically tested on the A24 test set, for six points on the potential-energy surface of the methane-formaldehyde complex, and for reaction energies involving the breaking and formation of covalent bonds. While both methods converge to similar results at similar rates, the computation of size-consistent energy differences has the advantage of not relying on the choice of a specific extrapolation model.
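The "traditional complete basis set extrapolation" mentioned above is commonly the two-point inverse-cube form E(X) = E_CBS + A·X⁻³ in the basis-set cardinal number X; that specific form is an assumption here, since the paper's exact scheme may differ. A sketch:

```python
def cbs_two_point(e1, x1, e2, x2):
    """Two-point CBS extrapolation of correlation energies,
    assuming E(X) = E_CBS + A / X**3 (X = basis cardinal number)."""
    w1, w2 = x1**3, x2**3
    return (w2 * e2 - w1 * e1) / (w2 - w1)

# Synthetic check: energies generated from a known E_CBS = -1.0, A = 0.5
e_cbs_true, a = -1.0, 0.5
e3 = e_cbs_true + a / 3**3  # "triple-zeta" correlation energy
e4 = e_cbs_true + a / 4**3  # "quadruple-zeta" correlation energy
print(round(cbs_two_point(e3, 3, e4, 4), 6))  # -1.0
```

The alternative the paper analyzes, converging size-consistent energy *differences* directly, avoids committing to such an extrapolation model at all.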
Interobserver Reliability of the Total Body Score System for Quantifying Human Decomposition.
Dabbs, Gretchen R; Connor, Melissa; Bytheway, Joan A
2016-03-01
Several authors have tested the accuracy of the Total Body Score (TBS) method for quantifying decomposition, but none have examined the reliability of the method as a scoring system by testing interobserver error rates. Sixteen participants used the TBS system to score 59 observation packets including photographs and written descriptions of 13 human cadavers in different stages of decomposition (postmortem interval: 2-186 days). Data analysis used a two-way random model intraclass correlation in SPSS (v. 17.0). The TBS method showed "almost perfect" agreement between observers, with average absolute correlation coefficients of 0.990 and average consistency correlation coefficients of 0.991. While the TBS method may have sources of error, scoring reliability is not one of them. Individual component scores were examined, and the influences of education and experience levels were investigated. Overall, the trunk component scores were the least concordant. Suggestions are made to improve the reliability of the TBS method. © 2016 American Academy of Forensic Sciences.
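The "two-way random model intraclass correlation" reported above can be computed from a two-way ANOVA decomposition. A sketch of ICC(2,1) in Shrout and Fleiss notation (absolute agreement, single rater), which is one common choice under that model; the study's SPSS settings may differ in detail:

```python
import numpy as np

def icc_2_1(Y):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    Y is an (n targets x k raters) score matrix."""
    n, k = Y.shape
    grand = Y.mean()
    rows = Y.mean(axis=1)  # per-target means
    cols = Y.mean(axis=0)  # per-rater means
    msr = k * ((rows - grand) ** 2).sum() / (n - 1)  # between targets
    msc = n * ((cols - grand) ** 2).sum() / (k - 1)  # between raters
    mse = ((Y - rows[:, None] - cols[None, :] + grand) ** 2).sum() \
          / ((n - 1) * (k - 1))                      # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Perfect agreement between two raters yields ICC = 1.
scores = np.array([[3.0, 3.0], [7.0, 7.0], [11.0, 11.0]])
print(icc_2_1(scores))  # 1.0
```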
DOE Office of Scientific and Technical Information (OSTI.GOV)
Onodera, Yasuhito; Bissell, Mina
Disclosed are methods in which glucose metabolism is correlated to oncogenesis through certain specific pathways; inhibition of certain enzymes is shown to interfere with oncogenic signaling, and measurement of certain enzyme levels is correlated with patient survival. The present methods comprise measuring the level of expression of at least one of the enzymes involved in glucose uptake or metabolism, wherein increased expression of the at least one enzyme relative to expression in a normal cell correlates with poor prognosis of disease in a patient. Preferably the genes whose expression level is measured include GLUT3, PFKP, GAPDH, ALDOC, LDHA and GFPT2. Also disclosed are embodiments directed towards downregulating the expression of some genes in glucose uptake and metabolism.
NASA Astrophysics Data System (ADS)
Bhatia, A. K.
2012-09-01
The P-wave hybrid theory of electron-hydrogen elastic scattering [Bhatia, Phys. Rev. A 85, 052708 (2012)] is applied to P-wave scattering from the He ion. In this method, both short-range and long-range correlations are included in the Schrödinger equation at the same time, by using a combination of a modified method of polarized orbitals and the optical potential formalism. The short-range-correlation functions are of Hylleraas type. It is found that the phase shifts are not significantly affected by the modification of the target function by a method similar to the method of polarized orbitals, and they are close to the phase shifts calculated earlier by Bhatia [Phys. Rev. A 69, 032714 (2004)]. This indicates that the correlation function is general enough to include the target distortion (polarization) in the presence of the incident electron. The important fact is that in the present calculation, to obtain similar results, only a 20-term correlation function is needed in the wave function compared to the 220-term wave function required in the above-mentioned calculation. Results for the phase shifts, obtained in the present hybrid formalism, are rigorous lower bounds to the exact phase shifts. The lowest P-wave resonances in the He atom and the hydrogen ion have also been calculated and compared with the results obtained using the Feshbach projection operator formalism [Bhatia and Temkin, Phys. Rev. A 11, 2018 (1975)] and also with the results of other calculations. It is concluded that accurate resonance parameters can be obtained by the present method, which has the advantage of including corrections due to neighboring resonances, bound states, and the continuum in which these resonances are embedded.
Matos, Erika; Jug, Borut; Vidergar Kralj, Barbara; Zakotnik, Branko
2017-06-01
Guidance on cardiac surveillance during adjuvant trastuzumab therapy remains elusive. The recommended methods are two-dimensional echocardiography (2D-ECHO) and electrocardiography-gated equilibrium radionuclide ventriculography (RNV). We assessed the correlation and possible specific merits of these two methods. In a prospective cohort study in patients undergoing post-anthracycline adjuvant trastuzumab therapy, clinical assessment, 2D-ECHO and RNV were performed at baseline and at 4, 8 and 12 months. The correlation between the two methods was estimated with Pearson's correlation coefficient and Bland-Altman analysis. Ninety-two patients (mean age 53.6±9.0 years) were included. The correlation of LVEF measured by ECHO and RNV at each time point was statistically insignificant. Values obtained by ECHO were on average higher (3.7% to 4.5%). A decline in LVEF of ≥10% from baseline was noticed in 19 (24.4%) and 13 (14.9%) patients with ECHO and RNV, respectively, but in only one patient by both methods simultaneously. A decline in LVEF of ≥10% to below 50% was found in three patients according to RNV and in none according to ECHO. There is only a weak correlation between ECHO and RNV measurements in the individual patient, and the results obtained by the two methods are not interchangeable. LVEF values determined by 2D-ECHO were on average higher than those determined by RNV. When a decline in LVEF requiring treatment interruption is detected by RNV in an asymptomatic patient, ECHO re-evaluation and referral to a cardiologist are advised.
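The Bland-Altman analysis used above summarizes method agreement by the mean difference (bias) and 95% limits of agreement. A sketch with fabricated LVEF-like values, constructed so ECHO reads about 4 points higher, matching the direction reported in the study:

```python
import numpy as np

def bland_altman(a, b):
    """Bias and 95% limits of agreement between two measurement methods."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

rng = np.random.default_rng(7)
rnv = rng.uniform(50, 70, size=40)               # fabricated RNV LVEF (%)
echo = rnv + 4.0 + rng.normal(0, 2.0, size=40)   # ECHO reads ~4 points higher
bias, lo, hi = bland_altman(echo, rnv)
```

Unlike a correlation coefficient, the limits of agreement expose a systematic offset between methods even when their readings rise and fall together, which is why the two analyses are complementary.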
ERIC Educational Resources Information Center
Cacy, Roselynn; Smith, Polly
This unit contains lesson plans designed to teach first aid skills to adults with limited language skills. The lesson plans were developed, using the Laubach literacy method, for a workplace literacy project in Anchorage, Alaska. The lesson plans, which are correlated with the book, "You Can Give First Aid," include conversational skills…
A method for predicting the noise levels of coannular jets with inverted velocity profiles
NASA Technical Reports Server (NTRS)
Russell, J. W.
1979-01-01
A coannular jet was equated with a single-stream equivalent jet with the same mass flow, energy, and thrust. The acoustic characteristics of the coannular jet were then related to the acoustic characteristics of the single jet. Forward flight effects were included by incorporating a forward-velocity exponent, a Doppler amplification factor, and a Strouhal frequency shift. Model test data, including 48 static cases and 22 wind tunnel cases, were used to evaluate the prediction method. For the static cases and the low-forward-velocity wind tunnel cases, the spectral mean square pressure correlation coefficients were generally greater than 90 percent, and the spectral sound pressure level standard deviations were generally less than 3 decibels. The correlation coefficient and the standard deviation were not affected by changes in equivalent jet velocity. Limitations of the prediction method are also presented.
Method for localizing and isolating an errant process step
Tobin, Jr., Kenneth W.; Karnowski, Thomas P.; Ferrell, Regina K.
2003-01-01
A method for localizing and isolating an errant process step includes the steps of retrieving, from a defect image database, a selection of images, each having image content similar to image content extracted from a query image depicting a defect, and each having corresponding defect characterization data. A conditional probability distribution of the defect having occurred in a particular process step is derived from the defect characterization data. The process step that is the most probable source of the defect according to the derived conditional probability distribution is then identified. A method for process-step defect identification includes the steps of characterizing anomalies in a product, the anomalies being detected by an imaging system. A query image of a product defect is then acquired. A particular characterized anomaly is then correlated with the query image. An errant process step is then associated with the correlated image.
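The retrieve-then-rank step can be sketched as a frequency estimate of P(step | defect) over the process-step labels attached to the retrieved similar defect images. The labels and data here are hypothetical, and the image-retrieval stage itself is assumed done:

```python
from collections import Counter

def most_probable_errant_step(retrieved_steps):
    """Estimate P(step | defect) from the process-step labels of retrieved
    similar defect images; return the highest-probability step."""
    counts = Counter(retrieved_steps)
    total = sum(counts.values())
    probs = {step: c / total for step, c in counts.items()}
    return max(probs, key=probs.get), probs

# Hypothetical labels attached to the retrieved similar-defect images:
steps = ["etch", "etch", "lithography", "etch", "deposition"]
step, probs = most_probable_errant_step(steps)
print(step, probs["etch"])  # etch 0.6
```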
Asymptotic confidence intervals for the Pearson correlation via skewness and kurtosis.
Bishara, Anthony J; Li, Jiexiang; Nash, Thomas
2018-02-01
When bivariate normality is violated, the default confidence interval of the Pearson correlation can be inaccurate. Two new methods were developed based on the asymptotic sampling distribution of Fisher's z' under the general case where bivariate normality need not be assumed. In Monte Carlo simulations, the most successful of these methods relied on the Vale and Maurelli (1983, Psychometrika, 48, 465) family to approximate a distribution via the marginal skewness and kurtosis of the sample data. In Simulation 1, this method provided more accurate confidence intervals of the correlation in non-normal data, at least as compared to no adjustment of the Fisher z' interval, or to adjustment via the sample joint moments. In Simulation 2, this approximate distribution method performed favourably relative to common non-parametric bootstrap methods, but its performance was mixed relative to an observed imposed bootstrap and two other robust methods (PM1 and HC4). No method was completely satisfactory. An advantage of the approximate distribution method, though, is that it can be implemented even without access to raw data if sample skewness and kurtosis are reported, making the method particularly useful for meta-analysis. Supporting information includes R code. © 2017 The British Psychological Society.
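The "default confidence interval" being adjusted is the standard Fisher z' interval; a minimal stdlib sketch, with the 95% normal quantile hard-coded rather than looked up:

```python
import math

def fisher_ci(r, n, z_crit=1.959964):
    """Default 95% CI for a Pearson correlation via Fisher's z' transform:
    z' = atanh(r), SE = 1/sqrt(n - 3), back-transform with tanh.
    Assumes bivariate normality -- exactly the assumption the paper relaxes."""
    z = math.atanh(r)
    se = 1.0 / math.sqrt(n - 3)
    return math.tanh(z - z_crit * se), math.tanh(z + z_crit * se)

lo, hi = fisher_ci(0.5, n=30)
print(round(lo, 3), round(hi, 3))  # interval containing r = 0.5
```

The two proposed methods keep this tanh back-transform but replace the fixed SE with one derived from the asymptotic distribution of z' under non-normality.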
Organizational culture associated with provider satisfaction
Scammon, Debra L.; Tabler, Jennifer; Brunisholz, Kimberly; Gren, Lisa H.; Kim, Jaewhan; Tomoaia-Cotisel, Andrada; Day, Julie; Farrell, Timothy W.; Waitzman, Norman J.; Magill, Michael K.
2014-01-01
Objectives Assess 1) provider satisfaction with specific elements of PCMH; 2) clinic organizational cultures; 3) associations between provider satisfaction and clinic culture. Methods Cross sectional study with surveys conducted in 2011 with providers and staff in 10 primary care clinics implementing their version of a PCMH: Care by Design™. Measures included the Organizational Culture Assessment Instrument (OCAI) and the American Medical Group Association provider satisfaction survey. Results Providers were most satisfied with quality of care (M=4.14; scale=1–5) and interactions with patients (M=4.12) and least satisfied with time spent working (M=3.47), paper work (M =3.45) and compensation (M=3.35). Culture profiles differed across clinics with family/clan and hierarchical the most common. Significant correlations (p ≤ 0.05) between provider satisfaction and clinic culture archetypes included: family/clan negatively correlated with administrative work; entrepreneurial positively correlated with the Time Spent Working dimension; market/rational positively correlated with how practices were facing economic and strategic challenges; and hierarchical negatively correlated with Relationships with Staff and Resource dimensions. Discussion Provider satisfaction is an important metric for assessing experiences with features of a PCMH model. Conclusions Identification of clinic-specific culture archetypes and archetype associations with provider satisfaction can help inform practice redesign. Attention to effective methods for changing organizational culture is recommended. PMID:24610184
Jin, Jae Hwa; Kim, Junho; Lee, Jeong-Yil; Oh, Young Min
2016-07-22
One of the main interests in petroleum geology and reservoir engineering is to quantify the porosity of reservoir beds as accurately as possible. A variety of direct measurements, including methods of mercury intrusion, helium injection and petrographic image analysis, have been developed; however, their application frequently yields equivocal results because these methods differ in their theoretical bases, means of measurement, and causes of measurement error. Here, we present a set of porosities measured in Berea Sandstone samples by multiple methods, in particular with the adoption of a new method using computed tomography and reference samples. The multiple porosimetric data show a marked correlation among the different methods, suggesting that these methods are compatible with each other. The new method of reference-sample-guided computed tomography is more effective than the previous methods when accompanying merits such as experimental convenience are taken into account.
A symmetric multivariate leakage correction for MEG connectomes
Colclough, G.L.; Brookes, M.J.; Smith, S.M.; Woolrich, M.W.
2015-01-01
Ambiguities in the source reconstruction of magnetoencephalographic (MEG) measurements can cause spurious correlations between estimated source time-courses. In this paper, we propose a symmetric orthogonalisation method to correct for these artificial correlations between a set of multiple regions of interest (ROIs). This process enables the straightforward application of network modelling methods, including partial correlation or multivariate autoregressive modelling, to infer connectomes, or functional networks, from the corrected ROIs. Here, we apply the correction to simulated MEG recordings of simple networks and to a resting-state dataset collected from eight subjects, before computing the partial correlations between power envelopes of the corrected ROI time-courses. We show accurate reconstruction of our simulated networks, and in the analysis of real MEG resting-state connectivity, we find dense bilateral connections within the motor and visual networks, together with longer-range direct fronto-parietal connections. PMID:25862259
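The core of a symmetric orthogonalisation can be sketched in a few lines: the closest matrix with orthonormal columns to a set of ROI time-courses (in the Frobenius-norm sense) is obtained from its SVD. The toy data below are our own illustration, not the paper's.

```python
import numpy as np

def symmetric_orthogonalise(X):
    """Closest matrix with orthonormal columns to X (Frobenius norm),
    via the orthogonal Procrustes / SVD solution."""
    U, _, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ Vt

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 3))   # toy ROI time-courses
X[:, 1] += 0.5 * X[:, 0]             # artificial "leakage" into ROI 1
Y = symmetric_orthogonalise(X)
print(np.allclose(Y.T @ Y, np.eye(3), atol=1e-8))  # zero residual correlation
```

Unlike pairwise regression-based leakage correction, this treats all ROIs symmetrically, so no ordering of regions is privileged.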
Analysis of network clustering behavior of the Chinese stock market
NASA Astrophysics Data System (ADS)
Chen, Huan; Mai, Yong; Li, Sai-Ping
2014-11-01
Random Matrix Theory (RMT) and the correlation-matrix decomposition method are employed to analyze the spatial structure of stock interactions and collective behavior in the Shanghai and Shenzhen stock markets in China. The results show that prominent sector structures exist, with subsectors including the Real Estate (RE), Commercial Banks (CB), Pharmaceuticals (PH), Distillers & Vintners (DV) and Steel (ST) industries. Furthermore, the RE and CB subsectors are mostly anti-correlated. We further study the temporal behavior of the dataset and find that while the sector structures are relatively stable from 2007 through 2013, the correlation between the real estate and commercial bank stocks shows large variations. By employing the ensemble empirical mode decomposition (EEMD) method, we show that this anti-correlation behavior is closely related to the monetary and austerity policies of the Chinese government during the period of study.
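The RMT step can be illustrated by comparing the eigenvalues of an empirical correlation matrix with the Marchenko-Pastur bounds expected for purely random returns; eigenvalues above the upper bound carry genuine structure such as the market mode and sector blocks. The simulated returns below are illustrative only.

```python
import numpy as np

def mp_bounds(T, N):
    """Marchenko-Pastur eigenvalue bounds for the correlation matrix of
    N i.i.d. return series observed over T samples (T > N)."""
    q = N / T
    return (1 - np.sqrt(q)) ** 2, (1 + np.sqrt(q)) ** 2

rng = np.random.default_rng(1)
T, N = 2000, 50
returns = rng.standard_normal((T, N))
returns += 0.3 * rng.standard_normal((T, 1))   # planted common "market mode"
C = np.corrcoef(returns, rowvar=False)
evals = np.linalg.eigvalsh(C)
lo, hi = mp_bounds(T, N)
deviating = evals[evals > hi]   # eigenvalues carrying genuine structure
print(len(deviating) >= 1)      # the market mode escapes the random bulk
```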
Validity and reliability of the session-RPE method for quantifying training load in karate athletes.
Tabben, M; Tourny, C; Haddad, M; Chaabane, H; Chamari, K; Coquart, J B
2015-04-24
To test the construct validity and reliability of the session rating of perceived exertion (sRPE) method by examining the relationship between RPE and physiological parameters (heart rate: HR and blood lactate concentration: [La-]) and the correlations between sRPE and two HR-based methods for quantifying internal training load (Banister's method and Edwards's method) during a karate training camp. Eighteen elite karate athletes: ten men (age: 24.2 ± 2.3 y, body mass: 71.2 ± 9.0 kg, body fat: 8.2 ± 1.3% and height: 178 ± 7 cm) and eight women (age: 22.6 ± 1.2 y, body mass: 59.8 ± 8.4 kg, body fat: 20.2 ± 4.4%, height: 169 ± 4 cm) were included in the study. During the training camp, subjects participated in eight karate training sessions covering three training modes (4 tactical-technical, 2 technical-development, and 2 randori training), during which RPE, HR, and [La-] were recorded. Significant correlations were found between RPE and physiological parameters (percentage of maximal HR: r = 0.75, 95% CI = 0.64-0.86; [La-]: r = 0.62, 95% CI = 0.49-0.75; P < 0.001). Moreover, individual sRPE was significantly correlated with both HR-based methods for quantifying internal training load (r = 0.65-0.95; P < 0.001). The sRPE method also showed high reliability across training sessions of the same intensity (Cronbach's α = 0.81, 95% CI = 0.61-0.92). This study demonstrates that the sRPE method is valid for quantifying internal training load and intensity in karate.
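For concreteness, the internal-load measures correlated here can be computed as follows: Foster's session-RPE multiplies the CR-10 rating by session duration, and Edwards's method weights time spent in five HRmax zones by zone number. The session values below are hypothetical, not taken from the study.

```python
# Hypothetical session values; illustrative, not taken from the study.
def srpe_load(rpe_cr10, duration_min):
    """Foster's session-RPE internal load: CR-10 rating x minutes."""
    return rpe_cr10 * duration_min

def edwards_trimp(minutes_in_zones):
    """Edwards TRIMP: minutes spent in five HRmax zones (50-60%, ...,
    90-100%), each weighted by its zone number 1..5."""
    return sum(w * m for w, m in zip(range(1, 6), minutes_in_zones))

print(srpe_load(7, 90))                     # 630 arbitrary units
print(edwards_trimp([10, 20, 30, 20, 10]))  # 270 arbitrary units
```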
Analysis strategies for longitudinal attachment loss data.
Beck, J D; Elter, J R
2000-02-01
The purpose of this invited review is to describe and discuss methods currently in use to quantify the progression of attachment loss in epidemiological studies of periodontal disease, and to make recommendations for specific analytic methods based upon the particular design of the study and structure of the data. The review concentrates on the definition of incident attachment loss (ALOSS) and its component parts; measurement issues including thresholds and regression to the mean; methods of accounting for longitudinal change, including changes in means, changes in proportions of affected sites, incidence density, the effect of tooth loss and reversals, and repeated events; statistical models of longitudinal change, including the incorporation of the time element, use of linear, logistic or Poisson regression or survival analysis, and statistical tests; site vs person level of analysis, including statistical adjustment for correlated data; and the strengths and limitations of ALOSS data. Examples from the Piedmont 65+ Dental Study are used to illustrate specific concepts. We conclude that incidence density is the preferred methodology for periodontal studies with more than one period of follow-up, and that studies not employing methods for dealing with complex samples, correlated data, and repeated measures do not take advantage of our current understanding of the site- and person-level variables important in periodontal disease and may generate biased results.
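Incidence density, the recommended measure, is simply the event count divided by accumulated person-time (or site-time) at risk; a toy calculation with made-up numbers:

```python
# Toy calculation with made-up numbers: incidence density of attachment
# loss (ALOSS) events per site-year of follow-up.
def incidence_density(events, person_time):
    """Events per unit of accumulated follow-up time at risk."""
    return events / person_time

# (ALOSS events, site-years of follow-up) for three hypothetical subjects
cohort = [(2, 5.0), (0, 3.0), (1, 4.0)]
events = sum(e for e, _ in cohort)
time_at_risk = sum(t for _, t in cohort)
print(incidence_density(events, time_at_risk))  # 0.25 events per site-year
```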
Wagner, Glenn J; Woldetsadik, Mahlet A; Beyeza-Kashesya, Jolly; Goggin, Kathy; Mindry, Deborah; Finocchario-Kessler, Sarah; Khanakwa, Sarah; Wanyenze, Rhoda K
2016-03-01
Many people living with HIV desire childbearing, but low cost safer conception methods (SCM) such as timed unprotected intercourse (TUI) and manual self-insemination (MSI) are rarely used. We examined awareness and attitudes towards SCM, and the correlates of these constructs among 400 HIV clients with fertility intentions in Uganda. Measures included awareness, self-efficacy, and motivation regarding SCM, as well as demographics, health management, partner and provider characteristics. Just over half knew that MSI (53%) and TUI (51%) reduced transmission risk during conception, and 15% knew of sperm washing and pre-exposure prophylaxis. In separate regression models for SCM awareness, motivation, and self-efficacy, nearly all independent correlates were related to the partner, including perceived willingness to use SCM, knowledge of respondent's HIV status, HIV-seropositivity, marriage and equality in decision making within the relationship. These findings suggest the importance of partners in promoting SCM use and partner inclusion in safer conception counselling.
Dong, M C; van Vleck, L D
1989-03-01
Variance and covariance components for milk yield, survival to second freshening, and calving interval in first lactation were estimated by REML with the expectation-maximization algorithm for an animal model which included herd-year-season effects. Cows without a calving interval but with milk yield were included. Each of the four data sets of 15 herds included about 3000 Holstein cows. Relationships across herds were ignored to enable inversion of the coefficient matrix of the mixed model equations. Quadratics and their expectations were accumulated herd by herd. Heritability of milk yield (.32) agrees with reports by the same methods. Heritabilities of survival (.11) and calving interval (.15) are slightly larger, and genetic correlations smaller, than results from different methods of estimation. The genetic correlation between milk yield and calving interval (.09) indicates that genetic ability to produce more milk is slightly associated with decreased fertility.
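The quoted genetic parameters follow directly from the estimated (co)variance components; the component values below are chosen only to reproduce the reported h2 of .32 and genetic correlation of .09, and are illustrative, not the study's actual estimates.

```python
# Illustrative variance components, scaled to reproduce the reported values.
def heritability(va, ve):
    """Narrow-sense heritability: additive genetic variance / total variance."""
    return va / (va + ve)

def genetic_correlation(cov_a, va1, va2):
    """Genetic correlation from the additive covariance and variances."""
    return cov_a / (va1 * va2) ** 0.5

print(round(heritability(3.2, 6.8), 2))                   # 0.32 (milk yield)
print(round(genetic_correlation(0.054, 3.2, 0.1125), 2))  # 0.09 (milk x calving interval)
```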
Application of abstract harmonic analysis to the high-speed recognition of images
NASA Technical Reports Server (NTRS)
Usikov, D. A.
1979-01-01
Methods are constructed for rapidly computing correlation functions using the theory of abstract harmonic analysis. The theory developed includes, as a particular case, the familiar Fourier-transform method for the correlation function, which makes it possible to find images independently of their translation in the plane. Two examples of the application of the general theory are the search for images independent of their rotation and scale, and the search for images independent of their translations and rotations in the plane.
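The Fourier special case, translation-invariant matching via the correlation theorem, is easy to sketch for 1-D "images"; the rotation- and scale-invariant extensions require transforms over the corresponding groups. The data below are our own toy example.

```python
import numpy as np

def translation_via_correlation(a, b):
    """Recover the circular shift between two 1-D 'images' from the peak
    of their cross-correlation, computed via the FFT correlation theorem."""
    corr = np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))).real
    return int(np.argmax(corr))

rng = np.random.default_rng(2)
a = rng.standard_normal(128)   # template "image"
b = np.roll(a, -5)             # translated copy
print(translation_via_correlation(a, b))  # 5: the shift is recovered
```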
WGCNA: an R package for weighted correlation network analysis.
Langfelder, Peter; Horvath, Steve
2008-12-29
Correlation networks are increasingly being used in bioinformatics applications. For example, weighted gene co-expression network analysis is a systems biology method for describing the correlation patterns among genes across microarray samples. Weighted correlation network analysis (WGCNA) can be used for finding clusters (modules) of highly correlated genes, for summarizing such clusters using the module eigengene or an intramodular hub gene, for relating modules to one another and to external sample traits (using eigengene network methodology), and for calculating module membership measures. Correlation networks facilitate network based gene screening methods that can be used to identify candidate biomarkers or therapeutic targets. These methods have been successfully applied in various biological contexts, e.g. cancer, mouse genetics, yeast genetics, and analysis of brain imaging data. While parts of the correlation network methodology have been described in separate publications, there is a need to provide a user-friendly, comprehensive, and consistent software implementation and an accompanying tutorial. The WGCNA R software package is a comprehensive collection of R functions for performing various aspects of weighted correlation network analysis. The package includes functions for network construction, module detection, gene selection, calculations of topological properties, data simulation, visualization, and interfacing with external software. Along with the R package we also present R software tutorials. While the methods development was motivated by gene expression data, the underlying data mining approach can be applied to a variety of different settings. The WGCNA package provides R functions for weighted correlation network analysis, e.g. co-expression network analysis of gene expression data. 
The R package along with its source code and additional material are freely available at http://www.genetics.ucla.edu/labs/horvath/CoexpressionNetwork/Rpackages/WGCNA.
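The first step of the WGCNA approach, soft-thresholded weighted adjacency (a_ij = |cor|^beta), is compact enough to sketch outside R; the package itself adds module detection, topological overlap, eigengenes, and much more. The expression matrix below is simulated.

```python
import numpy as np

def soft_threshold_adjacency(expr, beta=6):
    """WGCNA-style weighted adjacency a_ij = |cor(x_i, x_j)|**beta for a
    samples x genes expression matrix; beta is the soft-threshold power."""
    A = np.abs(np.corrcoef(expr, rowvar=False)) ** beta
    np.fill_diagonal(A, 0.0)   # no self-connections
    return A

rng = np.random.default_rng(3)
driver = rng.standard_normal((50, 1))   # shared module driver
expr = np.hstack([driver + 0.3 * rng.standard_normal((50, 4)),   # module genes
                  rng.standard_normal((50, 4))])                 # noise genes
A = soft_threshold_adjacency(expr)
conn = A.sum(axis=0)   # weighted (intramodular) connectivity
print(conn[:4].mean() > conn[4:].mean())  # module genes act as hubs
```

Raising correlations to the power beta suppresses weak, noise-level correlations while preserving strong ones, which is why co-expressed genes stand out as highly connected.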
Antifungal Susceptibility Testing of Fluconazole by Flow Cytometry Correlates with Clinical Outcome
Wenisch, Christoph; Moore, Caroline B.; Krause, Robert; Presterl, Elisabeth; Pichna, Peter; Denning, David W.
2001-01-01
Susceptibility testing of fungi by flow cytometry (also called fluorescence-activated cell sorting [FACS]) using vital staining with FUN-1 showed a good correlation with the standard M27-A procedure for assessing MICs. In this study we determined MICs for blood culture isolates from patients with candidemia by NCCLS M27-A and FACS methods and correlated the clinical outcome of these patients with in vitro antifungal resistance test results. A total of 24 patients with candidemia for whom one or more blood cultures were positive for a Candida sp. were included. Susceptibility testing was performed by NCCLS M27-A and FACS methods. The correlation of MICs (NCCLS M27-A and FACS) and clinical outcome was calculated. In 83% of the cases, the MICs of fluconazole determined by FACS were within 1 dilution of the MICs determined by the NCCLS M27-A method. For proposed susceptibility breakpoints, there was 100% agreement between the M27-A and FACS methods. In the FACS assay, a fluconazole MIC of <1 μg/ml was associated with cure (P < 0.001) whereas an MIC of ≥1 μg/ml was associated with death (P < 0.001). The M27-A-derived fluconazole MICs did not correlate with outcome (P = 1 and P = 0.133). PMID:11427554
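The "within 1 dilution" agreement behind the 83% figure compares MICs on a log2 scale; a sketch with hypothetical paired MIC values, not the study's isolates:

```python
import numpy as np

def within_one_dilution(mic_a, mic_b):
    """Fraction of isolates whose MICs from two methods agree within one
    two-fold dilution step (i.e. within 1 on a log2 scale)."""
    return np.mean(np.abs(np.log2(mic_a) - np.log2(mic_b)) <= 1)

# hypothetical paired fluconazole MICs (ug/ml) for six isolates
m27 = np.array([0.25, 0.5, 1.0, 2.0, 8.0, 64.0])
facs = np.array([0.25, 1.0, 1.0, 8.0, 4.0, 64.0])
print(within_one_dilution(m27, facs))  # 5 of 6 isolates agree
```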
Re-Evaluation of Event Correlations in Virtual California Using Statistical Analysis
NASA Astrophysics Data System (ADS)
Glasscoe, M. T.; Heflin, M. B.; Granat, R. A.; Yikilmaz, M. B.; Heien, E.; Rundle, J.; Donnellan, A.
2010-12-01
Fusing the results of simulation tools with statistical analysis methods has contributed to our better understanding of the earthquake process. In a previous study, we used a statistical method to investigate emergent phenomena in data produced by the Virtual California earthquake simulator. The analysis indicated that there were some interesting fault interactions and possible triggering and quiescence relationships between events. We have converted the original code from Matlab to python/C++ and are now evaluating data from the most recent version of Virtual California in order to analyze and compare any new behavior exhibited by the model. The Virtual California earthquake simulator can be used to study fault and stress interaction scenarios for realistic California earthquakes. The simulation generates a synthetic earthquake catalog of events with a minimum size of ~M 5.8 that can be evaluated using statistical analysis methods. Virtual California utilizes realistic fault geometries and a simple Amontons - Coulomb stick and slip friction law in order to drive the earthquake process by means of a back-slip model where loading of each segment occurs due to the accumulation of a slip deficit at the prescribed slip rate of the segment. Like any complex system, Virtual California may generate emergent phenomena unexpected even by its designers. In order to investigate this, we have developed a statistical method that analyzes the interaction between Virtual California fault elements and thereby determines whether events on any given fault element show correlated behavior. Our method examines events on one fault element and then determines whether there is an associated event within a specified time window on a second fault element. Note that an event in our analysis is defined as any time an element slips, rather than any particular “earthquake” along the entire fault length.
Results are then tabulated and differenced with an expected correlation, calculated by assuming a uniform distribution of events in time. We generate a correlation score matrix, which indicates how weakly or strongly correlated each fault element is to every other in the course of the VC simulation. We calculate correlation scores by summing the difference between the actual and expected correlations over all time window lengths and normalizing by the time window size. The correlation score matrix can focus attention on the most interesting areas for more in-depth analysis of event correlation vs. time. The previous study included 59 faults (639 elements), comprising all faults save the creeping section of the San Andreas. The analysis spanned 40,000 yrs of Virtual California-generated earthquake data. The newly revised VC model includes 70 faults, 8720 fault elements, and spans 110,000 years. Due to computational considerations, we will evaluate the elements comprising the southern California region, which our previous study indicated showed interesting fault interaction and event triggering/quiescence relationships.
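A simplified form of this correlation score can be sketched as follows: for each window length, count events on element A that have an associated event on element B within the window, subtract the count expected if B's events were uniform in time, and normalise by the window size. The event catalogues below are synthetic stand-ins for Virtual California output, not simulator data.

```python
import numpy as np

def correlation_score(t_a, t_b, span, windows):
    """Observed-minus-expected within-window event pairings between two
    fault elements, summed over window lengths and normalised by window
    size (a simplified form of the score described above)."""
    score = 0.0
    for w in windows:
        observed = sum(np.any(np.abs(t_b - t) <= w) for t in t_a)
        # expected hit probability if element-b events were uniform in time
        p_hit = min(1.0, len(t_b) * 2 * w / span)
        score += (observed - p_hit * len(t_a)) / w
    return score

rng = np.random.default_rng(4)
span = 10000.0
t_a = np.sort(rng.uniform(0, span, 60))     # slips on element A
t_trig = t_a + rng.uniform(0.0, 1.0, 60)    # element triggered by A
t_ind = np.sort(rng.uniform(0, span, 60))   # independent element
windows = [1.0, 2.0, 5.0]
print(correlation_score(t_a, t_trig, span, windows) >
      correlation_score(t_a, t_ind, span, windows))  # triggering is detected
```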
Method for identifying type I diabetes mellitus in humans
Metz, Thomas O [Kennewick, WA; Qian, Weijun [Richland, WA; Jacobs, Jon M [Pasco, WA; Smith, Richard D [Richland, WA
2011-04-12
A method and system for classifying subject populations utilizing predictive and diagnostic biomarkers for type I diabetes mellitus. The method includes determining the levels of a variety of markers within the serum or plasma of a target organism and correlating these levels with general populations as a screen for disease predisposition or for progressive monitoring of disease presence.
Ordering of the O-O stretching vibrational frequencies in ozone
NASA Technical Reports Server (NTRS)
Scuseria, Gustavo E.; Lee, Timothy J.; Scheiner, Andrew C.; Schaefer, Henry F., III
1989-01-01
The ordering of nu1 and nu3 for O3 is incorrectly predicted by most theoretical methods, including some very high-level ones. The first systematic single-reference electron correlation method to solve this problem is the coupled-cluster single and double excitation (CCSD) method. However, a relatively large basis set, triple zeta plus double polarization, is required. A comparison with other theoretical methods is made.
Molecular diagnosis of cystic fibrosis.
Shrimpton, Antony E
2002-05-01
A review of the current molecular diagnosis of cystic fibrosis, including an introduction to cystic fibrosis, gene function, phenotypic variation, who should be screened for which mutations, newborn and couple screening, quality assurance, phenotype-genotype correlation, methods and their limitations, options, statements, recommendations, useful Websites, and treatments.
Deep Correlated Holistic Metric Learning for Sketch-Based 3D Shape Retrieval.
Dai, Guoxian; Xie, Jin; Fang, Yi
2018-07-01
How to effectively retrieve desired 3D models with simple queries is a long-standing problem in the computer vision community. The model-based approach is straightforward but nontrivial, since people do not always have a suitable 3D query model at hand. Recently, wide-screen electronic devices have become prevalent in daily life, which makes sketch-based 3D shape retrieval a promising candidate due to its simplicity and efficiency. The main challenge of the sketch-based approach is the huge modality gap between sketches and 3D shapes. In this paper, we propose a novel deep correlated holistic metric learning (DCHML) method to mitigate the discrepancy between the sketch and 3D shape domains. The proposed DCHML trains two distinct deep neural networks (one for each domain) jointly, learning two deep nonlinear transformations that map features from both domains into a new feature space. The proposed loss, including a discriminative loss and a correlation loss, aims to increase the discrimination of features within each domain as well as the correlation between the different domains. In the new feature space, the discriminative loss minimizes the intra-class distance of the deep transformed features and maximizes their inter-class distance to a large margin within each domain, while the correlation loss focuses on mitigating the distribution discrepancy across the domains. Different from existing deep metric learning methods, which apply a loss only at the output layer, the proposed DCHML is trained with losses at both a hidden layer and the output layer, further improving performance by encouraging hidden-layer features to have the desired properties as well. The proposed method is evaluated on three benchmarks, including the 3D Shape Retrieval Contest 2013, 2014, and 2016 benchmarks, and the experimental results demonstrate its superiority over state-of-the-art methods.
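The correlation-loss idea can be illustrated with a toy numpy computation (our simplification; the actual DCHML objective also includes the discriminative margin term and is trained through deep networks): paired embeddings from the two domains should be maximally correlated.

```python
import numpy as np

def correlation_loss(F_a, F_b):
    """Negative correlation between paired embeddings from two domains
    (column-centred cosine); minimising it maximises the cross-domain
    correlation. A toy stand-in for the paper's correlation term."""
    a = F_a - F_a.mean(axis=0)
    b = F_b - F_b.mean(axis=0)
    return -(a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum())

rng = np.random.default_rng(8)
shared = rng.standard_normal((32, 16))                  # paired sketch/shape content
F_sketch = shared + 0.1 * rng.standard_normal((32, 16))
F_shape = shared + 0.1 * rng.standard_normal((32, 16))
print(correlation_loss(F_sketch, F_shape) < -0.9)       # well-correlated pairs
```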
Consequence Assessment Methods for Incidents Involving Releases From Liquefied Natural Gas Carriers
2004-05-13
...the downwind direction. The Thomas (1965) correlation is used to calculate flame length. Flame tilt is estimated using an empirical correlation from... follows: From TNO (1997): • Thomas (1963) correlation for flame length • For an experimental LNG pool fire of 16.8-m diameter, a mass burning flux of... m, flame length ranged from 50 to 78 m, and tilt angle from 27 to 35 degrees. From Rew (1996): • Work included a review of recent developments in...
One-year test-retest reliability of intrinsic connectivity network fMRI in older adults
Guo, Cong C.; Kurth, Florian; Zhou, Juan; Mayer, Emeran A.; Eickhoff, Simon B; Kramer, Joel H.; Seeley, William W.
2014-01-01
“Resting-state” or task-free fMRI can assess intrinsic connectivity network (ICN) integrity in health and disease, suggesting a potential for use of these methods as disease-monitoring biomarkers. Numerous analytical options are available, including model-driven ROI-based correlation analysis and model-free, independent component analysis (ICA). High test-retest reliability will be a necessary feature of a successful ICN biomarker, yet available reliability data remains limited. Here, we examined ICN fMRI test-retest reliability in 24 healthy older subjects scanned roughly one year apart. We focused on the salience network, a disease-relevant ICN not previously subjected to reliability analysis. Most ICN analytical methods proved reliable (intraclass coefficients > 0.4) and could be further improved by wavelet analysis. Seed-based ROI correlation analysis showed high map-wise reliability, whereas graph theoretical measures and temporal concatenation group ICA produced the most reliable individual unit-wise outcomes. Including global signal regression in ROI-based correlation analyses reduced reliability. Our study provides a direct comparison between the most commonly used ICN fMRI methods and potential guidelines for measuring intrinsic connectivity in aging control and patient populations over time. PMID:22446491
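The reliability criterion used here, an intraclass correlation coefficient above 0.4, can be computed for an n-subjects × k-sessions matrix with the two-way random-effects, absolute-agreement ICC(2,1) of Shrout and Fleiss; the two-session data below are simulated, not the study's.

```python
import numpy as np

def icc_2_1(Y):
    """Two-way random-effects, absolute-agreement ICC(2,1) of Shrout &
    Fleiss for an n-subjects x k-sessions matrix Y."""
    n, k = Y.shape
    grand = Y.mean()
    ms_r = k * np.sum((Y.mean(axis=1) - grand) ** 2) / (n - 1)   # rows (subjects)
    ms_c = n * np.sum((Y.mean(axis=0) - grand) ** 2) / (k - 1)   # columns (sessions)
    resid = Y - Y.mean(axis=1, keepdims=True) - Y.mean(axis=0, keepdims=True) + grand
    ms_e = np.sum(resid ** 2) / ((n - 1) * (k - 1))              # residual
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

rng = np.random.default_rng(7)
subject = rng.standard_normal((24, 1))             # stable subject effect
Y = subject + 0.5 * rng.standard_normal((24, 2))   # two sessions, a year apart
print(icc_2_1(Y) > 0.4)   # clears the reliability criterion above
```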
Sly, Krystal L; Conboy, John C
2017-06-01
A novel application of second harmonic correlation spectroscopy (SHCS) for the direct determination of molecular adsorption and desorption kinetics at a surface is discussed in detail. The surface-specific nature of second harmonic generation (SHG) provides an efficient means to determine the kinetic rates of adsorption and desorption of molecular species at an interface without interference from bulk diffusion, which is a significant limitation of fluorescence correlation spectroscopy (FCS). The underlying principles of SHCS for the determination of surface binding kinetics are presented, including the roles of optical coherence and optical heterodyne mixing. These properties of SHCS are extremely advantageous and lead to an increase in the signal-to-noise ratio (S/N) of the correlation data, increasing the sensitivity of the technique. The influence of experimental parameters, including the uniformity of the TEM00 laser beam, the overall photon flux, and the collection time, is also discussed; these parameters are shown to significantly affect the S/N of the correlation data. Second harmonic correlation spectroscopy is a powerful, surface-specific, and label-free alternative to other correlation spectroscopic methods for examining surface binding kinetics.
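Whatever the contrast mechanism, correlation spectroscopy ultimately reduces to estimating the normalised autocorrelation of an intensity trace; below is a generic estimator on a simulated trace (not the authors' SHCS pipeline, which additionally handles coherent heterodyne terms).

```python
import numpy as np

def normalised_autocorrelation(I, max_lag):
    """Generic correlation-spectroscopy estimator:
    G(tau) = <dI(t) dI(t+tau)> / <I>**2 for an intensity trace I."""
    dI = I - I.mean()
    n = len(I)
    G = [np.mean(dI[: n - k] * dI[k:]) for k in range(max_lag)]
    return np.array(G) / I.mean() ** 2

rng = np.random.default_rng(6)
# toy trace: slowly decorrelating surface population plus fast noise
x = np.zeros(5000)
for i in range(1, 5000):
    x[i] = 0.95 * x[i - 1] + rng.standard_normal()
I = 100.0 + x + 0.5 * rng.standard_normal(5000)
G = normalised_autocorrelation(I, 50)
print(G[0] > G[40])   # correlation decays with lag
```

The decay of G(tau) encodes the surface residence kinetics; fitting it to a kinetic model yields the adsorption and desorption rates.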
Out-of-plane ultrasonic velocity measurement
Hall, M.S.; Brodeur, P.H.; Jackson, T.G.
1998-07-14
A method for improving the accuracy of measuring the velocity and time of flight of ultrasonic signals through moving web-like materials such as paper, paperboard and the like, includes a pair of ultrasonic transducers disposed on opposing sides of a moving web-like material. In order to provide acoustical coupling between the transducers and the web-like material, the transducers are disposed in fluid-filled wheels. Errors due to variances in the wheel thicknesses about their circumference which can affect time of flight measurements and ultimately the mechanical property being tested are compensated by averaging the ultrasonic signals for a predetermined number of revolutions. The invention further includes a method for compensating for errors resulting from the digitization of the ultrasonic signals. More particularly, the invention includes a method for eliminating errors known as trigger jitter inherent with digitizing oscilloscopes used to digitize the signals for manipulation by a digital computer. In particular, rather than cross-correlate ultrasonic signals taken during different sample periods as is known in the art in order to determine the time of flight of the ultrasonic signal through the moving web, a pulse echo box is provided to enable cross-correlation of predetermined transmitted ultrasonic signals with predetermined reflected ultrasonic or echo signals during the sample period. By cross-correlating ultrasonic signals in the same sample period, the error associated with trigger jitter is eliminated. 20 figs.
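The jitter-free measurement can be sketched as a cross-correlation between the transmitted pulse and its echo captured in the same sample period: any common trigger offset shifts both signals equally and cancels in the correlation lag. The waveform parameters below are illustrative, not from the patent.

```python
import numpy as np

def time_of_flight(transmitted, received, fs):
    """Time of flight from the lag of the cross-correlation peak between
    two signals captured in the same sample period, so that any common
    trigger offset cancels."""
    corr = np.correlate(received, transmitted, mode="full")
    lag = np.argmax(corr) - (len(transmitted) - 1)
    return lag / fs

fs = 1.0e6                  # 1 MHz digitiser (illustrative)
t = np.arange(256) / fs
pulse = np.exp(-((t - 40e-6) ** 2) / (2 * (5e-6) ** 2)) * np.sin(2e5 * 2 * np.pi * t)
echo = np.roll(pulse, 30)   # replica delayed by 30 samples
print(time_of_flight(pulse, echo, fs))  # 3e-05 s
```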
Hadamard multimode optical imaging transceiver
Cooke, Bradly J; Guenther, David C; Tiee, Joe J; Kellum, Mervyn J; Olivas, Nicholas L; Weisse-Bernstein, Nina R; Judd, Stephen L; Braun, Thomas R
2012-10-30
Disclosed is a method and system for simultaneously acquiring and producing results for multiple image modes using a common sensor without optical filtering, scanning, or other moving parts. The system and method utilize the Walsh-Hadamard correlation detection process (e.g., functions/matrix) to provide an all-binary structure that permits seamless bridging between analog and digital domains. An embodiment may capture an incoming optical signal at an optical aperture, convert the optical signal to an electrical signal, pass the electrical signal through a Low-Noise Amplifier (LNA) to create an LNA signal, pass the LNA signal through one or more correlators where each correlator has a corresponding Walsh-Hadamard (WH) binary basis function, calculate a correlation output coefficient for each correlator as a function of the corresponding WH binary basis function in accordance with Walsh-Hadamard mathematical principles, digitize each of the correlation output coefficient by passing each correlation output coefficient through an Analog-to-Digital Converter (ADC), and performing image mode processing on the digitized correlation output coefficients as desired to produce one or more image modes. Some, but not all, potential image modes include: multi-channel access, temporal, range, three-dimensional, and synthetic aperture.
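Because Walsh-Hadamard basis functions take only the values ±1, each correlator output is a signed sum of samples, and the full coefficient set is losslessly invertible; a sketch using the Sylvester construction (illustrative signal, not instrument data):

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of the n x n Walsh-Hadamard matrix
    (n a power of two); all entries are +/-1, so each 'correlator'
    reduces to a signed sum of samples."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

rng = np.random.default_rng(5)
H = hadamard(8)
signal = rng.standard_normal(8)   # stand-in for digitised LNA samples
coeffs = H @ signal               # one output coefficient per correlator
recovered = H.T @ coeffs / 8      # H is orthogonal up to the factor n
print(np.allclose(recovered, signal))  # the coefficient set is lossless
```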
Correlative Super-Resolution Microscopy: New Dimensions and New Opportunities.
Hauser, Meghan; Wojcik, Michal; Kim, Doory; Mahmoudi, Morteza; Li, Wan; Xu, Ke
2017-06-14
Correlative microscopy, the integration of two or more microscopy techniques performed on the same sample, produces results that emphasize the strengths of each technique while offsetting their individual weaknesses. Light microscopy has historically been a central method in correlative microscopy due to its widespread availability, compatibility with hydrated and live biological samples, and excellent molecular specificity through fluorescence labeling. However, conventional light microscopy can only achieve a resolution of ∼300 nm, undercutting its advantages in correlations with higher-resolution methods. The rise of super-resolution microscopy (SRM) over the past decade has drastically improved the resolution of light microscopy to ∼10 nm, thus creating exciting new opportunities and challenges for correlative microscopy. Here we review how these challenges are addressed to effectively correlate SRM with other microscopy techniques, including light microscopy, electron microscopy, cryomicroscopy, atomic force microscopy, and various forms of spectroscopy. Though we emphasize biological studies, we also discuss the application of correlative SRM to materials characterization and single-molecule reactions. Finally, we point out current limitations and discuss possible future improvements and advances. We thus demonstrate how a correlative approach adds new dimensions of information and provides new opportunities in the fast-growing field of SRM.
Almaqrami, Bushra-Sufyan; Alhammadi, Maged-Sultan
2018-01-01
Background The objective of this study was to analyse three-dimensionally the reliability and correlation of angular and linear measurements in the assessment of anteroposterior skeletal discrepancy. Material and Methods In this retrospective cross-sectional study, a sample of 213 subjects was three-dimensionally analysed from cone-beam computed tomography scans. The sample was divided according to the three-dimensional measurement of the anteroposterior relation (ANB angle) into three groups (skeletal Class I, Class II and Class III). The anteroposterior cephalometric indicators were measured on volumetric images using Anatomage software (InVivo5.2). These measurements included three angular and seven linear measurements. Cross-tabulations were performed to correlate the ANB angle with each method. The Intra-class Correlation Coefficient (ICC) test was applied to assess the difference between the two reliability measurements. A P value of < 0.05 was considered significant. Results There was a statistically significant (P<0.05) agreement between all methods used, with variability in the assessment of different anteroposterior relations. The highest correlation was between ANB and DSOJ (0.913), strong correlations with AB/FH, AB/SN, MM bisector, AB/PP and Wits appraisal (0.896, 0.890, 0.878, 0.867, and 0.858, respectively), moderate with AD/SN and Beta angle (0.787 and 0.760), and weak correlation with the corrected ANB angle (0.550). Conclusions Conjunctive usage of the ANB angle with DSOJ, AB/FH, AB/SN, MM bisector, AB/PP and Wits appraisal in 3D cephalometric analysis provides a more reliable and valid indicator of the skeletal anteroposterior relationship.
Clinical relevance: Most of the orthodontic literature depends on a single method (ANB), despite its drawbacks, for the assessment of skeletal discrepancy, which is a cardinal factor for proper treatment planning. This study assessed three-dimensionally the degree of correlation between all available methods, making clinical judgement more accurate by basing it on more than one method of assessment. Key words: Anteroposterior relationships, ANB angle, Three-dimension, CBCT. PMID:29750096
A robust bayesian estimate of the concordance correlation coefficient.
Feng, Dai; Baumgartner, Richard; Svetnik, Vladimir
2015-01-01
A need for assessment of agreement arises in many situations including statistical biomarker qualification or assay or method validation. Concordance correlation coefficient (CCC) is one of the most popular scaled indices reported in evaluation of agreement. Robust methods for CCC estimation currently present an important statistical challenge. Here, we propose a novel Bayesian method of robust estimation of CCC based on multivariate Student's t-distribution and compare it with its alternatives. Furthermore, we extend the method to practically relevant settings, enabling incorporation of confounding covariates and replications. The superiority of the new approach is demonstrated using simulation as well as real datasets from biomarker application in electroencephalography (EEG). This biomarker is relevant in neuroscience for development of treatments for insomnia.
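For orientation, the classical (non-robust, non-Bayesian) sample CCC of Lin — the quantity the paper robustifies — can be computed in a few lines. A minimal sketch with illustrative data, not the authors' EEG biomarker:

```python
import numpy as np

def concordance_ccc(x, y):
    # Lin's sample concordance correlation coefficient: rewards both high
    # correlation and agreement in location and scale
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()                    # biased (1/n) variances
    sxy = ((x - mx) * (y - my)).mean()
    return 2 * sxy / (vx + vy + (mx - my) ** 2)

method_a = [1.0, 2.0, 3.0, 4.0]
method_b = [1.1, 2.1, 2.9, 4.2]
ccc = concordance_ccc(method_a, method_b)
```

Unlike Pearson's r, the CCC is penalized by any systematic shift between the two methods, which is why it is preferred for agreement studies.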
Multifractal analysis of the Korean agricultural market
NASA Astrophysics Data System (ADS)
Kim, Hongseok; Oh, Gabjin; Kim, Seunghwan
2011-11-01
We have studied the long-term memory effects of the Korean agricultural market using the detrended fluctuation analysis (DFA) method. In general, the return time series of various financial data, including stock indices, foreign exchange rates, and commodity prices, are uncorrelated in time, while the volatility time series are strongly correlated. However, we found that the return time series of Korean agricultural commodity prices are anti-correlated in time, while the volatility time series are correlated. The n-point correlations of time series were also examined, and it was found that a multifractal structure exists in Korean agricultural market prices.
A non-linear regression method for CT brain perfusion analysis
NASA Astrophysics Data System (ADS)
Bennink, E.; Oosterbroek, J.; Viergever, M. A.; Velthuis, B. K.; de Jong, H. W. A. M.
2015-03-01
CT perfusion (CTP) imaging allows for rapid diagnosis of ischemic stroke. Generation of perfusion maps from CTP data usually involves deconvolution algorithms providing estimates for the impulse response function in the tissue. We propose the use of a fast non-linear regression (NLR) method that we postulate has similar performance to the current academic state-of-the-art method (bSVD), but that has some important advantages, including the estimation of vascular permeability, improved robustness to tracer delay, and very few tuning parameters, all of which are important in stroke assessment. The aim of this study is to evaluate the fast NLR method against bSVD and a commercial clinical state-of-the-art method. The three methods were tested against a published digital perfusion phantom earlier used to illustrate the superiority of bSVD. In addition, the NLR and clinical methods were also tested against bSVD on 20 clinical scans. Pearson correlation coefficients were calculated for each of the tested methods. All three methods showed high correlation coefficients (>0.9) with the ground truth in the phantom. With respect to the clinical scans, the NLR perfusion maps showed higher correlation with bSVD than the perfusion maps from the clinical method. Furthermore, the perfusion maps showed that the fast NLR estimates are robust to tracer delay. In conclusion, the proposed fast NLR method provides a simple and flexible way of estimating perfusion parameters from CT perfusion scans, with high correlation coefficients. This suggests that it could be a better alternative to the current clinical and academic state-of-the-art methods.
Pairing phase diagram of three holes in the generalized Hubbard model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Navarro, O.; Espinosa, J.E.
Investigations of high-Tc superconductors suggest that electronic correlation may play a significant role in the formation of pairs. Although the main interest is in the physics of two-dimensional highly correlated electron systems, one-dimensional models related to high-temperature superconductivity are very popular due to the conjecture that the 1D and 2D variants of certain models have common aspects. Within the models for correlated electron systems that attempt to capture the essential physics of high-temperature superconductors and parent compounds, the Hubbard model is one of the simplest. Here, the pairing problem of a three-electron system has been studied by using a real-space method and the generalized Hubbard Hamiltonian. This method includes the correlated hopping interactions as an extension of the previously proposed mapping method, and is based on mapping the correlated many-body problem onto an equivalent site- and bond-impurity tight-binding one in a higher-dimensional space, where the problem was solved in a non-perturbative way. In a linear chain, the authors analyzed the pairing phase diagram of three correlated holes for different values of the Hamiltonian parameters. For some values of the hopping parameters they obtain an analytical solution for all kinds of interactions.
Testing alternative ground water models using cross-validation and other methods
Foglia, L.; Mehl, S.W.; Hill, M.C.; Perona, P.; Burlando, P.
2007-01-01
Many methods can be used to test alternative ground water models. Of concern in this work are methods able to (1) rank alternative models (also called model discrimination) and (2) identify observations important to parameter estimates and predictions (equivalent to the purpose served by some types of sensitivity analysis). Some of the measures investigated are computationally efficient; others are computationally demanding. The latter are generally needed to account for model nonlinearity. The efficient model discrimination methods investigated include the information criteria: the corrected Akaike information criterion, Bayesian information criterion, and generalized cross-validation. The efficient sensitivity analysis measures used are dimensionless scaled sensitivity (DSS), composite scaled sensitivity, and parameter correlation coefficient (PCC); the other statistics are DFBETAS, Cook's D, and observation-prediction statistic. Acronyms are explained in the introduction. Cross-validation (CV) is a computationally intensive nonlinear method that is used for both model discrimination and sensitivity analysis. The methods are tested using up to five alternative parsimoniously constructed models of the ground water system of the Maggia Valley in southern Switzerland. The alternative models differ in their representation of hydraulic conductivity. A new method for graphically representing CV and sensitivity analysis results for complex models is presented and used to evaluate the utility of the efficient statistics. The results indicate that for model selection, the information criteria produce similar results at much smaller computational cost than CV. For identifying important observations, the only obviously inferior linear measure is DSS; the poor performance was expected because DSS does not include the effects of parameter correlation and PCC reveals large parameter correlations. © 2007 National Ground Water Association.
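The efficient information criteria can be illustrated with a toy comparison. The residual sums of squares below are hypothetical, not the Maggia Valley results; for least-squares models the criteria reduce to simple formulas in the residual sum of squares (RSS), sample size n, and parameter count k:

```python
import numpy as np

def aicc(rss, n, k):
    # Corrected Akaike information criterion for a least-squares model
    aic = n * np.log(rss / n) + 2 * k
    return aic + 2 * k * (k + 1) / (n - k - 1)   # small-sample correction

def bic(rss, n, k):
    # Bayesian information criterion: heavier penalty per parameter for large n
    return n * np.log(rss / n) + k * np.log(n)

# Hypothetical fits: model B adds a parameter for only a small RSS gain
n = 50
model_a = {"AICc": aicc(12.0, n, 3), "BIC": bic(12.0, n, 3)}
model_b = {"AICc": aicc(11.5, n, 4), "BIC": bic(11.5, n, 4)}
```

Both criteria prefer the simpler model here, mirroring the paper's point that such rankings come at a far smaller computational cost than cross-validation.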
Goicoechea, Héctor C; Olivieri, Alejandro C; Tauler, Romà
2010-03-01
Correlation constrained multivariate curve resolution-alternating least-squares is shown to be a feasible method for processing first-order instrumental data and achieve analyte quantitation in the presence of unexpected interferences. Both for simulated and experimental data sets, the proposed method could correctly retrieve the analyte and interference spectral profiles and perform accurate estimations of analyte concentrations in test samples. Since no information concerning the interferences was present in calibration samples, the proposed multivariate calibration approach including the correlation constraint facilitates the achievement of the so-called second-order advantage for the analyte of interest, which is known to be present for more complex higher-order richer instrumental data. The proposed method is tested using a simulated data set and two experimental data systems, one for the determination of ascorbic acid in powder juices using UV-visible absorption spectral data, and another for the determination of tetracycline in serum samples using fluorescence emission spectroscopy.
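The correlation constraint itself admits a compact sketch. Assuming, as an illustration rather than the authors' exact implementation, that after each ALS iteration the resolved analyte concentrations are regressed against the nominal calibration values and mapped through the fitted line:

```python
import numpy as np

def correlation_constraint(c_resolved, c_known, cal_idx):
    # Regress the ALS-resolved analyte concentrations onto the nominal
    # calibration values, then map the whole profile through the fitted line,
    # removing the scale/offset ambiguity of the resolution
    c_resolved = np.asarray(c_resolved, float)
    slope, intercept = np.polyfit(c_resolved[cal_idx], c_known, 1)
    return slope * c_resolved + intercept

# Resolved profile off by an arbitrary scale and offset; last entry is a
# test sample of unknown concentration (all numbers invented)
c_resolved = np.array([0.6, 1.1, 1.6, 2.1, 1.35])
c_known = np.array([1.0, 2.0, 3.0, 4.0])        # calibration concentrations
c_updated = correlation_constraint(c_resolved, c_known, [0, 1, 2, 3])
```

After the constraint, calibration samples match their nominal values and the test sample lands on the calibration line, which is how quantitation is achieved despite uncalibrated interferences.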
Nolan, Jim
2014-01-01
This paper suggests a novel clustering method for analyzing the National Incident-Based Reporting System (NIBRS) data, which includes determining the correlations among different crime types, developing a likelihood index for crimes to occur in a jurisdiction, and clustering jurisdictions based on crime type. The method was tested using the 2005 assault data from 121 jurisdictions in Virginia as a test case. The analyses of these data show that some different crime types are correlated and that some crime parameters are correlated with different crime types. The analyses also show that certain jurisdictions within Virginia share certain crime patterns. This information assists with constructing a pattern for a specific crime type and can be used to determine whether a jurisdiction may be more likely to see this type of crime occur in its area. PMID:24778585
Correlation Characterization of Particles in Volume Based on Peak-to-Basement Ratio
Vovk, Tatiana A.; Petrov, Nikolay V.
2017-01-01
We propose a new express method for the correlation characterization of particles suspended in the volume of an optically transparent medium. It utilizes the inline digital holography technique to obtain two images of adjacent layers from the investigated volume, with subsequent matching of the cross-correlation function peak-to-basement ratio calculated for these images. After preliminary calibration via numerical simulation, the proposed method allows one to quickly determine parameters of the particle distribution and evaluate their concentration. The experimental verification was carried out for two types of physical suspensions. Our method can be applied in environmental and biological research, including analysis tools in flow cytometry devices, express characterization of particles and biological cells in air and water media, and various technical tasks, e.g., the study of scattering objects or rapid determination of cutting-tool conditions in mechanisms. PMID:28252020
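The peak-to-basement matching step can be sketched as follows; the FFT-based correlation and the 64×64 random "layers" are an illustration, not the authors' holographic reconstruction pipeline. Similar particle fields give a much higher ratio than unrelated ones:

```python
import numpy as np

def peak_to_basement(img_a, img_b):
    # Circular cross-correlation via FFT; the peak height relative to the
    # correlation "basement" (floor level) measures how similar the two
    # layer images are
    a = img_a - img_a.mean()
    b = img_b - img_b.mean()
    cc = np.abs(np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))))
    return cc.max() / np.median(cc)

rng = np.random.default_rng(42)
layer = rng.standard_normal((64, 64))     # stand-in for a reconstructed layer
other = rng.standard_normal((64, 64))     # unrelated particle field
ratio_same = peak_to_basement(layer, layer)
ratio_diff = peak_to_basement(layer, other)
```

In the paper, this ratio (calibrated by simulation) is inverted to estimate particle distribution parameters and concentration.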
The High School & Beyond Data Set: Academic Self-Concept Measures.
ERIC Educational Resources Information Center
Strein, William
A series of confirmatory factor analyses using both LISREL VI (maximum likelihood method) and LISCOMP (weighted least squares method using covariance matrix based on polychoric correlations) and including cross-validation on independent samples were applied to items from the High School and Beyond data set to explore the measurement…
Beyond Kohn-Sham Approximation: Hybrid Multistate Wave Function and Density Functional Theory.
Gao, Jiali; Grofe, Adam; Ren, Haisheng; Bao, Peng
2016-12-15
A multistate density functional theory (MSDFT) is presented in which the energies and densities for the ground and excited states are treated on the same footing using multiconfigurational approaches. The method can be applied to systems with strong correlation and to correctly describe the dimensionality of the conical intersections between strongly coupled dissociative potential energy surfaces. A dynamic-then-static framework for treating electron correlation is developed to first incorporate dynamic correlation into contracted state functions through block-localized Kohn-Sham density functional theory (KSDFT), followed by diagonalization of the effective Hamiltonian to include static correlation. MSDFT can be regarded as a hybrid of wave function theory (WFT) and density functional theory. The method is built on and makes use of the current approximate density functionals developed in KSDFT, yet it retains its computational efficiency to treat strongly correlated systems that are problematic for KSDFT but too large for accurate WFT. The results presented in this work show that MSDFT can be applied to photochemical processes involving conical intersections.
Long-term Follow-up of Acute Isolated Accommodation Insufficiency
Lee, Jung Jin; Baek, Seung-Hee
2013-01-01
Purpose To define the long-term results of accommodation insufficiency and to investigate the correlation between accommodation insufficiency and other factors including near point of convergence (NPC), age, and refractive errors. Methods From January 2008 to December 2009, 11 patients with acute near vision disturbance and remote near point of accommodation (NPA) were evaluated. Full ophthalmologic examinations, including best corrected visual acuity, manifest refraction and prism cover tests, were performed. Accommodation ability was measured by NPA using the push-up method. We compared accommodation insufficiency and factors including age, refractive errors and NPC. We also investigated the recovery from loss of accommodation in patients. Results Mean age of patients was 20 years (range, 9 to 34 years). Five of the 11 patients were female. Mean refractive error was -0.6 diopters (range, -3.5 to +0.25 diopters) and 8 of 11 patients (73%) had emmetropia (+0.50 to -0.50 diopters). No abnormalities were found in brain imaging tests. Refractive errors were not correlated with NPA or NPC (rho = 0.148, p = 0.511; rho = 0.319, p = 0.339; respectively). The correlation between age and NPA was not significant (rho = -0.395, p = 0.069). However, the correlation between age and NPC was negative (rho = -0.508, p = 0.016). Three of 11 patients were lost to follow-up, and 6 of the remaining 8 patients had permanent insufficiency of accommodation. Conclusions Accommodation insufficiency is most common in emmetropia; however, refractive errors and age are not correlated with accommodation insufficiency. Dysfunction of accommodation can be permanent in isolated accommodation insufficiency. PMID:23543051
Characteristic analysis on UAV-MIMO channel based on normalized correlation matrix.
Gao, Xi jun; Chen, Zi li; Hu, Yong Jiang
2014-01-01
Based on the three-dimensional GBSBCM (geometrically based double bounce cylinder model) channel model of MIMO for unmanned aerial vehicle (UAV), the simple form of UAV space-time-frequency channel correlation function which includes the LOS, SPE, and DIF components is presented. By the methods of channel matrix decomposition and coefficient normalization, the analytic formula of UAV-MIMO normalized correlation matrix is deduced. This formula can be used directly to analyze the condition number of UAV-MIMO channel matrix, the channel capacity, and other characteristic parameters. The simulation results show that this channel correlation matrix can be applied to describe the changes of UAV-MIMO channel characteristics under different parameter settings comprehensively. This analysis method provides a theoretical basis for improving the transmission performance of UAV-MIMO channel. The development of MIMO technology shows practical application value in the field of UAV communication.
Are peer specialists happy on the job?
Jenkins, Sarah; Chenneville, Tiffany; Salnaitis, Christina
2018-03-01
This study was designed to examine the impact of role clarity and job training on job satisfaction among peer specialists. A 3-part survey assessing job training, job satisfaction, and role clarity was administered online to 195 peer specialists who are members of the International Association of Peer Specialists. Data were analyzed using descriptive statistics and correlational analyses, including multiple linear regressions and analysis of variance. Self-study and online training methods were negatively correlated with job satisfaction, while job shadowing was positively correlated with job satisfaction. Role clarity was positively correlated with job satisfaction and job training satisfaction, as well as with job shadowing and one-on-one training. The use of self-study and online training for peer specialists is contraindicated by the current findings, which suggest the need to utilize job shadowing or training methods that allow for personal interaction between peer specialists and their colleagues. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
The Index cohesive effect on stock market correlations
NASA Astrophysics Data System (ADS)
Shapira, Y.; Kenett, D. Y.; Ben-Jacob, E.
2009-12-01
We present empirical examination and reassessment of the functional role of the market Index, using datasets of stock returns for eight years, by analyzing and comparing the results for two very different markets: 1) the New York Stock Exchange (NYSE), representing a large, mature market, and 2) the Tel Aviv Stock Exchange (TASE), representing a small, young market. Our method includes special collective (holographic) analysis of stock-Index correlations, of nested stock correlations (including the Index as an additional ghost stock) and of bare stock correlations (after subtraction of the Index return from the stock returns). Our findings verify and strongly substantiate the assumed functional role of the Index in the financial system as a cohesive force between stocks, i.e., the correlations between stocks are largely due to the strong correlation between each stock and the Index (the adhesive effect), rather than to inter-stock dependencies. The Index adhesive and cohesive effects on the market correlations in the two markets are presented and compared in a reduced 3-D principal component space of the correlation matrices (holographic presentation). The results provide new insights into the interplay between an index and its constituent stocks in TASE-like versus NYSE-like markets.
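The "bare correlation" step (removing the Index contribution from each stock before correlating) can be sketched as below; the synthetic returns and the regression-based subtraction are illustrative assumptions, not the authors' exact holographic procedure:

```python
import numpy as np

def bare_correlations(returns, index):
    # Subtract each stock's beta-weighted index return, then correlate
    # the residuals: what is left is genuine inter-stock dependence
    X = np.asarray(returns, float)          # shape (n_stocks, n_days)
    m = np.asarray(index, float)
    betas = (X @ m) / (m @ m)               # OLS beta of each stock on the index
    resid = X - np.outer(betas, m)
    return np.corrcoef(resid)

rng = np.random.default_rng(1)
days, n = 500, 4
index = rng.standard_normal(days)
stocks = index + 0.5 * rng.standard_normal((n, days))  # index-driven stocks
raw = np.corrcoef(stocks)
bare = bare_correlations(stocks, index)
```

When the Index is the common driver, the raw inter-stock correlations are high while the bare correlations collapse toward zero, which is the cohesive-force effect described above.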
Lin, Lixin; Wang, Yunjia; Teng, Jiyao; Xi, Xiuxiu
2015-07-23
The measurement of soil total nitrogen (TN) by hyperspectral remote sensing provides an important tool for soil restoration programs in areas with subsided land caused by the extraction of natural resources. This study used the local correlation maximization-complementary superiority method (LCMCS) to establish TN prediction models by considering the relationship between spectral reflectance (measured by an ASD FieldSpec 3 spectroradiometer) and TN, based on spectral reflectance curves of soil samples collected from subsided land identified by synthetic aperture radar interferometry (InSAR) technology. Based on the 1655 selected effective bands of the optimal spectrum (OSP) of the first derivative of the reciprocal logarithm ([log(1/R)]') (correlation coefficients, p < 0.01), the optimal model of the LCMCS method was obtained to determine the final model, which produced lower prediction errors (root mean square error of validation [RMSEV] = 0.89, mean relative error of validation [MREV] = 5.93%) when compared with models built by the local correlation maximization (LCM), complementary superiority (CS) and partial least squares regression (PLS) methods. The predictive performance of the LCMCS model was optimal in Cangzhou, Renqiu and Fengfeng District. Results indicate that the LCMCS method has great potential to monitor TN in subsided lands caused by the extraction of natural resources, including groundwater, oil and coal.
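The two validation metrics quoted (RMSEV and MREV) are standard; a minimal sketch with made-up numbers, using one common definition of the mean relative error:

```python
import numpy as np

def rmsev(y_true, y_pred):
    # Root mean square error of validation
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def mrev(y_true, y_pred):
    # Mean relative error of validation, in percent
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return np.mean(np.abs(y_true - y_pred) / y_true) * 100

# Illustrative TN-style values only, not the paper's soil data
observed = [1.0, 2.0, 4.0]
predicted = [1.0, 2.0, 5.0]
```

Lower values of both metrics on a held-out validation set are what justified the choice of the LCMCS model above.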
Psychophysical Reverse Correlation with Multiple Response Alternatives
Dai, Huanping; Micheyl, Christophe
2011-01-01
Psychophysical reverse-correlation methods such as the “classification image” technique provide a unique tool to uncover the internal representations and decision strategies of individual participants in perceptual tasks. Over the last thirty years, these techniques have gained increasing popularity among both visual and auditory psychophysicists. However, thus far, principled applications of the psychophysical reverse-correlation approach have been almost exclusively limited to two-alternative decision (detection or discrimination) tasks. Whether and how reverse-correlation methods can be applied to uncover perceptual templates and decision strategies in situations involving more than just two response alternatives remains largely unclear. Here, the authors consider the problem of estimating perceptual templates and decision strategies in stimulus identification tasks with multiple response alternatives. They describe a modified correlational approach, which can be used to solve this problem. The approach is evaluated under a variety of simulated conditions, including different ratios of internal-to-external noise, different degrees of correlations between the sensory observations, and various statistical distributions of stimulus perturbations. The results indicate that the proposed approach is reasonably robust, suggesting that it could be used in future empirical studies. PMID:20695712
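The basic two-alternative reverse-correlation (classification image) computation that the paper generalizes can be sketched with a simulated observer; the 6-element stimulus and internal template here are invented for illustration. The difference of the mean noise fields for the two responses recovers the observer's template:

```python
import numpy as np

rng = np.random.default_rng(7)
template = np.array([0.0, 0, 1, 1, 0, 0])   # hypothetical internal template

def observer_says_yes(noise):
    # Simulated observer: responds "signal" when the stimulus matches its
    # internal template, corrupted by internal noise
    return (noise @ template + 0.5 * rng.standard_normal()) > 0

trials = rng.standard_normal((20000, 6))    # external noise stimuli
yes = np.array([observer_says_yes(t) for t in trials])
classification_image = trials[yes].mean(0) - trials[~yes].mean(0)
```

The estimated classification image is large only at the template positions, which is exactly the perceptual-template information the multi-alternative extension seeks to recover per response category.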
Functional modules by relating protein interaction networks and gene expression.
Tornow, Sabine; Mewes, H W
2003-11-01
Genes and proteins are organized on the basis of their particular mutual relations or according to their interactions in cellular and genetic networks. These include metabolic or signaling pathways and protein interaction, regulatory or co-expression networks. Integrating the information from the different types of networks may lead to the notion of a functional network and functional modules. To find these modules, we propose a new technique which is based on collective, multi-body correlations in a genetic network. We calculated the correlation strength of a group of genes (e.g. in the co-expression network) which were identified as members of a module in a different network (e.g. in the protein interaction network) and estimated the probability that this correlation strength was found by chance. Groups of genes with a significant correlation strength in different networks have a high probability that they perform the same function. Here, we propose evaluating the multi-body correlations by applying the superparamagnetic approach. We compare our method to the presently applied mean Pearson correlations and show that our method is more sensitive in revealing functional relationships.
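The collective correlation-strength idea can be illustrated, here with ordinary Pearson correlations and a permutation null rather than the authors' superparamagnetic approach; the expression matrix and module are simulated:

```python
import numpy as np

def group_correlation_strength(expr, members):
    # Mean pairwise Pearson correlation among a candidate module's genes
    C = np.corrcoef(expr[members])
    iu = np.triu_indices(len(members), k=1)
    return C[iu].mean()

def permutation_p(expr, members, n_perm=200, seed=0):
    # How often a random gene set matches the observed correlation strength
    rng = np.random.default_rng(seed)
    obs = group_correlation_strength(expr, members)
    hits = 0
    for _ in range(n_perm):
        rand = rng.choice(len(expr), size=len(members), replace=False)
        hits += group_correlation_strength(expr, rand) >= obs
    return hits / n_perm

rng = np.random.default_rng(5)
expr = rng.standard_normal((50, 30))               # 50 genes, 30 samples
factor = rng.standard_normal(30)
expr[:5] = factor + 0.5 * rng.standard_normal((5, 30))  # a co-expressed module
module = [0, 1, 2, 3, 4]
```

A module identified in one network (e.g. protein interactions) whose correlation strength in another network (e.g. co-expression) is unlikely by chance is the paper's criterion for shared function.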
Noise reduction methods for nucleic acid and macromolecule sequencing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schuller, Ivan K.; Di Ventra, Massimiliano; Balatsky, Alexander
Methods, systems, and devices are disclosed for processing macromolecule sequencing data with substantial noise reduction. In one aspect, a method for reducing noise in a sequential measurement of a macromolecule comprising serial subunits includes cross-correlating multiple measured signals of a physical property of subunits of interest of the macromolecule, the multiple measured signals including the time data associated with the measurement of the signal, to remove or at least reduce signal noise that is not in the same frequency and in phase with the systematic signal contribution of the measured signals.
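The underlying principle — components in the same frequency and in phase across repeated measurements reinforce under cross-correlation, while independent noise does not — can be illustrated as below; the sinusoidal "systematic signal" is a stand-in for a subunit signature, not sequencing data:

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(2048)
systematic = np.sin(2 * np.pi * t / 128)       # signal common to every pass
pass_a = systematic + rng.standard_normal(t.size)   # two noisy measurement passes
pass_b = systematic + rng.standard_normal(t.size)

# Cross-spectrum: in-phase contributions at the same frequency in both passes
# reinforce; independent noise contributions average toward the floor
cross_spec = np.fft.fft(pass_a) * np.conj(np.fft.fft(pass_b))
signal_bin = np.argmax(np.abs(cross_spec[1:t.size // 2])) + 1
```

Even with per-pass noise as large as the signal, the cross-spectrum peak sits at the systematic frequency, which is the noise-reduction effect the disclosure exploits.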
Accurate Structural Correlations from Maximum Likelihood Superpositions
Theobald, Douglas L; Wuttke, Deborah S
2008-01-01
The cores of globular proteins are densely packed, resulting in complicated networks of structural interactions. These interactions in turn give rise to dynamic structural correlations over a wide range of time scales. Accurate analysis of these complex correlations is crucial for understanding biomolecular mechanisms and for relating structure to function. Here we report a highly accurate technique for inferring the major modes of structural correlation in macromolecules using likelihood-based statistical analysis of sets of structures. This method is generally applicable to any ensemble of related molecules, including families of nuclear magnetic resonance (NMR) models, different crystal forms of a protein, and structural alignments of homologous proteins, as well as molecular dynamics trajectories. Dominant modes of structural correlation are determined using principal components analysis (PCA) of the maximum likelihood estimate of the correlation matrix. The correlations we identify are inherently independent of the statistical uncertainty and dynamic heterogeneity associated with the structural coordinates. We additionally present an easily interpretable method (“PCA plots”) for displaying these positional correlations by color-coding them onto a macromolecular structure. Maximum likelihood PCA of structural superpositions, and the structural PCA plots that illustrate the results, will facilitate the accurate determination of dynamic structural correlations analyzed in diverse fields of structural biology. PMID:18282091
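The core computation (PCA of an ensemble's correlation matrix) can be sketched as follows, here with an ordinary sample correlation matrix in place of the maximum likelihood estimate and a one-coordinate-per-atom toy ensemble:

```python
import numpy as np

def correlation_pca(ensemble):
    # ensemble: (n_models, n_atoms) matrix of one coordinate per atom
    C = np.corrcoef(ensemble.T)              # atom-atom correlation matrix
    evals, evecs = np.linalg.eigh(C)
    order = np.argsort(evals)[::-1]          # dominant correlation modes first
    return evals[order], evecs[:, order]

# Two rigid blocks of atoms moving in opposition, plus uncorrelated jitter
rng = np.random.default_rng(2)
motion = rng.standard_normal(200)            # shared collective motion
ensemble = np.concatenate([np.outer(motion, np.ones(5)),
                           np.outer(-motion, np.ones(5))], axis=1)
ensemble += 0.3 * rng.standard_normal(ensemble.shape)
evals, evecs = correlation_pca(ensemble)
```

The leading eigenvector splits the atoms into the two anti-correlated blocks, which is the kind of mode a "PCA plot" would color-code onto the structure.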
Two-particle correlation function and dihadron correlation approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vechernin, V. V., E-mail: v.vechernin@spbu.ru; Ivanov, K. O.; Neverov, D. I.
It is shown that, in the case of asymmetric nuclear interactions, the application of the traditional dihadron correlation approach to determining a two-particle correlation function C may lead to a form distorted in relation to the canonical pair correlation function C2. This result was obtained both by means of exact analytic calculations of correlation functions within a simple string model for proton–nucleus and deuteron–nucleus collisions and by means of Monte Carlo simulations based on employing the HIJING event generator. It is also shown that the method based on studying multiplicity correlations in two narrow observation windows separated in rapidity makes it possible to determine correctly the canonical pair correlation function C2 for all cases, including the case where the rapidity distribution of product particles is not uniform.
Nucleon PDFs and TMDs from Continuum QCD
NASA Astrophysics Data System (ADS)
Bednar, Kyle; Cloet, Ian; Tandy, Peter
2017-09-01
The parton structure of the nucleon is investigated in an approach based upon QCD's Dyson-Schwinger equations. The method accommodates a variety of QCD's dynamical outcomes including: the running mass of quark propagators and formation of non-pointlike di-quark correlations. All needed elements, including the nucleon wave function solution from a Poincaré covariant Faddeev equation, are encoded in spectral-type representations in the Nakanishi style to facilitate Feynman integral procedures and allow insight into key underlying mechanisms. Results will be presented for spin-independent PDFs and TMDs arising from a truncation to allow only scalar di-quark correlations. The influence of axial-vector di-quark correlations may be discussed if results are available. Supported by NSF Grant No. PHY-1516138.
Recent advancement in the field of two-dimensional correlation spectroscopy
NASA Astrophysics Data System (ADS)
Noda, Isao
2008-07-01
The recent advancement in the field of 2D correlation spectroscopy is reviewed with the emphasis on a number of papers published during the last two years. Topics covered by this comprehensive review include books, review articles, and noteworthy developments in the theory and applications of 2D correlation spectroscopy. New 2D correlation techniques are discussed, such as kernel analysis and augmented 2D correlation, model-based correlation, moving window analysis, global phase angle, covariance and correlation coefficient mapping, sample-sample correlation, hybrid and hetero correlation, pretreatment and transformation of data, and 2D correlation combined with other chemometrics techniques. Perturbation methods of both static (e.g., temperature, composition, pressure and stress, spatial distribution and orientation) and dynamic types (e.g., rheo-optical and acoustic, chemical reactions and kinetics, H/D exchange, sorption and diffusion) currently in use are examined. Analytical techniques most commonly employed in 2D correlation spectroscopy are IR, Raman, and NIR, but the growing use of other probes is also noted, including fluorescence, emission, Raman optical activity and vibrational circular dichroism, X-ray absorption and scattering, NMR, mass spectrometry, and even chromatography. The field of applications for 2D correlation spectroscopy is very diverse, encompassing synthetic polymers, liquid crystals, Langmuir-Blodgett films, proteins and peptides, natural polymers and biomaterials, pharmaceuticals, food and agricultural products, water, solutions, inorganic, organic, hybrid or composite materials, and many more.
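The generalized 2D correlation formalism underlying these techniques can be sketched directly: the synchronous map is a covariance of the dynamic spectra, and the asynchronous map uses the Hilbert-Noda transformation matrix. A minimal sketch with random spectra (the symmetry properties hold regardless of the data):

```python
import numpy as np

def hilbert_noda(m):
    # Hilbert-Noda transformation matrix for the asynchronous spectrum
    N = np.zeros((m, m))
    for j in range(m):
        for k in range(m):
            if j != k:
                N[j, k] = 1.0 / (np.pi * (k - j))
    return N

def noda_2d(spectra):
    # spectra: (m_perturbations, n_wavenumbers) dynamic spectra
    Y = spectra - spectra.mean(axis=0)            # subtract reference spectrum
    m = Y.shape[0]
    sync = Y.T @ Y / (m - 1)                      # synchronous correlation map
    asyn = Y.T @ (hilbert_noda(m) @ Y) / (m - 1)  # asynchronous correlation map
    return sync, asyn

rng = np.random.default_rng(9)
spectra = rng.standard_normal((12, 40))           # 12 perturbation steps, 40 bands
sync, asyn = noda_2d(spectra)
```

The synchronous map is symmetric (in-phase intensity changes) while the asynchronous map is antisymmetric (out-of-phase changes), which is what makes the pair useful for resolving sequential spectral events.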
NASA Astrophysics Data System (ADS)
Hakim, Issa; Laquai, Rene; Walter, David; Mueller, Bernd; Graja, Paul; Meyendorf, Norbert; Donaldson, Steven
2017-02-01
Carbon fiber composites have been increasingly used in aerospace, military, sports, automotive and other fields due to their excellent properties, including high specific strength, high specific modulus, corrosion resistance, fatigue resistance, and low thermal expansion coefficient. Interlaminar fracture is a serious failure mode leading to a loss in composite stiffness and strength. Discontinuities formed during the manufacturing process degrade the fatigue life and interlaminar fracture resistance of the composite. In this study, three approaches were implemented and their results were correlated to quantify discontinuities affecting the static and fatigue interlaminar fracture behavior of carbon fiber composites. Samples were fabricated by a hand layup vacuum bagging manufacturing process under three different vacuum levels, denoted High (-686 mmHg), Moderate (-330 mmHg) and Poor (0 mmHg). Discontinuity content was quantified through-thickness by destructive and nondestructive techniques. Eight different NDE methods were conducted, including imaging NDE methods (X-ray laminography, ultrasonics, high frequency eddy current, pulse thermography, pulse phase thermography and lock-in thermography) and averaging NDE techniques (X-ray refraction and thermal conductivity measurements). Samples were subsequently destructively serial sectioned through-thickness into several layers. Both static and fatigue interlaminar fracture tests under Mode I were conducted. The results of several imaging NDE methods revealed the trend in percentages of discontinuity. However, the results of averaging NDE methods showed a clearer correlation, since they gave specific values of discontinuity through-thickness. Serial sectioning exposed the composite's internal structure and provided a very clear picture of the type, shape, size, distribution and location of most of the discontinuities present.
The results of mechanical testing showed that discontinuities lead to a decrease in Mode I static interlaminar fracture toughness and a decrease in fatigue life under Mode I cyclic strain energy release rates. Finally, all approaches were correlated: the resulting NDE percentages and parameters were correlated with the features revealed by destructive serial sectioning and with the static and fatigue values in order to quantify discontinuities such as delaminations and voids.
Putz, A M; Tiezzi, F; Maltecca, C; Gray, K A; Knauer, M T
2018-02-01
The objective of this study was to compare and determine the optimal validation method when comparing accuracy from single-step GBLUP (ssGBLUP) to traditional pedigree-based BLUP. Field data included six litter size traits. Simulated data included ten replicates designed to mimic the field data in order to determine the method that was closest to the true accuracy. Data were split into training and validation sets. The methods used were as follows: (i) theoretical accuracy derived from the prediction error variance (PEV) of the direct inverse (iLHS), (ii) approximated accuracies from the accf90(GS) program in the BLUPF90 family of programs (Approx), (iii) correlation between predictions and the single-step GEBVs from the full data set (GEBV_Full), (iv) correlation between predictions and the corrected phenotypes of females from the full data set (Y_c), (v) correlation from method iv divided by the square root of the heritability (Y_ch) and (vi) correlation between sire predictions and the average of their daughters' corrected phenotypes (Y_cs). Accuracies from iLHS increased from 0.27 to 0.37 (37%) in the Large White. Approximation accuracies were very consistent and close in absolute value (0.41 to 0.43). Both iLHS and Approx were much less variable than the corrected phenotype methods (ranging from 0.04 to 0.27). On average, simulated data showed an increase in accuracy from 0.34 to 0.44 (29%) using ssGBLUP. Both iLHS and Y_ch approximated the increase well, 0.30 to 0.46 and 0.36 to 0.45, respectively. GEBV_Full performed poorly in both data sets and is not recommended. Results suggest that for within-breed selection, theoretical accuracy using PEV was consistent and accurate. When direct inversion is infeasible to get the PEV, correlating predictions to the corrected phenotypes divided by the square root of heritability is adequate given a large enough validation data set. © 2017 Blackwell Verlag GmbH.
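Method (v) above, Y_ch, is simple to reproduce: correlate the predictions with corrected phenotypes in the validation set and rescale by the square root of the heritability. A hedged numpy sketch (variable names are illustrative, not taken from the study's software):

```python
import numpy as np

def validation_accuracy(ebv, y_corrected, h2):
    """Approximate prediction accuracy (the Y_ch method): correlation between
    predicted breeding values and corrected phenotypes, divided by sqrt(h2)."""
    r = np.corrcoef(ebv, y_corrected)[0, 1]
    return r / np.sqrt(h2)
```

On simulated data where the true breeding values are known, this estimate should track the true accuracy corr(EBV, TBV), which is the property the study exploits.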
ERIC Educational Resources Information Center
Bird, Hector R.; Davies, Mark; Duarte, Cristiane S.; Shen, Sa; Loeber, Rolf; Canino, Glorisa J.
2006-01-01
Objective: This is the second of two associated articles. The prevalence, correlates, and comorbidities of disruptive behavior disorders (DBDs) in two populations are reported. Method: Probability community samples of Puerto Rican boys and girls ages 5-13 years in San Juan, and the south Bronx in New York City are included (n = 2,491). The…
Benali, Anouar; Shulenburger, Luke; Krogel, Jaron T.; ...
2016-06-07
The Magneli phase Ti4O7 is an important transition metal oxide with a wide range of applications because of its interplay between charge, spin, and lattice degrees of freedom. At low temperatures, it has non-trivial magnetic states very close in energy, driven by electronic exchange and correlation interactions. We have examined three low-lying states, one ferromagnetic and two antiferromagnetic, and calculated their energies as well as Ti spin moment distributions using highly accurate Quantum Monte Carlo methods. We compare our results to those obtained from density functional theory-based methods that include approximate corrections for exchange and correlation. Our results confirm the nature of the states and their ordering in energy, as compared with density-functional theory methods. However, the energy differences and spin distributions differ. Here, a detailed analysis suggests that non-local exchange-correlation functionals, in addition to other approximations such as LDA+U to account for correlations, are needed to simultaneously obtain better estimates for spin moments, distributions, energy differences and energy gaps.
NASA Astrophysics Data System (ADS)
Tansella, Vittorio; Bonvin, Camille; Durrer, Ruth; Ghosh, Basundhara; Sellentin, Elena
2018-03-01
We derive an exact expression for the correlation function in redshift shells including all the relativistic contributions. This expression, which does not rely on the distant-observer or flat-sky approximation, is valid at all scales and includes both local relativistic corrections and integrated contributions, like gravitational lensing. We present two methods to calculate this correlation function, one which makes use of the angular power spectrum Cl(z1,z2) and a second method which evades the costly calculation of the angular power spectra. The correlation function is then used to define the power spectrum as its Fourier transform. In this work, theoretical aspects of this procedure are presented, together with quantitative examples. In particular, we show that gravitational lensing modifies the multipoles of the correlation function and of the power spectrum by a few percent at redshift z=1 and by 30% or more at z=2. We also point out that large-scale relativistic effects and wide-angle corrections generate contributions of the same order of magnitude and consequently have to be treated in conjunction. These corrections are particularly important at small redshift, z=0.1, where they can reach 10%. This means in particular that a flat-sky treatment of relativistic effects, using for example the power spectrum, is not consistent.
Statistics of baryon correlation functions in lattice QCD
NASA Astrophysics Data System (ADS)
Wagman, Michael L.; Savage, Martin J.; Nplqcd Collaboration
2017-12-01
A systematic analysis of the structure of single-baryon correlation functions calculated with lattice QCD is performed, with a particular focus on characterizing the structure of the noise associated with quantum fluctuations. The signal-to-noise problem in these correlation functions is shown, as long suspected, to result from a sign problem. The log-magnitude and complex phase are found to be approximately described by normal and wrapped normal distributions, respectively. Properties of circular statistics are used to understand the emergence of a large-time noise region where standard energy measurements are unreliable. Power-law tails in the distribution of baryon correlation functions, associated with stable distributions and "Lévy flights," are found to play a central role in their time evolution. A new method of analyzing correlation functions is considered for which the signal-to-noise ratio of energy measurements is constant, rather than exponentially degrading, with increasing source-sink separation time. This new method includes an additional systematic uncertainty that can be removed by performing an extrapolation, and the signal-to-noise problem reemerges in the statistics of this extrapolation. It is demonstrated that this new method allows accurate results for the nucleon mass to be extracted from the large-time noise region inaccessible to standard methods. The observations presented here are expected to apply to quantum Monte Carlo calculations more generally. Similar methods to those introduced here may lead to practical improvements in the analysis of noisier systems.
Generating Dynamic Persistence in the Time Domain
NASA Astrophysics Data System (ADS)
Guerrero, A.; Smith, L. A.; Smith, L. A.; Kaplan, D. T.
2001-12-01
Many dynamical systems exhibit long-range correlations. Physically, these systems range from biological to economic, and include geological and urban systems. Important geophysical candidates for this type of behaviour include weather (or climate) and earthquake sequences. Persistence is characterised by a slowly decaying correlation function that, in theory, never dies out. The persistence exponent reflects the degree of memory in the system, and much effort has been expended creating and analysing methods that successfully estimate this parameter and model data that exhibit persistence. The most widely used methods for generating long-correlated time series are not dynamical systems in the time domain, but instead are derived from a given spectral density. Little attention has been paid to modelling persistence in the time domain. The time domain approach has the advantage that an observation at a certain time can be calculated from previous observations, which is particularly suitable when investigating the predictability of a long-memory process. We will describe two of these methods in the time domain. One is a traditional approach using fractional ARIMA (autoregressive integrated moving average) models; the second uses a novel approach to extending a given series using random Fourier basis functions. The statistical quality of the two methods is compared, and they are contrasted with weather data which reportedly show persistence. The suitability of this approach both for estimating predictability and for making predictions is discussed.
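The spectral-density route that the abstract contrasts with its time-domain models can be sketched in a few lines: shape white noise in Fourier space with a power-law amplitude and transform back. A minimal illustration, assuming a target spectrum S(f) proportional to f^(-beta) (the function name and interface are hypothetical):

```python
import numpy as np

def spectral_synthesis(n, beta, rng=None):
    """Generate a long-range correlated series by shaping white noise with
    a power-law spectral density S(f) ~ f^(-beta), then inverse-transforming."""
    rng = np.random.default_rng(rng)
    freqs = np.fft.rfftfreq(n)
    amp = np.zeros_like(freqs)
    amp[1:] = freqs[1:] ** (-beta / 2.0)          # amplitude ~ sqrt(S(f)); DC removed
    phases = rng.uniform(0.0, 2.0 * np.pi, len(freqs))
    x = np.fft.irfft(amp * np.exp(1j * phases), n)
    return (x - x.mean()) / x.std()               # zero mean, unit variance
```

Larger beta produces stronger persistence, visible for instance in a larger lag-1 autocorrelation than white noise (beta = 0) of the same length.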
Generalized interferometry - I: theory for interstation correlations
NASA Astrophysics Data System (ADS)
Fichtner, Andreas; Stehly, Laurent; Ermert, Laura; Boehm, Christian
2017-02-01
We develop a general theory for interferometry by correlation that (i) properly accounts for heterogeneously distributed sources of continuous or transient nature, (ii) fully incorporates any type of linear and nonlinear processing, such as one-bit normalization, spectral whitening and phase-weighted stacking, (iii) operates for any type of medium, including 3-D elastic, heterogeneous and attenuating media, (iv) enables the exploitation of complete correlation waveforms, including seemingly unphysical arrivals, and (v) unifies the earthquake-based two-station method and ambient noise correlations. Our central theme is not to equate interferometry with Green function retrieval, but instead to extract information directly from processed interstation correlations, regardless of their relation to the Green function. We demonstrate that processing transforms the actual wavefield sources and actual wave propagation physics into effective sources and effective wave propagation. This transformation is uniquely determined by the processing applied to the observed data, and can be easily computed. The effective forward model, which links effective sources and propagation to synthetic interstation correlations, may not be perfect. A forward modelling error, induced by processing, describes the extent to which processed correlations can actually be interpreted as proper correlations, that is, as resulting from some effective source and some effective wave propagation. The magnitude of the forward modelling error is controlled by the processing scheme and the temporal variability of the sources. Applying adjoint techniques to the effective forward model, we derive finite-frequency Fréchet kernels for the sources of the wavefield and Earth structure, which should be inverted jointly. The structure kernels depend on the sources of the wavefield and the processing scheme applied to the raw data.
Therefore, both must be taken into account correctly in order to make accurate inferences on Earth structure. Not making any restrictive assumptions on the nature of the wavefield sources, our theory can be applied to earthquake and ambient noise data, either separately or combined. This allows us (i) to locate earthquakes using interstation correlations and without knowledge of the origin time, (ii) to unify the earthquake-based two-station method and noise correlations without the need to exclude either of the two data types, and (iii) to eliminate the requirement to remove earthquake signals from noise recordings prior to the computation of correlation functions. In addition to the basic theory for acoustic wavefields, we present numerical examples for 2-D media, an extension to the most general viscoelastic case, and a method for the design of optimal processing schemes that eliminate the forward modelling error completely. This work is intended to provide a comprehensive theoretical foundation of full-waveform interferometry by correlation, and to suggest improvements to current passive monitoring methods.
Toward a Smartphone Application for Estimation of Pulse Transit Time
Liu, He; Ivanov, Kamen; Wang, Yadong; Wang, Lei
2015-01-01
Pulse transit time (PTT) is an important physiological parameter that directly correlates with the elasticity and compliance of vascular walls and variations in blood pressure. This paper presents a PTT estimation method based on photoplethysmographic imaging (PPGi). The method utilizes two opposing cameras for simultaneous acquisition of PPGi waveform signals from the index fingertip and the forehead temple. An algorithm for the detection of maxima and minima in PPGi signals was developed, which includes interpolation to refine the estimated positions of these points. We compared our PTT measurements with those obtained from current methodological standards. Statistical results indicate that the PTT measured by our proposed method exhibits a good correlation with the established method. The proposed method is especially suitable for implementation in dual-camera smartphones, which could facilitate PTT measurement among populations affected by cardiac complications. PMID:26516861
NASA Technical Reports Server (NTRS)
Stolzer, Alan J.; Halford, Carl
2007-01-01
In a previous study, multiple regression techniques were applied to Flight Operations Quality Assurance-derived data to develop parsimonious model(s) for fuel consumption on the Boeing 757 airplane. The present study examined several data mining algorithms, including neural networks, on the fuel consumption problem and compared them to the multiple regression results obtained earlier. Using regression methods, parsimonious models were obtained that explained approximately 85% of the variation in fuel flow. In general, data mining methods were more effective in predicting fuel consumption. Classification and Regression Tree methods reported correlation coefficients of .91 to .92, and General Linear Models and Multilayer Perceptron neural networks reported correlation coefficients of about .99. These data mining models show great promise for use in further examining large FOQA databases for operational and safety improvements.
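The figure of merit quoted throughout this comparison is simply the correlation between fitted and observed values. A minimal least-squares sketch of how that number is obtained (synthetic stand-in data, not FOQA data):

```python
import numpy as np

def prediction_correlation(X, y):
    """Fit an ordinary least-squares model and return the correlation
    coefficient between fitted and observed values."""
    A = np.column_stack([np.ones(len(y)), X])     # add intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    yhat = A @ coef
    return float(np.corrcoef(yhat, y)[0, 1])
```

For a strongly linear relationship with modest noise, this returns a value close to 1, mirroring the coefficients reported above.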
Rathouz, Paul J.; Van Hulle, Carol A.; Lee Rodgers, Joseph; Waldman, Irwin D.; Lahey, Benjamin B.
2009-01-01
Purcell (2002) proposed a bivariate biometric model for testing and quantifying the interaction between latent genetic influences and measured environments in the presence of gene-environment correlation. Purcell's model extends the Cholesky model to include gene-environment interaction. We examine a number of closely related alternative models that do not involve gene-environment interaction but which may fit the data as well as Purcell's model. Because failure to consider these alternatives could lead to spurious detection of gene-environment interaction, we propose alternative models for testing gene-environment interaction in the presence of gene-environment correlation, including one based on the correlated factors model. In addition, we note mathematical errors in the calculation of effect size via variance components in Purcell's model. We propose a statistical method for deriving and interpreting variance decompositions that are true to the fitted model. PMID:18293078
Accurate ab initio Quartic Force Fields of Cyclic and Bent HC2N Isomers
NASA Technical Reports Server (NTRS)
Inostroza, Natalia; Huang, Xinchuan; Lee, Timothy J.
2012-01-01
Highly correlated ab initio quartic force fields (QFFs) are used to calculate the equilibrium structures and predict the spectroscopic parameters of three HC2N isomers. Specifically, the ground state quasilinear triplet and the lowest cyclic and bent singlet isomers are included in the present study. Extensive treatment of correlation effects was included using the singles and doubles coupled-cluster method with a perturbational estimate of the effects of connected triple excitations, denoted CCSD(T). Dunning's correlation-consistent basis sets cc-pVXZ, X=3,4,5, were used, together with a three-point formula for extrapolation to the one-particle basis set limit. Core-correlation and scalar relativistic corrections were also included to yield highly accurate QFFs. The QFFs were used together with second-order perturbation theory (with proper treatment of Fermi resonances) and variational methods to solve the nuclear Schrödinger equation. The quasilinear nature of the triplet isomer is problematic, and it is concluded that a QFF is not adequate to describe properly all of the fundamental vibrational frequencies and spectroscopic constants (though some constants not dependent on the bending motion are well reproduced by perturbation theory). On the other hand, this procedure (a QFF together with either perturbation theory or variational methods) leads to highly accurate fundamental vibrational frequencies and spectroscopic constants for the cyclic and bent singlet isomers of HC2N. All three isomers possess significant dipole moments: 3.05 D, 3.06 D, and 1.71 D for the quasilinear triplet, the cyclic singlet, and the bent singlet isomers, respectively. It is concluded that the spectroscopic constants determined for the cyclic and bent singlet isomers are the most accurate available, and it is hoped that these will be useful in the interpretation of high-resolution astronomical observations or laboratory experiments.
Image scale measurement with correlation filters in a volume holographic optical correlator
NASA Astrophysics Data System (ADS)
Zheng, Tianxiang; Cao, Liangcai; He, Qingsheng; Jin, Guofan
2013-08-01
A search engine containing various target images or different parts of a large scene is of great use for many applications, including object detection, biometric recognition, and image registration. The input image, captured in real time, is compared with all the template images in the search engine. A volume holographic correlator is one type of such search engine. It performs thousands of comparisons among the images at very high speed, with the correlation task accomplished mainly in optics. However, the input target image always contains some scale variation relative to the template images, in which case the correlation values cannot properly reflect the similarity of the images. It is therefore essential to estimate and eliminate the scale variation of the input target image. Scale measurement can be performed in three domains: spatial, spectral and time. Most methods dealing with the scale factor are based on the spatial or spectral domains. In this paper, a method in the time domain is proposed to measure the scale factor of the input image, called the time-sequential scaled method. The method utilizes the relationship between the scale variation and the correlation value of two images. It sends a few artificially scaled input images to be compared with the template images. The correlation value increases and decreases with increasing scale factor over the intervals 0.8~1 and 1~1.2, respectively. The original scale of the input image can be measured by finding the largest correlation value obtained by correlating the artificially scaled input image with the template images. The measurement range for the scale is 0.8~4.8. A scale factor beyond 1.2 is measured by scaling the input image by factors of 1/2, 1/3 and 1/4, correlating the artificially scaled input image with the template images, and estimating the new corresponding scale factor inside 0.8~1.2.
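The time-sequential scaled idea can be illustrated in one dimension: correlate artificially rescaled copies of the input with the template and keep the scale factor that maximizes the normalized correlation; the input's scale relative to the template is then the reciprocal of the winning factor. A hypothetical numpy sketch (1-D signals stand in for images, and all function names are illustrative):

```python
import numpy as np

def ncc(a, b):
    """Normalized correlation value of two equal-length signals."""
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def rescale(x, s, n):
    """Resample x stretched by factor s onto n points (1-D stand-in for
    image scaling; values beyond the original support are clamped)."""
    t = np.linspace(0.0, 1.0, n)
    return np.interp(t / s, np.linspace(0.0, 1.0, len(x)), x)

def best_compensating_scale(target, template, scales):
    """Scale factor whose application to the target maximizes the
    normalized correlation with the template."""
    n = len(template)
    return max(scales, key=lambda s: ncc(rescale(target, s, n), template))
```

For a target that is a shrunk copy of the template, the winning compensating factor is close to the inverse of the shrink factor.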
Reanalysis, compatibility and correlation in analysis of modified antenna structures
NASA Technical Reports Server (NTRS)
Levy, R.
1989-01-01
A simple computational procedure is synthesized to process changes in the microwave-antenna pathlength-error measure when there are changes in the antenna structure model. The procedure employs structural modification reanalysis methods combined with new extensions of correlation analysis to provide the revised rms pathlength error. Mainframe finite-element-method processing of the structure model is required only for the initial unmodified structure, and elementary postprocessor computations develop and deal with the effects of the changes. Several illustrative computational examples are included. The procedure adapts readily to processing spectra of changes for parameter studies or sensitivity analyses.
No correlation between stroke specialty and rate of shoulder pain in NCAA men swimmers
Wymore, Lucas; Reeve, Robert E.; Chaput, Christopher D.
2012-01-01
Purpose: To establish an association between shoulder pain and stroke specialization among NCAA men swimmers. Materials and Methods: All members of the top 25 NCAA men's swim teams were invited to complete the survey. Eleven teams with a total of 187 participants completed the study survey. The teams were mailed surveys that included multiple choice questions regarding their primary stroke and their incidence of shoulder pain. Additionally, the survey included questions about risk factors, including distance trained, type of equipment, weight training, and stretching. Results: The analysis showed that there was no significant difference in the rates of shoulder pain among the four strokes and individual medley specialists. The other risk factors did not show a significant correlation with shoulder pain. Conclusions: This study found no significant correlation between stroke specialty and shoulder pain in male collegiate swimmers. Level of Evidence: Level 3. Clinical Relevance: Descriptive epidemiology study. PMID:23204760
NASA Astrophysics Data System (ADS)
Godfrey-Kittle, Andrew; Cafiero, Mauricio
We present density functional theory (DFT) interaction energies for the sandwich and T-shaped conformers of substituted benzene dimers. The DFT functionals studied include TPSS, HCTH407, B3LYP, and X3LYP. We also include Hartree-Fock (HF) and second-order Møller-Plesset perturbation theory (MP2) calculations, as well as calculations using a new functional, P3LYP, which includes PBE and HF exchange and LYP correlation. Although DFT methods do not explicitly account for the dispersion interactions important in the benzene-dimer interactions, we find that our new method, P3LYP, as well as HCTH407 and TPSS, match MP2 and CCSD(T) calculations much better than the hybrid B3LYP and X3LYP methods do.
Electron-correlated fragment-molecular-orbital calculations for biomolecular and nano systems.
Tanaka, Shigenori; Mochizuki, Yuji; Komeiji, Yuto; Okiyama, Yoshio; Fukuzawa, Kaori
2014-06-14
Recent developments in the fragment molecular orbital (FMO) method for theoretical formulation, implementation, and application to nano and biomolecular systems are reviewed. The FMO method has enabled ab initio quantum-mechanical calculations for large molecular systems such as protein-ligand complexes at a reasonable computational cost in a parallelized way. There has been a wealth of application outcomes from the FMO method in the fields of biochemistry, medicinal chemistry and nanotechnology, in which the electron correlation effects play vital roles. With the aid of advances in high-performance computing, the FMO method promises larger, faster, and more accurate simulations of biomolecular and related systems, including descriptions of dynamical behaviors in solvent environments. The current status and future prospects of the FMO scheme are addressed in these contexts.
A comparison of 2 methods of endoscopic laryngeal sensory testing: a preliminary study.
Kaneoka, Asako; Krisciunas, Gintas P; Walsh, Kayo; Raade, Adele S; Langmore, Susan E
2015-03-01
This study examined the association between laryngeal sensory deficits and penetration or aspiration. Two methods of testing laryngeal sensation were carried out to determine which was more highly correlated with Penetration-Aspiration Scale (PAS) scores. Healthy participants and patients with dysphagia received an endoscopic swallowing evaluation including 2 sequential laryngeal sensory tests: the air pulse method followed by the touch method. Normal/impaired responses were correlated with PAS scores. Fourteen participants completed the endoscopic swallowing evaluation and both sensory tests. The air pulse method identified sensory impairment with greater frequency than the touch method (P<.0001). However, the impairment identified by the air pulse method was not associated with abnormal PAS scores (P=.46). The sensory deficits identified by the touch method were associated with abnormal PAS scores (P=.05). Sensory impairment detected by the air pulse method does not appear to be associated with risk of penetration/aspiration. Significant laryngeal sensory loss revealed by the touch method is associated with compromised airway protection. © The Author(s) 2014.
NASA Technical Reports Server (NTRS)
Casasent, D.
1978-01-01
The article discusses several optical configurations used for signal processing. Electronic-to-optical transducers are outlined, noting fixed window transducers and moving window acousto-optic transducers. Folded spectrum techniques are considered, with reference to wideband RF signal analysis, fetal electroencephalogram analysis, engine vibration analysis, signal buried in noise, and spatial filtering. Various methods for radar signal processing are described, such as phased-array antennas, the optical processing of phased-array data, pulsed Doppler and FM radar systems, a multichannel one-dimensional optical correlator, correlations with long coded waveforms, and Doppler signal processing. Means for noncoherent optical signal processing are noted, including an optical correlator for speech recognition and a noncoherent optical correlator.
Matsumoto, Hirotaka; Kiryu, Hisanori
2016-06-08
Single-cell technologies make it possible to quantify the comprehensive states of individual cells, and have the power to shed light on cellular differentiation in particular. Although several methods have been developed to fully analyze single-cell expression data, there is still room for improvement in the analysis of differentiation. In this paper, we propose a novel method, SCOUP, to elucidate the differentiation process. Unlike previous dimension-reduction-based approaches, SCOUP describes the dynamics of gene expression throughout differentiation directly, including the degree of differentiation of a cell (in pseudo-time) and cell fate. SCOUP is superior to previous methods with respect to pseudo-time estimation, especially for single-cell RNA-seq. SCOUP also estimates cell lineage more accurately than previous methods, especially for cells at an early stage of bifurcation. In addition, SCOUP can be applied to various downstream analyses. As an example, we propose a novel correlation calculation method for elucidating regulatory relationships among genes. We apply this method to single-cell RNA-seq data and detect a candidate key regulator for differentiation, as well as clusters in a correlation network that are not detected with conventional correlation analysis. We developed a stochastic process-based method, SCOUP, to analyze single-cell expression data throughout differentiation. SCOUP can estimate pseudo-time and cell lineage more accurately than previous methods. We also propose a novel correlation calculation method based on SCOUP. SCOUP is a promising approach for further single-cell analysis and is available at https://github.com/hmatsu1226/SCOUP.
Leak detection using structure-borne noise
NASA Technical Reports Server (NTRS)
Holland, Stephen D. (Inventor); Roberts, Ronald A. (Inventor); Chimenti, Dale E. (Inventor)
2010-01-01
A method for detection and location of air leaks in a pressure vessel, such as a spacecraft, includes sensing structure-borne ultrasound waveforms associated with turbulence caused by a leak from a plurality of sensors and cross correlating the waveforms to determine existence and location of the leak. Different configurations of sensors and corresponding methods can be used. An apparatus for performing the methods is also provided.
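The cross-correlation step at the heart of this localization scheme is the classic time-delay estimate: the lag at the peak of the cross-correlation between two sensor waveforms gives the difference in arrival time, from which the leak position can be triangulated. A minimal numpy sketch (generic signals, not the patented apparatus):

```python
import numpy as np

def delay_by_cross_correlation(x, y, fs):
    """Estimate the arrival-time delay of waveform y relative to x, in
    seconds, from the peak lag of their full cross-correlation."""
    c = np.correlate(y - y.mean(), x - x.mean(), mode="full")
    lag = int(np.argmax(c)) - (len(x) - 1)   # lag in samples; positive = y later
    return lag / fs
```

Applied to a noise burst and a shifted copy, the estimator recovers the shift exactly at the sample level.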
Short-range correlation in high-momentum antisymmetrized molecular dynamics
NASA Astrophysics Data System (ADS)
Myo, Takayuki
2018-03-01
We propose a new variational method for treating the short-range repulsion of the bare nuclear force for nuclei in antisymmetrized molecular dynamics (AMD). In AMD, the short-range correlation is described in terms of large imaginary centroids of Gaussian wave packets of nucleon pairs in opposite signs, causing high-momentum components in the nucleon pairs. We superpose these AMD basis states and call this method "high-momentum AMD" (HM-AMD), which is capable of describing the strong tensor correlation [T. Myo et al., Prog. Theor. Exp. Phys., 2017, 111D01 (2017)]. In this letter, we extend HM-AMD by including up to two kinds of nucleon pairs in each AMD basis state utilizing the cluster expansion, which produces many-body correlations involving high-momentum components. We investigate how well HM-AMD describes the short-range correlation by showing the results for ³H using the Argonne V4′ central potential. It is found that HM-AMD reproduces the results of few-body calculations and also the tensor-optimized AMD. This means that HM-AMD is a powerful approach to describe the short-range correlation in nuclei. In HM-AMD, the momentum directions of nucleon pairs contribute isotropically to the short-range correlation, which is different from the tensor correlation.
Entanglement, nonlocality and multi-particle quantum correlations
NASA Astrophysics Data System (ADS)
Reid, Margaret D.
2018-04-01
This paper contributes to the proceedings of the Latin-American School of Physics (ELAF-2017) on Quantum Correlations, and is a brief review of quantum entanglement and nonlocality. In such a brief review, only some topics can be covered. The emphasis is on those topics that may be relevant to detecting multi-particle quantum correlations arising in atomic and Bose-Einstein condensate (BEC) experiments. The paper is divided into five sections. In the first section, the historical papers of Einstein-Podolsky-Rosen (EPR), Bell, Schrödinger and Greenberger-Horne-Zeilinger (GHZ) are described in a tutorial fashion. This is followed by an introduction to entanglement and density operators. A discussion of the classes of nonlocality is given in the third section, including the modern interpretation of the correlations of the EPR paradox experiments, known as EPR steering correlations. The fourth section covers the detection and generation of so-called continuous-variable entanglement and EPR steering. Various known criteria are derived, with the details of the proofs given for tutorial purposes. The final section focuses on the criteria and methods that have been useful for detecting quantum correlations in BEC or atomic systems. Recent results relating spin squeezing with quantum correlations, including entanglement and EPR steering, are summarised.
A new generation of effective core potentials for correlated calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bennett, Michael Chandler; Melton, Cody A.; Annaberdiyev, Abdulgani
2017-12-12
Here, we outline ideas on desired properties for a new generation of effective core potentials (ECPs) that will allow valence-only calculations to reach the full potential offered by recent advances in many-body wave function methods. The key improvements include consistent use of correlated methods throughout ECP constructions and improved transferability as required for an accurate description of molecular systems over a range of geometries. The guiding principle is the isospectrality of all-electron and ECP Hamiltonians for a subset of valence states. We illustrate these concepts on a few first- and second-row atoms (B, C, N, O, S), and we obtain higher accuracy in transferability than previous constructions while using semi-local ECPs with a small number of parameters. In addition, the constructed ECPs enable many-body calculations of valence properties with higher (or the same) accuracy than their all-electron counterparts with uncorrelated cores. This implies that the ECPs also include some of the impacts of core-core and core-valence correlations on valence properties. The results open further prospects for ECP improvements and refinements.
Comparison and evaluation of fusion methods used for GF-2 satellite image in coastal mangrove area
NASA Astrophysics Data System (ADS)
Ling, Chengxing; Ju, Hongbo; Liu, Hua; Zhang, Huaiqing; Sun, Hua
2018-04-01
GF-2 is the remote sensing satellite with the highest spatial resolution in China's satellite development history. In this study, three traditional fusion methods, Brovey, Gram-Schmidt, and Color Normalized (CN), were compared with a newer fusion method, NNDiffuse, using qualitative assessment and quantitative fusion quality indices, including information entropy, variance, mean gradient, deviation index, and spectral correlation coefficient. The analysis shows that the NNDiffuse method performed best in both the qualitative and quantitative evaluations, making it more effective for subsequent remote sensing information extraction and for forest and wetland resource monitoring applications.
Vu, Kim-Nhien; Gilbert, Guillaume; Chalut, Marianne; Chagnon, Miguel; Chartrand, Gabriel; Tang, An
2016-05-01
To assess the agreement between published magnetic resonance imaging (MRI)-based regions of interest (ROI) sampling methods using liver mean proton density fat fraction (PDFF) as the reference standard. This retrospective, institutional review board-approved study was conducted in 35 patients with type 2 diabetes. Liver PDFF was measured by magnetic resonance spectroscopy (MRS) using a stimulated-echo acquisition mode sequence and by MRI using a multiecho spoiled gradient-recalled echo sequence at 3.0 T. ROI sampling methods reported in the literature were reproduced, and the liver mean PDFF obtained by whole-liver segmentation was used as the reference standard. Intraclass correlation coefficients (ICCs), Bland-Altman analysis, repeated-measures analysis of variance (ANOVA), and paired t-tests were performed. The ICC between MRS and MRI-PDFF was 0.916. Bland-Altman analysis showed excellent intermethod agreement, with a bias of -1.5 ± 2.8%. The repeated-measures ANOVA found no systematic variation of PDFF among the nine liver segments. The correlation between liver mean PDFF and ROI sampling methods was very good to excellent (0.873 to 0.975). Paired t-tests revealed significant differences (P < 0.05) with ROI sampling methods that exclusively or predominantly sampled the right lobe. Significant correlations with mean PDFF were found with sampling methods that included a higher number of segments, a total area equal to or larger than 5 cm², or sampled both lobes (P = 0.001, 0.023, and 0.002, respectively). MRI-PDFF quantification methods should sample each liver segment in both lobes and include a total surface area equal to or larger than 5 cm² to provide a close estimate of the liver mean PDFF. © 2015 Wiley Periodicals, Inc.
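The intraclass correlation reported above can be computed from a two-way ANOVA decomposition. Below is a generic ICC(2,1) (two-way random effects, absolute agreement, single measure) sketch on invented numbers, not the study's data:

```python
import numpy as np

def icc_2_1(x):
    """ICC(2,1): two-way random-effects, absolute-agreement, single-measure
    intraclass correlation for an n-subjects x k-methods matrix."""
    n, k = x.shape
    grand = x.mean()
    row_m = x.mean(axis=1)   # per-subject means
    col_m = x.mean(axis=0)   # per-method means
    msr = k * np.sum((row_m - grand) ** 2) / (n - 1)   # between-subject MS
    msc = n * np.sum((col_m - grand) ** 2) / (k - 1)   # between-method MS
    mse = np.sum((x - row_m[:, None] - col_m[None, :] + grand) ** 2) / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical paired PDFF readings (%) from two methods for six subjects.
m1 = np.array([2.0, 5.5, 9.1, 14.8, 21.0, 30.2])
m2 = m1 + 0.3   # second method with a small constant bias
icc = icc_2_1(np.column_stack([m1, m2]))
```

A small constant bias barely lowers ICC(2,1) when between-subject variance is large, which is why Bland-Altman analysis is reported alongside it.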
Shi, Qiong-bin; Zhao, Xiu-lan; Chang, Tong-ju; Lu, Ji-wen
2016-05-15
A long-term experiment was used to study the effects of tillage methods on the contents and distribution characteristics of organic matter and heavy metals (Cu, Zn, Pb, Cd, Fe and Mn) in aggregates of different sizes (1-2, 0.25-1, 0.05-0.25 mm and < 0.05 mm) in a purple paddy soil under two tillage methods: flooded paddy field (FPF) and paddy-upland rotation (PR). The relationship between heavy metals and organic matter in soil aggregates was also analyzed. The results showed that the aggregates under the two tillage methods were dominated by the 0.05-0.25 mm and < 0.05 mm particle sizes, respectively. The content of organic matter in each aggregate decreased with decreasing aggregate size; however, compared to PR, FPF significantly increased the contents of organic matter in soils and aggregates. The tillage methods did not significantly affect the contents of heavy metals in soils, but FPF enhanced the accumulation and distribution of aggregates, organic matter and heavy metals in aggregates with diameters of 1-2 mm and 0.25-1 mm. Correlation analysis found a negative correlation between the contents of heavy metals and organic matter in soil aggregates, but a positive correlation between the amounts of heavy metals and organic matter accumulated in soil aggregates. From the slopes of the correlation equations, the sensitivities of the heavy metals to changes in soil organic matter followed the order Mn > Zn > Pb > Cu > Fe > Cd under the same tillage. For a given heavy metal, sensitivity was higher under PR than under FPF.
ERIC Educational Resources Information Center
Smith, Polly; King, Richard
This packet contains four sets of lesson plans designed for the workplace curriculum for housekeeping employees at the Sheraton Anchorage Hotel (Anchorage, Alaska), as part of the Anchorage Workplace Literacy Program. The lesson plans, which are correlated with Laubach literacy method skills books levels 1-3, include conversation (dialogue,…
A Bayesian method for detecting pairwise associations in compositional data
Ventz, Steffen; Huttenhower, Curtis
2017-01-01
Compositional data consist of vectors of proportions normalized to a constant sum from a basis of unobserved counts. The sum constraint makes inference on correlations between unconstrained features challenging due to the information loss from normalization. However, such correlations are of long-standing interest in fields including ecology. We propose a novel Bayesian framework (BAnOCC: Bayesian Analysis of Compositional Covariance) to estimate a sparse precision matrix through a LASSO prior. The resulting posterior, generated by MCMC sampling, allows uncertainty quantification of any function of the precision matrix, including the correlation matrix. We also use a first-order Taylor expansion to approximate the transformation from the unobserved counts to the composition in order to investigate what characteristics of the unobserved counts can make the correlations more or less difficult to infer. On simulated datasets, we show that BAnOCC infers the true network as well as previous methods while offering the advantage of posterior inference. Larger and more realistic simulated datasets further showed that BAnOCC performs well as measured by type I and type II error rates. Finally, we apply BAnOCC to a microbial ecology dataset from the Human Microbiome Project, which in addition to reproducing established ecological results revealed unique, competition-based roles for Proteobacteria in multiple distinct habitats. PMID:29140991
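The information loss from normalization that BAnOCC contends with is easy to demonstrate: closing independent counts to a constant sum induces spurious negative correlation. A small sketch with synthetic counts (illustrative only, not the BAnOCC model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Independent "unobserved counts" for 3 features across 500 samples.
counts = rng.poisson(lam=[50, 50, 50], size=(500, 3)).astype(float)

# Raw counts are uncorrelated by construction.
raw_r = np.corrcoef(counts, rowvar=False)[0, 1]

# Closure: normalize each sample to proportions summing to 1.
comp = counts / counts.sum(axis=1, keepdims=True)
comp_r = np.corrcoef(comp, rowvar=False)[0, 1]

# comp_r is pushed strongly negative (about -1/(k-1) for k symmetric parts)
# even though the underlying counts are independent -- the artifact that makes
# correlation inference on compositions hard.
```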
Allowable SEM noise for unbiased LER measurement
NASA Astrophysics Data System (ADS)
Papavieros, George; Constantoudis, Vassilios; Gogolides, Evangelos
2018-03-01
Recently, a novel method for the calculation of unbiased line edge roughness (LER) based on power spectral density (PSD) analysis has been proposed. In this paper, an alternative method is first discussed and investigated, utilizing the height-height correlation function (HHCF) of edges. The HHCF-based method enables the unbiased determination of the whole triplet of LER parameters, including, besides the rms, the correlation length and the roughness exponent. The key to both methods is the sensitivity of the PSD and HHCF to noise at high frequencies and short distances, respectively. Secondly, we elaborate a testbed of synthesized SEM images with controlled LER and noise to justify the effectiveness of the proposed unbiased methods. Our main objective is to find the boundaries of the method with respect to noise levels and roughness characteristics for which the method remains reliable, i.e., the maximum amount of noise allowed for which the output results agree with the controlled, known inputs. At the same time, we also set the extremes of the roughness parameters for which the methods hold their accuracy.
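A minimal HHCF implementation is shown below on a synthetic smoothed-noise edge (an assumed stand-in for a real SEM edge profile). The plateau of G(d) at large d gives 2σ², while ξ and α would follow from fitting G(d) = 2σ²(1 − exp(−(d/ξ)^{2α})):

```python
import numpy as np

def hhcf(h, max_lag):
    """Height-height correlation function G(d) = <(h(x+d) - h(x))^2>."""
    return np.array([np.mean((h[d:] - h[:-d]) ** 2) for d in range(1, max_lag + 1)])

# Synthetic edge: Gaussian-smoothed white noise (illustrative, not SEM data).
rng = np.random.default_rng(1)
kernel = np.exp(-0.5 * (np.arange(-25, 26) / 8.0) ** 2)
edge = np.convolve(rng.normal(size=4096), kernel / kernel.sum(), mode="same")

g = hhcf(edge, 200)
sigma_rms = np.sqrt(g[-50:].mean() / 2.0)  # plateau of G(d) -> 2*sigma^2
```

On noisy edges, white noise adds a constant offset to G(d) at short distances, and it is this short-distance sensitivity that the unbiased variant exploits.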
The Challenges of Measuring Glycemic Variability
Rodbard, David
2012-01-01
This commentary reviews several of the challenges encountered when attempting to quantify glycemic variability and correlate it with risk of diabetes complications. These challenges include (1) immaturity of the field, including problems of data accuracy, precision, reliability, cost, and availability; (2) larger relative error in the estimates of glycemic variability than in the estimates of the mean glucose; (3) high correlation between glycemic variability and mean glucose level; (4) multiplicity of measures; (5) correlation of the multiple measures; (6) duplication or reinvention of methods; (7) confusion of measures of glycemic variability with measures of quality of glycemic control; (8) the problem of multiple comparisons when assessing relationships among multiple measures of variability and multiple clinical end points; and (9) differing needs for routine clinical practice and clinical research applications. PMID:22768904
NASA Technical Reports Server (NTRS)
Martin, J. M. L.; Lee, Timothy J.
1993-01-01
The protonation of N2O and the intramolecular proton transfer in N2OH(+) are studied using various basis sets and a variety of methods, including second-order many-body perturbation theory (MP2), singles and doubles coupled cluster (CCSD), CCSD augmented with a perturbative treatment of triple excitations (CCSD(T)), and complete active space self-consistent field (CASSCF) methods. For geometries, MP2 leads to serious errors even for HNNO(+); for the transition state, only CCSD(T) produces a reliable geometry, due to serious nondynamical correlation effects. The proton affinity at 298.15 K is estimated at 137.6 kcal/mol, in close agreement with recent experimental determinations of 137.3 +/- 1 kcal/mol.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harmon, S; Jeraj, R; Galavis, P
Purpose: Sensitivity of PET-derived texture features to reconstruction methods has been reported for features extracted from axial planes; however, studies often utilize three-dimensional techniques. This work aims to quantify the impact of multi-plane (3D) vs. single-plane (2D) feature extraction on radiomics-based analysis, including sensitivity to reconstruction parameters and potential loss of spatial information. Methods: Twenty-three patients with solid tumors underwent [18F]FDG PET/CT scans under identical protocols. PET data were reconstructed using five sets of reconstruction parameters. Tumors were segmented using an automatic, in-house algorithm robust to reconstruction variations. 50 texture features were extracted using two methods: 2D patches along axial planes and 3D patches. For each method, sensitivity of features to reconstruction parameters was calculated as percent difference relative to the average value across reconstructions. Correlations between feature values were compared when using 2D and 3D extraction. Results: 21/50 features showed significantly different sensitivity to reconstruction parameters when extracted in 2D vs. 3D (Wilcoxon, α < 0.05), assessed by the overall range of variation, Range_var (%). Eleven showed greater sensitivity to reconstruction in 2D extraction, primarily first-order and co-occurrence features (average Range_var increase 83%). The remaining ten showed higher variation in 3D extraction (average Range_var increase 27%), mainly co-occurrence and grey-level run-length features. Correlation between feature values extracted in 2D and in 3D was poor (R < 0.5) for 12/50 features, including eight co-occurrence features. Feature-to-feature correlations in 2D were marginally higher than in 3D, with |R| > 0.8 in 16% and 13% of all feature combinations, respectively. Larger sensitivity to reconstruction parameters was seen for inter-feature correlation in 2D (σ = 6%) than in 3D (σ < 1%) extraction.
Conclusion: Sensitivity and correlation of various texture features were shown to differ significantly between 2D and 3D extraction. Additionally, inter-feature correlations were more sensitive to reconstruction variation when using single-plane extraction. This work highlights a need for standardized feature extraction/selection techniques in radiomics.
Momtaz, Hossein-Emad; Dehghan, Arash; Karimian, Mohammad
2016-01-01
The use of a simple and accurate glomerular filtration rate (GFR) estimating method allowing close assessment of renal function can be of great clinical importance. This study aimed to determine the association between a GFR estimated by an equation that includes only cystatin C (Gentian equation) and one estimated by an equation that includes only creatinine (Schwartz equation) among children. A total of 31 children aged from 1 day to 5 years with a final diagnosis of unilateral or bilateral hydronephrosis, referred to Besat hospital in Hamadan between March 2010 and February 2011, were consecutively enrolled. The Schwartz and Gentian equations were employed to determine GFR based on plasma creatinine and cystatin C levels, respectively. The mean GFR based on the Schwartz equation was 70.19 ± 24.86 ml/min/1.73 m², while that based on the Gentian method using cystatin C was 86.97 ± 21.57 ml/min/1.73 m². Pearson correlation analysis showed a strong direct association between the two GFR values measured by the Schwartz equation based on serum creatinine and by the Gentian method using cystatin C (r = 0.594, P < 0.001). The linear association between the GFR values measured with the two methods was: cystatin C-based GFR = 50.8 + 0.515 × Schwartz GFR. The correlation between GFR values measured using serum creatinine and serum cystatin C remained significant even after adjustment for patients' gender and age (r = 0.724, P < 0.001). The equation based on cystatin C level is thus comparable with the serum creatinine-based equation (Schwartz formula) for estimating GFR in children.
Lin, Lixin; Wang, Yunjia; Teng, Jiyao; Xi, Xiuxiu
2015-01-01
The measurement of soil total nitrogen (TN) by hyperspectral remote sensing provides an important tool for soil restoration programs in areas with subsided land caused by the extraction of natural resources. This study used the local correlation maximization-complementary superiority method (LCMCS) to establish TN prediction models by considering the relationship between spectral reflectance (measured by an ASD FieldSpec 3 spectroradiometer) and TN, based on spectral reflectance curves of soil samples collected from subsided land delineated by synthetic aperture radar interferometry (InSAR) technology. Based on the 1655 selected effective bands of the optimal spectrum (OSP) of the first derivative of the reciprocal logarithm ([log(1/R)]′) (correlation coefficients, p < 0.01), the optimal LCMCS model was obtained as the final model, which produced lower prediction errors (root mean square error of validation [RMSEV] = 0.89, mean relative error of validation [MREV] = 5.93%) when compared with models built by the local correlation maximization (LCM), complementary superiority (CS) and partial least squares regression (PLS) methods. The predictive performance of the LCMCS model was optimal in the Cangzhou, Renqiu and Fengfeng districts. Results indicate that the LCMCS method has great potential to monitor TN in subsided lands caused by the extraction of natural resources, including groundwater, oil and coal. PMID:26213935
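The two validation metrics quoted (RMSEV and MREV) are standard; a sketch with hypothetical TN values, not the study's data:

```python
import numpy as np

def rmse(pred, obs):
    """Root mean square error of validation (RMSEV)."""
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    return np.sqrt(np.mean((pred - obs) ** 2))

def mre_percent(pred, obs):
    """Mean relative error of validation (MREV), in percent."""
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    return 100.0 * np.mean(np.abs(pred - obs) / np.abs(obs))

# Hypothetical TN values (g/kg), for illustration only.
obs = [1.20, 0.95, 1.40, 1.10]
pred = [1.10, 1.00, 1.35, 1.20]
rmsev = rmse(pred, obs)
mrev = mre_percent(pred, obs)
```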
Woods, Lucy A; Dolezal, Olan; Ren, Bin; Ryan, John H; Peat, Thomas S; Poulsen, Sally-Ann
2016-03-10
Fragment-based drug discovery (FBDD) is contingent on the development of analytical methods to identify weak protein-fragment noncovalent interactions. Herein we have combined an underutilized fragment screening method, native state mass spectrometry, together with two proven and popular fragment screening methods, surface plasmon resonance and X-ray crystallography, in a fragment screening campaign against human carbonic anhydrase II (CA II). In an initial fragment screen against a 720-member fragment library (the "CSIRO Fragment Library") seven CA II binding fragments, including a selection of nonclassical CA II binding chemotypes, were identified. A further 70 compounds that comprised the initial hit chemotypes were subsequently sourced from the full CSIRO compound collection and screened. The fragment results were extremely well correlated across the three methods. Our findings demonstrate that there is a tremendous opportunity to apply native state mass spectrometry as a complementary fragment screening method to accelerate drug discovery.
Ground-Cover Measurements: Assessing Correlation Among Aerial and Ground-Based Methods
NASA Astrophysics Data System (ADS)
Booth, D. Terrance; Cox, Samuel E.; Meikle, Tim; Zuuring, Hans R.
2008-12-01
Wyoming's Green Mountain Common Allotment is public land providing livestock forage, wildlife habitat, and unfenced solitude, among other ecological services. It is also the center of ongoing debate over the USDI Bureau of Land Management's (BLM) adjudication of land uses. Monitoring resource use is a BLM responsibility, but conventional monitoring is inadequate for the vast areas encompassed in this and other public-land units. New monitoring methods are needed that will reduce monitoring costs, as is an understanding of data-set relationships among old and new methods. This study compared two conventional methods with two remote sensing methods using images captured from two meters and 100 meters above ground level from a camera stand (a ground, image-based method) and a light airplane (an aerial, image-based method). Image analysis used SamplePoint or VegMeasure software. Aerial methods allowed for increased sampling intensity at low cost relative to the time and travel required by ground methods. Costs to acquire the aerial imagery and measure ground cover on 162 aerial samples representing 9000 ha were less than $3000. The four highest correlations among data sets for bare ground—the ground-cover characteristic yielding the highest correlations (r)—ranged from 0.76 to 0.85 and included ground with ground, ground with aerial, and aerial with aerial data-set associations. We conclude that our aerial surveys are a cost-effective monitoring method, that ground with aerial data-set correlations can be equal to or greater than those among ground-based data sets, and that bare ground should continue to be investigated and tested for use as a key indicator of rangeland health.
NASA Technical Reports Server (NTRS)
Taylor, Peter R.; Lee, Timothy J.; Rendell, Alistair P.
1990-01-01
The recently proposed quadratic configuration interaction (QCI) method is compared with the more rigorous coupled cluster (CC) approach for a variety of chemical systems, some of which are well represented by a single-determinant reference function and others not. The finite-order singles and doubles correlation energy, the perturbational triples correlation energy, and a recently devised diagnostic for estimating the importance of multireference effects are considered. The spectroscopic constants of CuH, the equilibrium structure of cis-(NO)2, and the binding energies of Be3, Be4, Mg3, and Mg4 were calculated using both approaches. The diagnostic for estimating multireference character clearly demonstrates that the QCI method becomes less satisfactory than the CC approach as non-dynamical correlation becomes more important, in agreement with a perturbational analysis of the two methods and the numerical estimates of the triple excitation energies they yield. The results for CuH show that the differences between the two methods become more apparent as the chemical system under investigation becomes more multireference in nature, and the QCI results consequently become less reliable. Nonetheless, when the system of interest is dominated by a single reference determinant, both QCI and CC give very similar results.
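The "recently devised diagnostic" is the T1 diagnostic of Lee and Taylor: the Euclidean norm of the CCSD single-excitation amplitudes divided by the square root of the number of correlated electrons, with values much above roughly 0.02 flagging significant multireference character. A sketch on a fabricated amplitude vector, not output from a real calculation:

```python
import numpy as np

def t1_diagnostic(t1_amplitudes, n_correlated_electrons):
    """T1 diagnostic = ||t1|| / sqrt(N), N = number of correlated electrons.
    Large values (>~0.02) suggest a single-reference treatment is suspect."""
    t1 = np.asarray(t1_amplitudes, float)
    return np.linalg.norm(t1) / np.sqrt(n_correlated_electrons)

# Fabricated amplitudes: 100 entries of 0.002 -> ||t1|| = 0.02.
amps = np.full(100, 0.002)
d = t1_diagnostic(amps, 10)
```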
A correlated meta-analysis strategy for data mining "OMIC" scans.
Province, Michael A; Borecki, Ingrid B
2013-01-01
Meta-analysis is becoming an increasingly popular and powerful tool to integrate findings across studies and OMIC dimensions. But there is the danger that hidden dependencies between putatively "independent" studies can cause inflation of type I error, due to reinforcement of the evidence from false-positive findings. We present here a simple method for conducting meta-analyses that automatically estimates the degree of any such non-independence between OMIC scans and corrects the inference for it, retaining the proper type I error structure. The method does not require the original data from the source studies, but operates only on summary analysis results from their OMIC scans. The method is applicable in a wide variety of situations, including combining GWAS and/or sequencing scan results across studies with dependencies due to overlapping subjects, as well as scans of correlated traits in a meta-analysis scan for pleiotropic genetic effects. The method correctly detects when scans are actually independent, in which case it reduces to the traditional meta-analysis, so it may safely be used whenever there is even a suspicion of correlation amongst scans.
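The essential idea can be sketched as follows: estimate the between-scan correlation from the scan-wide summary z-scores themselves (most features are null), then replace the independence variance of the combined statistic with 1'R1. This is a schematic of the strategy, not the authors' exact estimator:

```python
import numpy as np

def correlated_meta_z(z_matrix):
    """Combine per-study z-scores (features x studies), correcting for hidden
    dependence: Var(sum z_i) = 1' R 1 with R estimated scan-wide, instead of
    assuming independence."""
    R = np.corrcoef(z_matrix, rowvar=False)   # estimated study-study correlation
    return z_matrix.sum(axis=1) / np.sqrt(R.sum())

# Two "studies" with overlapping subjects, simulated as correlated null z-scores.
rng = np.random.default_rng(2)
shared = rng.normal(size=20000)
z1 = np.sqrt(0.5) * shared + np.sqrt(0.5) * rng.normal(size=20000)
z2 = np.sqrt(0.5) * shared + np.sqrt(0.5) * rng.normal(size=20000)
z = np.column_stack([z1, z2])

z_meta = correlated_meta_z(z)           # corrected: unit variance under the null
naive = z.sum(axis=1) / np.sqrt(2)      # naive: inflated by sqrt(1 + r), r ~ 0.5
```

With r ≈ 0.5 between scans, the naive combined z has standard deviation near 1.22 under the null, which is exactly the type I error inflation the correction removes.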
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neilson, Hilding R.; Lester, John B.; Baron, Fabien
2016-10-20
One of the great challenges of understanding stars is measuring their masses. The best methods for measuring stellar masses include binary interaction, asteroseismology, and stellar evolution models, but these methods are not ideal for red giant and supergiant stars. In this work, we propose a novel method for inferring stellar masses of evolved red giant and supergiant stars using interferometric and spectrophotometric observations combined with spherical model stellar atmospheres to measure what we call the stellar mass index, defined as the ratio between the stellar radius and mass. The method is based on the correlation between different measurements of angular diameter, used as a proxy for atmospheric extension, and fundamental stellar parameters. For a given star, spectrophotometry measures the Rosseland angular diameter while interferometric observations generally probe a larger limb-darkened angular diameter. The ratio of these two angular diameters is proportional to the relative extension of the stellar atmosphere, which is strongly correlated to the star's effective temperature, radius, and mass. We show that these correlations are strong and can lead to precise measurements of stellar masses.
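The inversion step can be sketched as below. The linear calibration tying the diameter-ratio extension proxy to the mass index is an invented placeholder; the paper derives the actual correlations from spherical model atmospheres.

```python
def mass_from_extension(theta_ld_mas, theta_ross_mas, radius_rsun, a=0.0, b=1650.0):
    """Infer mass (Msun) from the limb-darkened/Rosseland angular-diameter ratio.
    a, b are HYPOTHETICAL calibration coefficients mapping the relative
    atmospheric extension to the stellar mass index R/M (Rsun/Msun)."""
    eps = theta_ld_mas / theta_ross_mas - 1.0   # relative extension proxy
    mass_index = a + b * eps                    # R/M in solar units (assumed linear law)
    return radius_rsun / mass_index

# Example: a 2% diameter excess for an assumed 500 Rsun supergiant
# maps, under the placeholder calibration, to a mass index of 33 Rsun/Msun.
m = mass_from_extension(10.2, 10.0, 500.0)
```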
Boron Content of Some Foods Consumed in Istanbul, Turkey.
Kuru, Ruya; Yilmaz, Sahin; Tasli, Pakize Neslihan; Yarat, Aysen; Sahin, Fikrettin
2018-04-14
The boron content was determined in 42 different foods consumed in Istanbul, Turkey. Eleven species of fruit, ten species of vegetable, eight species of food of animal origin, four species of grain, two species of nuts, two species of legume, and five other kinds of foods were included in this study. They were analyzed by two methods, the inductively coupled plasma mass spectrometry (ICP-MS) technique and the carminic acid assay, and the results of the two methods were compared. Boron concentration in foods ranged from 0.06 to 37.2 mg/kg. Nuts had the highest boron content while foods of animal origin had the lowest. A strong correlation was found between the results of the carminic acid assay and the ICP-MS technique (p = 0.0001, Pearson correlation coefficient: r = 0.956). Bland-Altman analysis also supported this correlation. ICP-MS is one of the most common, reliable, and powerful methods for boron determination. The results of our study show that the spectrophotometric carminic acid assay can provide similar results to ICP-MS, and the boron content in food materials can also be determined by this spectrophotometric method.
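The two statistics used to compare the assays, Pearson correlation and Bland-Altman agreement, can be sketched on hypothetical paired readings (illustrative values, not the study's data):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two paired measurement series."""
    return np.corrcoef(np.asarray(x, float), np.asarray(y, float))[0, 1]

def bland_altman(x, y):
    """Bias and 95% limits of agreement between two measurement methods."""
    diff = np.asarray(x, float) - np.asarray(y, float)
    bias, sd = diff.mean(), diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired boron readings (mg/kg).
icp = [0.5, 2.1, 8.4, 15.0, 37.2]
carmine = [0.6, 2.0, 8.0, 15.5, 36.0]

r = pearson_r(icp, carmine)
bias, loa = bland_altman(icp, carmine)
```

High correlation alone does not establish agreement (a constant offset preserves r = 1), which is why the Bland-Altman bias and limits are reported alongside it.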
A geostatistical state-space model of animal densities for stream networks.
Hocking, Daniel J; Thorson, James T; O'Neil, Kyle; Letcher, Benjamin H
2018-06-21
Population dynamics are often correlated in space and time due to correlations in environmental drivers as well as synchrony induced by individual dispersal. Many statistical analyses of populations ignore potential autocorrelations and assume that survey methods (distance and time between samples) eliminate these correlations, allowing samples to be treated independently. If these assumptions are incorrect, results and therefore inference may be biased and uncertainty underestimated. We developed a novel statistical method to account for spatio-temporal correlations within dendritic stream networks, while accounting for imperfect detection in the surveys. Through simulations, we found this model decreased predictive error relative to standard statistical methods when data were spatially correlated based on stream distance, and performed similarly when data were not correlated. We found that increasing the number of years surveyed substantially improved the model's accuracy when estimating spatial and temporal correlation coefficients, especially from 10 to 15 years. Increasing the number of survey sites within the network improved the performance of the non-spatial model but only marginally improved the density estimates in the spatio-temporal model. We applied this model to Brook Trout data from the West Susquehanna Watershed in Pennsylvania collected over 34 years, from 1981 to 2014. We found that the model including temporal and spatio-temporal autocorrelation best described young-of-the-year (YOY) and adult density patterns. YOY densities were positively related to forest cover and negatively related to spring temperatures, with low temporal autocorrelation and moderately high spatio-temporal correlation. Adult densities were less strongly affected by climatic conditions and less temporally variable than YOY, but with similar spatio-temporal correlation and higher temporal autocorrelation. This article is protected by copyright. All rights reserved.
NASA Astrophysics Data System (ADS)
Huang, Xinchuan; Valeev, Edward F.; Lee, Timothy J.
2010-12-01
One-particle basis set extrapolation is compared with one of the new R12 methods for computing highly accurate quartic force fields (QFFs) and spectroscopic data, including molecular structures, rotational constants, and vibrational frequencies for the H2O, N2H+, NO2+, and C2H2 molecules. In general, agreement between the spectroscopic data computed from the best R12 and basis set extrapolation methods is very good with the exception of a few parameters for N2H+ where it is concluded that basis set extrapolation is still preferred. The differences for H2O and NO2+ are small and it is concluded that the QFFs from both approaches are more or less equivalent in accuracy. For C2H2, however, a known one-particle basis set deficiency for C-C multiple bonds significantly degrades the quality of results obtained from basis set extrapolation and in this case the R12 approach is clearly preferred over one-particle basis set extrapolation. The R12 approach used in the present study was modified in order to obtain high precision electronic energies, which are needed when computing a QFF. We also investigated including core-correlation explicitly in the R12 calculations, but conclude that current approaches are lacking. Hence core-correlation is computed as a correction using conventional methods. Considering the results for all four molecules, it is concluded that R12 methods will soon replace basis set extrapolation approaches for high accuracy electronic structure applications such as computing QFFs and spectroscopic data for comparison to high-resolution laboratory or astronomical observations, provided one uses a robust R12 method as we have done here. The specific R12 method used in the present study, CCSD(T)R12, incorporated a reformulation of one intermediate matrix in order to attain machine precision in the electronic energies. 
Final QFFs for N2H+ and NO2+ were computed, including basis set extrapolation, core-correlation, scalar relativity, and higher-order correlation and then used to compute highly accurate spectroscopic data for all isotopologues. Agreement with high-resolution experiment for 14N2H+ and 14N2D+ was excellent, but for 14N16O2+ agreement for the two stretching fundamentals is outside the expected residual uncertainty in the theoretical values, and it is concluded that there is an error in the experimental quantities. It is hoped that the highly accurate spectroscopic data presented for the minor isotopologues of N2H+ and NO2+ will be useful in the interpretation of future laboratory or astronomical observations.
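One widely used recipe behind one-particle basis set extrapolation of this kind is the two-point inverse-cubic formula for correlation energies. The sketch below is a generic illustration, not the authors' specific extrapolation scheme, and the sample triple-/quadruple-zeta energies are invented:

```python
def cbs_two_point(e_x, e_y, x, y):
    """Two-point complete-basis-set (CBS) extrapolation, assuming the
    correlation energy converges as E(X) = E_CBS + A / X**3, where X is
    the basis-set cardinal number (3 for triple-zeta, 4 for quadruple-zeta)."""
    return (x ** 3 * e_x - y ** 3 * e_y) / (x ** 3 - y ** 3)

# Invented triple-/quadruple-zeta correlation energies (hartree):
e_tz, e_qz = -0.2815, -0.2922
e_cbs = cbs_two_point(e_tz, e_qz, 3, 4)   # lies below e_qz, toward the basis-set limit
```

With these invented numbers the extrapolated value comes out near -0.3000 hartree, slightly below the quadruple-zeta energy, as expected for a convergent series.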
NASA Astrophysics Data System (ADS)
Cremer, Dieter
The electron correlation effects covered by density functional theory (DFT) can be assessed qualitatively by comparing DFT densities ρ(r) with suitable reference densities obtained with wavefunction theory (WFT) methods that cover typical electron correlation effects. The analysis of difference densities ρ(DFT)-ρ(WFT) reveals that LDA and GGA exchange (X) functionals mimic non-dynamic correlation effects in an unspecified way. It is shown that these long range correlation effects are caused by the self-interaction error (SIE) of standard X functionals. Self-interaction corrected (SIC) DFT exchange gives, similar to exact exchange, for the bonding region a delocalized exchange hole, and does not cover any correlation effects. Hence, the exchange SIE is responsible for the fact that DFT densities often resemble MP4 or MP2 densities. The correlation functional changes X-only DFT densities in a manner observed when higher order coupling effects between lower order N-electron correlation effects are included. Hybrid functionals lead to changes in the density similar to those caused by SICDFT, which simply reflects the fact that hybrid functionals have been developed to cover part of the SIE and its long range correlation effects in a balanced manner. In the case of spin-unrestricted DFT (UDFT), non-dynamic electron correlation effects enter the calculation both via the X functional and via the wavefunction, which may cause a double-counting of correlation effects. The use of UDFT in the form of permuted orbital and broken-symmetry DFT (PO-UDFT, BS-UDFT) can lead to reasonable descriptions of multireference systems provided certain conditions are fulfilled. More reliable, however, is a combination of DFT and WFT methods, which makes the routine description of multireference systems possible. The development of such methods implies a separation of dynamic and non-dynamic correlation effects. 
Strategies for accomplishing this goal are discussed in general and tested in practice for CAS (complete active space)-DFT.
Strongly correlated materials.
Morosan, Emilia; Natelson, Douglas; Nevidomskyy, Andriy H; Si, Qimiao
2012-09-18
Strongly correlated materials are profoundly affected by the repulsive electron-electron interaction. This stands in contrast to many commonly used materials such as silicon and aluminum, whose properties are comparatively unaffected by the Coulomb repulsion. Correlated materials often have remarkable properties and transitions between distinct, competing phases with dramatically different electronic and magnetic orders. These rich phenomena are fascinating from the basic science perspective and offer possibilities for technological applications. This article looks at these materials through the lens of research performed at Rice University. Topics examined include: Quantum phase transitions and quantum criticality in "heavy fermion" materials and the iron pnictide high temperature superconductors; computational ab initio methods to examine strongly correlated materials and their interface with analytical theory techniques; layered dichalcogenides as example correlated materials with rich phases (charge density waves, superconductivity, hard ferromagnetism) that may be tuned by composition, pressure, and magnetic field; and nanostructure methods applied to the correlated oxides VO₂ and Fe₃O₄, where metal-insulator transitions can be manipulated by doping at the nanoscale or driving the system out of equilibrium. We conclude with a discussion of the exciting prospects for this class of materials. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Hafdahl, Adam R; Williams, Michelle A
2009-03-01
In 2 Monte Carlo studies of fixed- and random-effects meta-analysis for correlations, A. P. Field (2001) ostensibly evaluated Hedges-Olkin-Vevea Fisher-z and Schmidt-Hunter Pearson-r estimators and tests in 120 conditions. Some authors have cited those results as evidence not to meta-analyze Fisher-z correlations, especially with heterogeneous correlation parameters. The present attempt to replicate Field's simulations included comparisons with analytic values as well as results for efficiency and confidence-interval coverage. Field's results under homogeneity were mostly replicable, but those under heterogeneity were not: The latter exhibited up to over .17 more bias than ours and, for tests of the mean correlation and homogeneity, respectively, nonnull rejection rates up to .60 lower and .65 higher. Changes to Field's observations and conclusions are recommended, and practical guidance is offered regarding simulation evidence and choices among methods. Most cautions about poor performance of Fisher-z methods are largely unfounded, especially with a more appropriate z-to-r transformation. The Appendix gives a computer program for obtaining Pearson-r moments from a normal Fisher-z distribution, which is used to demonstrate distortion due to direct z-to-r transformation of a mean Fisher-z correlation.
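For readers unfamiliar with the Fisher-z approach being evaluated, a minimal fixed-effect sketch looks like the following. This is the generic textbook recipe, not the authors' simulation code, and the correlations and sample sizes are invented:

```python
import math

def meta_fisher_z(rs, ns):
    """Fixed-effect meta-analysis of Pearson correlations via Fisher's z:
    transform each r with atanh, weight by the inverse variance (n - 3),
    then back-transform the pooled mean and its 95% CI with tanh."""
    zs = [math.atanh(r) for r in rs]
    ws = [n - 3 for n in ns]                      # var(z) ~= 1/(n - 3)
    zbar = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
    se = math.sqrt(1.0 / sum(ws))
    return math.tanh(zbar), (math.tanh(zbar - 1.96 * se),
                             math.tanh(zbar + 1.96 * se))

r_mean, ci = meta_fisher_z([0.30, 0.45, 0.38], [40, 60, 100])
```

The back-transform at the end is the direct z-to-r step whose distortion the Appendix program quantifies; pooling happens entirely on the z scale, where the sampling distribution is approximately normal.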
Hseu, Zeng-Yei; Zehetner, Franz
2014-01-01
This study compared the extractability of Cd, Cu, Ni, Pb, and Zn by 8 extraction protocols for 22 representative rural soils in Taiwan and correlated the extractable amounts of the metals with their uptake by Chinese cabbage for developing an empirical model to predict metal phytoavailability based on soil properties. Chemical agents in these protocols included dilute acids, neutral salts, and chelating agents, in addition to water and the Rhizon soil solution sampler. The highest concentrations of extractable metals were observed in the HCl extraction and the lowest in the Rhizon sampling method. The linear correlation coefficients between extractable metals in soil pools and metals in shoots were higher than those in roots. Correlations between extractable metal concentrations and soil properties were variable; soil pH, clay content, total metal content, and extractable metal concentration were considered together to simulate their combined effects on crop uptake by an empirical model. This combination improved the correlations to different extents for different extraction methods, particularly for Pb, for which the extractable amounts with any extraction protocol did not correlate with crop uptake by simple correlation analysis. PMID:25295297
Correlation of the Summary Method with Learning Styles
ERIC Educational Resources Information Center
Sarikcioglu, Levent; Senol, Yesim; Yildirim, Fatos B.; Hizay, Arzu
2011-01-01
The summary is the last part of the lesson but one of the most important. We aimed to study the relationship between the preference of the summary method (video demonstration, question-answer, or brief review of slides) and learning styles. A total of 131 students were included in the present study. An inventory was prepared to understand the…
Computerized assessment of placental calcification post-ultrasound: a novel software tool.
Moran, M; Higgins, M; Zombori, G; Ryan, J; McAuliffe, F M
2013-05-01
Placental calcification is associated with an increased risk of perinatal morbidity and mortality. The subjectivity of current ultrasound methods of assessment of placental calcification indicates that a more objective method is required. The aim of this study was to correlate the percentage of calcification defined by the clinician using a new software tool for calculating the extent of placental calcification with traditional ultrasound methods and with pregnancy outcome. Ninety placental images were individually assessed. An upper threshold was defined, based on high intensity, to quantify calcification within the placenta. Output metrics were then produced including the overall percentage of calcification with respect to the total number of pixels within the region of interest. The results were correlated with traditional ultrasound methods of assessment of placental calcification and with pregnancy outcome. The results demonstrate a significant correlation between placental calcification, as defined using the software, and traditional methods of Grannum grading of placental calcification. Whilst correlation with perinatal outcome and cord pH was not significant as a result of small numbers, patients with placental calcification assessed using the computerized software at the upper quartile had higher rates of poor perinatal outcome when compared with those at the lower quartile (8/22 (36%) vs 3/23 (13%); P = 0.069). These results suggest that this computerized software tool has the potential to become an alternative method of assessing placental calcification. Copyright © 2012 ISUOG. Published by John Wiley & Sons Ltd.
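The core output metric described, the percentage of high-intensity pixels within a region of interest, can be sketched in a few lines. This is an illustration of the idea, not the study's software, and the threshold here is arbitrary:

```python
import numpy as np

def percent_calcified(image, roi_mask, threshold):
    """Percentage of pixels within the region of interest whose intensity
    exceeds an upper threshold, used as a proxy for calcified area."""
    roi = image[roi_mask]
    return 100.0 * np.count_nonzero(roi > threshold) / roi.size
```

In practice the clinician-defined upper threshold would be chosen per image; the function simply counts supra-threshold pixels relative to all pixels in the ROI.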
Electrical condition monitoring method for polymers
Watkins, Jr., Kenneth S.; Morris, Shelby J [Hampton, VA; Masakowski, Daniel D [Worcester, MA; Wong, Ching Ping [Duluth, GA; Luo, Shijian [Boise, ID
2008-08-19
An electrical condition monitoring method utilizes measurement of electrical resistivity of an age sensor made of a conductive matrix or composite disposed in a polymeric structure such as an electrical cable. The conductive matrix comprises a base polymer and conductive filler. The method includes communicating the resistivity to a measuring instrument and correlating resistivity of the conductive matrix of the polymeric structure with resistivity of an accelerated-aged conductive composite.
Hybrid Theory of P-Wave Electron-Hydrogen Elastic Scattering
NASA Technical Reports Server (NTRS)
Bhatia, Anand
2012-01-01
We report on a study of electron-hydrogen scattering, using a combination of a modified method of polarized orbitals and the optical potential formalism. The calculation is restricted to P waves in the elastic region, where the correlation functions are of Hylleraas type. It is found that the phase shifts are not significantly affected by the modification of the target function by a method similar to the method of polarized orbitals, and they are close to the phase shifts calculated earlier by Bhatia. This indicates that the correlation function is general enough to include the target distortion (polarization) in the presence of the incident electron. The important point is that the present calculation requires only a 35-term correlation function in the wave function, compared to the 220-term wave function required in the above-mentioned previous calculation. Results for the phase shifts, obtained in the present hybrid formalism, are rigorous lower bounds to the exact phase shifts.
Efficient calculation of beyond RPA correlation energies in the dielectric matrix formalism
NASA Astrophysics Data System (ADS)
Beuerle, Matthias; Graf, Daniel; Schurkus, Henry F.; Ochsenfeld, Christian
2018-05-01
We present efficient methods to calculate beyond random phase approximation (RPA) correlation energies for molecular systems with up to 500 atoms. To reduce the computational cost, we employ the resolution-of-the-identity and a double-Laplace transform of the non-interacting polarization propagator in conjunction with an atomic orbital formalism. Further improvements are achieved using integral screening and the introduction of Cholesky decomposed densities. Our methods are applicable to the dielectric matrix formalism of RPA including second-order screened exchange (RPA-SOSEX), the RPA electron-hole time-dependent Hartree-Fock (RPA-eh-TDHF) approximation, and RPA renormalized perturbation theory using an approximate exchange kernel (RPA-AXK). We give an application of our methodology by presenting RPA-SOSEX benchmark results for the L7 test set of large, dispersion dominated molecules, yielding a mean absolute error below 1 kcal/mol. The present work enables calculating beyond RPA correlation energies for significantly larger molecules than possible to date, thereby extending the applicability of these methods to a wider range of chemical systems.
R package to estimate intracluster correlation coefficient with confidence interval for binary data.
Chakraborty, Hrishikesh; Hossain, Akhtar
2018-03-01
The Intracluster Correlation Coefficient (ICC) is a major parameter of interest in cluster randomized trials that measures the degree to which responses within the same cluster are correlated. Several types of ICC estimators and confidence intervals (CIs) have been suggested in the literature for binary data. Studies have compared the relative weaknesses and advantages of ICC estimators and their CIs for binary data and identified situations where one is advantageous in practical research. Commonly used statistical computing systems currently facilitate estimation of only a few variants of the ICC and its CI. To address the limitations of current statistical packages, we developed an R package, ICCbin, to facilitate estimating the ICC and its CI for binary responses using different methods. The ICCbin package is designed to provide estimates of the ICC in 16 different ways, including analysis of variance methods, moments-based estimation, direct probabilistic methods, correlation-based estimation, and resampling methods. The CI of the ICC is estimated using 5 different methods. The package also generates clustered binary data with an exchangeable correlation structure. ICCbin provides two functions for users: rcbin() generates clustered binary data, and iccbin() estimates the ICC and its CI. Users can choose the appropriate ICC and CI estimates from the wide selection in the outputs. The R package ICCbin presents a very flexible and easy-to-use way to generate clustered binary data and to estimate the ICC and its CI for binary responses using different methods. The package is freely available for use with R from the CRAN repository (https://cran.r-project.org/package=ICCbin). We believe that this package can be a very useful tool for researchers designing cluster randomized trials with binary outcomes.
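As an illustration of one estimator family the package covers, the classic one-way ANOVA estimator of the ICC can be written directly from cluster-level sums. This Python sketch mirrors the standard formula and is not the ICCbin implementation itself:

```python
def icc_anova(clusters):
    """One-way ANOVA estimator of the intracluster correlation coefficient:
    ICC = (MSB - MSW) / (MSB + (n0 - 1) * MSW), where n0 is the usual
    adjusted average cluster size. `clusters` is a list of lists of 0/1."""
    k = len(clusters)
    N = sum(len(c) for c in clusters)
    grand = sum(map(sum, clusters)) / N
    ssb = sum(len(c) * (sum(c) / len(c) - grand) ** 2 for c in clusters)
    ssw = sum(sum((x - sum(c) / len(c)) ** 2 for x in c) for c in clusters)
    msb, msw = ssb / (k - 1), ssw / (N - k)
    n0 = (N - sum(len(c) ** 2 for c in clusters) / N) / (k - 1)
    return (msb - msw) / (msb + (n0 - 1) * msw)
```

When all within-cluster responses agree the estimator returns 1; like other ANOVA-type estimators it can go slightly negative when clusters are internally more variable than the data overall.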
McLeod, Jessica; Chen, Tzu-An; Nicklas, Theresa A.; Baranowski, Tom
2013-01-01
Background: Television viewing is an important modifiable risk factor for childhood obesity. However, valid methods for measuring children's TV viewing are sparse, and few studies have included Latinos, a population disproportionately affected by obesity. The goal of this study was to test the reliability and convergent validity of four TV viewing measures among low-income Latino preschool children in the United States.
Methods: Latino children (n=96) ages 3–5 years old were recruited from four Head Start centers in Houston, Texas (January, 2009, to June, 2010). TV viewing was measured concurrently over 7 days by four methods: (1) TV diaries (parent reported), (2) sedentary time (accelerometry), (3) TV Allowance (an electronic TV power meter), and (4) Ecological Momentary Assessment (EMA) on personal digital assistants (parent reported). This 7-day procedure was repeated 3–4 weeks later. Test–retest reliability was determined by intraclass correlations (ICC). Spearman correlations (due to nonnormal distributions) were used to determine convergent validity compared to the TV diary.
Results: The TV diary had the highest test–retest reliability (ICC=0.82, p<0.001), followed by the TV Allowance (ICC=0.69, p<0.001), EMA (ICC=0.46, p<0.001), and accelerometry (ICC=0.36–0.38, p<0.01). The TV Allowance (r=0.45–0.55, p<0.001) and EMA (r=0.47–0.51, p<0.001) methods were significantly correlated with TV diaries. Accelerometer-determined sedentary minutes were not correlated with TV diaries. The TV Allowance and EMA methods were significantly correlated with each other (r=0.48–0.53, p<0.001).
Conclusions: The TV diary is feasible and is the most reliable method for measuring US Latino preschool children's TV viewing. PMID:23270534
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mirrione, M.M.; Schulz, D.
2009-12-06
The learned helplessness paradigm has been repeatedly shown to correlate with neurobiological aspects of depression in humans. In this model, rodents are exposed to inescapable foot-shock in order to reveal susceptibility to an escape deficit, defined as 'learned helplessness' (LH). Few methods are available to probe the neurobiological aspects underlying the differences in susceptibility in the living animal, thus far being limited to studies examining regional neurochemical changes with microdialysis. With the widespread implementation of small animal neuroimaging methods, including positron emission tomography (PET), it is now possible to explore the living brain on a systems level to define regional changes that may correlate with vulnerability to stress. In this study, 12 wild type Sprague-Dawley rats were exposed to 40 minutes of inescapable foot-shock followed by metabolic imaging using 2-deoxy-2-[18F]fluoro-D-glucose (18-FDG) 1 hour later. The escape test was performed on these rats 48 hours later (to accommodate radiotracer decay), where they were given the opportunity to press a lever to shut off the shock. A region of interest (ROI) analysis was used to investigate potential correlations (Pearson regression coefficients) between regional 18-FDG uptake following inescapable shock and subsequent learned helpless behavior (time to finish the test; number of successful lever presses within 20 seconds of shock onset). ROI analysis revealed a significant positive correlation between time to finish and 18-FDG uptake, and a negative correlation between lever presses and uptake, in the medial thalamic area (p=0.033, p=0.036). This ROI included the paraventricular thalamus, mediodorsal thalamus, and the habenula. In an effort to account for possible spillover artifact, the posterior thalamic area (including ventral medial and lateral portions) was also evaluated but did not reveal significant correlations (p=0.870, p=0.897).
No other significant correlations were found in additional regions analyzed including the nucleus accumbens, caudate putamen, substantia nigra, and amygdala. These data suggest that medial thalamic 18-FDG uptake during inescapable shock may contribute to subsequent escape deficits, and are not confounded by shock effects per se, since all animals received the same treatment prior to scanning. We have previously explored 18-FDG differences following the escape test session which also showed hyperactivity in the medial thalamus of learned helpless animals compared to non-learned helpless, and included additional cortical-limbic changes. Given the neuroanatomical connections between the medial thalamus (and habenula) with the prefrontal cortex and monoaminergic brain stem, one possible speculation is that abnormal neuronal activity in these areas during stress may set in motion circuitry changes that correlate with learned helpless behavior.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peterson, Charles; Penchoff, Deborah A.; Wilson, Angela K., E-mail: wilson@chemistry.msu.edu
2015-11-21
An effective approach for the determination of lanthanide energetics, as demonstrated by application to the third ionization energy (in the gas phase) for the first half of the lanthanide series, has been developed. This approach uses a combination of highly correlated and fully relativistic ab initio methods to accurately describe the electronic structure of heavy elements. Both scalar and fully relativistic methods are used to achieve an approach that is both computationally feasible and accurate. The impact of basis set choice and the number of electrons included in the correlation space has also been examined.
A New Methodology of Spatial Cross-Correlation Analysis
Chen, Yanguang
2015-01-01
Spatial correlation modeling comprises both spatial autocorrelation and spatial cross-correlation processes. The spatial autocorrelation theory has been well-developed. It is necessary to advance the method of spatial cross-correlation analysis to supplement the autocorrelation analysis. This paper presents a set of models and analytical procedures for spatial cross-correlation analysis. By analogy with Moran’s index newly expressed in a spatial quadratic form, a theoretical framework is derived for geographical cross-correlation modeling. First, two sets of spatial cross-correlation coefficients are defined, including a global spatial cross-correlation coefficient and local spatial cross-correlation coefficients. Second, a pair of scatterplots of spatial cross-correlation is proposed, and the plots can be used to visually reveal the causality behind spatial systems. Based on the global cross-correlation coefficient, Pearson’s correlation coefficient can be decomposed into two parts: direct correlation (partial correlation) and indirect correlation (spatial cross-correlation). As an example, the methodology is applied to the relationships between China’s urbanization and economic development to illustrate how to model spatial cross-correlation phenomena. This study is an introduction to developing the theory of spatial cross-correlation, and future geographical spatial analysis might benefit from these models and indexes. PMID:25993120
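A minimal numeric sketch of the global coefficient defined by analogy with Moran's I in quadratic form might look as follows. The exact normalization here (z-scores, row-standardized weights, division by n) is an assumption for illustration, not necessarily the paper's:

```python
import numpy as np

def global_cross_corr(x, y, W):
    """Global spatial cross-correlation as a quadratic form, by analogy with
    Moran's I = z^T W z / n: standardize both variables, row-normalize the
    spatial weights matrix, and evaluate z_x^T W z_y / n."""
    zx = (x - x.mean()) / x.std()
    zy = (y - y.mean()) / y.std()
    Wn = W / W.sum(axis=1, keepdims=True)   # row-standardized weights
    return float(zx @ Wn @ zy) / len(x)
```

With W equal to the identity matrix the expression collapses to Pearson's r, which makes the analogy between spatial cross-correlation and ordinary correlation explicit; a genuine spatial weights matrix instead couples each location to its neighbors.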
Adaptive intercolor error prediction coder for lossless color (RGB) picture compression
NASA Astrophysics Data System (ADS)
Mann, Y.; Peretz, Y.; Mitchell, Harvey B.
2001-09-01
Most current lossless compression algorithms, including the new international baseline JPEG-LS algorithm, do not exploit the interspectral correlations that exist between the color planes of an input color picture. To improve compression performance (i.e., lower the bit rate) it is necessary to exploit these correlations. A major concern is to find efficient methods for exploiting the correlations that, at the same time, are compatible with and can be incorporated into the JPEG-LS algorithm. One such algorithm is the method of intercolor error prediction (IEP), which, when used with the JPEG-LS algorithm, results on average in an 8% reduction in the overall bit rate. We show how the IEP algorithm can be simply modified so that it nearly doubles the bit-rate reduction, to 15%.
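The flavor of intercolor error prediction can be conveyed with a toy version: predict each plane from a causal neighbor, then predict the target plane's residuals from the reference plane's residuals. This is a deliberately simplified stand-in (left-neighbor predictor, fixed unit gain), not the IEP algorithm from the paper:

```python
import numpy as np

def iep_residuals(ref_plane, target_plane):
    """Toy inter-color error prediction: compute left-neighbor prediction
    residuals for both planes, then subtract the reference residuals from
    the target residuals; strongly correlated planes leave near-zero output."""
    def left_residual(p):
        p = p.astype(np.int64)
        res = p.copy()
        res[:, 1:] = p[:, 1:] - p[:, :-1]   # predict each pixel by its left neighbor
        return res
    return left_residual(target_plane) - left_residual(ref_plane)
```

When the planes are highly correlated, the corrected residuals have a much narrower distribution than the per-plane residuals, which is what lowers the entropy of the symbols handed to the JPEG-LS coder.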
Statistical physics in foreign exchange currency and stock markets
NASA Astrophysics Data System (ADS)
Ausloos, M.
2000-09-01
Problems in economy and finance have attracted the interest of statistical physicists all over the world. Fundamental problems pertain to the existence or not of long-, medium- or/and short-range power-law correlations in various economic systems, to the presence of financial cycles and on economic considerations, including economic policy. A method like the detrended fluctuation analysis is recalled emphasizing its value in sorting out correlation ranges, thereby leading to predictability at short horizon. The ( m, k)-Zipf method is presented for sorting out short-range correlations in the sign and amplitude of the fluctuations. A well-known financial analysis technique, the so-called moving average, is shown to raise questions to physicists about fractional Brownian motion properties. Among spectacular results, the possibility of crash predictions has been demonstrated through the log-periodicity of financial index oscillations.
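The detrended fluctuation analysis recalled above can be sketched compactly: integrate the centered series, remove a linear trend in windows of each size n, and read the correlation structure off the slope of log F(n) against log n (about 0.5 for uncorrelated increments, larger for persistent long-range correlations). This is a bare-bones illustration, not a production implementation:

```python
import numpy as np

def dfa_exponent(series, scales):
    """Detrended fluctuation analysis scaling exponent alpha: the slope of
    log F(n) vs log n, where F(n) is the RMS deviation of the integrated
    series from its piecewise linear trend in windows of length n."""
    profile = np.cumsum(series - np.mean(series))
    flucts = []
    for n in scales:
        k = len(profile) // n
        segments = profile[:k * n].reshape(k, n)
        t = np.arange(n)
        ms = [np.mean((s - np.polyval(np.polyfit(t, s, 1), t)) ** 2)
              for s in segments]
        flucts.append(np.sqrt(np.mean(ms)))
    slope, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return slope
```

For an uncorrelated series alpha comes out near 0.5; values approaching 1 would indicate the kind of long-range power-law correlations searched for in financial series.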
Development of advanced acreage estimation methods
NASA Technical Reports Server (NTRS)
Guseman, L. F., Jr. (Principal Investigator)
1980-01-01
The use of the AMOEBA clustering/classification algorithm was investigated as a basis for both a color display generation technique and maximum likelihood proportion estimation procedure. An approach to analyzing large data reduction systems was formulated and an exploratory empirical study of spatial correlation in LANDSAT data was also carried out. Topics addressed include: (1) development of multiimage color images; (2) spectral spatial classification algorithm development; (3) spatial correlation studies; and (4) evaluation of data systems.
Resistance Training: Identifying Best Practices?
2010-02-18
…pretest–posttest research designs. In such cases, the correlation of pretest scores with posttest scores affects the sampling variance that is the ES… magnitude of the pretest–posttest correlation (see Appendix A). Strength was measured more than twice in some studies. When this was the case, ES was… included in this review used a pretest–posttest design. For this reason, methods described by Morris and DeShon (2002) were applied to compute…
Kernel canonical-correlation Granger causality for multiple time series
NASA Astrophysics Data System (ADS)
Wu, Guorong; Duan, Xujun; Liao, Wei; Gao, Qing; Chen, Huafu
2011-04-01
Canonical-correlation analysis as a multivariate statistical technique has been applied to multivariate Granger causality analysis to infer information flow in complex systems. It shows unique appeal and great superiority over the traditional vector autoregressive method, due to the simplified procedure that detects causal interaction between multiple time series, and the avoidance of potential model estimation problems. However, it is limited to the linear case. Here, we extend the framework of canonical correlation to include the estimation of multivariate nonlinear Granger causality for drawing inference about directed interaction. Its feasibility and effectiveness are verified on simulated data.
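For contrast with the canonical-correlation formulation, the traditional vector-autoregressive Granger test that it improves upon can be sketched for the bivariate linear case: fit y on its own lags, then on its own lags plus lags of x, and compare residual sums of squares with an F statistic. The simulated data and coupling strength below are invented for illustration:

```python
import numpy as np

def granger_f(y, x, p):
    """Linear bivariate Granger causality F statistic of order p:
    F = ((RSS_restricted - RSS_full) / p) / (RSS_full / df), where the
    restricted model uses p lags of y and the full model adds p lags of x."""
    T = len(y)
    lags = lambda v: np.column_stack([v[p - k: T - k] for k in range(1, p + 1)])
    Y = y[p:]
    Xr = np.column_stack([np.ones(T - p), lags(y)])       # restricted design
    Xf = np.column_stack([Xr, lags(x)])                   # full design
    rss = lambda X: np.sum((Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]) ** 2)
    df = (T - p) - Xf.shape[1]
    return ((rss(Xr) - rss(Xf)) / p) / (rss(Xf) / df)
```

A large F in one direction and a small F in the other indicates directed information flow; the canonical-correlation and kernel extensions generalize exactly this comparison to many series and nonlinear dependence.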
Replica Analysis for Portfolio Optimization with Single-Factor Model
NASA Astrophysics Data System (ADS)
Shinzato, Takashi
2017-06-01
In this paper, we use replica analysis to investigate the influence of correlation among the return rates of assets on the solution of the portfolio optimization problem. We consider the behavior of an optimal solution for the case where the return rate is described with a single-factor model and compare the findings obtained from our proposed methods with correlated return rates with those obtained with independent return rates. We then analytically assess the increase in the investment risk when correlation is included. Furthermore, we also compare our approach with analytical procedures for minimizing the investment risk from operations research.
Cetin, K.O.; Seed, R.B.; Der Kiureghian, A.; Tokimatsu, K.; Harder, L.F.; Kayen, R.E.; Moss, R.E.S.
2004-01-01
This paper presents new correlations for assessment of the likelihood of initiation (or triggering) of soil liquefaction. These new correlations eliminate several sources of bias intrinsic to previous, similar correlations, and provide greatly reduced overall uncertainty and variance. Key elements in the development of these new correlations are (1) accumulation of a significantly expanded database of field performance case histories; (2) use of improved knowledge and understanding of factors affecting interpretation of standard penetration test data; (3) incorporation of improved understanding of factors affecting site-specific earthquake ground motions (including directivity effects, site-specific response, etc.); (4) use of improved methods for assessment of in situ cyclic shear stress ratio; (5) screening of field data case histories on a quality/uncertainty basis; and (6) use of high-order probabilistic tools (Bayesian updating). The resulting relationships not only provide greatly reduced uncertainty, they also help to resolve a number of corollary issues that have long been difficult and controversial, including: (1) magnitude-correlated duration weighting factors, (2) adjustments for fines content, and (3) corrections for overburden stress. © ASCE.
Zheng, Mingguo; Chen, Xiaoan
2015-01-01
Correlation analysis is popular in erosion- and earth-related studies; however, few studies compare correlations on the basis of statistical testing, which should be conducted to determine the statistical significance of an observed sample difference. This study aims to statistically determine the erosivity index of single storms in the Chinese Loess Plateau, which requires comparison of a large number of dependent correlations between rainfall-runoff factors and soil loss. Data observed at four gauging stations and five runoff experimental plots were presented. Based on Meng's tests, which are widely used for comparing correlations between a dependent variable and a set of independent variables, two methods were proposed. The first method removes factors that are poorly correlated with soil loss from consideration in a stepwise way, while the second method performs pairwise comparisons that are adjusted using the Bonferroni correction. Among 12 rainfall factors, I30 (the maximum 30-minute rainfall intensity) has been suggested for use as the rainfall erosivity index, although I30 is correlated with soil loss as strongly as the factors I20, EI10 (the product of the rainfall kinetic energy, E, and I10), EI20, and EI30 are. Runoff depth (total runoff volume normalized to drainage area) is more correlated with soil loss than all other examined rainfall-runoff factors, including I30, peak discharge, and many combined factors. Moreover, sediment concentrations of major sediment-producing events are independent of all examined rainfall-runoff factors. As a result, introducing additional factors adds little to the prediction accuracy of the single factor of runoff depth. Hence, runoff depth should be the best erosivity index at scales from plots to watersheds. Our findings can facilitate predictions of soil erosion in the Loess Plateau. Our methods provide a valuable tool for determining the predictor among a number of variables in terms of correlations. PMID:25781173
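The z test underlying such pairwise comparisons of dependent correlations can be sketched as follows; this follows the published Meng-Rosenthal-Rubin formula for two correlated correlations rather than the authors' code, and the example numbers in the test are invented:

```python
import math

def meng_z(r1, r2, rx, n):
    """Meng, Rosenthal & Rubin (1992) z test comparing two dependent
    correlations r1 = cor(x1, y) and r2 = cor(x2, y) that share the
    variable y, where rx = cor(x1, x2) and n is the sample size."""
    rbar2 = (r1 ** 2 + r2 ** 2) / 2.0
    f = min((1.0 - rx) / (2.0 * (1.0 - rbar2)), 1.0)   # f is capped at 1
    h = (1.0 - f * rbar2) / (1.0 - rbar2)
    return (math.atanh(r1) - math.atanh(r2)) * math.sqrt(
        (n - 3) / (2.0 * (1.0 - rx) * h))
```

The statistic is referred to the standard normal for a p-value; for many pairwise tests, a Bonferroni correction of the kind used in the paper simply divides the significance level by the number of comparisons.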
NASA Astrophysics Data System (ADS)
Mengis, Nadine; Keller, David P.; Oschlies, Andreas
2018-01-01
This study introduces the Systematic Correlation Matrix Evaluation (SCoMaE) method, a bottom-up approach which combines expert judgment and statistical information to systematically select transparent, nonredundant indicators for a comprehensive assessment of the state of the Earth system. The method consists of two basic steps: (1) the calculation of a correlation matrix among variables relevant for a given research question and (2) the systematic evaluation of the matrix, to identify clusters of variables with similar behavior and respective mutually independent indicators. Optional further analysis steps include (3) the interpretation of the identified clusters, enabling a learning effect from the selection of indicators, (4) testing the robustness of identified clusters with respect to changes in forcing or boundary conditions, (5) enabling a comparative assessment of varying scenarios by constructing and evaluating a common correlation matrix, and (6) the inclusion of expert judgment, for example, to prescribe indicators, to allow for considerations other than statistical consistency. The example application of the SCoMaE method to Earth system model output forced by different CO2 emission scenarios reveals the necessity of reevaluating indicators identified in a historical scenario simulation for an accurate assessment of an intermediate-high, as well as a business-as-usual, climate change scenario simulation. This necessity arises from changes in prevailing correlations in the Earth system under varying climate forcing. For a comparative assessment of the three climate change scenarios, we construct and evaluate a common correlation matrix, in which we identify robust correlations between variables across the three considered scenarios.
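Steps (1) and (2) of the method can be sketched as follows, assuming synthetic data and an arbitrary 1 - |r| distance threshold; the actual SCoMaE evaluation procedure may differ in detail.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)
# Hypothetical Earth-system time series: two strongly correlated
# "temperature-like" variables and one independent "precipitation-like" one.
t = rng.normal(size=200)
data = np.column_stack([t,
                        t + 0.1 * rng.normal(size=200),
                        rng.normal(size=200)])

# Step 1: correlation matrix among the candidate variables.
corr = np.corrcoef(data, rowvar=False)

# Step 2: cluster variables with similar behavior, using 1 - |r| as distance.
dist = 1.0 - np.abs(corr)
np.fill_diagonal(dist, 0.0)
labels = fcluster(linkage(squareform(dist, checks=False), method="average"),
                  t=0.5, criterion="distance")
# One indicator per cluster can then be selected (optionally overridden
# by expert judgment, as in the optional steps of the method).
```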
NASA Astrophysics Data System (ADS)
Galler, Anna; Gunacker, Patrik; Tomczak, Jan; Thunström, Patrik; Held, Karsten
Recently, approaches such as the dynamical vertex approximation (DΓA) or the dual-fermion method have been developed. These diagrammatic approaches go beyond dynamical mean field theory (DMFT) by including nonlocal electronic correlations on all length scales as well as the local DMFT correlations. Here we present our efforts to extend the DΓA methodology to ab-initio materials calculations (ab-initio DΓA). Our approach is a unifying framework which includes both GW and DMFT-type diagrams, but also important nonlocal correlations beyond these, e.g., nonlocal spin fluctuations. In our multi-band implementation we use a worm sampling technique within continuous-time quantum Monte Carlo in the hybridization expansion to obtain the DMFT vertex, from which we construct the reducible vertex function using the two particle-hole ladders. As a first application we show results for transition metal oxides. Support by the ERC project AbinitioDGA (306447) is acknowledged.
Riemann correlator in de Sitter including loop corrections from conformal fields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fröb, Markus B.; Verdaguer, Enric; Roura, Albert, E-mail: mfroeb@ffn.ub.edu, E-mail: albert.roura@uni-ulm.de, E-mail: enric.verdaguer@ub.edu
2014-07-01
The Riemann correlator with appropriately raised indices characterizes in a gauge-invariant way the quantum metric fluctuations around de Sitter spacetime including loop corrections from matter fields. Specializing to conformal fields and employing a method that selects the de Sitter-invariant vacuum in the Poincaré patch, we obtain the exact result for the Riemann correlator through order H^4/m_p^4. The result is expressed in a manifestly de Sitter-invariant form in terms of maximally symmetric bitensors. Its behavior for both short and long distances (sub- and superhorizon scales) is analyzed in detail. Furthermore, by carefully taking the flat-space limit, the explicit result for the Riemann correlator for metric fluctuations around Minkowski spacetime is also obtained. Although the main focus is on free scalar fields (our calculation corresponds then to one-loop order in the matter fields), the result for general conformal field theories is also derived.
Correlation of Space Shuttle Landing Performance with Post-Flight Cardiovascular Dysfunction
NASA Technical Reports Server (NTRS)
McCluskey, R.
2004-01-01
Introduction: Microgravity induces cardiovascular adaptations resulting in orthostatic intolerance on re-exposure to normal gravity. Orthostasis could interfere with performance of complex tasks during the re-entry phase of Shuttle landings. This study correlated measures of Shuttle landing performance with post-flight indicators of orthostatic intolerance. Methods: Relevant Shuttle landing performance parameters routinely recorded at touchdown by NASA included downrange and crossrange distances, airspeed, and vertical speed. Measures of cardiovascular changes were calculated from operational stand tests performed in the immediate post-flight period on mission commanders from STS-41 to STS-66. Stand test data analyzed included maximum standing heart rate, mean increase in maximum heart rate, minimum standing systolic blood pressure, and mean decrease in standing systolic blood pressure. Pearson correlation coefficients were calculated with the null hypothesis that there was no statistically significant linear correlation between stand test results and Shuttle landing performance. A correlation coefficient ≥ 0.5 with p < 0.05 was considered significant. Results: There were no significant linear correlations between landing performance and measures of post-flight cardiovascular dysfunction. Discussion: There was no evidence that post-flight cardiovascular stand test data correlated with Shuttle landing performance. This implies that variations in landing performance were not due to space flight-induced orthostatic intolerance.
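The study's significance criterion can be illustrated with a short sketch; the data below are simulated placeholders, not the actual mission records.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(42)
# Hypothetical data: stand-test heart-rate increase vs. touchdown vertical
# speed for 26 missions, simulated as independent (no true relationship).
heart_rate_increase = rng.normal(30.0, 8.0, size=26)
vertical_speed = rng.normal(2.5, 0.6, size=26)

r, p = pearsonr(heart_rate_increase, vertical_speed)
# The study's joint criterion: a large coefficient AND a small p-value.
significant = abs(r) >= 0.5 and p < 0.05
```

Requiring both a sizable coefficient and p < 0.05 guards against declaring trivially small correlations "significant" in larger samples.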
Berge, Jerica M.; Meyer, Craig; MacLehose, Richard F.; Crichlow, Renee; Neumark-Sztainer, Dianne
2015-01-01
Objective To examine whether and how parents' and adolescent siblings' weight and weight-related behaviors are correlated. Results will inform which family members may be important to include in adolescent obesity prevention interventions. Design and Methods Data from two linked population-based studies, EAT 2010 and F-EAT, were used for cross-sectional analyses. Parents (n=58; 91% females; mean age=41.7 years) and adolescent siblings (sibling #1 n=58, 50% girls, mean age=14.3 years; sibling #2 n=58, 64% girls, mean age=14.8 years) were socioeconomically and racially/ethnically diverse. Results Some weight-related behaviors between adolescent siblings were significantly positively correlated (i.e., fast food consumption, breakfast frequency, sedentary patterns, p<0.05). There were no significant correlations between parent weight and weight-related behaviors and adolescent siblings' same behaviors. Some of the significant correlations found between adolescent siblings' weight-related behaviors were statistically different from correlations between parents' and adolescent siblings' weight-related behaviors. Conclusions Although not consistently so, adolescent siblings' weight-related behaviors were significantly correlated with each other, whereas parents' and adolescent siblings' behaviors were not. It may be important to consider including siblings in adolescent obesity prevention interventions or in recommendations healthcare providers give to adolescents regarding their weight and weight-related behaviors. PMID:25820257
Carrión-García, Cayetano Javier; Guerra-Hernández, Eduardo J; García-Villanova, Belén; Molina-Montes, Esther
2017-06-01
We aimed to quantify and compare dietary non-enzymatic antioxidant capacity (NEAC), estimated using two dietary assessment methods, and to explore its relationship with plasma NEAC. Fifty healthy subjects volunteered to participate in this study. Two dietary assessment methods [a food frequency questionnaire (FFQ) and a 24-hour recall (24-HR)] were used to collect dietary information. Dietary NEAC, including oxygen radical absorbance capacity (ORAC), total polyphenols, ferric-reducing antioxidant power (FRAP) and trolox equivalent antioxidant capacity, was estimated using several data sources of NEAC content in food. NEAC status was measured in fasting blood samples using the same assays. We performed nonparametric Spearman's correlation analysis between pairs of dietary NEAC (FFQ and 24-HR) and diet-plasma NEAC, with and without the contribution of coffee's NEAC. Partial correlation analysis was used to estimate correlations independent of variables potentially influencing these relationships. FFQ-based NEAC and 24-HR-based NEAC were moderately correlated, with correlation coefficients ranging from 0.54 to 0.71, after controlling for energy intake, age and sex. Statistically significant positive correlations were found for dietary FRAP, either derived from the FFQ or the 24-HR, with plasma FRAP (r ~ 0.30). This weak, albeit statistically significant, correlation for FRAP was mostly present in the fruits and vegetables food groups. Plasma ORAC without proteins and 24-HR-based total ORAC were also positively correlated (r = 0.35). The relationship between dietary NEAC and plasma FRAP and ORAC suggests that dietary NEAC may reflect antioxidant status despite its weak in vivo potential, further supporting its use in oxidative stress-related disease epidemiology.
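A partial Spearman correlation of the kind used here, controlling for covariates such as energy intake, can be sketched via rank transformation and residualization; the variable names and data below are hypothetical.

```python
import numpy as np
from scipy.stats import pearsonr, rankdata

def partial_spearman(x, y, covariates):
    """Spearman correlation between x and y, controlling for covariates.

    Rank-transforms all variables, removes the covariates' linear effect
    from the ranks by least squares, then correlates the residuals.
    """
    rx, ry = rankdata(x), rankdata(y)
    Z = np.column_stack([np.ones(len(x))] + [rankdata(c) for c in covariates])
    res_x = rx - Z @ np.linalg.lstsq(Z, rx, rcond=None)[0]
    res_y = ry - Z @ np.linalg.lstsq(Z, ry, rcond=None)[0]
    return pearsonr(res_x, res_y)

rng = np.random.default_rng(1)
energy = rng.normal(2000, 300, size=50)            # hypothetical kcal/day
ffq_neac = 0.002 * energy + rng.normal(size=50)    # FFQ-based NEAC (toy)
hr24_neac = 0.002 * energy + rng.normal(size=50)   # 24-HR-based NEAC (toy)

r, p = partial_spearman(ffq_neac, hr24_neac, [energy])
```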
Method for measuring recovery of catalytic elements from fuel cells
Shore, Lawrence [Edison, NJ; Matlin, Ramail [Berkeley, NJ
2011-03-08
A method is provided for measuring the concentration of a catalytic element in a fuel cell powder. The method includes depositing on a porous substrate at least one layer of a powder mixture comprising the fuel cell powder and an internal standard material, ablating a sample of the powder mixture using a laser, and vaporizing the sample using an inductively coupled plasma. A normalized concentration of catalytic element in the sample is determined by quantifying the intensity of a first signal correlated to the amount of catalytic element in the sample, quantifying the intensity of a second signal correlated to the amount of internal standard material in the sample, and using a ratio of the first signal intensity to the second signal intensity to cancel out the effects of sample size.
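The internal-standard normalization in the final step can be illustrated with a toy calculation; the response factor and intensity values are hypothetical.

```python
def normalized_concentration(analyte_intensity, standard_intensity,
                             response_factor=1.0):
    """Ratio an analyte signal to an internal-standard signal.

    Dividing the two intensities cancels sample-size effects, since both
    signals scale with the amount of material ablated and vaporized.
    response_factor is a hypothetical calibration constant relating the
    intensity ratio to concentration.
    """
    return response_factor * analyte_intensity / standard_intensity

# Two ablated samples of different size but identical composition yield
# the same normalized value:
small_sample = normalized_concentration(120.0, 400.0)   # 0.3
large_sample = normalized_concentration(300.0, 1000.0)  # 0.3
```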
Hayer, Prabhnoor Singh; Deane, Anit Kumar Samuel; Agrawal, Atul; Maheshwari, Rajesh; Juyal, Anil
2016-04-01
Osteoporosis is a metabolic bone disease caused by progressive bone loss. It is characterized by low Bone Mineral Density (BMD) and structural deterioration of bone tissue leading to bone fragility and increased risk of fractures. When classifying a fracture, high reliability and validity are crucial for successful treatment. Furthermore, a classification system should include severity, method of treatment, and prognosis for any given fracture. Since it is known that treatment significantly influences prognosis, a classification system claiming to include both would be desirable. Since there is no such classification system, which includes both the fracture type and the osteoporosis severity, we tried to find a correlation between fracture severity and osteoporosis severity. The aim of the study was to evaluate whether the AO/ASIF fracture classification system, which indicates the severity of fractures, has any relationship with the bone mineral status in patients with primary osteoporosis. We hypothesized that fracture severity and severity of osteoporosis should show some correlation. An observational analytical study was conducted over a period of one year during which 49 patients were included in the study at HIMS, SRH University, Dehradun. The osteoporosis status of all the included patients with a pertrochanteric fracture was documented using a DEXA scan and the T-score (BMD) was calculated. All patients had a trivial trauma. All the fractures were classified as per the AO/ASIF classification. Data were entered in Microsoft Office Excel version 2007, and interpretation and analysis of the obtained data were done using summary statistics. The Pearson correlation between BMD and fracture type was calculated using SPSS software version 22.0. The average age of the patients included in the study was 71.2 years and the average bone mineral density was -4.9.
The correlation between BMD and fracture type was calculated; the r-value obtained was 0.180, indicating a low correlation, and the p-value was 0.215, which was not statistically significant. Statistically, the pertrochanteric fracture configuration as per the AO classification does not correlate with the osteoporosis severity of the patient.
Interpretation of correlations in clinical research.
Hung, Man; Bounsanga, Jerry; Voss, Maren Wright
2017-11-01
Critically analyzing research is a key skill in evidence-based practice and requires knowledge of research methods, results interpretation, and applications, all of which rely on a foundation based in statistics. Evidence-based practice makes high demands on trained medical professionals to interpret an ever-expanding array of research evidence. As clinical training emphasizes medical care rather than statistics, it is useful to review the basics of statistical methods and what they mean for interpreting clinical studies. We reviewed the basic concepts of correlational associations, violations of normality, unobserved variable bias, sample size, and alpha inflation. The foundations of causal inference were discussed and sound statistical analyses were examined. We discuss four ways in which correlational analysis is misused, including causal inference overreach, over-reliance on significance, alpha inflation, and sample size bias. Recent published studies in the medical field provide evidence of causal assertion overreach drawn from correlational findings. The findings present a primer on the assumptions and nature of correlational methods of analysis and urge clinicians to exercise appropriate caution as they critically analyze the evidence before them and evaluate evidence that supports practice. Critically analyzing new evidence requires statistical knowledge in addition to clinical knowledge. Studies can overstate relationships, expressing causal assertions when only correlational evidence is available. Failure to account for the effect of sample size in the analyses tends to overstate the importance of predictive variables. It is important not to overemphasize the statistical significance without consideration of effect size and whether differences could be considered clinically meaningful.
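The sample-size point can be made concrete with a short sketch: the same negligible correlation is non-significant in a small sample but "significant" in a very large one, which is why effect size must be weighed alongside the p-value.

```python
import math
from scipy import stats

def pearson_pvalue(r, n):
    """Two-sided p-value for a sample correlation r with n observations,
    via the t-distribution with n - 2 degrees of freedom."""
    t = r * math.sqrt((n - 2) / (1 - r**2))
    return 2 * stats.t.sf(abs(t), df=n - 2)

# The same negligible effect size (r = 0.05, sharing only 0.25% of variance)
# is non-significant at n = 100 but highly "significant" at n = 10000:
p_small = pearson_pvalue(0.05, 100)
p_large = pearson_pvalue(0.05, 10000)
```

The correlation explains the same trivial share of variance in both cases; only the sample size changed.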
Integrated Data Collection and Analysis Project: Friction Correlation Study
2015-08-01
Friction sensitivity test methods authorized in AOP-7 include the Pendulum Friction, Rotary Friction, Sliding Friction (such as the ABL), BAM Friction, and Steel/Fiber Shoe methods. In the apparatus described, a variable compressive force is applied downward through the wheel hydraulically (50-1995 psi), and the 5 kg pendulum impacts at 8 ft/sec.
Analysis-Preserving Video Microscopy Compression via Correlation and Mathematical Morphology
Shao, Chong; Zhong, Alfred; Cribb, Jeremy; Osborne, Lukas D.; O’Brien, E. Timothy; Superfine, Richard; Mayer-Patel, Ketan; Taylor, Russell M.
2015-01-01
The large amount of video data produced by multi-channel, high-resolution microscopy systems drives the need for a new high-performance domain-specific video compression technique. We describe a novel compression method for video microscopy data. The method is based on Pearson's correlation and mathematical morphology. The method makes use of the point-spread function (PSF) in the microscopy video acquisition phase. We compare our method to other lossless compression methods and to lossy JPEG, JPEG2000 and H.264 compression for various kinds of video microscopy data including fluorescence video and brightfield video. We find that for certain data sets, the new method compresses much better than lossless compression with no impact on analysis results. It achieved a best compressed size of 0.77% of the original size, 25× smaller than the best lossless technique (which yields 20% for the same video). The compressed size scales with the video's scientific data content. Further testing showed that existing lossy algorithms greatly impacted data analysis at similar compression sizes. PMID:26435032
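While the published method combines correlation with mathematical morphology and the PSF, the basic role of Pearson's correlation in detecting redundant frames can be sketched as follows; the frames are synthetic and the interpretation threshold is illustrative.

```python
import numpy as np

def frame_correlation(frame_a, frame_b):
    """Pearson correlation between two frames, flattened to 1-D."""
    a = frame_a.ravel().astype(float)
    b = frame_b.ravel().astype(float)
    return np.corrcoef(a, b)[0, 1]

rng = np.random.default_rng(7)
background = rng.integers(0, 50, size=(64, 64))
frame1 = background + rng.integers(0, 3, size=(64, 64))  # nearly static scene
frame2 = background + rng.integers(0, 3, size=(64, 64))
changed = rng.integers(0, 50, size=(64, 64))             # unrelated content

static_r = frame_correlation(frame1, frame2)    # high: little new information
changed_r = frame_correlation(frame1, changed)  # low: frame must be stored
```

Frames that correlate highly with their predecessor carry little new scientific content and can be encoded cheaply, which is consistent with the observation that compressed size scales with data content.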
Proceedings of the Third Annual Symposium on Mathematical Pattern Recognition and Image Analysis
NASA Technical Reports Server (NTRS)
Guseman, L. F., Jr.
1985-01-01
Topics addressed include: multivariate spline method; normal mixture analysis applied to remote sensing; image data analysis; classifications in spatially correlated environments; probability density functions; graphical nonparametric methods; subpixel registration analysis; hypothesis integration in image understanding systems; rectification of satellite scanner imagery; spatial variation in remotely sensed images; smooth multidimensional interpolation; and optimal frequency domain textural edge detection filters.
Measuring NMHC and NMOG emissions from motor vehicles via FTIR spectroscopy
NASA Astrophysics Data System (ADS)
Gierczak, Christine A.; Kralik, Lora L.; Mauti, Adolfo; Harwell, Amy L.; Maricq, M. Matti
2017-02-01
The determination of non-methane organic gases (NMOG) emissions according to United States Environmental Protection Agency (EPA) regulations is currently a multi-step process requiring separate measurement of various emissions components by a number of independent on-line and off-line techniques. The Fourier transform infrared spectroscopy (FTIR) method described in this paper records all required components using a single instrument. It gives data consistent with the regulatory method, greatly simplifies the process, and provides second by second time resolution. Non-methane hydrocarbons (NMHCs) are measured by identifying a group of hydrocarbons, including oxygenated species, that serve as a surrogate for this class, the members of which are dynamically included if they are present in the exhaust above predetermined threshold levels. This yields an FTIR equivalent measure of NMHC that correlates within 5% to the regulatory flame ionization detection (FID) method. NMOG is then determined per regulatory calculation solely from FTIR recorded emissions of NMHC, ethanol, acetaldehyde, and formaldehyde, yielding emission rates that also correlate within 5% with the reference method. Examples are presented to show how the resulting time resolved data benefit aftertreatment development for light duty vehicles.
Sun, Yangbo; Chen, Long; Huang, Bisheng; Chen, Keli
2017-07-01
As a mineral, the traditional Chinese medicine calamine has a similar shape to many other minerals. Investigations of commercially available calamine samples have shown that there are many fake and inferior calamine goods sold on the market. The conventional identification method for calamine is complicated; therefore, given the large number of calamine samples, a rapid identification method is needed. To establish a qualitative model using near-infrared (NIR) spectroscopy for rapid identification of various calamine samples, large quantities of calamine samples including crude products, counterfeits and processed products were collected and correctly identified using the physicochemical and powder X-ray diffraction methods. The NIR spectroscopy method was used to analyze these samples by combining the multi-reference correlation coefficient (MRCC) method and the error back propagation artificial neural network algorithm (BP-ANN), so as to realize the qualitative identification of calamine samples. The accuracy rate of the model based on NIR and MRCC methods was 85%; in addition, the model, which took multiple factors into comprehensive consideration, can be used to identify crude calamine products, counterfeits and processed products. Furthermore, by inputting the correlation coefficients of multiple references as the spectral feature data of samples into BP-ANN, a BP-ANN model of qualitative identification was established, whose accuracy rate increased to 95%. The MRCC method can be used as a NIR-based method in the process of BP-ANN modeling.
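A minimal sketch of the multi-reference correlation idea follows, assuming synthetic spectra and only two references; the published MRCC procedure and its BP-ANN stage are more elaborate.

```python
import numpy as np

def multi_reference_correlation(sample, references):
    """Correlation of a sample spectrum with each reference spectrum.

    Returns one coefficient per reference; the resulting vector can be
    thresholded directly or fed to a classifier such as a BP-ANN.
    """
    return np.array([np.corrcoef(sample, ref)[0, 1] for ref in references])

rng = np.random.default_rng(3)
n_points = 300   # hypothetical number of NIR wavelength points
grid = np.linspace(0, 6, n_points)
genuine_ref = np.sin(grid) + rng.normal(0, 0.05, n_points)      # toy spectrum
counterfeit_ref = np.cos(grid) + rng.normal(0, 0.05, n_points)  # toy spectrum

sample = genuine_ref + rng.normal(0, 0.1, n_points)  # a genuine sample
coeffs = multi_reference_correlation(sample, [genuine_ref, counterfeit_ref])
predicted = int(np.argmax(coeffs))  # index of the best-matching reference
```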
Composite vibrational spectroscopy of the group 12 difluorides: ZnF2, CdF2, and HgF2.
Solomonik, Victor G; Smirnov, Alexander N; Navarkin, Ilya S
2016-04-14
The vibrational spectra of group 12 difluorides, MF2 (M = Zn, Cd, Hg), were investigated via coupled cluster singles, doubles, and perturbative triples, CCSD(T), including core correlation, with a series of correlation consistent basis sets ranging in size from triple-zeta through quintuple-zeta quality, which were then extrapolated to the complete basis set (CBS) limit using a variety of extrapolation procedures. The explicitly correlated coupled cluster method, CCSD(T)-F12b, was employed as well. Although exhibiting quite different convergence behavior, the F12b method yielded the CBS limit estimates closely matching more computationally expensive conventional CBS extrapolations. The convergence with respect to basis set size was examined for the contributions entering into composite vibrational spectroscopy, including those from higher-order correlation accounted for through the CCSDT(Q) level of theory, second-order spin-orbit coupling effects assessed within four-component and two-component relativistic formalisms, and vibrational anharmonicity evaluated via a perturbative treatment. Overall, the composite results are in excellent agreement with available experimental values, except for the CdF2 bond-stretching frequencies compared to spectral assignments proposed in a matrix isolation infrared and Raman study of cadmium difluoride vapor species [Loewenschuss et al., J. Chem. Phys. 50, 2502 (1969); Givan and Loewenschuss, J. Chem. Phys. 72, 3809 (1980)]. These assignments are called into question in the light of the composite results.
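One common CBS extrapolation, the two-point inverse-cube (X^-3) formula for correlation energies, can be sketched as follows; the energies below are hypothetical, and the paper compares several such procedures rather than prescribing this one.

```python
def cbs_two_point(e_x, e_y, x, y):
    """Two-point inverse-cube CBS extrapolation of correlation energies.

    Assumes E(n) = E_CBS + A / n**3 (a Helgaker-style formula) for
    correlation energies computed at cardinal numbers x and y
    (e.g. 4 for quadruple-zeta, 5 for quintuple-zeta).
    """
    return (x**3 * e_x - y**3 * e_y) / (x**3 - y**3)

# Hypothetical correlation energies (hartree) at QZ and 5Z levels:
e_qz, e_5z = -1.2345, -1.2401
e_cbs = cbs_two_point(e_qz, e_5z, x=4, y=5)
```

Because the finite-basis energies converge monotonically from above, the extrapolated value lies below the quintuple-zeta energy.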
Composite vibrational spectroscopy of the group 12 difluorides: ZnF2, CdF2, and HgF2
NASA Astrophysics Data System (ADS)
Solomonik, Victor G.; Smirnov, Alexander N.; Navarkin, Ilya S.
2016-04-01
The vibrational spectra of group 12 difluorides, MF2 (M = Zn, Cd, Hg), were investigated via coupled cluster singles, doubles, and perturbative triples, CCSD(T), including core correlation, with a series of correlation consistent basis sets ranging in size from triple-zeta through quintuple-zeta quality, which were then extrapolated to the complete basis set (CBS) limit using a variety of extrapolation procedures. The explicitly correlated coupled cluster method, CCSD(T)-F12b, was employed as well. Although exhibiting quite different convergence behavior, the F12b method yielded the CBS limit estimates closely matching more computationally expensive conventional CBS extrapolations. The convergence with respect to basis set size was examined for the contributions entering into composite vibrational spectroscopy, including those from higher-order correlation accounted for through the CCSDT(Q) level of theory, second-order spin-orbit coupling effects assessed within four-component and two-component relativistic formalisms, and vibrational anharmonicity evaluated via a perturbative treatment. Overall, the composite results are in excellent agreement with available experimental values, except for the CdF2 bond-stretching frequencies compared to spectral assignments proposed in a matrix isolation infrared and Raman study of cadmium difluoride vapor species [Loewenschuss et al., J. Chem. Phys. 50, 2502 (1969); Givan and Loewenschuss, J. Chem. Phys. 72, 3809 (1980)]. These assignments are called into question in the light of the composite results.
A Simple Method for Causal Analysis of Return on IT Investment
Alemi, Farrokh; Zargoush, Manaf; Oakes, James L.; Edrees, Hanan
2011-01-01
This paper proposes a method for examining the causal relationship between investment in information technology (IT) and the organization's productivity. In this method, first a strong relationship among (1) investment in IT, (2) use of IT and (3) organization's productivity is verified using correlations. Second, the assumption that IT investment preceded improved productivity is tested using partial correlation. Finally, the assumption of what may have happened in the absence of IT investment, the so-called counterfactual, is tested through forecasting productivity at different levels of investment. The paper applies the proposed method to investment in the Veterans Health Information Systems and Technology Architecture (VISTA) system. Results show that the causal analysis can be done, even with limited data. Furthermore, because the procedure relies on the organization's overall productivity, it might be more objective than when the analyst picks and chooses which costs and benefits should be included in the analysis. PMID:23019515
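The partial-correlation step can be sketched with a residual-based partial correlation on simulated data; the variable names and values are hypothetical, not drawn from the VISTA analysis.

```python
import numpy as np

def partial_corr(x, y, z):
    """Correlation between x and y after removing the linear effect of z."""
    Z = np.column_stack([np.ones_like(z), z])
    res_x = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    res_y = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return np.corrcoef(res_x, res_y)[0, 1]

rng = np.random.default_rng(5)
n = 60
it_investment = rng.normal(size=n)
it_use = it_investment + 0.5 * rng.normal(size=n)  # use follows investment
productivity = it_use + 0.5 * rng.normal(size=n)   # productivity follows use

# If investment acts through use, the investment-productivity correlation
# should shrink toward zero once use is controlled for:
raw = np.corrcoef(it_investment, productivity)[0, 1]
partial = partial_corr(it_investment, productivity, it_use)
```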
NASA Astrophysics Data System (ADS)
Muravsky, Leonid I.; Kmet', Arkady B.; Stasyshyn, Ihor V.; Voronyak, Taras I.; Bobitski, Yaroslav V.
2018-06-01
A new three-step interferometric method with blind phase shifts to retrieve phase maps (PMs) of smooth and low-roughness engineering surfaces is proposed. The two unknown phase shifts are evaluated using the interframe correlation between interferograms. The method consists of two stages. The first stage records three interferograms of a test object and processes them, including calculation of the unknown phase shifts and retrieval of a coarse PM. The second stage first separates the high-frequency and low-frequency PMs and then produces a fine PM consisting of areal surface roughness and waviness PMs. Extraction of the areal surface roughness and waviness PMs is performed using a linear low-pass filter. Computer simulation and experiments performed to retrieve a gauge block surface area and its areal surface roughness and waviness have confirmed the reliability of the proposed three-step method.
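The linear low-pass separation in the second stage can be sketched on a synthetic phase map; the Gaussian kernel and cutoff below are illustrative assumptions, not the authors' specific filter.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(2)
x = np.linspace(0, 4 * np.pi, 256)
# Synthetic fine PM: low-frequency waviness plus high-frequency roughness.
waviness = 0.5 * np.sin(x)[None, :] * np.ones((256, 1))
roughness = 0.05 * rng.normal(size=(256, 256))
phase_map = waviness + roughness

# A linear low-pass filter separates the two components; sigma is a
# hypothetical cutoff chosen for this synthetic surface.
waviness_est = gaussian_filter(phase_map, sigma=8)
roughness_est = phase_map - waviness_est
```

The smoothed map approximates the waviness PM, and the high-pass residual approximates the areal surface roughness PM.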
Validation of Field Methods to Assess Body Fat Percentage in Elite Youth Soccer Players.
Munguia-Izquierdo, Diego; Suarez-Arrones, Luis; Di Salvo, Valter; Paredes-Hernandez, Victor; Alcazar, Julian; Ara, Ignacio; Kreider, Richard; Mendez-Villanueva, Alberto
2018-05-01
This study determined the most effective field method for quantifying body fat percentage in male elite youth soccer players and developed prediction equations based on anthropometric variables. Forty-four male elite-standard youth soccer players aged 16.3-18.0 years underwent body fat percentage assessments, including bioelectrical impedance analysis and the calculation of various skinfold-based prediction equations. Dual X-ray absorptiometry provided a criterion measure of body fat percentage. Correlation coefficients, bias, limits of agreement, and differences were used as validity measures, and regression analyses were used to develop soccer-specific prediction equations. The equations from Sarria et al. (1998) and Durnin & Rahaman (1967) showed very large correlations and the lowest biases, and their differences from the criterion reached neither the practically worthwhile nor the substantial threshold. The new youth soccer-specific skinfold equation included a combination of triceps and supraspinale skinfolds. None of the practical methods compared in this study is adequate for estimating body fat percentage in male elite youth soccer players, except for the equations from Sarria et al. (1998) and Durnin & Rahaman (1967). The new youth soccer-specific equation calculated in this investigation is the only field method specifically developed and validated in elite male players, and it shows potentially good predictive power. © Georg Thieme Verlag KG Stuttgart · New York.
Caricato, Marco
2018-04-07
We report the theory and the implementation of the linear response function of the coupled cluster (CC) with the single and double excitations method combined with the polarizable continuum model of solvation, where the correlation solvent response is approximated with the perturbation theory with energy and singles density (PTES) scheme. The singles name is derived from retaining only the contribution of the CC single excitation amplitudes to the correlation density. We compare the PTES working equations with those of the full-density (PTED) method. We then test the PTES scheme on the evaluation of excitation energies and transition dipoles of solvated molecules, as well as of the isotropic polarizability and specific rotation. Our results show a negligible difference between the PTED and PTES schemes, while the latter affords a significantly reduced computational cost. This scheme is general and can be applied to any solvation model that includes mutual solute-solvent polarization, including explicit models. Therefore, the PTES scheme is a competitive approach to compute response properties of solvated systems using CC methods.
NASA Astrophysics Data System (ADS)
Caricato, Marco
2018-04-01
We report the theory and the implementation of the linear response function of the coupled cluster (CC) with the single and double excitations method combined with the polarizable continuum model of solvation, where the correlation solvent response is approximated with the perturbation theory with energy and singles density (PTES) scheme. The singles name is derived from retaining only the contribution of the CC single excitation amplitudes to the correlation density. We compare the PTES working equations with those of the full-density (PTED) method. We then test the PTES scheme on the evaluation of excitation energies and transition dipoles of solvated molecules, as well as of the isotropic polarizability and specific rotation. Our results show a negligible difference between the PTED and PTES schemes, while the latter affords a significantly reduced computational cost. This scheme is general and can be applied to any solvation model that includes mutual solute-solvent polarization, including explicit models. Therefore, the PTES scheme is a competitive approach to compute response properties of solvated systems using CC methods.
Benefit Finding in Maternal Caregivers of Pediatric Cancer Survivors: A Mixed Methods Approach.
Willard, Victoria W; Hostetter, Sarah A; Hutchinson, Katherine C; Bonner, Melanie J; Hardy, Kristina K
2016-09-01
Benefit finding has been described as the identification of positive effects resulting from otherwise stressful experiences. In this mixed methods study, we examined the relations between qualitative themes related to benefit finding and quantitative measures of psychosocial adjustment and coping as reported by maternal caregivers of survivors of pediatric cancer. Female caregivers of survivors of pediatric cancer (n = 40) completed a qualitative questionnaire about their experiences caring for their child, along with several quantitative measures. Qualitative questionnaires were coded for salient themes, including social support and personal growth. Correlation matrices evaluated associations between qualitative themes and quantitative measures of stress and coping. Identified benefits included social support and personal growth, as well as child-specific benefits. Total benefits reported were significantly positively correlated with availability of emotional resources. Coping methods were also associated, with accepting responsibility associated with fewer identified benefits. Despite the stress of their child's illness, many female caregivers of survivors of pediatric cancer reported finding benefits associated with their experience. Benefit finding in this sample was associated with better adjustment. © 2016 by Association of Pediatric Hematology/Oncology Nurses.
Correlative cryo-fluorescence light microscopy and cryo-electron tomography of Streptomyces.
Koning, Roman I; Celler, Katherine; Willemse, Joost; Bos, Erik; van Wezel, Gilles P; Koster, Abraham J
2014-01-01
Light microscopy and electron microscopy are complementary techniques that in a correlative approach enable identification and targeting of fluorescently labeled structures in situ for three-dimensional imaging at nanometer resolution. Correlative imaging allows electron microscopic images to be positioned in a broader temporal and spatial context. We employed cryo-correlative light and electron microscopy (cryo-CLEM), combining cryo-fluorescence light microscopy and cryo-electron tomography, on vitrified Streptomyces bacteria to study cell division. Streptomycetes are mycelial bacteria that grow as long hyphae and reproduce via sporulation. On solid media, Streptomyces subsequently form distinct aerial mycelia where cell division leads to the formation of unigenomic spores which separate and disperse to form new colonies. In liquid media, only vegetative hyphae are present, divided by non-cell-separating crosswalls. Their multicellular lifestyle makes them exciting model systems for the study of bacterial development and cell division. Complex intracellular structures have been visualized with transmission electron microscopy. Here, we describe the methods for cryo-CLEM that we applied for studying Streptomyces. These methods include cell growth, fluorescent labeling, cryo-fixation by vitrification, cryo-light microscopy using a Linkam cryo-stage, image overlay and relocation, cryo-electron tomography using a Titan Krios, and tomographic reconstruction. Additionally, methods for segmentation, volume rendering, and visualization of the correlative data are described. © 2014 Elsevier Inc. All rights reserved.
Duell, L. F. W.
1988-01-01
In Owens Valley, evapotranspiration (ET) is one of the largest components of outflow in the hydrologic budget and the least understood. ET estimates for December 1983 through October 1985 were made for seven representative locations selected on the basis of geohydrology and the characteristics of phreatophytic alkaline scrub and meadow communities. The Bowen-ratio, eddy-correlation, and Penman-combination methods were used to estimate ET. The results of the analyses appear satisfactory when compared to other estimates of ET. Results by the eddy-correlation method are for a direct and a residual latent-heat flux that is based on sensible-heat flux and energy budget measurements. Penman-combination potential ET estimates were determined to be unusable because they overestimated actual ET. Modification of the psychrometer constant of this method to account for differences between heat-diffusion resistance and vapor-diffusion resistance permitted actual ET to be estimated. The methods may be used for studies in similar semiarid and arid rangeland areas in the Western United States. Meteorological data for three field sites are included in the appendix. Simple linear regression analysis indicates that ET estimates are correlated to air temperature, vapor-density deficit, and net radiation. Estimates of annual ET range from 300 mm at a low-density scrub site to 1,100 mm at a high-density meadow site. The monthly percentage of annual ET was determined to be similar for all sites studied. (Author's abstract)
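The regression step named above (ET against air temperature, vapor-density deficit, and net radiation) can be sketched as an ordinary least-squares fit. All numbers below are illustrative, not the Owens Valley measurements; a noise-free synthetic ET is used so the recovered coefficients are exact:

```python
import numpy as np

# hypothetical meteorological predictors (arbitrary illustrative values)
air_temp = np.array([8.0, 30.0, 15.0, 25.0, 12.0, 33.0])   # deg C
vpd      = np.array([2.1, 0.5, 1.8, 0.9, 2.8, 1.2])        # vapor-density deficit
net_rad  = np.array([200.0, 90.0, 150.0, 60.0, 220.0, 130.0])  # W/m^2

# synthetic ET built from assumed coefficients so the fit recovers them
et = 0.3 * air_temp + 0.8 * vpd + 0.02 * net_rad + 1.5

# design matrix with an intercept column; ordinary least squares
X = np.column_stack([air_temp, vpd, net_rad, np.ones_like(et)])
coef, *_ = np.linalg.lstsq(X, et, rcond=None)
# coef recovers [0.3, 0.8, 0.02, 1.5] up to rounding
```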
Litsas, George; Lucchese, Alessandra
2016-01-01
Purpose: To investigate the relationship between dental, chronological, and cervical vertebral maturation growth in the peak growth period, as well as to study the association between the dental calcification phases and the skeletal maturity stages during the same growth period. Methods: Subjects were selected from orthodontic pre-treatment cohorts consisting of 420 subjects where 255 were identified and enrolled into the study, comprising 145 girls and 110 boys. The lateral cephalometric and panoramic radiographs were examined from the archives of the Department of Orthodontics, Aristotle University of Thessaloniki, Greece. Dental age was assessed according to the method of Demirjian, and skeletal maturation according to the Cervical Vertebral Maturation Method. Statistical elaboration included the Spearman–Brown formula, descriptive statistics, Pearson’s correlation coefficient and regression analysis, paired samples t-test, and Spearman’s rho correlation coefficient. Results: Chronological and dental age showed a high correlation for both genders (r = 0.741 for boys, r = 0.770 for girls, p<0.001). The strongest correlation was for CVM Stage IV for both males (r=0.554) and females (r=0.68). The lowest correlation was for CVM Stage III in males (r=0.433, p<0.001) and for CVM Stage II in females (r=0.393, p>0.001). The t-test revealed statistically significant differences between these variables (p<0.001) during the peak period. A statistically significant correlation (p<0.001) between tooth calcification and CVM stages was determined. The second molars showed the highest correlation with CVM stages (CVMS) (r= 0.65 for boys, r = 0.72 for girls). Conclusion: Dental age was more advanced than chronological age for both boys and girls for all CVMS. During the peak period these differences were more pronounced. Moreover, all correlations between skeletal and dental stages were statistically significant. 
The second molars showed the highest correlation whereas the canines showed the lowest correlation for both genders. PMID:27335610
Detection of periodicity based on independence tests - III. Phase distance correlation periodogram
NASA Astrophysics Data System (ADS)
Zucker, Shay
2018-02-01
I present the Phase Distance Correlation (PDC) periodogram - a new periodicity metric, based on the Distance Correlation concept of Gábor Székely. For each trial period, PDC calculates the distance correlation between the data samples and their phases. PDC requires adapting Székely's distance correlation to circular variables (phases). The resulting periodicity metric is best suited to sparse data sets, and it performs better than other methods for sawtooth-like periodicities. These include Cepheid and RR-Lyrae light curves, as well as radial velocity curves of eccentric spectroscopic binaries. The performance of the PDC periodogram in other contexts is almost as good as that of the Generalized Lomb-Scargle periodogram. The concept of phase distance correlation can also be adapted to astrometric data, and it has the potential to be suitable for large evenly spaced data sets as well, after further algorithmic refinement.
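The core computation described above can be sketched as follows. This is a hedged illustration, not the paper's implementation: it uses Székely's ordinary (linear-distance) sample distance correlation on the phases, whereas the paper adapts the distance to circular variables; that adaptation is omitted here.

```python
import numpy as np

def distance_correlation(x, y):
    # sample distance correlation (biased version, for brevity):
    # double-center the pairwise distance matrices, then normalize
    a = np.abs(x[:, None] - x[None, :])
    b = np.abs(y[:, None] - y[None, :])
    A = a - a.mean(axis=0) - a.mean(axis=1)[:, None] + a.mean()
    B = b - b.mean(axis=0) - b.mean(axis=1)[:, None] + b.mean()
    dcov2 = (A * B).mean()
    return np.sqrt(max(dcov2, 0.0) / np.sqrt((A * A).mean() * (B * B).mean()))

def pdc_periodogram(t, y, trial_periods):
    # for each trial period, fold the times into phases in [0, 1) and
    # measure the dependence between samples and phases
    return np.array([distance_correlation((t % p) / p, y) for p in trial_periods])
```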
Cross-comparison and evaluation of air pollution field estimation methods
NASA Astrophysics Data System (ADS)
Yu, Haofei; Russell, Armistead; Mulholland, James; Odman, Talat; Hu, Yongtao; Chang, Howard H.; Kumar, Naresh
2018-04-01
Accurate estimates of human exposure are critical for air pollution health studies, and a variety of methods are currently being used to assign pollutant concentrations to populations. Results from these methods may differ substantially, which can affect the outcomes of health impact assessments. Here, we applied 14 methods for developing spatiotemporal air pollutant concentration fields of eight pollutants to the Atlanta, Georgia region. These methods include eight methods relying mostly on air quality observations (CM: central monitor; SA: spatial average; IDW: inverse distance weighting; KRIG: kriging; TESS-D: discontinuous tessellation; TESS-NN: natural neighbor tessellation with interpolation; LUR: land use regression; AOD: downscaled satellite-derived aerosol optical depth), one using the RLINE dispersion model, and five methods using a chemical transport model (CMAQ), with and without using observational data to constrain results. The derived fields were evaluated and compared. Overall, all methods generally perform better in urban than in rural areas, and for secondary than for primary pollutants. We found the CM and SA methods may be appropriate only for small domains and for secondary pollutants, though the SA method led to large negative spatial correlations when using data withholding for PM2.5 (spatial correlation coefficient R = -0.81). The TESS-D method was found to have major limitations. Results of the IDW, KRIG and TESS-NN methods are similar. They are better suited for secondary pollutants because of their satisfactory temporal performance (e.g. average temporal R2 > 0.85 for PM2.5 but less than 0.35 for the primary pollutant NO2). In addition, they are suitable only for areas with relatively dense monitoring networks, given their inability to capture spatial concentration variability, as indicated by the negative spatial R (lower than -0.2 for PM2.5 when assessed using data withholding). 
The performance of the LUR and AOD methods was similar to kriging. Using RLINE and CMAQ fields without fusing observational data led to substantial errors and biases, though the CMAQ model captured spatial gradients reasonably well (spatial R = 0.45 for PM2.5). Two unique tests conducted here included quantifying autocorrelation of method biases (which can be important in time series analyses) and how well the methods capture the observed interspecies correlations (which would be of particular importance in multipollutant health assessments). Autocorrelation of method biases lasted longest, and interspecies correlations of primary pollutants were higher than observed, when air quality models were used without data fusion. Use of hybrid methods that combine air quality model outputs with observational data overcomes some of these limitations and is better suited for health studies. Results from this study contribute to better understanding the strengths and weaknesses of different methods for estimating human exposures.
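Of the interpolation methods compared above, inverse distance weighting (IDW) is the simplest to state. The sketch below is a hypothetical minimal version (the function name, power parameter, and epsilon guard are assumptions, not the study's implementation): each query point gets a weighted average of observations, with weights decaying as distance to the monitor grows.

```python
import numpy as np

def idw(xy_obs, z_obs, xy_query, power=2.0, eps=1e-12):
    # distances from every query point to every observation (q x n)
    d = np.linalg.norm(xy_query[:, None, :] - xy_obs[None, :, :], axis=2)
    # inverse-distance weights; eps avoids division by zero at a monitor
    w = 1.0 / (d ** power + eps)
    # weighted average of observed concentrations per query point
    return (w @ z_obs) / w.sum(axis=1)
```

Because the estimate is always a convex combination of monitor values, IDW cannot extrapolate beyond the observed range, consistent with the limited spatial variability noted in the abstract.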
Quantitative analysis of tympanic membrane perforation: a simple and reliable method.
Ibekwe, T S; Adeosun, A A; Nwaorgu, O G
2009-01-01
Accurate assessment of the features of tympanic membrane perforation, especially size, site, duration and aetiology, is important, as it enables optimum management. To describe a simple, cheap and effective method of quantitatively analysing tympanic membrane perforations. The system described comprises a video-otoscope (capable of generating still and video images of the tympanic membrane), adapted via a universal serial bus box to a computer screen, with images analysed using the Image J geometrical analysis software package. The reproducibility of results and their correlation with conventional otoscopic methods of estimation were tested statistically with the paired t-test and correlational tests, using the Statistical Package for the Social Sciences version 11 software. The following equation was generated: P/T x 100 per cent = percentage perforation, where P is the area (in pixels²) of the tympanic membrane perforation and T is the total area (in pixels²) of the entire tympanic membrane (including the perforation). Illustrations are shown. Comparison of blinded data on tympanic membrane perforation area obtained independently from assessments by two trained otologists, of comparative years of experience, using the video-otoscopy system described, showed similar findings, with strong correlations devoid of inter-observer error (p = 0.000, r = 1). Comparison with conventional otoscopic assessment also indicated significant correlation, comparing results for two trained otologists, but some inter-observer variation was present (p = 0.000, r = 0.896). Correlation between the two methods for each of the otologists was also highly significant (p = 0.000). A computer-adapted video-otoscope, with images analysed by Image J software, represents a cheap, reliable, technology-driven, clinical method of quantitative analysis of tympanic membrane perforations and injuries.
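The equation quoted above translates directly into code. A one-line sketch (function name assumed; areas are the pixel² measurements produced by Image J):

```python
def percent_perforation(perf_area_px2, total_area_px2):
    # P/T x 100 per cent: perforation area over total tympanic membrane
    # area, both in pixels^2 as measured in Image J
    return perf_area_px2 / total_area_px2 * 100.0

# e.g. a 250 px^2 perforation in a 1000 px^2 membrane is a 25% perforation
pct = percent_perforation(250.0, 1000.0)
```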
Chalian, Hamid; Seyal, Adeel Rahim; Rezai, Pedram; Töre, Hüseyin Gürkan; Miller, Frank H; Bentrem, David J; Yaghmai, Vahid
2014-01-10
The accuracy of determining pancreatic cyst volume with the commonly used spherical and ellipsoid methods is unknown. The role of CT volumetry in volumetric assessment of pancreatic cysts needs to be explored. Objective: To compare volumes of pancreatic cysts by CT volumetry and the spherical and ellipsoid methods, and to determine their accuracy by correlating with actual volume as determined by EUS-guided aspiration. Setting: This is a retrospective analysis performed at a tertiary care center. Patients: Seventy-eight pathologically proven pancreatic cysts evaluated with CT and endoscopic ultrasound (EUS) were included. Design: The volume of fourteen cysts that had been fully aspirated by EUS was compared to CT volumetry and the routinely used methods (ellipsoid and spherical volume). Two independent observers measured all cysts using commercially available software to evaluate inter-observer reproducibility for CT volumetry. The volume of pancreatic cysts as determined by the various methods was compared using repeated measures analysis of variance. Bland-Altman plots and the intraclass correlation coefficient were used to determine mean difference and correlation between observers and methods. The error was calculated as the percentage of the difference between the CT-estimated volume and the aspirated volume, divided by the aspirated volume. CT volumetry was comparable to aspirated volume (P=0.396) with very high intraclass correlation (r=0.891, P<0.001) and small mean difference (0.22 mL) and error (8.1%). Mean difference with aspirated volume and error were larger for ellipsoid (0.89 mL, 30.4%; P=0.024) and spherical (1.73 mL, 55.5%; P=0.004) volumes than for CT volumetry. There was excellent inter-observer correlation in volumetry of the entire cohort (r=0.997, P<0.001). CT volumetry is accurate and reproducible. Ellipsoid and spherical volume overestimate the true volume of pancreatic cysts.
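The spherical and ellipsoid estimates compared above are the standard diameter-based volume formulas, and the error definition is stated in the abstract. A minimal sketch (function names and numeric inputs are hypothetical):

```python
import math

def spherical_volume(d):
    # sphere of diameter d: V = pi * d^3 / 6
    return math.pi * d ** 3 / 6.0

def ellipsoid_volume(d1, d2, d3):
    # ellipsoid with axis diameters d1, d2, d3: V = pi * d1 * d2 * d3 / 6
    return math.pi * d1 * d2 * d3 / 6.0

def percent_error(estimated, aspirated):
    # error as defined in the abstract: difference over aspirated volume
    return abs(estimated - aspirated) / aspirated * 100.0
```

For a cyst with unequal axis diameters, the spherical formula (typically applied to the largest diameter) exceeds the ellipsoid estimate, which is consistent with the larger overestimation reported for the spherical method.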
Position, rotation, and intensity invariant recognizing method
Ochoa, Ellen; Schils, George F.; Sweeney, Donald W.
1989-01-01
A method for recognizing the presence of a particular target in a field of view which is target position, rotation, and intensity invariant includes the preparing of a target-specific invariant filter from a combination of all eigen-modes of a pattern of the particular target. Coherent radiation from the field of view is then imaged into an optical correlator in which the invariant filter is located. The invariant filter is rotated in the frequency plane of the optical correlator in order to produce a constant-amplitude rotational response in a correlation output plane when the particular target is present in the field of view. Any constant response is thus detected in the output. The U.S. Government has rights in this invention pursuant to Contract No. DE-AC04-76DP00789 between the U.S. Department of Energy and AT&T Technologies, Inc.
Trapezium Bone Density-A Comparison of Measurements by DXA and CT.
Breddam Mosegaard, Sebastian; Breddam Mosegaard, Kamille; Bouteldja, Nadia; Bæk Hansen, Torben; Stilling, Maiken
2018-01-18
Bone density may influence the primary fixation of cementless implants, and poor bone density may increase the risk of implant failure. Before deciding on using total joint replacement as treatment in osteoarthritis of the trapeziometacarpal joint, it is valuable to determine the trapezium bone density. The aim of this study was to: (1) determine the correlation between measurements of bone mineral density of the trapezium obtained by dual-energy X-ray absorptiometry (DXA) scans by a circumference method and a new inner-ellipse method; and (2) compare those to measurements of bone density obtained by computerized tomography (CT) scans in Hounsfield units (HU). We included 71 hands from 59 patients with a mean age of 59 years (43-77). All patients had Eaton-Glickel stage II-IV trapeziometacarpal (TM) joint osteoarthritis, were under evaluation for trapeziometacarpal total joint replacement, and underwent DXA and CT wrist scans. There was an excellent correlation (r = 0.94) between DXA bone mineral density measures using the circumference and the inner-ellipse method. There was a moderate correlation between bone density measures obtained by DXA and CT scans (r = 0.49 for the circumference method and r = 0.55 for the inner-ellipse method). DXA may be used in pre-operative evaluation of the trapezium bone quality, and the simpler DXA inner-ellipse measurement method can replace the DXA circumference method in estimation of bone density of the trapezium.
Modeling and Characterization of Near-Crack-Tip Plasticity from Micro- to Nano-Scales
NASA Technical Reports Server (NTRS)
Glaessgen, Edward H.; Saether, Erik; Hochhalter, Jacob; Smith, Stephen W.; Ransom, Jonathan B.; Yamakov, Vesselin; Gupta, Vipul
2010-01-01
Methodologies for understanding the plastic deformation mechanisms related to crack propagation at the nano-, meso- and micro-length scales are being developed. These efforts include the development and application of several computational methods including atomistic simulation, discrete dislocation plasticity, strain gradient plasticity and crystal plasticity; and experimental methods including electron backscattered diffraction and video image correlation. Additionally, methodologies for multi-scale modeling and characterization that can be used to bridge the relevant length scales from nanometers to millimeters are being developed. The paper focuses on the discussion of newly developed methodologies in these areas and their application to understanding damage processes in aluminum and its alloys.
Modeling and Characterization of Near-Crack-Tip Plasticity from Micro- to Nano-Scales
NASA Technical Reports Server (NTRS)
Glaessgen, Edward H.; Saether, Erik; Hochhalter, Jacob; Smith, Stephen W.; Ransom, Jonathan B.; Yamakov, Vesselin; Gupta, Vipul
2011-01-01
Methodologies for understanding the plastic deformation mechanisms related to crack propagation at the nano-, meso- and micro-length scales are being developed. These efforts include the development and application of several computational methods including atomistic simulation, discrete dislocation plasticity, strain gradient plasticity and crystal plasticity; and experimental methods including electron backscattered diffraction and video image correlation. Additionally, methodologies for multi-scale modeling and characterization that can be used to bridge the relevant length scales from nanometers to millimeters are being developed. The paper focuses on the discussion of newly developed methodologies in these areas and their application to understanding damage processes in aluminum and its alloys.
NASA Astrophysics Data System (ADS)
Dong, Xiabin; Huang, Xinsheng; Zheng, Yongbin; Bai, Shengjian; Xu, Wanying
2014-07-01
Infrared moving target detection is an important part of infrared technology. We introduce a novel infrared small moving target detection method based on tracking interest points under complicated backgrounds. First, Difference of Gaussians (DOG) filters are used to detect a group of interest points (including the moving targets). Second, a small-target tracking method inspired by the Human Visual System (HVS) is used to track these interest points for several frames, and the correlations between interest points in the first frame and the last frame are obtained. Finally, a new clustering method named R-means is proposed to divide these interest points into two groups according to the correlations: one of target points and the other of background points. In the experimental results, the target-to-clutter ratio (TCR) and receiver operating characteristic (ROC) curves are computed to compare the performance of the proposed method and five other sophisticated methods. From the results, the proposed method shows better discrimination of targets and clutter and has a lower false alarm rate than the existing moving target detection methods.
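The first step above, Difference of Gaussians filtering, is a standard band-pass operation: small bright blobs (candidate interest points) respond strongly while smooth background is suppressed. A minimal sketch, assuming `scipy` and illustrative sigma values (the paper's parameters are not given here):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def difference_of_gaussians(image, sigma_small=1.0, sigma_large=2.0):
    # subtracting a coarser blur from a finer one leaves a band-pass
    # response: compact bright spots survive, slow gradients cancel
    return gaussian_filter(image, sigma_small) - gaussian_filter(image, sigma_large)
```

Thresholding the absolute response of this filter would yield the interest points that the tracking and R-means clustering stages then operate on.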
Incoherent Diffractive Imaging via Intensity Correlations of Hard X Rays
NASA Astrophysics Data System (ADS)
Classen, Anton; Ayyer, Kartik; Chapman, Henry N.; Röhlsberger, Ralf; von Zanthier, Joachim
2017-08-01
Established x-ray diffraction methods allow for high-resolution structure determination of crystals, crystallized protein structures, or even single molecules. While these techniques rely on coherent scattering, incoherent processes like fluorescence emission—often the predominant scattering mechanism—are generally considered detrimental for imaging applications. Here, we show that intensity correlations of incoherently scattered x-ray radiation can be used to image the full 3D arrangement of the scattering atoms with significantly higher resolution compared to conventional coherent diffraction imaging and crystallography, including additional three-dimensional information in Fourier space for a single sample orientation. We present a number of properties of incoherent diffractive imaging that are conceptually superior to those of coherent methods.
Amanat, B; Kardan, M R; Faghihi, R; Hosseini Pooya, S M
2013-01-01
Background: Radon and its daughters are amongst the most important sources of natural exposure in the world. Soil is one of the significant sources of radon/thoron due to both radium and thorium, so the thoron emanated from it may cause increased uncertainties in radon measurements. Recently, a diffusion chamber has been designed and optimized for passive discriminative measurements of radon/thoron concentrations in soil. Objective: In order to evaluate the capability of the passive method, some comparative measurements (with active methods) have been performed. Method: The method is based upon measurements by a diffusion chamber, including two Lexan polycarbonate SSNTDs, which can discriminate the radon/thoron emanated from the soil by the delay method. The comparative measurements were performed at ten selected points of the HLNRA of Ramsar in Iran. The linear regression and correlation between the results of the two methods have been studied. Results: The radon concentrations are within the range of 12.1 to 165 kBq/m3. The correlation between the results of the active and passive methods was 0.99. Thoron concentrations ranged from 1.9 to 29.5 kBq/m3 at these points. Conclusion: The sensitivity, as well as the strong correlation with active measurements, shows that the new low-cost passive method is appropriate for accurate seasonal measurements of radon and thoron concentrations in soil. PMID:25505760
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bouchard, Chris; Chang, Chia Cheng; Kurth, Thorsten
In this paper, we show that the Feynman-Hellmann theorem can be derived from the long Euclidean-time limit of correlation functions determined with functional derivatives of the partition function. Using this insight, we fully develop an improved method for computing matrix elements of external currents utilizing only two-point correlation functions. Our method applies to matrix elements of any external bilinear current, including nonzero momentum transfer, flavor-changing, and two or more current insertion matrix elements. The ability to identify and control all the systematic uncertainties in the analysis of the correlation functions stems from the unique time dependence of the ground-state matrix elements and the fact that all excited states and contact terms are Euclidean-time dependent. We demonstrate the utility of our method with a calculation of the nucleon axial charge using gradient-flowed domain-wall valence quarks on the $$N_f=2+1+1$$ MILC highly improved staggered quark ensemble with lattice spacing and pion mass of approximately 0.15 fm and 310 MeV respectively. We show full control over excited-state systematics with the new method and obtain a value of $$g_A = 1.213(26)$$ with a quark-mass-dependent renormalization coefficient.
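For context on the long Euclidean-time limit invoked above: a lattice two-point correlator behaves as C(t) ≈ A e^(-E0 t) at large t, so the log-ratio of neighboring time slices plateaus at the ground-state energy once excited-state contamination has decayed. The sketch below is not the paper's Feynman-Hellmann analysis, just the standard effective-mass diagnostic such analyses build on; the correlator values are synthetic.

```python
import numpy as np

def effective_mass(c2pt):
    # m_eff(t) = log(C(t) / C(t+1)); constant in t for a pure
    # single-exponential (ground-state-dominated) correlator
    c2pt = np.asarray(c2pt, dtype=float)
    return np.log(c2pt[:-1] / c2pt[1:])

# synthetic single-state correlator with E0 = 0.5 (lattice units)
t = np.arange(8)
corr = 3.0 * np.exp(-0.5 * t)
m_eff = effective_mass(corr)  # flat at 0.5 for this noiseless toy data
```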
Recent advances on terrain database correlation testing
NASA Astrophysics Data System (ADS)
Sakude, Milton T.; Schiavone, Guy A.; Morelos-Borja, Hector; Martin, Glenn; Cortes, Art
1998-08-01
Terrain database correlation is a major requirement for interoperability in distributed simulation. There are numerous situations in which terrain database correlation problems can occur that, in turn, lead to lack of interoperability in distributed training simulations. Examples are the use of different run-time terrain databases derived from inconsistent source data, the use of different resolutions, and the use of different data models between databases for both terrain and culture data. IST has been developing a suite of software tools, named ZCAP, to address terrain database interoperability issues. In this paper we discuss recent enhancements made to this suite, including improved algorithms for sampling and calculating line-of-sight, an improved method for measuring terrain roughness, and the application of a sparse matrix method to the terrain remediation solution developed at the Visual Systems Lab of the Institute for Simulation and Training. We review the application of some of these new algorithms to the terrain correlation measurement processes. The application of these new algorithms improves our support for very large terrain databases, and provides the capability for performing test replications to estimate the sampling error of the tests. With this set of tools, a user can quantitatively assess the degree of correlation between large terrain databases.
Konop, Katherine A; Strifling, Kelly M B; Wang, Mei; Cao, Kevin; Eastwood, Daniel; Jackson, Scott; Ackman, Jeffrey; Altiok, Haluk; Schwab, Jeffrey; Harris, Gerald F
2009-01-01
We evaluated the relationships between upper extremity (UE) kinetics and the energy expenditure index during anterior and posterior walker-assisted gait in children with spastic diplegic cerebral palsy (CP). Ten children (3 boys, 7 girls; mean age 12.1 years; range 8 to 18 years) with spastic diplegic CP, who ambulated with a walker underwent gait analyses that included UE kinematics and kinetics. Upper extremity kinetics were obtained using instrumented walker handles. Energy expenditure index was obtained using the heart rate method (EEIHR) by subtracting resting heart rate from walking heart rate, and dividing by the walking speed. Correlations were sought between the kinetic variables and the EEIHR and temporal and stride parameters. In general, anterior walker use was associated with a higher EEIHR. Several kinetic variables correlated well with temporal and stride parameters, as well as the EEIHR. All of the significant correlations (r>0.80; p<0.005) occurred during anterior walker use and involved joint reaction forces (JRF) rather than moments. Some variables showed multiple strong correlations during anterior walker use, including the medial JRF in the wrist, the posterior JRF in the elbow, and the inferior and superior JRFs in the shoulder. The observed correlations may indicate a relationship between the force used to advance the body forward within the walker frame and an increased EEIHR. More work is needed to refine the correlations, and to explore relationships with other variables, including the joint kinematics.
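The energy expenditure index definition given above (resting heart rate subtracted from walking heart rate, divided by walking speed) is a one-line computation. A sketch, with a hypothetical function name and illustrative units:

```python
def eei_hr(walking_hr, resting_hr, walking_speed):
    # heart-rate-based energy expenditure index:
    # (walking HR - resting HR) / walking speed
    # e.g. beats per minute and meters per minute -> beats per meter
    return (walking_hr - resting_hr) / walking_speed

# a child walking at 0.5 m/s with HR rising from 80 to 120 bpm
index = eei_hr(120.0, 80.0, 0.5)
```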
Adolescent Sedentary Behaviors: Correlates Differ for Television Viewing and Computer Use
Babey, Susan H.; Hastert, Theresa A.; Wolstein, Joelle
2013-01-01
Purpose Sedentary behavior is associated with obesity in youth. Understanding correlates of specific sedentary behaviors can inform the development of interventions to reduce sedentary time. The current research examines correlates of leisure computer use and television viewing among California adolescents. Methods Using data from the 2005 California Health Interview Survey (CHIS), we examined individual, family and environmental correlates of two sedentary behaviors among 4,029 adolescents: leisure computer use and television watching. Results Linear regression analyses adjusting for a range of factors indicated several differences in the correlates of television watching and computer use. Correlates of additional time spent watching television included male sex, American Indian and African American race, lower household income, lower levels of physical activity, lower parent educational attainment, and additional hours worked by parents. Correlates of a greater amount of time spent using the computer for fun included older age, Asian race, higher household income, lower levels of physical activity, less parental knowledge of free time activities, and living in neighborhoods with higher proportions of non-white residents and higher proportions of low-income residents. Only physical activity was associated similarly with both watching television and computer use. Conclusions These results suggest that correlates of time spent on television watching and leisure computer use are different. Reducing screen time is a potentially successful strategy in combating childhood obesity, and understanding differences in the correlates of different screen time behaviors can inform the development of more effective interventions to reduce sedentary time. PMID:23260837
Correlation between safety assessments in the driver-car interaction design process.
Broström, Robert; Bengtsson, Peter; Axelsson, Jakob
2011-05-01
With the functional revolution in modern cars, evaluation methods to be used in all phases of driver-car interaction design have gained importance. It is crucial for car manufacturers to discover and solve safety issues early in the interaction design process. A current problem is thus to find a correlation between the formative methods that are used during development and the summative methods that are used when the product has reached the customer. This paper investigates the correlation between efficiency metrics from summative and formative evaluations, where the results of two studies on sound and navigation system tasks are compared. The first, an analysis of the J.D. Power and Associates APEAL survey, consists of answers given by about two thousand customers. The second, an expert evaluation study, was done by six evaluators who assessed the layouts by task completion time, TLX and Nielsen heuristics. The results show a high degree of correlation between the studies in terms of task efficiency, i.e. between customer ratings and task completion time, and customer ratings and TLX. However, no correlation was observed between Nielsen heuristics and customer ratings, task completion time or TLX. The results of the studies introduce a possibility to develop a usability evaluation framework that includes both formative and summative approaches, as the results show a high degree of consistency between the different methodologies. Hence, combining a quantitative approach with the expert evaluation method, such as task completion time, should be more useful for driver-car interaction design. Copyright © 2010 Elsevier Ltd and The Ergonomics Society. All rights reserved.
Boers, A M; Marquering, H A; Jochem, J J; Besselink, N J; Berkhemer, O A; van der Lugt, A; Beenen, L F; Majoie, C B
2013-08-01
Cerebral infarct volume (CIV) as observed in follow-up CT is an important radiologic outcome measure of the effectiveness of treatment of patients with acute ischemic stroke. However, manual measurement of CIV is time-consuming and operator-dependent. The purpose of this study was to develop and evaluate a robust automated measurement of the CIV. The CIV in early follow-up CT images of 34 consecutive patients with acute ischemic stroke was segmented with an automated intensity-based region-growing algorithm, which includes partial volume effect correction near the skull, midline determination, and ventricle and hemorrhage exclusion. Two observers manually delineated the CIV. Interobserver variability of the manual assessments and the accuracy of the automated method were evaluated by using the Pearson correlation, Bland-Altman analysis, and Dice coefficients. The accuracy was defined as the correlation with the manual assessment as a reference standard. The Pearson correlation for the automated method compared with the reference standard was similar to the manual correlation (R = 0.98). The accuracy of the automated method was excellent, with a mean difference of 0.5 mL and limits of agreement of -38.0 to 39.1 mL, which were more consistent than the interobserver variability of the 2 observers (-40.9 to 44.1 mL). However, the Dice coefficients were higher for the manual delineation. The automated method showed a strong correlation and accuracy with the manual reference measurement. This approach has the potential to become the standard in assessing the infarct volume as a secondary outcome measure for evaluating the effectiveness of treatment.
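The two agreement statistics named above have compact standard definitions. A hedged sketch (function names and test values are illustrative, not the study's data): Bland-Altman limits of agreement are the mean difference ± 1.96 standard deviations of the paired differences, and the Dice coefficient measures spatial overlap of two binary segmentations.

```python
import numpy as np

def bland_altman_limits(a, b):
    # mean difference and 95% limits of agreement between two methods
    diff = np.asarray(a, float) - np.asarray(b, float)
    m, s = diff.mean(), diff.std(ddof=1)
    return m, m - 1.96 * s, m + 1.96 * s

def dice_coefficient(mask1, mask2):
    # 2|A intersect B| / (|A| + |B|) for binary segmentation masks
    mask1, mask2 = np.asarray(mask1, bool), np.asarray(mask2, bool)
    inter = np.logical_and(mask1, mask2).sum()
    return 2.0 * inter / (mask1.sum() + mask2.sum())
```

Note that volume agreement (Bland-Altman) and spatial overlap (Dice) can diverge, as in the study: the automated volumes agreed well while the manual delineations overlapped better.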
High order neural correlates of social behavior in the honeybee brain.
Duer, Aron; Paffhausen, Benjamin H; Menzel, Randolf
2015-10-30
Honeybees are well-established models for studying the neural correlates of sensory function, learning, and memory formation. Here we report a novel approach that allows recording of high-order mushroom body-extrinsic interneurons in the brain of worker bees within a functional colony. New method: the use of two 100 cm long twisted copper electrodes allowed simultaneous recording of up to four units of mushroom body-extrinsic neurons for up to 24 h in animals moving freely among members of the colony. Every worker, including the recorded bee, hatched in the experimental environment. The group consisted of 200 animals on average. Animals explored different regions of the comb and interacted with other colony members. The activities of the units were not selective for locations on the comb, body orientations with respect to gravity, olfactory signals on the comb, or different social interactions. However, combinations of these parameters defined neural activity in a unit-specific way. In addition, units recorded from the same animal co-varied according to unknown factors. Comparison with existing method(s): until now, all electrophysiological studies with honeybees were performed on constrained animals outside their natural behavioral contexts, and no neuronal correlates had been measured in a social context. Free mobility of the recorded insects over a range of a quarter square meter allows addressing questions concerning the neural correlates of social communication, planning of tasks within the colony, and attention-like processes. The method makes it possible to study neural correlates of social behavior in a near-natural setting within the honeybee colony. Copyright © 2015 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Piao, Lin; Fu, Zuntao
2016-11-01
Cross-correlation between pairs of variables has a multi-time-scale character and can differ qualitatively across time scales (changing from positive correlation to negative), e.g., the association between mean air temperature and relative humidity over regions east of the Taihang mountains in China. Correctly unveiling these correlations on different time scales is therefore important, since we generally do not know in advance whether the correlation varies with scale. Here, we compare two methods, Detrended Cross-Correlation Analysis (DCCA) and Pearson correlation, in quantifying scale-dependent correlations, applied both to raw observed records and to artificially generated sequences with known cross-correlation features. The studies show that 1) DCCA-related methods can indeed quantify scale-dependent correlations, whereas the Pearson method cannot; 2) the correlation features obtained from DCCA-related methods are robust to contaminating noise, whereas the results from the Pearson method are sensitive to noise; and 3) the scale-dependent correlation results from DCCA-related methods are robust to the amplitude ratio between slow and fast components, while the Pearson method can be sensitive to this ratio. All these features indicate that DCCA-related methods have clear advantages in correctly quantifying scale-dependent correlations arising from different physical processes.
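For readers unfamiliar with DCCA, the scale-dependent correlation it yields (often called the DCCA cross-correlation coefficient) can be sketched as follows. This is a simplified illustration assuming linear detrending and non-overlapping windows of a single size n, not the code used in the study:

```python
def _profile(x):
    """Cumulative sum of the mean-removed series (the 'profile')."""
    m = sum(x) / len(x)
    out, s = [], 0.0
    for v in x:
        s += v - m
        out.append(s)
    return out

def _linear_residuals(w):
    """Residuals of a least-squares linear fit against index 0..n-1."""
    n = len(w)
    mt = (n - 1) / 2.0
    mw = sum(w) / n
    stt = sum((t - mt) ** 2 for t in range(n))
    slope = sum((t - mt) * (v - mw) for t, v in enumerate(w)) / stt
    return [v - (mw + slope * (t - mt)) for t, v in enumerate(w)]

def _detrended_cov(px, py, n):
    """Average detrended covariance over non-overlapping windows of size n."""
    total, count = 0.0, 0
    for s in range(0, len(px) - n + 1, n):
        rx = _linear_residuals(px[s:s + n])
        ry = _linear_residuals(py[s:s + n])
        total += sum(a * b for a, b in zip(rx, ry)) / n
        count += 1
    return total / count

def dcca_coefficient(x, y, n):
    """rho_DCCA(n) = F2_xy / sqrt(F2_xx * F2_yy) at window size n."""
    px, py = _profile(x), _profile(y)
    return _detrended_cov(px, py, n) / (
        _detrended_cov(px, px, n) * _detrended_cov(py, py, n)) ** 0.5

# Deterministic toy series: an anti-correlated pair gives rho = -1
x = [(3 * i * i) % 17 for i in range(100)]
y = [-v for v in x]
print(round(dcca_coefficient(x, y, 10), 3))  # -1.0
```

Varying n then traces out the scale dependence that, per the abstract, a single Pearson coefficient cannot capture.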
Fast Electron Correlation Methods for Molecular Clusters without Basis Set Superposition Errors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kamiya, Muneaki; Hirata, So; Valiev, Marat
2008-02-19
Two critical extensions to our fast, accurate, and easy-to-implement binary or ternary interaction method for weakly interacting molecular clusters [Hirata et al. Mol. Phys. 103, 2255 (2005)] have been proposed, implemented, and applied to water hexamers, hydrogen fluoride chains and rings, and neutral and zwitterionic glycine–water clusters, with excellent results in an initial performance assessment. Our original method included up to two- or three-body Coulomb, exchange, and correlation energies exactly and higher-order Coulomb energies in the dipole–dipole approximation. In this work, the dipole moments are replaced by atom-centered point charges determined so that they reproduce the electrostatic potentials of the cluster subunits as closely as possible and also self-consistently with one another in the cluster environment. They have been shown to lead to dramatic improvement in the description of short-range electrostatic potentials, not only of large, charge-separated subunits like zwitterionic glycine but also of small subunits. Furthermore, basis set superposition errors (BSSE), known to plague direct evaluation of weak interactions, have been eliminated by combining the Valiron–Mayer function counterpoise (VMFC) correction with our binary or ternary interaction method in an economical fashion (quadratic scaling n^2 with respect to the number of subunits n when n is small and linear scaling when n is large). A new variant of VMFC has also been proposed in which three-body and all higher-order Coulomb effects on BSSE are estimated approximately. The BSSE-corrected ternary interaction method with atom-centered point charges reproduces the VMFC-corrected results of conventional electron correlation calculations within 0.1 kcal/mol. The proposed method is significantly more accurate and also more efficient than conventional correlation methods uncorrected for BSSE.
Razi, Saeid; Ghoncheh, Mahshid; Mohammadian-Hafshejani, Abdollah; Aziznejhad, Hojjat; Mohammadian, Mahdi; Salehiniya, Hamid
2016-01-01
Background: The incidence and mortality estimates of ovarian cancer based on human development are essential for planning by policy makers. This study is aimed at investigating the standardised incidence rates (SIR) and standardised mortality rates (SMR) of ovarian cancer and their relationship with the Human Development Index (HDI) in Asian countries. Methods: This was an ecologic study in Asia assessing the correlation between SIR and SMR, expressed as age-standardised rates (ASR), and the HDI and its components, including life expectancy at birth, mean years of schooling, and gross national income (GNI) per capita. We used the bivariate correlation method for assessment of the correlation between ASR and the HDI and its components. Statistical significance was assumed if P < 0.05. All reported P-values were two-sided. Statistical analyses were performed using SPSS (Version 15.0, SPSS Inc.). Results: The highest SIR of ovarian cancer was observed in Singapore, Kazakhstan, and Brunei, respectively. Indonesia, Brunei, and Afghanistan had the highest SMR. There was a positive correlation between the HDI and SIR (r = 0.143, p = 0.006). The correlation between the SMR of ovarian cancer and the HDI was not significant (r = 0.005, p = 0.520). Conclusion: According to the findings of this study, there was a positive correlation between the HDI and SIR, but no correlation between the SMR and HDI. PMID:27110284
Comparison and correlation of pelvic parameters between low-grade and high-grade spondylolisthesis.
Min, Woo-Kie; Lee, Chang-Hwa
2014-05-01
This study was retrospectively conducted on 51 patients with L5-S1 spondylolisthesis. It compared a total of 11 pelvic parameters, namely the level of displacement by the Meyerding method, lumbar lordosis, sacral inclination, lumbosacral angle, slip angle, S2 inclination, pelvic incidence (PI), L5 inclination, L5 slope, pelvic tilt (PT), and sacral slope (SS), between low-grade and high-grade spondylolisthesis, and investigated the correlation of the level of displacement by the Meyerding method with the other pelvic parameters. Pelvic parameters were measured on preoperative erect lateral spinal radiographs. The patients were divided into 39 patients with low-grade spondylolisthesis and 12 patients with high-grade spondylolisthesis before analysis. In all patients of both groups, the 11 radiographic measurements listed above were performed. Student t tests and Pearson correlation analysis were conducted to compare and analyze each measurement. In the comparison between the 2 groups, highly significant differences were found in the level of displacement by the Meyerding method, lumbosacral angle, slip angle, L5 inclination, PI, and L5 slope (P≤0.001), and significant differences in the sacral inclination and PT (P<0.05). However, no statistically significant difference was found in the S2 inclination and SS. The correlation of the level of displacement by the Meyerding method with each parameter was analyzed across both groups. A high correlation was observed for the lumbar lordosis, lumbosacral angle, slip angle, L5 inclination, and L5 slope (Pearson correlation, P=0.01), as well as for the sacral inclination, PI, and PT (Pearson correlation, P=0.05). Meanwhile, no correlation was found for the S2 inclination and SS.
A significant difference in the lumbosacral angle, slip angle, L5 inclination, PI, L5 slope, sacral inclination, and PT was found between the patients with high-grade spondylolisthesis and those with low-grade spondylolisthesis. Among these measurements, the PI both differed significantly between the 2 groups and correlated significantly with the level of displacement in all patients.
NASA Astrophysics Data System (ADS)
Cinar, A. F.; Barhli, S. M.; Hollis, D.; Flansbjer, M.; Tomlinson, R. A.; Marrow, T. J.; Mostafavi, M.
2017-09-01
Digital image correlation has been routinely used to measure full-field displacements in many areas of solid mechanics, including fracture mechanics. Accurate segmentation of the crack path is needed to study its interaction with the microstructure and stress fields, and studies of crack behaviour, such as the effect of closure or residual stress in fatigue, require data on its opening displacement. Such information can be obtained from any digital image correlation analysis of cracked components, but its collection by manual methods is quite onerous, particularly for massive amounts of data. We introduce the novel application of Phase Congruency to detect and quantify cracks and their opening. Unlike other crack detection techniques, Phase Congruency does not rely on adjustable threshold values that require user interaction, and so allows large datasets to be treated autonomously. The accuracy of the Phase Congruency based algorithm in detecting cracks is evaluated and compared with conventional methods such as Heaviside function fitting. As Phase Congruency is a displacement-based method, it does not suffer from the noise intensification to which gradient-based methods (e.g. strain thresholding) are susceptible. Its application is demonstrated on experimental data for cracks in quasi-brittle (granitic rock) and ductile (aluminium alloy) materials.
Soil analysis based on samples withdrawn from different volumes: correlation versus calibration
Lucian Weilopolski; Kurt Johnsen; Yuen Zhang
2010-01-01
Soil, particularly in forests, is replete with spatial variation with respect to soil C. The present standard chemical method for soil analysis by dry combustion (DC) is destructive, and comprehensive sampling is labor intensive and time consuming. These, among other factors, are contributing to the development of new methods for soil analysis. These include a near...
TNSPackage: A Fortran2003 library designed for tensor network state methods
NASA Astrophysics Data System (ADS)
Dong, Shao-Jun; Liu, Wen-Yuan; Wang, Chao; Han, Yongjian; Guo, G.-C.; He, Lixin
2018-07-01
Recently, tensor network states (TNS) methods have proven to be very powerful tools for investigating strongly correlated many-particle physics in one and two dimensions. The implementation of TNS methods depends heavily on tensor operations, including contraction, permutation, reshaping, SVD, and so on. Unfortunately, the most popular computer languages for scientific computation, such as Fortran and C/C++, do not have a standard library for such operations, which makes coding TNS methods very tedious. We have developed a Fortran2003 package that includes all kinds of basic tensor operations designed for TNS. It is user-friendly and flexible for different forms of TNS, and therefore greatly simplifies the coding work for TNS methods.
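The basic tensor operations the package wraps can be illustrated with NumPy (an assumption for illustration only; the package itself is Fortran2003 and its API is not shown here):

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((2, 3, 4))      # a rank-3 tensor

# Permutation: reorder tensor indices
P = np.transpose(T, (2, 0, 1))          # shape (4, 2, 3)

# Contraction over a shared index: C[i,j,l] = sum_k T[i,j,k] * M[k,l]
M = rng.standard_normal((4, 5))
C = np.einsum('ijk,kl->ijl', T, M)      # shape (2, 3, 5)

# Reshaping (grouping indices) followed by SVD: the core step when
# splitting or truncating tensors in TNS algorithms
A = T.reshape(2 * 3, 4)                 # group the first two indices
U, S, Vh = np.linalg.svd(A, full_matrices=False)
A_rec = (U * S) @ Vh                    # reconstruct without truncation
print(np.allclose(A, A_rec))            # True
```

Truncating S to its largest entries before reconstructing is what bounds the bond dimension in practical TNS calculations.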
Bishara, Anthony J; Hittner, James B
2012-09-01
It is well known that when data are nonnormally distributed, a test of the significance of Pearson's r may inflate Type I error rates and reduce power. Statistics textbooks and the simulation literature provide several alternatives to Pearson's correlation. However, the relative performance of these alternatives has been unclear. Two simulation studies were conducted to compare 12 methods, including Pearson, Spearman's rank-order, transformation, and resampling approaches. With most sample sizes (n ≥ 20), Type I and Type II error rates were minimized by transforming the data to a normal shape prior to assessing the Pearson correlation. Among transformation approaches, a general purpose rank-based inverse normal transformation (i.e., transformation to rankit scores) was most beneficial. However, when samples were both small (n ≤ 10) and extremely nonnormal, the permutation test often outperformed other alternatives, including various bootstrap tests.
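The rank-based inverse normal (rankit) transformation found most beneficial above maps each value to the normal quantile of its mid-rank, (r - 0.5)/n, and the Pearson correlation is then computed on the transformed scores. A standard-library sketch (illustrative; ties are not handled, and this is not the authors' code):

```python
from statistics import NormalDist

def rankit(xs):
    """Map each value to the normal quantile of its mid-rank (r - 0.5)/n.
    Assumes distinct values; ties are not handled in this sketch."""
    n = len(xs)
    order = sorted(range(n), key=lambda i: xs[i])
    scores = [0.0] * n
    for rank, i in enumerate(order, start=1):
        scores[i] = NormalDist().inv_cdf((rank - 0.5) / n)
    return scores

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

# A heavy-tailed sample: the raw Pearson r is dominated by the outliers,
# while the correlation of the rankit scores depends only on the orderings.
x = [1, 2, 3, 4, 100]
y = [2, 4, 5, 7, 90]
print(round(pearson(rankit(x), rankit(y)), 3))  # 1.0 (identical orderings)
```

Because the transform depends only on ranks, it normalizes any monotone-skewed distribution, which is why it performed well for n ≥ 20 in the simulations above.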
Application of selected methods of remote sensing for detecting carbonaceous water pollution
NASA Technical Reports Server (NTRS)
Davis, E. M.; Fosbury, W. J.
1973-01-01
A reach of the Houston Ship Channel was investigated during three separate overflights correlated with ground truth sampling on the Channel. Samples were analyzed for such conventional parameters as biochemical oxygen demand, chemical oxygen demand, total organic carbon, total inorganic carbon, turbidity, chlorophyll, pH, temperature, dissolved oxygen, and light penetration. Infrared analyses conducted on each sample included reflectance ATR analysis, carbon tetrachloride extraction of organics and subsequent scanning, and KBr evaporate analysis of CCl4 extract concentrate. Imagery which was correlated with field and laboratory data developed from ground truth sampling included that obtained from aerial KA62 hardware, RC-8 metric camera systems, and the RS-14 infrared scanner. The images were subjected to analysis by three film density gradient interpretation units. Data were then analyzed for correlations between imagery interpretation as derived from the three instruments and laboratory infrared signatures and other pertinent field and laboratory analyses.
NASA Astrophysics Data System (ADS)
Tsogbayar, Tsednee; Yeager, Danny L.
2017-01-01
We further apply the complex-scaled multiconfigurational spin-tensor electron propagator method (CMCSTEP) to the theoretical determination of resonance parameters for electron-atom systems, including open-shell and highly correlated (non-dynamical correlation) atoms and molecules. The multiconfigurational spin-tensor electron propagator method (MCSTEP), developed and implemented by Yeager and his coworkers for real space, gives very accurate and reliable ionization potentials and electron affinities. CMCSTEP uses a complex-scaled multiconfigurational self-consistent field (CMCSCF) state as an initial state along with a dilated Hamiltonian in which all of the electronic coordinates are scaled by a complex factor; it is designed for determining resonances. We apply CMCSTEP to obtain the lowest 2P (Be-, Mg-) and 2D (Mg-, Ca-) shape resonances using several different basis sets, each with several complete active spaces. Many of the basis sets we employ have been used by others with different methods; hence, we can directly compare results across methods using the same basis sets.
Weighted analysis of paired microarray experiments.
Kristiansson, Erik; Sjögren, Anders; Rudemo, Mats; Nerman, Olle
2005-01-01
In microarray experiments quality often varies, for example between samples and between arrays. The need for quality control is therefore strong. A statistical model and a corresponding analysis method are suggested for experiments with pairing, including designs with individuals observed before and after treatment and many experiments with two-colour spotted arrays. The model is of mixed type, with some parameters estimated by an empirical Bayes method. Differences in quality are modelled by individual variances and correlations between repetitions. The method is applied to three real and several simulated datasets. Two of the real datasets are of Affymetrix type, with patients profiled before and after treatment, and the third dataset is of two-colour spotted cDNA type. In all cases, the patients or arrays had different estimated variances, leading to distinctly unequal weights in the analysis. We also suggest plots which illustrate the variances and correlations that affect the weights computed by our analysis method. For simulated data the improvement relative to previously published methods without weighting is shown to be substantial.
Chen, G; Wong, P; Cooks, R G
1997-09-01
Substituted 1,2-diphenylethanes undergo competitive dissociations upon electron ionization (EI) to generate substituted benzyl cation and benzyl radical pairs. Application of the kinetic method to the previously reported EI mass spectra of these covalently bound precursor ions (data taken from McLafferty et al. J. Am. Chem. Soc. 1970, 92, 6867) is used to estimate the ionization energies of substituted benzyl free radicals. A correlation is observed between the Hammett σ constant of the substituents and the kinetic method parameter, ln(k(x)/k(H)), where k(x) is the rate of fragmentation to give the substituted product ion and k(H) is the rate to give the benzyl ion itself. Systems involving weakly bound cluster ions, including proton-bound dimers of meta- and para-substituted pyridines and meta- and para-substituted anilines, and electron-bound dimers of meta- and para-substituted nitrobenzenes, also show good correlations between the kinetic method parameter and the Hammett σ constant.
Development of new methodologies for evaluating the energy performance of new commercial buildings
NASA Astrophysics Data System (ADS)
Song, Suwon
The concept of Measurement and Verification (M&V) of a new building continues to grow in importance because efficient design alone is often not sufficient to deliver an efficient building. Simulation models calibrated to measured data can be used to evaluate the energy performance of new buildings when compared against energy baselines such as similar buildings, energy codes, and design standards. Unfortunately, there is a lack of detailed M&V and analysis methods for measuring energy savings in new buildings, whose energy baselines are necessarily hypothetical. Therefore, this study developed and demonstrated several new methodologies for evaluating the energy performance of new commercial buildings using a case-study building in Austin, Texas. First, three new M&V methods were developed to enhance the previous generic M&V framework for new buildings: (1) a method to synthesize weather-normalized cooling energy use from a correlation with Motor Control Center (MCC) electricity use when chilled water data are unavailable; (2) an improved method to analyze measured solar transmittance against incidence angle for sample glazing using different solar sensor types, including Eppley PSP and Li-Cor sensors; and (3) an improved method to analyze chiller efficiency and operation at part-load conditions. Second, three new calibration methods were developed and analyzed: (1) a new percentile analysis added to the previous signature method for use with a DOE-2 calibration; (2) a new analysis to account for undocumented exhaust air in DOE-2 calibration; and (3) an analysis of the impact of synthesized direct normal solar radiation using the Erbs correlation on DOE-2 simulation.
Third, an analysis of the actual energy savings compared to three different energy baselines was performed, including: (1) Energy Use Index (EUI) comparisons with sub-metered data, (2) New comparisons against Standards 90.1-1989 and 90.1-2001, and (3) A new evaluation of the performance of selected Energy Conservation Design Measures (ECDMs). Finally, potential energy savings were also simulated from selected improvements, including: minimum supply air flow, undocumented exhaust air, and daylighting.
2017-07-01
…targeting PRCAT47. ASOs were able to knock down PRCAT47 at high efficacy. Genes regulated upon ASO-mediated knockdown are highly correlated with that of siRNA… The following section will highlight the progress made in each sub-aim/task proposed in the grant, including a detailed description of methods
Mingus Discontinuous Multiphysics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pat Notz, Dan Turner
Mingus provides hybrid coupled local/non-local mechanics analysis capabilities that extend several traditional methods to applications with inherent discontinuities. Its primary features include adaptations of solid mechanics, fluid dynamics, and digital image correlation that naturally accommodate disjointed data or irregular solution fields by assimilating a variety of discretizations (such as control volume finite elements, peridynamics, and meshless control point clouds). The goal of this software is to provide an analysis framework for multiphysics engineering problems with an integrated image correlation capability that can be used for experimental validation and model
NASA Astrophysics Data System (ADS)
Zhang, Fan; Liu, Pinkuan
2018-04-01
In order to improve the inspection precision of the H-drive air-bearing stage for wafer inspection, in this paper the geometric error of the stage is analyzed and compensated. The relationship between the positioning errors and the error sources is initially modeled, and seven error components are identified that are closely related to the inspection accuracy. The most influential factor affecting the geometric error is identified by error sensitivity analysis. Then, the Spearman rank correlation method is applied to find the correlation between different error components, with the aim of guiding the accuracy design and error compensation of the stage. Finally, different compensation methods, including the three-error curve interpolation method, the polynomial interpolation method, the Chebyshev polynomial interpolation method, and the B-spline interpolation method, are employed within the full range of the stage, and their results are compared. Simulation and experiment show that the B-spline interpolation method based on the error model gives better compensation results. In addition, the research result is valuable for promoting wafer inspection accuracy and will greatly benefit the semiconductor industry.
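The Spearman rank correlation used here to relate error components is simply the Pearson correlation of the ranks. A standard-library sketch with hypothetical error-component values (not the stage's measured data):

```python
def ranks(xs):
    """Ranks 1..n, with ties assigned their average rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # average of ranks i+1 .. j+1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rho: Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Hypothetical error components sampled at five stage positions
straightness = [1.2, 2.5, 0.8, 3.1, 2.0]
yaw = [0.9, 2.2, 0.7, 2.9, 1.8]        # same ordering as above
print(round(spearman(straightness, yaw), 6))  # 1.0 (perfect rank agreement)
```

Rank correlation is a natural choice here because the error-component relationships need not be linear for a monotone association to matter.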
The limitations of simple gene set enrichment analysis assuming gene independence.
Tamayo, Pablo; Steinhardt, George; Liberzon, Arthur; Mesirov, Jill P
2016-02-01
Since its first publication in 2003, the Gene Set Enrichment Analysis method, based on the Kolmogorov-Smirnov statistic, has been heavily used, modified, and also questioned. Recently, a simplified approach using a one-sample t-test score to assess enrichment and ignoring gene-gene correlations was proposed by Irizarry et al. 2009 as a serious contender. The argument criticizes Gene Set Enrichment Analysis's nonparametric nature and its use of an empirical null distribution as unnecessary and hard to compute. We refute these claims by careful consideration of the assumptions of the simplified method and its results, including a comparison with Gene Set Enrichment Analysis on a large benchmark set of 50 datasets. Our results provide strong empirical evidence that gene-gene correlations cannot be ignored, due to the significant variance inflation they produce in the enrichment scores, and should be taken into account when estimating gene set enrichment significance. In addition, we discuss the challenges that the complex correlation structure and multi-modality of gene sets pose more generally for gene set enrichment methods. © The Author(s) 2012.
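The variance inflation at issue follows from elementary probability: for k standardized genes with average pairwise correlation rho-bar, the variance of the mean gene score is (1 + (k - 1) * rho-bar)/k rather than the 1/k assumed under independence. A small illustration with hypothetical numbers (not the paper's benchmark data):

```python
def var_of_mean(k, avg_corr):
    """Variance of the mean of k standardized, equicorrelated variables:
    (1 + (k - 1) * rho_bar) / k, reducing to 1/k when rho_bar = 0."""
    return (1.0 + (k - 1) * avg_corr) / k

k = 50  # hypothetical gene-set size
print(round(var_of_mean(k, 0.0), 3))   # 0.02  (independence assumption)
print(round(var_of_mean(k, 0.2), 3))   # 0.216 (about 11x inflation)
```

A t-test that assumes the 1/k variance therefore understates the null spread of the enrichment score whenever genes in a set are co-expressed, which is the core of the refutation above.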
Li, Chen; Poplawsky, Jonathan; Yan, Yanfa; ...
2017-07-01
In this paper we review a systematic study of the structure-property correlations of a series of defects in CdTe solar cells. A variety of experimental methods, including aberration-corrected scanning transmission electron microscopy, electron energy loss spectroscopy, energy-dispersive X-ray spectroscopy, and electron-beam-induced current, have been combined with density-functional theory. The research traces the connections between the structures and electrical activities of individual defects, including intra-grain partial dislocations, grain boundaries, and the CdTe/CdS interface. The interpretations of the physical origin underlying the structure-property correlations provide insights that should further the development of future CdTe solar cells.
NASA Astrophysics Data System (ADS)
Kim, Sungho; Choi, Byungin; Kim, Jieun; Kwon, Soon; Kim, Kyung-Tae
2012-05-01
This paper presents a separate spatio-temporal filter-based method for small infrared target detection, addressing the sea-based infrared search and track (IRST) problem in dense sun-glint environments. Detecting small infrared targets such as sea-skimming missiles or asymmetric small ships is critical for national defense. On the sea surface, sun-glint clutter degrades detection performance. Furthermore, if true targets must be detected using only three images from a low-frame-rate camera, the problem is even more difficult. We propose a novel three-plot correlation filter and a statistics-based clutter reduction method to achieve a robust small-target detection rate in dense sun-glint environments. We validate the robust detection performance of the proposed method on real infrared test sequences including synthetic targets.
Flammability Indices for Refrigerants
NASA Astrophysics Data System (ADS)
Kataoka, Osami
This paper introduces a new index to classify flammable refrigerants. A question about the flammability indices that ASHRAE employs arose from combustion test results for R152a and ammonia. The conventional classification methods of ASHRAE, and also of ISO and the Japanese High-Pressure Gas Safety Law, are evaluated to show why they conflict with the test results. The key finding of this paper is that the ratio of stoichiometric concentration to LFL concentration (the R factor) represents the test results most precisely. In addition, it correlates well with other flammability parameters such as flame speed and the pressure rise coefficient. Classification according to this index gives a reasonable flammability ordering of substances including ammonia, R152a, and carbon monoxide. The theoretical background for why this index gives good correlation is also discussed, along with the limitations of the method.
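The R factor can be reproduced from first principles: the stoichiometric volume fraction of fuel in air follows from the oxygen demand of complete combustion, and dividing by the LFL gives the index. A sketch with approximate handbook values (illustrative assumptions, not the paper's data):

```python
O2_IN_AIR = 0.2095  # mole fraction of O2 in dry air

def stoich_concentration(mol_o2_per_mol_fuel):
    """Stoichiometric fuel volume-% in air for complete combustion."""
    air = mol_o2_per_mol_fuel / O2_IN_AIR   # mol air per mol fuel
    return 100.0 / (1.0 + air)

def r_factor(mol_o2, lfl_percent):
    """Ratio of stoichiometric concentration to LFL concentration."""
    return stoich_concentration(mol_o2) / lfl_percent

# Propane: C3H8 + 5 O2 -> 3 CO2 + 4 H2O; LFL ~2.1 vol% (approximate)
print(round(stoich_concentration(5.0), 2))  # ~4.02 vol%
print(round(r_factor(5.0, 2.1), 2))
```

A larger R factor means the lower flammability limit sits further below the stoichiometric (most reactive) mixture, which is the sense in which the index tracks flammability hazard.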
Photovoltaics radiometric issues and needs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Myers, D.R.
1995-11-01
This paper presents a summary of issues discussed at the photovoltaic radiometric measurements workshop. Topics included radiometric measurements guides, the need for well-defined goals, documentation, calibration checks, accreditation of testing laboratories and methods, the need for less expensive radiometric instrumentation, data correlations, and quality assurance.
Spatial Correlation Of Streamflows: An Analytical Approach
NASA Astrophysics Data System (ADS)
Betterle, A.; Schirmer, M.; Botter, G.
2016-12-01
The interwoven space and time variability of climate and landscape properties results in complex and non-linear streamflow dynamics. Understanding how the meteorologic and morphological characteristics of catchments affect the similarity/dissimilarity of streamflow timeseries at their outlets represents a scientific challenge with applications in water resources management, ecological studies, and regionalization approaches aimed at predicting streamflows in ungauged areas. In this study, we establish an analytical approach to estimate the spatial correlation of daily streamflows at two arbitrary locations within a given hydrologic district or river basin at seasonal and annual time scales. The method is based on a stochastic description of the coupled streamflow dynamics at the outlets of two catchments. The framework expresses the correlation of daily streamflows at two locations along a river network as a function of a limited number of physical parameters characterizing the main underlying hydrological drivers, including climate conditions, the precipitation regime, and catchment drainage rates. The proposed method portrays how heterogeneity of climate and landscape features affects the spatial variability of flow regimes along river systems. In particular, we show that the frequency and intensity of synchronous effective rainfall events in the relevant contributing catchments are the main drivers of the spatial correlation of daily discharge, whereas only pronounced differences in the drainage rates of the two basins have a significant effect on the streamflow correlation. The topological arrangement of the two outlets also influences the underlying streamflow correlation, as we show that nested catchments tend to maximize the spatial correlation of flow regimes.
The application of the method to a set of catchments in the South-Eastern US suggests the potential of the proposed tool for the characterization of spatial connections of flow regimes in the absence of discharge measurements.
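The core idea, that synchronous rainfall filtered through catchment storage induces correlated discharge, can be illustrated with a toy simulation. The linear-reservoir recursion, event rates, and drainage constant below are all illustrative assumptions, not the authors' analytical model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative linear-reservoir filter: today's flow is yesterday's flow
# decayed at drainage rate k plus today's effective rainfall.
def simulate_flow(rain, k):
    q = np.zeros_like(rain)
    for t in range(1, len(rain)):
        q[t] = q[t - 1] * np.exp(-k) + rain[t]
    return q

n = 5000
# Synchronous (shared) and catchment-specific rainfall events; the rates
# and depths are arbitrary choices for this sketch.
shared = rng.poisson(0.2, n) * rng.exponential(10.0, n)
local1 = rng.poisson(0.1, n) * rng.exponential(10.0, n)
local2 = rng.poisson(0.1, n) * rng.exponential(10.0, n)

q1 = simulate_flow(shared + local1, k=0.3)
q2 = simulate_flow(shared + local2, k=0.3)

r = np.corrcoef(q1, q2)[0, 1]  # spatial correlation of daily streamflows
print(round(r, 2))
```

Raising the shared-event rate (or making the drainage constants more similar) raises r, mirroring the paper's qualitative findings.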
Inference for High-dimensional Differential Correlation Matrices.
Cai, T Tony; Zhang, Anru
2016-01-01
Motivated by differential co-expression analysis in genomics, we consider in this paper estimation and testing of high-dimensional differential correlation matrices. An adaptive thresholding procedure is introduced and theoretical guarantees are given. The minimax rate of convergence is established and the proposed estimator is shown to be adaptively rate-optimal over collections of paired correlation matrices with approximately sparse differences. Simulation results show that the procedure significantly outperforms two other natural methods that are based on separate estimation of the individual correlation matrices. The procedure is also illustrated through an analysis of a breast cancer dataset, which provides evidence at the gene co-expression level that several genes, of which a subset has been previously verified, are associated with breast cancer. Hypothesis testing on the differential correlation matrices is also considered. A test, which is particularly well suited for testing against sparse alternatives, is introduced. In addition, other related problems, including estimation of a single sparse correlation matrix, estimation of the differential covariance matrices, and estimation of the differential cross-correlation matrices, are also discussed.
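A hard-thresholded difference of sample correlation matrices conveys the flavor of the procedure. The paper's estimator uses entry-wise adaptive thresholds; the fixed tau below is a simplification, and the two-population data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)
p, n = 10, 500

# Synthetic data: two populations whose correlation matrices differ only
# in the (0, 1) entry, induced in the second population.
x1 = rng.standard_normal((n, p))
x2 = rng.standard_normal((n, p))
x2[:, 1] = 0.8 * x2[:, 0] + 0.6 * rng.standard_normal(n)

d = np.corrcoef(x2, rowvar=False) - np.corrcoef(x1, rowvar=False)

# Fixed hard threshold, a toy stand-in for the paper's entry-wise
# adaptive thresholding; tau is an ad hoc choice for the example.
tau = 0.3
d_hat = np.where(np.abs(d) > tau, d, 0.0)
print(int(np.count_nonzero(d_hat)))  # surviving (symmetric) entries
```

Only the truly differential pair survives the threshold; noise-level differences, of order 1/sqrt(n), are zeroed out.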
Criteria for Mitral Regurgitation Classification were inadequate for Dilated Cardiomyopathy
Mancuso, Frederico José Neves; Moisés, Valdir Ambrosio; Almeida, Dirceu Rodrigues; Oliveira, Wercules Antonio; Poyares, Dalva; Brito, Flavio Souza; de Paola, Angelo Amato Vincenzo; Carvalho, Antonio Carlos Camargo; Campos, Orlando
2013-01-01
Background: Mitral regurgitation (MR) is common in patients with dilated cardiomyopathy (DCM). It is unknown whether the criteria for MR classification are inadequate for patients with DCM. Objective: We aimed to evaluate the agreement among the four most common echocardiographic methods for MR classification. Methods: Ninety patients with DCM were included. Functional MR was classified using four echocardiographic methods: color flow jet area (JA), vena contracta (VC), effective regurgitant orifice area (ERO), and regurgitant volume (RV). MR was classified as mild, moderate, or important according to the American Society of Echocardiography criteria and by dividing the values into terciles. The kappa test was used to evaluate agreement among the methods, and the Pearson correlation coefficient was used to evaluate the correlation between the absolute values of each method. Results: MR classification according to each method was as follows: JA: 26 mild, 44 moderate, 20 important; VC: 12 mild, 72 moderate, 6 important; ERO: 70 mild, 15 moderate, 5 important; RV: 70 mild, 16 moderate, 4 important. Agreement among the methods was poor (kappa = 0.11; p < 0.001). A strong correlation was observed between the absolute values of each method, ranging from 0.70 to 0.95 (p < 0.01), and agreement was higher when values were divided into terciles (kappa = 0.44; p < 0.01). Conclusion: The use of conventional echocardiographic criteria for MR classification seems inadequate in patients with DCM. It is necessary to establish new cutoff values for MR classification in these patients. PMID:24100692
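The kappa statistic used for the agreement analysis can be computed directly from paired classifications. The gradings below are hypothetical, not the study's data:

```python
from collections import Counter

def cohens_kappa(a, b):
    """Chance-corrected agreement between two categorical classifications."""
    assert len(a) == len(b)
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n                 # observed agreement
    ca, cb = Counter(a), Counter(b)
    pe = sum(ca[k] * cb[k] for k in set(a) | set(b)) / n ** 2  # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical gradings (mild/moderate/important) of 10 patients by two methods:
ja = ["mild"] * 3 + ["moderate"] * 5 + ["important"] * 2
vc = ["mild", "mild", "moderate", "moderate", "moderate",
      "moderate", "moderate", "moderate", "important", "moderate"]
print(round(cohens_kappa(ja, vc), 2))  # → 0.65
```

A kappa near 0, as the study found across all four methods, means agreement is barely better than chance despite high correlation of the raw values.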
Park, Jaeyong; Lee, Sang Gil; Bae, Jongjin; Lee, Jung Chul
2015-12-01
[Purpose] This study aimed to provide a predictable evaluation method for the progression of scoliosis in adolescents based on quick and reliable measurements using the naked eye, such as the calcaneal valgus angle of the foot, which can be performed at public facilities such as schools. [Subjects and Methods] Idiopathic scoliosis patients with a Cobb's angle of 10° or more (96 females, 22 males) were included in this study. To identify relationships between factors, Pearson's product-moment correlation coefficient was computed. The degree of scoliosis was set as a dependent variable to predict thoracic and lumbar scoliosis using ankle angle and physique factors. Height, weight, and left and right calcaneal valgus angles were set as independent variables; thereafter, multiple regression analysis was performed. This study extracted variables at a significance level (α) of 0.05 by applying a stepwise method, and calculated a regression equation. [Results] Negative correlation (R=-0.266) was shown between lumbar lordosis and asymmetrical lumbar rotation angles. A correlation (R=0.281) was also demonstrated between left calcaneal valgus angles and asymmetrical thoracic rotation angles. [Conclusion] Prediction of scoliosis progress was revealed to be possible through ocular inspection of the calcaneus and Adams forward bending test and the use of a scoliometer.
Xu, Enhua; Zhao, Dongbo; Li, Shuhua
2015-10-13
A multireference second-order perturbation theory based on a complete active space configuration interaction (CASCI) function or density matrix renormalization group (DMRG) function has been proposed. This method may be considered an approximation to the CAS/A approach with the same reference, in which the dynamical correlation is simplified with blocked correlated second-order perturbation theory based on the generalized valence bond (GVB) reference (GVB-BCPT2). This method, denoted as CASCI-BCPT2/GVB or DMRG-BCPT2/GVB, is size consistent and has a computational cost similar to that of conventional second-order perturbation theory (MP2). We have applied it to investigate a number of problems of chemical interest. These problems include bond-breaking potential energy surfaces in four molecules, the spectroscopic constants of six diatomic molecules, the reaction barrier for the automerization of cyclobutadiene, and the energy difference between the monocyclic and bicyclic forms of 2,6-pyridyne. Our test applications demonstrate that CASCI-BCPT2/GVB can provide results comparable with CASPT2 (second-order perturbation theory based on the complete active space self-consistent-field wave function) for the systems under study. Furthermore, the DMRG-BCPT2/GVB method is applicable to strongly correlated systems with large active spaces, which are beyond the capability of CASPT2.
Ivanic, Joseph; Schmidt, Michael W
2018-06-04
A novel hybrid correlation energy (HyCE) approach is proposed that determines the total correlation energy via distinct computation of its internal and external components. This approach evolved from two related studies. First, rigorous assessment of the accuracies and size extensivities of a number of electron correlation methods, that include perturbation theory (PT2), coupled-cluster (CC), configuration interaction (CI), and coupled electron pair approximation (CEPA), shows that the CEPA(0) variant of the latter and triples-corrected CC methods consistently perform very similarly. These findings were obtained by comparison to near full CI results for four small molecules and by charting recovered correlation energies for six steadily growing chain systems. Second, by generating valence virtual orbitals (VVOs) and utilizing the CEPA(0) method, we were able to partition total correlation energies into internal (or nondynamic) and external (or dynamic) parts for the aforementioned six chain systems and a benchmark test bed of 36 molecules. When using triple-ζ basis sets it was found that per orbital internal correlation energies were appreciably larger than per orbital external energies and that the former showed far more chemical variation than the latter. Additionally, accumulations of external correlation energies were seen to proceed smoothly, and somewhat linearly, as the virtual space is gradually increased. Combination of these two studies led to development of the HyCE approach, whereby the internal and external correlation energies are determined separately by CEPA(0)/VVO and PT2/external calculations, respectively. When applied to the six chain systems and the 36-molecule benchmark test set it was found that HyCE energies followed closely those of triples-corrected CC and CEPA(0) while easily outperforming MP2 and CCSD. 
The success of the HyCE approach is more notable when considering that its cost is only slightly more than MP2 and significantly cheaper than the CC approaches.
Development of Test-Analysis Models (TAM) for correlation of dynamic test and analysis results
NASA Technical Reports Server (NTRS)
Angelucci, Filippo; Javeed, Mehzad; Mcgowan, Paul
1992-01-01
The primary objective of structural analysis of aerospace applications is to obtain a verified finite element model (FEM). The verified FEM can be used for loads analysis, evaluation of structural modifications, or design of control systems. Verification of the FEM is generally obtained by correlating the test and FEM models. A test-analysis model (TAM) is very useful in the correlation process. A TAM is essentially a FEM reduced to the size of the test model that attempts to preserve the dynamic characteristics of the original FEM in the analysis range of interest. Numerous methods for generating TAMs have been developed in the literature. The major emphasis of this paper is a description of the procedures necessary for creation of the TAM and the correlation of the reduced models with the FEM or the test results. Herein, three methods are discussed, namely Guyan, Improved Reduced System (IRS), and Hybrid. Also included are the procedures for performing these analyses using MSC/NASTRAN. Finally, application of the TAM process is demonstrated with an experimental test configuration of a ten-bay cantilevered truss structure.
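Of the three reductions discussed, Guyan reduction has the simplest closed form: the omitted DOFs are condensed out statically through the stiffness partition. A minimal sketch on a toy spring-mass chain (illustrative values, not an aerospace model):

```python
import numpy as np

def guyan_reduce(K, M, keep):
    """Statically condense stiffness K and mass M onto the retained DOFs."""
    n = K.shape[0]
    drop = [i for i in range(n) if i not in keep]
    # Static condensation map: u_drop = G @ u_keep
    G = -np.linalg.solve(K[np.ix_(drop, drop)], K[np.ix_(drop, keep)])
    T = np.vstack([np.eye(len(keep)), G])        # rows ordered [kept; dropped]
    perm = list(keep) + drop
    Kp = K[np.ix_(perm, perm)]
    Mp = M[np.ix_(perm, perm)]
    return T.T @ Kp @ T, T.T @ Mp @ T

# Toy 4-DOF spring-mass chain (fixed at one end), unit masses; keep DOFs 0 and 2.
k = 1000.0
K = k * np.array([[ 2.0, -1.0,  0.0,  0.0],
                  [-1.0,  2.0, -1.0,  0.0],
                  [ 0.0, -1.0,  2.0, -1.0],
                  [ 0.0,  0.0, -1.0,  1.0]])
M = np.eye(4)
Kr, Mr = guyan_reduce(K, M, keep=[0, 2])

full = np.sort(np.linalg.eigvals(np.linalg.solve(M, K)).real)
red = np.sort(np.linalg.eigvals(np.linalg.solve(Mr, Kr)).real)
print(np.round(np.sqrt(red), 1))   # reduced-model natural frequencies, rad/s
```

The reduced model reproduces the lowest natural frequency of the full model to within a few percent, which is the regime where Guyan reduction is trusted; IRS and Hybrid methods add mass-related corrections to improve the higher modes.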
Correlation among auto-refractor, wavefront aberration, and subjective manual refraction
NASA Astrophysics Data System (ADS)
Li, Qi; Ren, Qiushi
2005-01-01
Three optometry methods, auto-refractor, wavefront aberrometer, and subjective manual refraction, were studied and compared in measuring the low-order aberrations of 117 normal eyes of 60 people. Paired t-tests and linear regression were used to study the relationships among the three methods when measuring myopia with astigmatism. To make the analysis clearer, we divided the 117 normal eyes into groups according to their subjective manual refraction and repeated the statistical analysis. Correlations among the three methods were significant for sphere, cylinder, and axis in all groups, with the sphere correlation coefficients largest (R > 0.98, P < 0.01) and the cylinder coefficients smallest (0.90
Yavin, Daniel; Luu, Judy; James, Matthew T; Roberts, Derek J; Sutherland, Garnette R; Jette, Nathalie; Wiebe, Samuel
2014-09-01
Because clinical examination and imaging may be unreliable indicators of intracranial hypertension, intraocular pressure (IOP) measurement has been proposed as a noninvasive method of diagnosis. The authors conducted a systematic review and meta-analysis to determine the correlation between IOP and intracranial pressure (ICP) and the diagnostic accuracy of IOP measurement for detection of intracranial hypertension. The authors searched bibliographic databases (Ovid MEDLINE, Ovid EMBASE, and the Cochrane Central Register of Controlled Trials) from 1950 to March 2013, references of included studies, and conference abstracts for studies comparing IOP and invasive ICP measurement. Two independent reviewers screened abstracts, reviewed full-text articles, and extracted data. Correlation coefficients, sensitivity, specificity, and positive and negative likelihood ratios were calculated using DerSimonian and Laird methods and bivariate random effects models. The I(2) statistic was used as a measure of heterogeneity. Among 355 identified citations, 12 studies that enrolled 546 patients were included in the meta-analysis. The pooled correlation coefficient between IOP and ICP was 0.44 (95% CI 0.26-0.63, I(2) = 97.7%, p < 0.001). The summary sensitivity and specificity of IOP for diagnosing intracranial hypertension were 81% (95% CI 26%-98%, I(2) = 95.2%, p < 0.01) and 95% (95% CI 43%-100%, I(2) = 97.7%, p < 0.01), respectively. The summary positive and negative likelihood ratios were 14.8 (95% CI 0.5-417.7) and 0.2 (95% CI 0.02-1.7), respectively. When ICP and IOP measurements were taken within 1 hour of one another, the correlation between the measures improved. Although a modest aggregate correlation was found between IOP and ICP, the pooled diagnostic accuracy suggests that IOP measurement may be of clinical utility in the detection of intracranial hypertension.
Given the significant heterogeneity between included studies, further investigation is required prior to the adoption of IOP in the evaluation of intracranial hypertension into routine practice.
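DerSimonian-Laird pooling of correlation coefficients proceeds through Fisher's z-transform. The study-level correlations and sample sizes below are invented for illustration, not the 12 included studies:

```python
import math

# Hypothetical per-study (correlation, sample size) pairs.
studies = [(0.30, 40), (0.55, 25), (0.44, 60), (0.20, 30), (0.62, 45)]

# Fisher z-transform; var(z) = 1/(n - 3) for a correlation from n pairs.
z = [0.5 * math.log((1 + r) / (1 - r)) for r, n in studies]
v = [1.0 / (n - 3) for r, n in studies]
w = [1.0 / vi for vi in v]

# Fixed-effect mean, Cochran's Q, and the DL between-study variance tau^2.
zbar_fixed = sum(wi * zi for wi, zi in zip(w, z)) / sum(w)
Q = sum(wi * (zi - zbar_fixed) ** 2 for wi, zi in zip(w, z))
df = len(studies) - 1
C = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (Q - df) / C)

# Random-effects pooled estimate, back-transformed to a correlation.
w_star = [1.0 / (vi + tau2) for vi in v]
zbar = sum(wi * zi for wi, zi in zip(w_star, z)) / sum(w_star)
r_pooled = math.tanh(zbar)
print(round(r_pooled, 2))
```

The random-effects weights 1/(v_i + tau^2) down-weight large studies relative to the fixed-effect analysis when heterogeneity (tau^2 > 0) is present, which matters here given the very high I(2) reported.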
Teixeira, Fernando Borge; Ramalho Júnior, Amancio; Morais Filho, Mauro César de; Speciali, Danielli Souza; Kawamura, Catia Miyuki; Lopes, José Augusto Fernandes; Blumetti, Francesco Camara
2018-01-01
Objective To evaluate the correlation between physical examination data concerning hip rotation and tibial torsion with transverse plane kinematics in children with cerebral palsy; and to determine which time points and events of the gait cycle present higher correlation with physical examination findings. Methods A total of 195 children with cerebral palsy seen at two gait laboratories from 2008 and 2016 were included in this study. Physical examination measurements included internal hip rotation, external hip rotation, mid-point hip rotation and the transmalleolar axis angle. Six kinematic parameters were selected for each segment to assess hip rotation and shank-based foot rotation. Correlations between physical examination and kinematic measures were analyzed by Spearman correlation coefficients, and a significance level of 5% was considered. Results Comparing physical examination measurements of hip rotation and hip kinematics, we found moderate to strong correlations for all variables (p<0.001). The highest coefficients were seen between the mid-point hip rotation on physical examination and hip rotation kinematics (rho range: 0.48-0.61). Moderate correlations were also found between the transmalleolar axis angle measurement on physical examination and foot rotation kinematics (rho range 0.44-0.56; p<0.001). Conclusion These findings may have clinical implications in the assessment and management of transverse plane gait deviations in children with cerebral palsy.
HydroClimATe: hydrologic and climatic analysis toolkit
Dickinson, Jesse; Hanson, Randall T.; Predmore, Steven K.
2014-01-01
The potential consequences of climate variability and climate change have been identified as major issues for the sustainability and availability of the worldwide water resources. Unlike global climate change, climate variability represents deviations from the long-term state of the climate over periods of a few years to several decades. Currently, rich hydrologic time-series data are available, but the combination of data preparation and statistical methods developed by the U.S. Geological Survey as part of the Groundwater Resources Program is relatively unavailable to hydrologists and engineers who could benefit from estimates of climate variability and its effects on periodic recharge and water-resource availability. This report documents HydroClimATe, a computer program for assessing the relations between variable climatic and hydrologic time-series data. HydroClimATe was developed for a Windows operating system. The software includes statistical tools for (1) time-series preprocessing, (2) spectral analysis, (3) spatial and temporal analysis, (4) correlation analysis, and (5) projections. The time-series preprocessing tools include spline fitting, standardization using a normal or gamma distribution, and transformation by a cumulative departure. The spectral analysis tools include discrete Fourier transform, maximum entropy method, and singular spectrum analysis. The spatial and temporal analysis tool is empirical orthogonal function analysis. The correlation analysis tools are linear regression and lag correlation. The projection tools include autoregressive time-series modeling and generation of many realizations. These tools are demonstrated in four examples that use stream-flow discharge data, groundwater-level records, gridded time series of precipitation data, and the Multivariate ENSO Index.
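Among the toolkit's correlation tools, lag correlation is the easiest to sketch. The synthetic precipitation series and delayed groundwater response below are illustrative assumptions, not HydroClimATe's implementation:

```python
import numpy as np

def lag_correlation(x, y, max_lag):
    """Pearson correlation of y against x for each lag (y lags x)."""
    out = {}
    for lag in range(max_lag + 1):
        if lag == 0:
            out[lag] = np.corrcoef(x, y)[0, 1]
        else:
            out[lag] = np.corrcoef(x[:-lag], y[lag:])[0, 1]
    return out

rng = np.random.default_rng(2)
precip = rng.gamma(2.0, 1.0, 240)                      # synthetic monthly precipitation
# Groundwater responds with a distributed delay peaking two months later.
gw = np.convolve(precip, [0.0, 0.2, 0.5, 0.3])[:240]
gw += 0.05 * rng.standard_normal(240)

lags = lag_correlation(precip, gw, max_lag=6)
best = max(lags, key=lags.get)
print(best)  # lag with the strongest correlation
```

The recovered peak at a two-month lag matches the delay built into the synthetic response, the kind of relation the toolkit is designed to expose between climate indices and hydrologic records.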
Theoretical study of the electric dipole moment function of the ClO molecule
NASA Technical Reports Server (NTRS)
Pettersson, L. G. M.; Langhoff, S. R.; Chong, D. P.
1986-01-01
The potential energy function and electric dipole moment function (EDMF) are computed for ClO X 2Pi using several different techniques to include electron correlation. The EDMF is used to compute Einstein coefficients, vibrational lifetimes, and dipole moments in higher vibrational levels. The band strength of the 1-0 fundamental transition is computed to be 12 ± 2 cm⁻² atm⁻¹, in agreement with the value determined from infrared heterodyne spectroscopy. The theoretical methods used include SCF, CASSCF, multireference singles plus doubles configuration interaction (MRCI) and contracted CI, coupled pair functional (CPF), and a modified version of the CPF method. The results obtained using the different methods are critically compared.
Comparison of Soil Quality Index Using Three Methods
Mukherjee, Atanu; Lal, Rattan
2014-01-01
Assessment of management-induced changes in soil quality is important to sustaining high crop yield. The large diversity of cultivated soils necessitates the identification and development of an appropriate soil quality index (SQI) based on relative soil properties and crop yield. Whereas numerous attempts have been made to estimate SQI for major soils across the world, no standard method has been established; thus, a strong need exists for developing a user-friendly and credible SQI through comparison of the available methods. Therefore, the objective of this article is to compare three widely used methods of estimating SQI using data collected from 72 soil samples from three on-farm study sites in Ohio. An additional challenge lies in relating crop yield to SQI calculated either depth-wise or for combined soil layers, as a standard methodology is not yet available and the issue has received little attention to date. Predominant soils of the study included one organic (Mc) and two mineral (CrB, Ko) soils. The three methods used to estimate SQI were: (i) simple additive SQI (SQI-1), (ii) weighted additive SQI (SQI-2), and (iii) statistically modeled SQI (SQI-3) based on principal component analysis (PCA). The SQI varied between treatments and soil types and ranged from 0 to 0.9 (1 being the maximum SQI). In general, SQIs did not differ significantly with depth under any method, suggesting that soil quality did not differ significantly among depths at the studied sites. Additionally, the data indicate that SQI-3 was most strongly correlated with crop yield, with correlation coefficients ranging from 0.74 to 0.78. All three SQIs were significantly correlated with each other (r = 0.92-0.97) and with crop yield (r = 0.65-0.79). Separate analyses by crop variety revealed lower correlations, indicating that some key aspects of soil quality related to crop response are important requirements for estimating SQI. PMID:25148036
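A PCA-weighted index of the SQI-3 type can be sketched as follows. The indicator matrix is random placeholder data, and weighting by first-component loadings is one common recipe, not necessarily the study's exact procedure:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical indicator matrix: rows = 72 soil samples, cols = 5 indicators
# (e.g., organic C, bulk density, pH), each already scored on [0, 1].
scores = rng.uniform(0.2, 1.0, size=(72, 5))

# SQI-1: simple additive index (equal weights).
sqi1 = scores.mean(axis=1)

# SQI-3 (sketch): weight indicators by their loadings on the first
# principal component of the standardized data.
z = (scores - scores.mean(0)) / scores.std(0)
cov = np.cov(z, rowvar=False)
vals, vecs = np.linalg.eigh(cov)       # eigh returns ascending eigenvalues
pc1 = np.abs(vecs[:, -1])              # loadings of the top component
w = pc1 / pc1.sum()                    # normalize to weights summing to 1
sqi3 = scores @ w

r = np.corrcoef(sqi1, sqi3)[0, 1]
print(round(r, 2))
```

With real data, the PCA weights concentrate on the indicators that carry the most variance, which is why SQI-3 tracked crop yield best in the study while still correlating strongly with the additive indices.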
Method of determining pH by the alkaline absorption of carbon dioxide
Hobbs, David T.
1992-01-01
A method for remotely measuring the concentration of hydroxides in alkaline solutions using the tendency of hydroxides to absorb carbon dioxide. The method involves passing carbon dioxide over the surface of an alkaline solution in a remote tank and measuring the carbon dioxide concentration before and after contact with the solution. A comparison of the measurements yields the absorption fraction, from which the hydroxide concentration can be calculated using a correlation of hydroxide concentration, or pH, to absorption fraction.
NASA Astrophysics Data System (ADS)
Bernard, Rémi N.; Robledo, Luis M.; Rodríguez, Tomás R.
2016-06-01
We study the interplay of quadrupole and octupole degrees of freedom in the structure of the isotope 144Ba. A symmetry-conserving configuration-mixing method (SCCM) based on a Gogny energy density functional (EDF) has been used. The method includes particle number, parity, and angular momentum restoration as well as axial quadrupole and octupole shape mixing within the generator coordinate method. Predictions both for excitation energies and electromagnetic transition probabilities are in good agreement with the most recent experimental data.
Prediction of overall and blade-element performance for axial-flow pump configurations
NASA Technical Reports Server (NTRS)
Serovy, G. K.; Kavanagh, P.; Okiishi, T. H.; Miller, M. J.
1973-01-01
A method and a digital computer program for prediction of the distributions of fluid velocity and properties in axial-flow pump configurations are described and evaluated. The method uses the blade-element flow model and an iterative numerical solution of the radial equilibrium and continuity conditions. Correlated experimental results are used to generate alternative methods for estimating blade-element turning and loss characteristics. Detailed descriptions of the computer program are included, with example input and typical computed results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beste, Ariana; Vazquez-Mayagoitia, Alvaro; Ortiz, J. Vincent
2013-01-01
A direct method (D-Delta-MBPT(2)) to calculate second-order ionization potentials (IPs), electron affinities (EAs), and excitation energies is developed. The Delta-MBPT(2) method is defined as the correlated extension of the Delta-HF method. Energy differences are obtained by integrating the energy derivative with respect to occupation numbers over the appropriate parameter range. This is made possible by writing the second-order energy as a function of the occupation numbers. Relaxation effects are fully included at the SCF level. This is in contrast to linear response theory, which makes the D-Delta-MBPT(2) applicable not only to singly excited but also to higher excited states. We show the relationship of the D-Delta-MBPT(2) method for IPs and EAs to a second-order approximation of the effective Fock-space coupled-cluster Hamiltonian and a second-order electron propagator method. We also discuss the connection between the D-Delta-MBPT(2) method for excitation energies and the CIS-MP2 method. Finally, as a proof of principle, we apply our method to calculate ionization potentials and excitation energies of some small molecules. For IPs, the Delta-MBPT(2) results compare well to the second-order solution of the Dyson equation. For excitation energies, the deviation from EOM-CCSD increases when correlation becomes more important. When using the numerical integration technique, we encountered difficulties that prevented us from reaching the Delta-MBPT(2) values. Most importantly, relaxation beyond the Hartree-Fock level is significant and needs to be included in future research.
NASA Astrophysics Data System (ADS)
Jennings, E.; Madigan, M.
2017-04-01
Given the complexity of modern cosmological parameter inference, where we are faced with non-Gaussian data and noise, correlated systematics, and multi-probe correlated datasets, the Approximate Bayesian Computation (ABC) method is a promising alternative to traditional Markov Chain Monte Carlo approaches when the likelihood is intractable or unknown. The ABC method is called "likelihood free" as it avoids explicit evaluation of the likelihood by using a forward model simulation of the data which can include systematics. We introduce astroABC, an open source ABC Sequential Monte Carlo (SMC) sampler for parameter estimation. A key challenge in astrophysics is the efficient use of large multi-probe datasets to constrain high dimensional, possibly correlated parameter spaces. With this in mind astroABC allows for massive parallelization using MPI, a framework that handles spawning of processes across multiple nodes. A key new feature of astroABC is the ability to create MPI groups with different communicators, one for the sampler and several others for the forward model simulation, which speeds up sampling time considerably. For smaller jobs the Python multiprocessing option is also available. Other key features of this new sampler include: a Sequential Monte Carlo sampler; a method for iteratively adapting tolerance levels; local covariance estimation using scikit-learn's KDTree; modules for specifying an optimal covariance matrix for a component-wise or multivariate normal perturbation kernel and a weighted covariance metric; restart files written frequently so an interrupted sampling run can be resumed at any iteration; output and restart files backed up at every iteration; user-defined distance metrics and simulation methods; a module for specifying heterogeneous parameter priors, including non-standard prior PDFs; a module for specifying a constant, linear, log, or exponential tolerance level; and well-documented examples and sample scripts.
This code is hosted online at https://github.com/EliseJ/astroABC.
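The likelihood-free idea can be illustrated with plain rejection ABC. This sketch does not use astroABC's API; the package's SMC sampler refines the same idea with weighted particles and adaptively shrinking tolerances. The model, prior, and tolerance below are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy problem: infer the mean of a Gaussian from a forward simulation,
# never evaluating the likelihood explicitly.
observed = rng.normal(1.5, 1.0, 200)

def forward_model(theta):
    return rng.normal(theta, 1.0, 200)

def distance(a, b):
    return abs(a.mean() - b.mean())      # summary-statistic distance

accepted = []
for _ in range(20000):
    theta = rng.uniform(-5, 5)           # flat prior
    if distance(forward_model(theta), observed) < 0.1:   # fixed tolerance
        accepted.append(theta)

post = np.array(accepted)
print(round(post.mean(), 1))
```

The accepted draws approximate the posterior; SMC-style samplers like astroABC get the same result far more efficiently by propagating a particle population through a decreasing tolerance schedule instead of rejecting blindly from the prior.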
Saitow, Masaaki; Kurashige, Yuki; Yanai, Takeshi
2013-07-28
We report development of the multireference configuration interaction (MRCI) method that can use active space references scalable to much larger sizes than has previously been possible. The recent development of the density matrix renormalization group (DMRG) method in multireference quantum chemistry offers the ability to describe static correlation in a large active space. The present MRCI method provides a critical correction to the DMRG reference by including high-level dynamic correlation through the CI treatment. When the DMRG and MRCI theories are combined (DMRG-MRCI), the full internal contraction of the reference in the MRCI ansatz, including contraction of semi-internal states, plays a central role. However, it is thought to involve formidable complexity because of the presence of the five-particle rank reduced-density matrix (RDM) in the Hamiltonian matrix elements. To address this complexity, we express the Hamiltonian matrix using commutators, which allows the five-particle rank RDM to be canceled out without any approximation. Then we introduce an approximation to the four-particle rank RDM by using a cumulant reconstruction from lower-particle rank RDMs. A computer-aided approach is employed to derive the exceedingly complex equations of the MRCI in tensor-contracted form and to implement them into an efficient parallel computer code. This approach extends to the size-consistency-corrected variants of MRCI, such as the MRCI+Q, MR-ACPF, and MR-AQCC methods. We demonstrate the capability of the DMRG-MRCI method in several benchmark applications, including the evaluation of the singlet-triplet gap of free-base porphyrin using 24 active orbitals.
Awais, Muhammad; Nadeem, Naila; Husen, Yousuf; Rehman, Abdul; Beg, Madiha; Khattak, Yasir Jamil
2014-12-01
To compare the Greulich-Pyle (GP) and Girdany-Golden (GG) methods for estimation of skeletal age (SA) in children referred to a tertiary care hospital in Karachi, Pakistan. Cross-sectional study. Department of Radiology, The Aga Khan University Hospital, Karachi, Pakistan, from July 2010 to June 2012. Children up to the age of 18 years who had undergone X-ray for the evaluation of trauma were included. Each X-ray was interpreted using both methods by two consultant paediatric radiologists with at least 10 years' experience, who were blinded to the actual chronologic age (CA) of the children. A total of 283 children were included. No significant difference was noted between the mean SA estimated by the GP method and the mean CA for female children (p=0.695). However, a significant difference was noted between mean CA and mean SA by the GG method for females (p=0.011). For males, there was a significant difference between mean CA and mean SA estimated by both the GP and GG methods. A stronger correlation was found between CA and SA estimated by the GP method (r=0.943 for girls, r=0.915 for boys) than by the GG method (r=0.909 for girls, r=0.865 for boys). Bland-Altman analysis also revealed that the two methods cannot be used interchangeably. Excellent correlation was seen between the two readers for both the GP and GG methods. There was no additional benefit of using the GP and GG methods simultaneously over using the GP method alone. Moreover, although GP was reliable in estimating SA in girls, it was unable to accurately assess SA in boys. Therefore, it would be ideal to develop indigenous standards of bone age estimation based on a representative sample of healthy native children.
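The Bland-Altman analysis mentioned reduces to a bias and 95% limits of agreement computed on paired differences. The paired skeletal-age estimates below are hypothetical, not the study's measurements:

```python
import statistics as st

# Hypothetical paired skeletal-age estimates (years) from two methods.
gp = [5.0, 7.2, 9.1, 10.5, 12.0, 13.4, 15.1, 16.0]
gg = [5.4, 7.0, 9.8, 11.2, 12.1, 14.0, 15.9, 17.1]

diffs = [a - b for a, b in zip(gp, gg)]
bias = st.mean(diffs)                          # systematic offset between methods
sd = st.stdev(diffs)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)     # 95% limits of agreement
print(round(bias, 2), [round(x, 2) for x in loa])
```

Methods are considered interchangeable only when the limits of agreement are narrow enough to be clinically unimportant; wide limits, as the study found, argue against substituting one method for the other even when correlation is high.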
Brorsen, Kurt R; Yang, Yang; Hammes-Schiffer, Sharon
2017-08-03
Nuclear quantum effects such as zero-point energy play a critical role in computational chemistry and often are included as energetic corrections following geometry optimizations. The nuclear-electronic orbital (NEO) multicomponent density functional theory (DFT) method treats select nuclei, typically protons, quantum mechanically on the same level as the electrons. Electron-proton correlation is highly significant, and inadequate treatments lead to highly overlocalized nuclear densities. A recently developed electron-proton correlation functional, epc17, has been shown to provide accurate nuclear densities for molecular systems. Herein, the NEO-DFT/epc17 method is used to compute the proton affinities for a set of molecules and to examine the role of nuclear quantum effects on the equilibrium geometry of FHF⁻. The agreement of the computed results with experimental and benchmark values demonstrates the promise of this approach for including nuclear quantum effects in calculations of proton affinities, pKa's, optimized geometries, and reaction paths.
Functional Additive Mixed Models
Scheipl, Fabian; Staicu, Ana-Maria; Greven, Sonja
2014-01-01
We propose an extensive framework for additive regression models for correlated functional responses, allowing for multiple partially nested or crossed functional random effects with flexible correlation structures for, e.g., spatial, temporal, or longitudinal functional data. Additionally, our framework includes linear and nonlinear effects of functional and scalar covariates that may vary smoothly over the index of the functional response. It accommodates densely or sparsely observed functional responses and predictors which may be observed with additional error and includes both spline-based and functional principal component-based terms. Estimation and inference in this framework is based on standard additive mixed models, allowing us to take advantage of established methods and robust, flexible algorithms. We provide easy-to-use open source software in the pffr() function for the R-package refund. Simulations show that the proposed method recovers relevant effects reliably, handles small sample sizes well and also scales to larger data sets. Applications with spatially and longitudinally observed functional data demonstrate the flexibility in modeling and interpretability of results of our approach. PMID:26347592
Variable Selection through Correlation Sifting
NASA Astrophysics Data System (ADS)
Huang, Jim C.; Jojic, Nebojsa
Many applications of computational biology require a variable selection procedure to sift through a large number of input variables and select some smaller number that influence a target variable of interest. For example, in virology, only some small number of viral protein fragments influence the nature of the immune response during viral infection. Due to the large number of variables to be considered, a brute-force search for the subset of variables is in general intractable. To approximate this, methods based on ℓ1-regularized linear regression have been proposed and have been found to be particularly successful. It is well understood however that such methods fail to choose the correct subset of variables if these are highly correlated with other "decoy" variables. We present a method for sifting through sets of highly correlated variables which leads to higher accuracy in selecting the correct variables. The main innovation is a filtering step that reduces correlations among variables to be selected, making the ℓ1-regularization effective for datasets on which many methods for variable selection fail. The filtering step changes both the values of the predictor variables and output values by projections onto components obtained through a computationally-inexpensive principal components analysis. In this paper we demonstrate the usefulness of our method on synthetic datasets and on novel applications in virology. These include HIV viral load analysis based on patients' HIV sequences and immune types, as well as the analysis of seasonal variation in influenza death rates based on the regions of the influenza genome that undergo diversifying selection in the previous season.
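The filtering step described above can be sketched in a few lines, assuming scikit-learn. This is an illustrative reading of the abstract, not the authors' algorithm: the component count `k`, the decoy construction, and the regularization strength are all assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 200, 50
X = rng.normal(size=(n, p))
X[:, 1] = X[:, 0] + 0.05 * rng.normal(size=n)     # a highly correlated "decoy"
y = 2.0 * X[:, 0] + rng.normal(scale=0.5, size=n)

# Filtering step (illustrative): project both the predictors and the
# response onto the complement of the top k principal components,
# reducing inter-variable correlations before the l1-regularized fit.
k = 1  # assumed choice
pca = PCA(n_components=k).fit(X)
scores = pca.transform(X)                          # (n, k) component scores
X_f = X - scores @ pca.components_                 # decorrelated predictors
beta, *_ = np.linalg.lstsq(scores, y, rcond=None)
y_f = y - scores @ beta                            # filtered response

model = Lasso(alpha=0.1).fit(X_f, y_f)             # l1-regularized regression
selected = np.flatnonzero(model.coef_)
```

With the shared component removed, the lasso no longer has to arbitrate between near-identical columns, which is the failure mode the paper targets.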
Duell, Lowell F. W.
1990-01-01
In Owens Valley, evapotranspiration (ET) is one of the largest components of outflow in the hydrologic budget and the least understood. ET estimates for December 1983 through October 1985 were made for seven representative locations selected on the basis of geohydrology and the characteristics of phreatophytic alkaline scrub and meadow communities. The Bowen-ratio, eddy-correlation, and Penman-combination methods were used to estimate ET. The results of the analyses appear satisfactory when compared with other estimates of ET. Results by the eddy-correlation method are for a direct and a residual latent-heat flux that is based on sensible-heat flux and energy-budget measurements. Penman-combination potential-ET estimates were determined to be unusable because they overestimated actual ET. Modification of the psychrometer constant of this method to account for differences between heat-diffusion resistance and vapor-diffusion resistance permitted actual ET to be estimated. The methods described in this report may be used for studies in similar semiarid and arid rangeland areas in the Western United States. Meteorological data for three field sites are included in the appendix of this report. Simple linear regression analysis indicates that ET estimates are correlated to air temperature, vapor-density deficit, and net radiation. Estimates of annual ET range from 301 millimeters at a low-density scrub site to 1,137 millimeters at a high-density meadow site. The monthly percentage of annual ET was determined to be similar for all sites studied.
Prediction of unsteady separated flows on oscillating airfoils
NASA Technical Reports Server (NTRS)
Mccroskey, W. J.
1978-01-01
Techniques for calculating high Reynolds number flow around an airfoil undergoing dynamic stall are reviewed. Emphasis is placed on predicting the values of lift, drag, and pitching moments. Methods discussed include: the discrete potential vortex method; thin boundary layer method; strong interaction between inviscid and viscous flows; and solutions to the Navier-Stokes equations. Empirical methods for estimating unsteady airloads on oscillating airfoils are also described. These methods correlate force and moment data from wind tunnel tests to indicate the effects of various parameters, such as airfoil shape, Mach number, amplitude and frequency of sinusoidal oscillations, mean angle, and type of motion.
Activity recognition using Video Event Segmentation with Text (VEST)
NASA Astrophysics Data System (ADS)
Holloway, Hillary; Jones, Eric K.; Kaluzniacki, Andrew; Blasch, Erik; Tierno, Jorge
2014-06-01
Multi-Intelligence (multi-INT) data includes video, text, and signals that require analysis by operators. Analysis methods include information fusion approaches such as filtering, correlation, and association. In this paper, we discuss the Video Event Segmentation with Text (VEST) method, which provides event boundaries of an activity to compile related message and video clips for future interest. VEST infers meaningful activities by clustering multiple streams of time-sequenced multi-INT intelligence data and derived fusion products. We discuss exemplar results that segment raw full-motion video (FMV) data by using extracted commentary message timestamps, FMV metadata, and user-defined queries.
CPT-based probabilistic and deterministic assessment of in situ seismic soil liquefaction potential
Moss, R.E.S.; Seed, R.B.; Kayen, R.E.; Stewart, J.P.; Der Kiureghian, A.; Cetin, K.O.
2006-01-01
This paper presents a complete methodology for both probabilistic and deterministic assessment of seismic soil liquefaction triggering potential based on the cone penetration test (CPT). A comprehensive worldwide set of CPT-based liquefaction field case histories was compiled and back analyzed, and the data were then used to develop probabilistic triggering correlations. Issues investigated in this study include improved normalization of CPT resistance measurements for the influence of effective overburden stress, and adjustment to CPT tip resistance for the potential influence of "thin" liquefiable layers. The effects of soil type and soil character (i.e., "fines" adjustment) for the new correlations are based on a combination of CPT tip and sleeve resistance. To quantify probability for performance-based engineering applications, Bayesian "regression" methods were used, and the uncertainties of all variables comprising both the seismic demand and the liquefaction resistance were estimated and included in the analysis. The resulting correlations were developed using a Bayesian framework and are presented in both probabilistic and deterministic formats. The results are compared to previous probabilistic and deterministic correlations. © 2006 ASCE.
Chang, Luye; Connelly, Brian S; Geeza, Alexis A
2012-02-01
Though most personality researchers now recognize that ratings of the Big Five are not orthogonal, the field has been divided about whether these trait intercorrelations are substantive (i.e., driven by higher order factors) or artifactual (i.e., driven by correlated measurement error). We used a meta-analytic multitrait-multirater study to estimate trait correlations after common method variance was controlled. Our results indicated that common method variance substantially inflates trait correlations, and, once controlled, correlations among the Big Five became relatively modest. We then evaluated whether two different theories of higher order factors could account for the pattern of Big Five trait correlations. Our results did not support Rushton and colleagues' (Rushton & Irwing, 2008; Rushton et al., 2009) proposed general factor of personality, but Digman's (1997) α and β metatraits (relabeled by DeYoung, Peterson, and Higgins (2002) as Stability and Plasticity, respectively) produced viable fit. However, our models showed considerable overlap between Stability and Emotional Stability and between Plasticity and Extraversion, raising the question of whether these metatraits are redundant with their dominant Big Five traits. This pattern of findings was robust when we included only studies whose observers were intimately acquainted with targets. Our results underscore the importance of using a multirater approach to studying personality and the need to separate the causes and outcomes of higher order metatraits from those of the Big Five. We discussed the implications of these findings for the array of research fields in which personality is studied.
Correlation of spacecraft thermal mathematical models to reference data
NASA Astrophysics Data System (ADS)
Torralbo, Ignacio; Perez-Grande, Isabel; Sanz-Andres, Angel; Piqueras, Javier
2018-03-01
Model-to-test correlation is a frequent problem in spacecraft-thermal control design. The idea is to determine the values of the parameters of the thermal mathematical model (TMM) that allow a good fit between the TMM results and test data, in order to reduce the uncertainty of the mathematical model. Quite often, this task is performed manually, mainly because good engineering knowledge and experience are needed to reach a successful compromise, but the use of a mathematical tool could facilitate this work. The correlation process can be considered as the minimization of the error of the model results with regard to the reference data. In this paper, a simple method suitable for solving the TMM-to-test correlation problem is presented, using a Jacobian matrix formulation and the Moore-Penrose pseudo-inverse, generalized to include several load cases. Moreover, in simple cases, this method allows analytical solutions to be obtained, which helps to analyze some problems that appear when the Jacobian matrix is singular. To show the implementation of the method, two problems have been considered: one academic, and the other the TMM of an electronic box of the PHI instrument of the ESA Solar Orbiter mission, to be flown in 2019. The use of singular value decomposition of the Jacobian matrix to analyze and reduce these models is also shown. The error in parameter space is used to assess the quality of the correlation results in both models.
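The core update described here — a pseudo-inverse step on a finite-difference Jacobian of the model-minus-reference residual — can be sketched as follows. This is a minimal single-load-case sketch with an invented linear toy model, not the paper's TMM formulation; function and parameter names are assumptions.

```python
import numpy as np

def correlate_step(params, residual_fn, eps=1e-6):
    """One correlation update: p_new = p - pinv(J) @ r, where r is the
    model-minus-reference residual and J is estimated column by column
    with finite differences. pinv (Moore-Penrose) also handles the
    singular-Jacobian case discussed in the abstract."""
    p = np.asarray(params, dtype=float)
    r = residual_fn(p)
    J = np.empty((r.size, p.size))
    for j in range(p.size):
        dp = np.zeros_like(p)
        dp[j] = eps
        J[:, j] = (residual_fn(p + dp) - r) / eps
    return p - np.linalg.pinv(J) @ r

# Toy linear "model": residual r = A p - b; one step reaches the
# least-squares solution exactly because the residual is linear in p.
A = np.array([[2.0, 0.0], [0.0, 3.0], [1.0, 1.0]])
b = np.array([2.0, 3.0, 2.0])
p = correlate_step([0.0, 0.0], lambda p: A @ p - b)
```

For a nonlinear TMM the step would be iterated until the residual stops decreasing; the singular value decomposition mentioned in the abstract can replace `pinv` when the parameter space is to be reduced.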
Counting the peaks in the excitation function for precompound processes
NASA Astrophysics Data System (ADS)
Bonetti, R.; Hussein, M. S.; Mello, P. A.
1983-08-01
The "counting of maxima" method of Brink and Stephen, conventionally used for the extraction of the correlation width of statistical (compound nucleus) reactions, is generalized to include precompound processes as well. It is found that this method supplies an important independent check of the results obtained from autocorrelation studies. An application is made to the reaction 25Mg(3He,p), and statistical multistep compound processes are discussed.
2006-10-01
...determined by imaging correlate well with those determined by immunoassay methods on surgical biopsies. Currently, the most effective ER imaging agent is a fluorine-18 labeled estrogen. However, because of the short half-life of fluorine-18, ... substituent to the central pentacycle, including nucleophilic addition of organometallic reagents, addition of electrophiles to the cyclopentadiene ...
NASA Astrophysics Data System (ADS)
Shinogle-Decker, Heather; Martinez-Rivera, Noraida; O'Brien, John; Powell, Richard D.; Joshi, Vishwas N.; Connell, Samuel; Rosa-Molinar, Eduardo
2018-02-01
A new correlative Förster Resonance Energy Transfer (FRET) microscopy method using FluoroNanogold™, a fluorescent immunoprobe with a covalently attached Nanogold® particle (1.4 nm Au), overcomes resolution limitations in determining distances within synaptic nanoscale architecture. FRET by acceptor photobleaching has long been used as a method to increase fluorescence resolution. The transfer of energy from a donor to an acceptor generally occurs between 10 and 100 Å, which is the relative distance between the donor molecule and the acceptor molecule. For the correlative FRET microscopy method using FluoroNanogold™, we immuno-labeled GFP-tagged-HeLa-expressing Connexin 35 (Cx35) with anti-GFP and with anti-Cx35/36 antibodies, and then photo-bleached the Cx before processing the sample for electron microscopic imaging. Preliminary studies reveal that the use of Alexa Fluor® 594 FluoroNanogold™ slightly increases the FRET distance to 70 Å, in contrast to the 62.5 Å obtained using Alexa Fluor® 594. Preliminary studies also show that using a FluoroNanogold™ probe inhibits photobleaching. After one photobleaching session, Alexa Fluor® 594 fluorescence dropped to 19% of its original intensity; in contrast, after one photobleaching session, Alexa Fluor® 594 FluoroNanogold™ fluorescence dropped to 53% of its original intensity. This result confirms that Alexa Fluor® 594 FluoroNanogold™ is a much better donor probe than Alexa Fluor® 594. The new method (a) creates a double confirmation method for determining the structure and orientation of synaptic architecture, (b) allows development of a two-dimensional in vitro model to be used for precise testing of multiple parameters, and (c) increases throughput. Future work will include development of FluoroNanogold™ probes with different sizes of gold for additional correlative microscopy studies.
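The distances quoted above follow from the standard Förster relation, in which transfer efficiency falls off with the sixth power of donor-acceptor separation. A one-line sketch (the default R0 below is taken from the 62.5 Å figure in the abstract purely for illustration, not as a measured property of these probes):

```python
# Standard Förster relation: E = 1 / (1 + (r / R0)**6), where R0 is the
# separation at which transfer efficiency drops to 50%.
def fret_efficiency(r_angstrom, r0_angstrom=62.5):
    return 1.0 / (1.0 + (r_angstrom / r0_angstrom) ** 6)
```

At r = R0 the efficiency is exactly 0.5, which is why a probe that shifts the effective R0 from 62.5 Å to 70 Å extends the range over which nanoscale distances can be resolved.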
Safo, Sandra E; Li, Shuzhao; Long, Qi
2018-03-01
Integrative analysis of high dimensional omics data is becoming increasingly popular. At the same time, incorporating known functional relationships among variables in analysis of omics data has been shown to help elucidate underlying mechanisms for complex diseases. In this article, our goal is to assess association between transcriptomic and metabolomic data from a Predictive Health Institute (PHI) study that includes healthy adults at a high risk of developing cardiovascular diseases. Adopting a strategy that is both data-driven and knowledge-based, we develop statistical methods for sparse canonical correlation analysis (CCA) with incorporation of known biological information. Our proposed methods use prior network structural information among genes and among metabolites to guide selection of relevant genes and metabolites in sparse CCA, providing insight on the molecular underpinning of cardiovascular disease. Our simulations demonstrate that the structured sparse CCA methods outperform several existing sparse CCA methods in selecting relevant genes and metabolites when structural information is informative and are robust to mis-specified structural information. Our analysis of the PHI study reveals that a number of gene and metabolic pathways including some known to be associated with cardiovascular diseases are enriched in the set of genes and metabolites selected by our proposed approach. © 2017, The International Biometric Society.
The Cross-Correlation and Reshuffling Tests in Discerning Induced Seismicity
NASA Astrophysics Data System (ADS)
Schultz, Ryan; Telesca, Luciano
2018-05-01
In recent years, cases of newly emergent induced clusters have increased seismic hazard and risk in locations with social, environmental, and economic consequence. Thus, the need for a quantitative and robust means to discern induced seismicity has become a critical concern. This paper reviews a Matlab-based algorithm designed to quantify the statistical confidence between two time-series datasets. Similar to prior approaches, our method utilizes the cross-correlation to delineate the strength and lag of correlated signals. In addition, use of surrogate reshuffling tests allows dynamic testing against statistical confidence intervals for anticipated spurious correlations. We demonstrate the robust nature of our algorithm in a suite of synthetic tests to determine the limits of accurate signal detection in the presence of noise and sub-sampling. Overall, this routine has considerable merit for delineating the strength of correlated signals, including the discernment of induced seismicity from natural seismicity.
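The combination of lagged cross-correlation and a reshuffling surrogate test can be sketched in Python (the original is Matlab-based; lag range, surrogate count, and the toy signals are assumptions):

```python
import numpy as np

def reshuffle_test(x, y, max_lag=10, n_surr=999, seed=0):
    """Peak cross-correlation of x and y over lags 0..max_lag, with a
    significance level from surrogates built by randomly reshuffling y.
    A sketch of the approach described above; parameter names and
    defaults are assumptions."""
    rng = np.random.default_rng(seed)
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    n = len(x)

    def peak(y_):
        return max(np.mean(x[:n - k] * y_[k:]) for k in range(max_lag + 1))

    observed = peak(y)
    exceed = sum(peak(rng.permutation(y)) >= observed for _ in range(n_surr))
    p_value = (1 + exceed) / (n_surr + 1)   # permutation-test p-value
    return observed, p_value

# toy example: y is a lagged, noisy copy of a pumping-like signal x
t = np.arange(300)
x = np.sin(2 * np.pi * t / 50.0)
y = np.roll(x, 3) + 0.2 * np.random.default_rng(1).normal(size=t.size)
obs, p = reshuffle_test(x, y)
```

Reshuffling destroys the temporal structure of y while preserving its amplitude distribution, so the surrogate peaks trace out exactly the spurious-correlation level the abstract describes testing against.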
Quasiparticle energies and lifetimes in a metallic chain model of a tunnel junction.
Szepieniec, Mark; Yeriskin, Irene; Greer, J C
2013-04-14
As electronics devices scale to sub-10 nm lengths, the distinction between "device" and "electrodes" becomes blurred. Here, we study a simple model of a molecular tunnel junction, consisting of an atomic gold chain partitioned into left and right electrodes, and a central "molecule." Using a complex absorbing potential, we are able to reproduce the single-particle energy levels of the device region including a description of the effects of the semi-infinite electrodes. We then use the method of configuration interaction to explore the effect of correlations on the system's quasiparticle peaks. We find that when excitations on the leads are excluded, the device's highest occupied molecular orbital and lowest unoccupied molecular orbital quasiparticle states when including correlation are bracketed by their respective values in the Hartree-Fock (Koopmans) and ΔSCF approximations. In contrast, when excitations on the leads are included, the bracketing property no longer holds, and both the positions and the lifetimes of the quasiparticle levels change considerably, indicating that the combined effect of coupling and correlation is to alter the quasiparticle spectrum significantly relative to an isolated molecule.
Criteria for mitral regurgitation classification were inadequate for dilated cardiomyopathy.
Mancuso, Frederico José Neves; Moisés, Valdir Ambrosio; Almeida, Dirceu Rodrigues; Oliveira, Wercules Antonio; Poyares, Dalva; Brito, Flavio Souza; Paola, Angelo Amato Vincenzo de; Carvalho, Antonio Carlos Camargo; Campos, Orlando
2013-11-01
Mitral regurgitation (MR) is common in patients with dilated cardiomyopathy (DCM). It is unknown whether the criteria for MR classification are inadequate for patients with DCM. We aimed to evaluate the agreement among the four most common echocardiographic methods for MR classification. Ninety patients with DCM were included. Functional MR was classified using four echocardiographic methods: color flow jet area (JA), vena contracta (VC), effective regurgitant orifice area (ERO) and regurgitant volume (RV). MR was classified as mild, moderate or important according to the American Society of Echocardiography criteria and by dividing the values into terciles. The Kappa test was used to evaluate whether the methods agreed, and the Pearson correlation coefficient was used to evaluate the correlation between the absolute values of each method. MR classification according to each method was as follows: JA: 26 mild, 44 moderate, 20 important; VC: 12 mild, 72 moderate, 6 important; ERO: 70 mild, 15 moderate, 5 important; RV: 70 mild, 16 moderate, 4 important. The agreement was poor among methods (kappa = 0.11; p < 0.001). A strong correlation was observed between the absolute values of each method, ranging from 0.70 to 0.95 (p < 0.01), and the agreement was higher when values were divided into terciles (kappa = 0.44; p < 0.01). In conclusion, the use of conventional echocardiographic criteria for MR classification seems inadequate in patients with DCM. It is necessary to establish new cutoff values for MR classification in these patients.
Ultra-low-density genotype panels for breed assignment of Angus and Hereford cattle.
Judge, M M; Kelleher, M M; Kearney, J F; Sleator, R D; Berry, D P
2017-06-01
Angus and Hereford beef is marketed internationally for apparent superior meat quality attributes; DNA-based breed authenticity could be a useful instrument to ensure consumer confidence on premium meat products. The objective of this study was to develop an ultra-low-density genotype panel to accurately quantify the Angus and Hereford breed proportion in biological samples. Medium-density genotypes (13 306 single nucleotide polymorphisms (SNPs)) were available on 54 703 commercial and 4042 purebred animals. The breed proportion of the commercial animals was generated from the medium-density genotypes and this estimate was regarded as the gold-standard breed composition. Ten genotype panels (100 to 1000 SNPs) were developed from the medium-density genotypes; five methods were used to identify the most informative SNPs and these included the Delta statistic, the fixation (F st) statistic and an index of both. Breed assignment analyses were undertaken for each breed, panel density and SNP selection method separately with a programme to infer population structure using the entire 13 306 SNP panel (representing the gold-standard measure). Breed assignment was undertaken for all commercial animals (n=54 703), animals deemed to contain some proportion of Angus based on pedigree (n=5740) and animals deemed to contain some proportion of Hereford based on pedigree (n=5187). The predicted breed proportion of all animals from the lower density panels was then compared with the gold-standard breed prediction. Panel density, SNP selection method and breed all had a significant effect on the correlation of predicted and actual breed proportion. Regardless of breed, the Index method of SNP selection numerically (but not significantly) outperformed all other selection methods in accuracy (i.e. correlation and root mean square of prediction) when panel density was ⩾300 SNPs. The correlation between actual and predicted breed proportion increased as panel density increased. 
Using 300 SNPs (selected using the global index method), the correlation between predicted and actual breed proportion was 0.993 and 0.995 in the Angus and Hereford validation populations, respectively. When SNP panels optimised for breed prediction in one population were used to predict the breed proportion of a separate population, the correlation between predicted and actual breed proportion was 0.034 and 0.044 weaker in the Hereford and Angus populations, respectively (using the 300 SNP panel). It is necessary to include at least 300 to 400 SNPs (per breed) on genotype panels to accurately predict breed proportion from biological samples.
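Of the SNP-selection statistics named above, the Delta statistic is the simplest to illustrate: rank SNPs by the absolute allele-frequency difference between the two breeds. The sketch below uses simulated genotypes; the Fst statistic and the combined index that performed best are not reproduced here.

```python
import numpy as np

def delta_select(geno_a, geno_b, n_snps=300):
    """Rank SNPs by the Delta statistic (absolute allele-frequency
    difference between two breeds) and return the indices of the
    n_snps most informative markers. Genotypes are coded 0/1/2
    copies of the reference allele."""
    p_a = geno_a.mean(axis=0) / 2.0      # allele frequency in breed A
    p_b = geno_b.mean(axis=0) / 2.0      # allele frequency in breed B
    delta = np.abs(p_a - p_b)
    return np.argsort(delta)[::-1][:n_snps]

rng = np.random.default_rng(0)
angus = rng.binomial(2, 0.9, size=(100, 1000))     # simulated purebreds
hereford = rng.binomial(2, 0.1, size=(100, 1000))  # (frequencies invented)
keep = delta_select(angus, hereford, n_snps=300)
```

The 300-SNP cutoff mirrors the panel density the study found necessary for accurate breed-proportion prediction.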
DMirNet: Inferring direct microRNA-mRNA association networks.
Lee, Minsu; Lee, HyungJune
2016-12-05
MicroRNAs (miRNAs) play important regulatory roles in the wide range of biological processes by inducing target mRNA degradation or translational repression. Based on the correlation between expression profiles of a miRNA and its target mRNA, various computational methods have previously been proposed to identify miRNA-mRNA association networks by incorporating the matched miRNA and mRNA expression profiles. However, there remain three major issues to be resolved in the conventional computation approaches for inferring miRNA-mRNA association networks from expression profiles. 1) Inferred correlations from the observed expression profiles using conventional correlation-based methods include numerous erroneous links or over-estimated edge weight due to the transitive information flow among direct associations. 2) Due to the high-dimension-low-sample-size problem on the microarray dataset, it is difficult to obtain an accurate and reliable estimate of the empirical correlations between all pairs of expression profiles. 3) Because the previously proposed computational methods usually suffer from varying performance across different datasets, a more reliable model that guarantees optimal or suboptimal performance across different datasets is highly needed. In this paper, we present DMirNet, a new framework for identifying direct miRNA-mRNA association networks. To tackle the aforementioned issues, DMirNet incorporates 1) three direct correlation estimation methods (namely Corpcor, SPACE, Network deconvolution) to infer direct miRNA-mRNA association networks, 2) the bootstrapping method to fully utilize insufficient training expression profiles, and 3) a rank-based Ensemble aggregation to build a reliable and robust model across different datasets. Our empirical experiments on three datasets demonstrate the combinatorial effects of necessary components in DMirNet. 
Additional performance comparison experiments show that DMirNet outperforms the state-of-the-art Ensemble-based model [1] which has shown the best performance across the same three datasets, with a factor of up to 1.29. Further, we identify 43 putative novel multi-cancer-related miRNA-mRNA association relationships from an inferred Top 1000 direct miRNA-mRNA association network. We believe that DMirNet is a promising method to identify novel direct miRNA-mRNA relations and to elucidate the direct miRNA-mRNA association networks. Since DMirNet infers direct relationships from the observed data, DMirNet can contribute to reconstructing various direct regulatory pathways, including, but not limited to, the direct miRNA-mRNA association networks.
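The two ideas at the heart of DMirNet — estimating a *direct* (rather than transitive) association, and bootstrapping to cope with few samples — can be sketched with a simple partial-correlation stand-in. This is not a reimplementation of Corpcor, SPACE, or network deconvolution; the three-variable setup and sample sizes are invented for illustration.

```python
import numpy as np

def bootstrap_partial_corr(x, y, z, n_boot=200, seed=0):
    """Bootstrap estimate of the direct (partial) correlation between
    x and y controlling for z, read off the inverse correlation
    matrix. A toy stand-in for the direct-correlation estimators
    used by DMirNet."""
    rng = np.random.default_rng(seed)
    n = len(x)
    data = np.column_stack([x, y, z])
    vals = []
    for _ in range(n_boot):
        d = data[rng.integers(0, n, n)]              # resample with replacement
        prec = np.linalg.inv(np.corrcoef(d, rowvar=False))
        vals.append(-prec[0, 1] / np.sqrt(prec[0, 0] * prec[1, 1]))
    return float(np.mean(vals))

rng = np.random.default_rng(1)
z = rng.normal(size=150)                        # shared upstream driver
x = z + 0.5 * rng.normal(size=150)              # miRNA-like profile
y = -0.8 * x + z + 0.5 * rng.normal(size=150)   # repressed mRNA-like profile
direct = bootstrap_partial_corr(x, y, z)
```

The marginal correlation of x and y here is inflated by the shared driver z; the partial correlation recovers the negative direct association, which is the kind of transitive-edge error the abstract says plain correlation methods suffer from.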
Mott Transition of MnO under Pressure: A Comparison of Correlated Band Theories
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kasinathan, Deepa; Kunes, Jan; Koepernik, K
The electronic structure, magnetic moment, and volume collapse of MnO under pressure are obtained from four different correlated band theory methods: local density approximation+Hubbard U (LDA+U), pseudopotential self-interaction correction (pseudo-SIC), the hybrid functional (combined local exchange plus Hartree-Fock exchange), and the local spin density SIC (SIC-LSD) method. Each method treats correlation among the five Mn 3d orbitals (per spin), including their hybridization with three O 2p orbitals in the valence bands and their changes with pressure. The focus is on comparison of the methods for rock salt MnO (neglecting the observed transition to the NiAs structure in the 90-100 GPa range). Each method predicts a first-order volume collapse, but with variation in the predicted volume and critical pressure. Accompanying the volume collapse is a moment collapse, which for all methods is from high-spin to low-spin ((5/2)→(1/2)), not to nonmagnetic as the simplest scenario would have. The specific manner in which the transition occurs varies considerably among the methods: pseudo-SIC and SIC-LSD give insulator-to-metal, while LDA+U gives insulator-to-insulator and the hybrid method gives an insulator-to-semimetal transition. Projected densities of states above and below the transition are presented for each of the methods and used to analyze the character of each transition.
Is automated kinetic measurement superior to end-point for advanced oxidation protein product?
Oguz, Osman; Inal, Berrin Bercik; Emre, Turker; Ozcan, Oguzhan; Altunoglu, Esma; Oguz, Gokce; Topkaya, Cigdem; Guvenen, Guvenc
2014-01-01
Advanced oxidation protein product (AOPP) was first described as an oxidative protein marker in chronic uremic patients and measured with a semi-automatic end-point method. Subsequently, the kinetic method was introduced for AOPP assay. We aimed to compare these two methods by adapting them to a chemistry analyzer and to investigate the correlation between AOPP and fibrinogen, the key molecule responsible for human plasma AOPP reactivity, microalbumin, and HbA1c in patients with type II diabetes mellitus (DM II). The effects of EDTA and citrate-anticoagulated tubes on these two methods were incorporated into the study. This study included 93 DM II patients (36 women, 57 men) with HbA1c levels > or = 7%, who were admitted to the diabetes and nephrology clinics. The samples were collected in EDTA and in citrate-anticoagulated tubes. Both methods were adapted to a chemistry analyzer and the samples were studied in parallel. In both types of samples, we found a moderate correlation between the kinetic and the end-point methods (r = 0.611 for citrate-anticoagulated, r = 0.636 for EDTA-anticoagulated, p = 0.0001 for both). We found a moderate correlation between fibrinogen-AOPP and microalbumin-AOPP levels only in the kinetic method (r = 0.644 and 0.520 for citrate-anticoagulated; r = 0.581 and 0.490 for EDTA-anticoagulated, p = 0.0001). We conclude that adaptation of the end-point method to automation is more difficult and it has higher between-run CV%, while application of the kinetic method is easier and it may be used in oxidative stress studies.
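The difference between the two readout modes can be shown numerically: an end-point assay takes one absorbance reading, while a kinetic assay fits the reaction rate from repeated readings and is therefore insensitive to a constant baseline offset. The time points, slope, and offset below are invented for illustration, not taken from the AOPP assay.

```python
import numpy as np

t = np.array([0.0, 30.0, 60.0, 90.0, 120.0])  # reading times, seconds (assumed)
absorbance = 0.002 * t + 0.15                 # linear reaction + baseline offset

end_point = absorbance[-1]                    # single reading at 120 s,
                                              # includes the 0.15 offset
rate = np.polyfit(t, absorbance, 1)[0]        # kinetic slope, offset cancels
```

The 0.15 baseline contaminates the end-point value directly, but the fitted slope recovers 0.002 per second regardless of the offset, which is one reason kinetic measurement tends to automate more cleanly.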
Takahashi, Hiro; Honda, Hiroyuki
2006-07-01
Considering the recent advances in and the benefits of DNA microarray technologies, many gene filtering approaches have been employed for the diagnosis and prognosis of diseases. In our previous study, we developed a new filtering method, namely, the projective adaptive resonance theory (PART) filtering method. This method was effective in subclass discrimination. In the PART algorithm, the genes with a low variance in gene expression in either class, not both classes, were selected as important genes for modeling. Based on this concept, we developed novel simple filtering methods such as modified signal-to-noise (S2N') in the present study. The discrimination model constructed using these methods showed higher accuracy with higher reproducibility as compared with many conventional filtering methods, including the t-test, S2N, NSC and SAM. The reproducibility of prediction was evaluated based on the correlation between the sets of U-test p-values on randomly divided datasets. With respect to leukemia, lymphoma and breast cancer, the correlation was high; a difference of >0.13 was obtained by the model constructed using <50 genes selected by S2N'. The improvement was greater for smaller gene sets, and such higher correlation was also observed relative to the t-test, NSC and SAM. These results suggest that these modified methods, such as S2N', have high potential as new methods for marker gene selection in cancer diagnosis using DNA microarray data. Software is available upon request.
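The classic signal-to-noise ranking that S2N' modifies can be sketched in a few lines. The synthetic expression matrix is invented, and the paper's refinement (favoring genes with low variance in either single class) is only noted in the comment, not implemented:

```python
import numpy as np

def s2n_scores(expr, labels):
    """Classic signal-to-noise gene ranking: (mu0 - mu1) / (s0 + s1)
    per gene (rows = genes, columns = samples). The modified S2N'
    described above additionally favors genes with low variance in
    either single class; that refinement is not shown here."""
    a = expr[:, labels == 0]
    b = expr[:, labels == 1]
    return (a.mean(axis=1) - b.mean(axis=1)) / (a.std(axis=1) + b.std(axis=1))

rng = np.random.default_rng(0)
expr = rng.normal(size=(100, 40))       # 100 genes x 40 samples (synthetic)
labels = np.repeat([0, 1], 20)
expr[0, labels == 1] += 3.0             # plant one truly differential gene
scores = s2n_scores(expr, labels)
```

Ranking genes by the absolute score and keeping the top handful is the filtering step the abstract evaluates against the t-test, NSC, and SAM.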
Koh, Suk Bong; Park, Hye Jin
2009-12-01
The aim of this study was to investigate correlations between medical student scores on 4 examinations: the written examination, clinical clerkship examination, clinical skill assessment, and graduation examination. Scores on the written examination, clinical clerkship examination, clinical skill assessment, and graduation examination were included for 51 students who entered Daegu Catholic Medical School in 2005. Correlations between the scores were analyzed statistically. The scores on the written examination showed a strong correlation with those of the clinical clerkship assessment (0.833) and graduation examination (0.821). The clinical clerkship assessment scores correlated significantly with graduation examination scores (0.907). In addition, clinical skill assessment scores correlated with the written examination (0.579), clinical clerkship examination (0.570), and graduation examination (0.465) scores. Overall, the correlation between the scores on the clinical clerkship examination and the written examination was stronger than the correlation between scores on the clinical clerkship examination and clinical skill assessment. Therefore, we need to improve the evaluation method for the clinical clerkship examination and clinical skill assessment.
Inference of reactive transport model parameters using a Bayesian multivariate approach
NASA Astrophysics Data System (ADS)
Carniato, Luca; Schoups, Gerrit; van de Giesen, Nick
2014-08-01
Parameter estimation of subsurface transport models from multispecies data requires the definition of an objective function that includes different types of measurements. Common approaches are weighted least squares (WLS), where weights are specified a priori for each measurement, and weighted least squares with weight estimation (WLS(we)) where weights are estimated from the data together with the parameters. In this study, we formulate the parameter estimation task as a multivariate Bayesian inference problem. The WLS and WLS(we) methods are special cases in this framework, corresponding to specific prior assumptions about the residual covariance matrix. The Bayesian perspective allows for generalizations to cases where residual correlation is important and for efficient inference by analytically integrating out the variances (weights) and selected covariances from the joint posterior. Specifically, the WLS and WLS(we) methods are compared to a multivariate (MV) approach that accounts for specific residual correlations without the need for explicit estimation of the error parameters. When applied to inference of reactive transport model parameters from column-scale data on dissolved species concentrations, the following results were obtained: (1) accounting for residual correlation between species provides more accurate parameter estimation for high residual correlation levels whereas its influence for predictive uncertainty is negligible, (2) integrating out the (co)variances leads to an efficient estimation of the full joint posterior with a reduced computational effort compared to the WLS(we) method, and (3) in the presence of model structural errors, none of the methods is able to identify the correct parameter values.
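The WLS(we) idea described above — estimating one error weight per measurement type alongside the model parameters — can be sketched by alternating a weighted least-squares fit with a residual-variance update. The two-species decay model, species names, and all numbers below are invented for illustration; the paper's multivariate Bayesian treatment of residual correlations is not reproduced.

```python
import numpy as np
from scipy.optimize import least_squares

def model(theta, t):
    # hypothetical first-order conversion of species A into species B
    return np.column_stack([theta[0] * np.exp(-theta[1] * t),
                            theta[0] * (1 - np.exp(-theta[1] * t))])

rng = np.random.default_rng(0)
t = np.linspace(0.0, 5.0, 30)
truth = np.array([2.0, 0.8])
# each species has its own (unknown) noise level, as in WLS(we)
obs = model(truth, t) + rng.normal(scale=[0.05, 0.2], size=(30, 2))

theta = np.array([1.0, 1.0])        # initial guess
w = np.ones(2)                      # one inverse-variance weight per species
for _ in range(5):
    # fit parameters at the current weights (weighted least squares)
    res = least_squares(
        lambda th: ((model(th, t) - obs) * np.sqrt(w)).ravel(), theta)
    theta = res.x
    # re-estimate the weights from the per-species residual variance
    w = 1.0 / np.maximum((model(theta, t) - obs).var(axis=0), 1e-12)
```

A fully Bayesian multivariate version, as in the paper, would instead integrate the variances (and selected covariances) out of the joint posterior analytically rather than iterating point estimates.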
A Perfusion MRI Study of Emotional Valence and Arousal in Parkinson's Disease
Limsoontarakul, Sunsern; Campbell, Meghan C.; Black, Kevin J.
2011-01-01
Background. Brain regions subserving emotion have mostly been studied using functional magnetic resonance imaging (fMRI) during emotion provocation procedures in healthy participants. Objective. To identify neuroanatomical regions associated with spontaneous changes in emotional state over time. Methods. Self-rated emotional valence and arousal scores, and regional cerebral blood flow (rCBF) measured by perfusion MRI, were measured 4 or 8 times spanning at least 2 weeks in each of 21 subjects with Parkinson's disease (PD). A random-effects SPM analysis, corrected for multiple comparisons, identified significant clusters of contiguous voxels in which rCBF varied with valence or arousal. Results. Emotional valence correlated positively with rCBF in several brain regions, including medial globus pallidus, orbital prefrontal cortex (PFC), and white matter near putamen, thalamus, insula, and medial PFC. Valence correlated negatively with rCBF in striatum, subgenual cingulate cortex, ventrolateral PFC, and precuneus—posterior cingulate cortex (PCC). Arousal correlated positively with rCBF in clusters including claustrum-thalamus-ventral striatum and inferior parietal lobule and correlated negatively in clusters including posterior insula—mediodorsal thalamus and midbrain. Conclusion. This study demonstrates that the temporal stability of perfusion MRI allows within-subject investigations of spontaneous fluctuations in mental state, such as mood, over relatively long-time intervals. PMID:21969917
Co-occurrence correlations of heavy metals in sediments revealed using network analysis.
Liu, Lili; Wang, Zhiping; Ju, Feng; Zhang, Tong
2015-01-01
In this study, a correlation-based approach was used to identify co-occurrence correlations among metals in the marine sediments of Hong Kong, based on long-term (1991-2011) temporal and spatial monitoring data. Fourteen of the 45 marine sediment monitoring stations were selected, from three representative areas: Deep Bay, Victoria Harbour and Mirs Bay. Spearman's rank correlation-based network analysis was first conducted to identify the co-occurrence correlations of metals from the raw metadata, and then repeated on the normalized metadata. The correlation patterns obtained by network analysis were consistent with those obtained by other statistical normalization methods, including annual ratios, R-squared coefficients and Pearson correlation coefficients. Both Deep Bay and Victoria Harbour have been polluted by heavy metals, especially Pb and Cu, which showed strong co-occurrence with other heavy metals (e.g. Cr, Ni and Zn) and little correlation with the reference parameters (Fe or Al). For Mirs Bay, which has better marine sediment quality than Deep Bay and Victoria Harbour, the co-occurrence patterns revealed by network analysis indicated that the metals in sediment predominantly followed natural geographic processes. Beyond its wide applications in biology, sociology and informatics, this is the first application of network analysis to research on environmental pollution. This study demonstrated its power for revealing co-occurrence correlations among heavy metals in marine sediments, an approach that could be further applied to other pollutants in various environmental systems. Copyright © 2014 Elsevier Ltd. All rights reserved.
Camacho-Basallo, Paula; Yáñez-Vico, Rosa-María; Solano-Reina, Enrique; Iglesias-Linares, Alejandro
2017-03-01
The need for accurate techniques for estimating age has sharply increased in line with the rise in illegal migration and the political, economic and socio-demographic problems that this poses in developed countries today. The methods routinely employed for determining chronological age are mainly based on assessing skeletal maturation using radiological techniques. The objective of this study was to correlate five different methods of assessing skeletal maturation. Radiographs of 606 growing patients were analyzed, and each patient was classified according to two cervical vertebral-based methods, two hand-wrist-based methods and one tooth-based method. Spearman's rank-order correlation coefficient was applied to assess the relationship between chronological age and the five maturation assessment methods, as well as the correlations among the five methods (p < 0.05). Spearman's rank correlation coefficients between chronological age and cervical vertebral maturation stage were 0.656 and 0.693 (p < 0.001) for males, using the two respective methods; for females, the correlation was stronger for both methods. The correlation coefficients between chronological age and the two hand-wrist assessment methods were statistically significant only for Fishman's method: 0.722 (p < 0.001) for males and 0.839 (p < 0.001) for females. The cervical vertebral, hand-wrist and dental maturation assessment methods were all found to correlate strongly with each other, irrespective of gender, except for Grave and Brown's method. The strongest correlations were found for the second molars in females and the second premolars in males. This study sheds light on the correlations among the five radiographic methods most commonly used for assessing skeletal maturation, in a Spanish population in southern Europe.
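Spearman's rank-order coefficient, used throughout the study, is the Pearson correlation of the ranks; in the absence of ties it reduces to the closed form 1 - 6*sum(d^2)/(n*(n^2 - 1)). A minimal sketch with hypothetical stage assignments (not the study's data):

```python
def spearman_rho(x, y):
    """Spearman's rank-order correlation via the closed-form rank-difference
    formula (simple version assuming no tied values)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank + 1
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical maturation stages assigned by two methods to six patients
rho = spearman_rho([1, 2, 3, 4, 5, 6], [2, 1, 3, 4, 6, 5])
```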
Flow-gated radial phase-contrast imaging in the presence of weak flow.
Peng, Hsu-Hsia; Huang, Teng-Yi; Wang, Fu-Nien; Chung, Hsiao-Wen
2013-01-01
To implement a flow-gating method to acquire phase-contrast (PC) images of carotid arteries without use of an electrocardiography (ECG) signal to synchronize the acquisition of imaging data with pulsatile arterial flow. The flow-gating method was realized through radial scanning and sophisticated post-processing methods including downsampling, complex difference, and correlation analysis to improve the evaluation of flow-gating times in radial phase-contrast scans. Quantitatively comparable results (R = 0.92-0.96, n = 9) of flow-related parameters, including mean velocity, mean flow rate, and flow volume, with conventional ECG-gated imaging demonstrated that the proposed method is highly feasible. The radial flow-gating PC imaging method is applicable in carotid arteries. The proposed flow-gating method can potentially avoid the setting up of ECG-related equipment for brain imaging. This technique has potential use in patients with arrhythmia or weak ECG signals.
Holman, Dawn M; Watson, Meg
2013-05-01
Exposure to ultraviolet radiation and a history of sunburn in childhood contribute to risk of skin cancer in adolescence and in adulthood, but many adolescents continue to seek a tan, either from the sun or from tanning beds (i.e., intentional tanning). To understand tanning behavior among adolescents, we conducted a systematic review of the literature to identify correlates of intentional tanning in the United States. We included articles on original research published in English between January 1, 2001, and October 31, 2011, that used self-reported data on intentional tanning by U.S. adolescents aged 8 to 18 years and examined potential correlates of tanning behaviors. Thirteen articles met our criteria; all used cross-sectional survey data and quantitative methods to assess correlates of intentional tanning. Results indicate that multiple factors influence tanning among adolescents. Individual factors that correlated with intentional tanning include demographic factors (female sex, older age), attitudes (preferring tanned skin), and behaviors (participating in other risky or appearance-focused behaviors such as dieting). Social factors correlated with intentional tanning include parental influence (having a parent who tans or permits tanning) and peer influence (having friends who tan). Only four studies examined broad contextual factors such as indoor tanning laws and geographic characteristics; they found that proximity to tanning facilities and geographic characteristics (living in the Midwest or South, living in a low ultraviolet area, and attending a rural high school) are associated with intentional tanning. These findings inform future public health research and intervention efforts to reduce intentional tanning. Published by Elsevier Inc.
Affective network and default mode network in depressive adolescents with disruptive behaviors
Kim, Sun Mi; Park, Sung Yong; Kim, Young In; Son, Young Don; Chung, Un-Sun; Min, Kyung Joon; Han, Doug Hyun
2016-01-01
Aim Disruptive behaviors are thought to affect the progress of major depressive disorder (MDD) in adolescents. In resting-state functional connectivity (RSFC) studies of MDD, the affective network (limbic network) and the default mode network (DMN) have garnered a great deal of interest. We aimed to investigate RSFC in a sample of treatment-naïve adolescents with MDD and disruptive behaviors. Methods Twenty-two adolescents with MDD and disruptive behaviors (disrup-MDD) and 20 age- and sex-matched healthy control (HC) participants underwent resting-state functional magnetic resonance imaging (fMRI). We used a seed-based correlation approach concerning two brain circuits including the affective network and the DMN, with two seed regions including the bilateral amygdala for the limbic network and the bilateral posterior cingulate cortex (PCC) for the DMN. We also examined the correlation between RSFC and severity of depressive symptoms and disruptive behaviors. Results The disrup-MDD participants showed lower RSFC from the amygdala to the orbitofrontal cortex and parahippocampal gyrus compared to HC participants. Depression scores in disrup-MDD participants were negatively correlated with RSFC from the amygdala to the right orbitofrontal cortex. The disrup-MDD participants had higher PCC RSFC compared to HC participants in a cluster that included the left precentral gyrus, left insula, and left parietal lobe. Disruptive behavior scores in disrup-MDD patients were positively correlated with RSFC from the PCC to the left insular cortex. Conclusion Depressive mood might be correlated with the affective network, and disruptive behavior might be correlated with the DMN in adolescent depression. PMID:26770059
Chiu, Yu-Jen; Liao, Wen-Chieh; Wang, Tien-Hsiang; Shih, Yu-Chung; Ma, Hsu; Lin, Chih-Hsun; Wu, Szu-Hsien; Perng, Cherng-Kang
2017-08-01
Despite significant advances in medical care and surgical techniques, pressure sore reconstruction is still prone to elevated rates of complication and recurrence. We conducted a retrospective study to investigate not only complication and recurrence rates following pressure sore reconstruction but also preoperative risk stratification. A total of 181 ulcers that underwent flap operations between January 2002 and December 2013 were included in the study. We fitted a multivariable logistic regression model, which offers a regression-based method accounting for the within-patient correlation of the success or failure of each flap. The overall complication and recurrence rates for all flaps were 46.4% and 16.0%, respectively, with a mean follow-up period of 55.4 ± 38.0 months. No statistically significant differences in complication and recurrence rates were observed among the three different reconstruction methods. In subsequent analysis, albumin ≤3.0 g/dl and paraplegia were significantly associated with higher postoperative complication rates. The anatomic factor of ischial wound location trended significantly toward the development of ulcer recurrence. In the fasciocutaneous group, paraplegia correlated significantly with higher complication and recurrence rates. In the musculocutaneous flap group, no variable correlated significantly with complication or recurrence rates. In the free-style perforator group, ischial wound location and malnourished status correlated with significantly higher complication rates; ischial wound location also correlated with a significantly higher recurrence rate. Ultimately, our review of a noteworthy cohort with lengthy follow-up helped identify and confirm certain risk factors that can facilitate a more informed and thoughtful pre- and postoperative decision-making process for patients with pressure ulcers. Copyright © 2017 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.
Correlation of SASH1 expression and ultrasonographic features in breast cancer
Gong, Xuchu; Wu, Jinna; Wu, Jian; Liu, Jun; Gu, Hailin; Shen, Hao
2017-01-01
Objective SASH1 is a member of the SH3/SAM adapter molecules family and has been identified as a new tumor suppressor and critical protein in signal transduction. An ectopic expression of SASH1 is associated with decreased cell viability of breast cancer. The aim of this study was to explore the association between SASH1 expression and the ultrasonographic features in breast cancer. Patients and methods A total of 186 patients diagnosed with breast cancer were included in this study. The patients received preoperative ultrasound examination, and the expression of SASH1 was determined using immunohistochemistry methods. Spearman’s rank correlation analysis was used to analyze the correlation between SASH1-positive expression and the ultrasonographic features. Results The positive expression of SASH1 was observed in 63 (33.9%) patients. The positive expression rate of SASH1 was significantly decreased in patients with breast cancer (63/186, 33.9%) compared with controls (P<0.001). The positive expression rate of SASH1 was significantly decreased in patients with edge burr sign (P=0.025), lymph node metastasis (P=0.007), and a blood flow grade of III (P=0.013) compared with patients without those adverse ultrasonographic features. The expression of SASH1 was negatively correlated with edge burr sign (P=0.025), lymph node metastasis (P=0.007), and blood flow grade (P=0.003) of the patients with breast cancer. Conclusion The expression of SASH1 was inversely correlated with some critical ultrasonographic features, including edge burr sign, lymph node metastasis, and blood flow grade in breast cancer, and decreased SASH1 expression appears to be associated with adverse clinical and imaging features in breast cancer. PMID:28138250
Breath-Group Intelligibility in Dysarthria: Characteristics and Underlying Correlates
ERIC Educational Resources Information Center
Yunusova, Yana; Weismer, Gary; Kent, Ray D.; Rusche, Nicole M.
2005-01-01
Purpose: This study was designed to determine whether within-speaker fluctuations in speech intelligibility occurred among speakers with dysarthria who produced a reading passage, and, if they did, whether selected linguistic and acoustic variables predicted the variations in speech intelligibility. Method: Participants with dysarthria included a…
Hompland, Tord; Ellingsen, Christine; Galappathi, Kanthi; Rofstad, Einar K
2014-01-01
Background. A high fraction of stroma in malignant tissues is associated with tumor progression, metastasis, and poor prognosis. Possible correlations between the stromal and physiologic microenvironments of tumors and the potential of dynamic contrast-enhanced (DCE) and diffusion-weighted (DW) magnetic resonance imaging (MRI) in quantification of the stromal microenvironment were investigated in this study. Material and methods. CK-160 cervical carcinoma xenografts were used as preclinical tumor model. A total of 43 tumors were included in the study, and of these tumors, 17 were used to search for correlations between the stromal and physiologic microenvironments, 11 were subjected to DCE-MRI, and 15 were subjected to DW-MRI. DCE-MRI and DW-MRI were carried out at 1.5 T with a clinical MR scanner and a slotted tube resonator transceiver coil constructed for mice. Fraction of connective tissue (CTFCol) and fraction of hypoxic tissue (HFPim) were determined by immunohistochemistry. A Millar SPC 320 catheter was used to measure tumor interstitial fluid pressure (IFP). Results. CTFCol showed a positive correlation to IFP and an inverse correlation to HFPim. The apparent diffusion coefficient assessed by DW-MRI was inversely correlated to CTFCol, whereas no correlation was found between DCE-MRI-derived parameters and CTFCol. Conclusion. DW-MRI is a potentially useful method for characterizing the stromal microenvironment of tumors.
Source localization of non-stationary acoustic data using time-frequency analysis
NASA Astrophysics Data System (ADS)
Stoughton, Jack; Edmonson, William
2005-04-01
An improvement in temporal locality of the generalized cross-correlation (GCC) for angle of arrival (AOA) estimation can be achieved by employing 2-D cross-correlation of infrasonic sensor data transformed to its time-frequency (TF) representation. Intermediate to the AOA evaluation is the time delay between pairs of sensors. The signal class of interest includes far field sources which are partially coherent across the array, nonstationary, and wideband. In addition, signals can occur as multiple short bursts, for which TF representations may be more appropriate for time delay estimation. The GCC tends to smooth out such temporal energy bursts. Simulation and experimental results will demonstrate the improvement in using a TF-based GCC, using the Cohen class, over the classic GCC method. Comparative demonstration of the methods will be performed on data captured on an infrasonic sensor array located at NASA Langley Research Center (LaRC). The infrasonic data sources include Delta IV and Space Shuttle launches from Kennedy Space Center which belong to the stated signal class. A further aim is to apply this method to AOA estimation of atmospheric turbulence. [Work supported by NASA LaRC Creativity and Innovation project: Infrasonic Detection of Clear Air Turbulence and Severe Storms.]
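The classic GCC picks the time delay as the lag maximizing the cross-correlation between two sensor signals, computed efficiently in the frequency domain. A minimal sketch on synthetic burst signals (not the infrasonic array data; the TF-based Cohen-class variant is not reproduced here):

```python
import numpy as np

def gcc_delay(x, y, fs):
    """Classic generalized cross-correlation: return the lag (in seconds)
    maximizing the cross-correlation of x and y, via the frequency domain."""
    n = len(x) + len(y) - 1
    X = np.fft.rfft(x, n)
    Y = np.fft.rfft(y, n)
    cc = np.fft.irfft(X * np.conj(Y), n)
    # reorder so lags run from -(len(y)-1) to len(x)-1
    cc = np.concatenate((cc[-(len(y) - 1):], cc[:len(x)]))
    lag = int(np.argmax(cc)) - (len(y) - 1)
    return lag / fs

# A short burst delayed by 5 samples between two sensors (synthetic data)
fs = 1000.0
s = np.zeros(128); s[20:30] = 1.0
d = np.zeros(128); d[25:35] = 1.0
tau = gcc_delay(d, s, fs)  # estimated delay of d relative to s
```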
NASA Astrophysics Data System (ADS)
Brauer, Achim; Hajdas, Irka; Blockley, Simon P. E.; Bronk Ramsey, Christopher; Christl, Marcus; Ivy-Ochs, Susan; Moseley, Gina E.; Nowaczyk, Norbert N.; Rasmussen, Sune O.; Roberts, Helen M.; Spötl, Christoph; Staff, Richard A.; Svensson, Anders
2014-12-01
This paper provides a brief overview of the most common dating techniques applied in palaeoclimate and palaeoenvironmental studies including four radiometric and isotopic dating methods (radiocarbon, 230Th disequilibrium, luminescence, cosmogenic nuclides) and two incremental methods based on layer counting (ice layer, varves). For each method, concise background information about the fundamental principles and methodological approaches is provided. We concentrate on the time interval of focus for the INTIMATE (Integrating Ice core, MArine and TErrestrial records) community (60-8 ka). This dating guide addresses palaeoclimatologists who aim at interpretation of their often regional and local proxy time series in a wider spatial context and, therefore, have to rely on correlation with proxy records obtained from different archives from various regions. For this reason, we especially emphasise scientific approaches for harmonising chronologies for sophisticated and robust proxy data integration. In this respect, up-to-date age modelling techniques are presented as well as tools for linking records by age equivalence including tephrochronology, cosmogenic 10Be and palaeomagnetic variations. Finally, to avoid inadequate documentation of chronologies and assure reliable correlation of proxy time series, this paper provides recommendations for minimum standards of uncertainty and age datum reporting.
System and method to determine thermophysical properties of a multi-component gas
Morrow, Thomas B.; Behring, II, Kendricks A.
2003-08-05
A system and method to characterize natural gas hydrocarbons using a single inferential property, such as standard sound speed, when the concentrations of the diluent gases (e.g., carbon dioxide and nitrogen) are known. The system to determine a thermophysical property of a gas having a first plurality of components comprises a sound velocity measurement device, a concentration measurement device, and a processor to determine a thermophysical property as a function of a correlation between the thermophysical property, the speed of sound, and the concentration measurements, wherein the number of concentration measurements is less than the number of components in the gas. The method includes the steps of determining the speed of sound in the gas, determining a plurality of gas component concentrations in the gas, and determining the thermophysical property as a function of a correlation between the thermophysical property, the speed of sound, and the plurality of concentrations.
iPcc: a novel feature extraction method for accurate disease class discovery and prediction
Ren, Xianwen; Wang, Yong; Zhang, Xiang-Sun; Jin, Qi
2013-01-01
Gene expression profiling has gradually become a routine procedure for disease diagnosis and classification. In the past decade, many computational methods have been proposed, resulting in great improvements on various levels, including feature selection and algorithms for classification and clustering. In this study, we present iPcc, a novel method from the feature extraction perspective to further propel gene expression profiling technologies from bench to bedside. We define ‘correlation feature space’ for samples based on the gene expression profiles by iterative employment of Pearson’s correlation coefficient. Numerical experiments on both simulated and real gene expression data sets demonstrate that iPcc can greatly highlight the latent patterns underlying noisy gene expression data and thus greatly improve the robustness and accuracy of the algorithms currently available for disease diagnosis and classification based on gene expression profiles. PMID:23761440
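The core iterated-correlation idea can be sketched as follows: each sample (row) is mapped to its vector of Pearson correlations with every sample, and the mapping is applied repeatedly. This is a simplified illustration on synthetic data, not the authors' implementation:

```python
import numpy as np

def correlation_feature_space(X, iterations=2):
    """Map each sample (row) to its vector of Pearson correlations with
    all samples, iterating the map -- a sketch of the iPcc idea."""
    F = np.asarray(X, dtype=float)
    for _ in range(iterations):
        F = np.corrcoef(F)  # entry (i, j) is the Pearson r of rows i and j
    return F

# Two noisy groups of synthetic samples; iteration sharpens block structure
rng = np.random.default_rng(0)
A = rng.normal(0, 1, (3, 20)) + np.linspace(0, 5, 20)
B = rng.normal(0, 1, (3, 20)) - np.linspace(0, 5, 20)
F = correlation_feature_space(np.vstack([A, B]), iterations=2)
```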
Thermal quantum time-correlation functions from classical-like dynamics
NASA Astrophysics Data System (ADS)
Hele, Timothy J. H.
2017-07-01
Thermal quantum time-correlation functions are of fundamental importance in quantum dynamics, allowing experimentally measurable properties such as reaction rates, diffusion constants and vibrational spectra to be computed from first principles. Since the exact quantum solution scales exponentially with system size, there has been considerable effort in formulating reliable linear-scaling methods involving exact quantum statistics and approximate quantum dynamics modelled with classical-like trajectories. Here, we review recent progress in the field with the development of methods including centroid molecular dynamics, ring polymer molecular dynamics (RPMD) and thermostatted RPMD (TRPMD). We show how these methods have recently been obtained from 'Matsubara dynamics', a form of semiclassical dynamics which conserves the quantum Boltzmann distribution. We also apply the Matsubara formalism to reaction rate theory, rederiving t → 0+ quantum transition-state theory (QTST) and showing that Matsubara-TST, like RPMD-TST, is equivalent to QTST. We end by surveying areas for future progress.
NASA Astrophysics Data System (ADS)
Lei, Dong; Bai, Pengxiang; Zhu, Feipeng
2018-01-01
Nowadays, acetabular prosthesis replacement is widely used in clinical medicine. However, there is no efficient way to evaluate the implantation effect of the prosthesis. Based on a modern photomechanics technique called digital image correlation (DIC), an evaluation method for the installation effect of the acetabulum during prosthetic replacement of the hip joint was established. The DIC method determines the strain field by comparing speckle images of the undeformed sample with those of its deformed counterpart. Three groups of experiments were carried out to verify the feasibility of the DIC method for measuring acetabular installation deformation. Experimental results indicate that the installation deformation of the acetabulum generally includes elastic deformation (corresponding to a principal strain of about 1.2%) and plastic deformation. When the installation angle is ideal, the plastic deformation can be effectively reduced, which could prolong the service life of acetabular prostheses.
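Subset-based DIC locates each speckle subset in the deformed image by maximizing a correlation criterion; a common choice is the zero-normalized cross-correlation (ZNCC), which is insensitive to uniform brightness changes. A minimal sketch on synthetic data (the abstract does not state the authors' exact matching criterion):

```python
import numpy as np

def zncc(ref, defo):
    """Zero-normalized cross-correlation between two equal-size image
    subsets, a standard matching criterion in subset-based DIC."""
    a = ref - ref.mean()
    b = defo - defo.mean()
    return float(np.sum(a * b) / np.sqrt(np.sum(a * a) * np.sum(b * b)))

# A synthetic speckle subset and a brightness-shifted copy: ZNCC ignores
# the uniform intensity offset, so the match score is 1
rng = np.random.default_rng(1)
subset = rng.random((21, 21))
score = zncc(subset, subset + 0.3)
```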
Blancas, R; Martínez-González, Ó; Ballesteros, D; Núñez, A; Luján, J; Rodríguez-Serrano, D; Hernández, A; Martínez-Díaz, C; Parra, C M; Matamala, B L; Alonso, M A; Chana, M
2018-02-07
To assess the correlation between left ventricular outflow tract velocity time integral (LVOT VTI) and stroke volume index (SVI) calculated by thermodilution methods in ventilated critically ill patients. A prospective, descriptive, multicenter study was performed. Five intensive care units from university hospitals. Patients older than 17 years needing mechanical ventilation and invasive hemodynamic monitoring were included. LVOT VTI was measured by pulsatile Doppler echocardiography. Calculations of SVI were performed through a floating pulmonary artery catheter (PAC) or a Pulse index Contour Cardiac Output (PiCCO®) thermodilution method. The relation between LVOT VTI and SVI was tested by linear regression analysis. One hundred and fifty-six paired measurements were compared. Mean LVOT VTI was 20.83±4.86 cm and mean SVI was 41.55±9.55 mL/m². Pearson correlation index for these variables was r=0.644, p<0.001; ICC was 0.52 (95% CI 0.4-0.63). When maximum LVOT VTI was correlated with SVI, the Pearson correlation index was r=0.62, p<0.001. Correlation worsened for extreme values, especially for those with higher LVOT VTI. LVOT VTI could be a complementary hemodynamic evaluation in selected patients, but does not eliminate the need for invasive monitoring at the present time. The weak correlation between LVOT VTI and invasive monitoring deserves additional assessment to identify the factors affecting this disagreement. Copyright © 2018 Elsevier España, S.L.U. y SEMICYUC. All rights reserved.
Macha, Madhulika; Lamba, Bharti; Muthineni, Sridhar; Margana, Pratap Gowd Jai Shankar; Chitoori, Prasad
2017-01-01
Introduction In the modern era, identification and determination of age is imperative for diversity of reasons that include disputed birth records, premature delivery, legal issues and for validation of birth certificate for school admissions, adoption, marriage, job and immigration. Several growth assessment parameters like bone age, dental age and the combination of both have been applied for different population with variable outcomes. It has been well documented that the chronological age does not necessarily correlate with the maturational status of a child. Hence, efforts were made to determine a child’s developmental age by using dental age (calcification of teeth) and skeletal age (skeletal maturation). Aim The present study was aimed to correlate the chronological age, dental age and skeletal age in children from Southeastern region of Andhra Pradesh, India. Materials and Methods Out of the total 900 screened children, only 100 subjects between age groups of 6-14 years with a mean age of 11.3±2.63 for males and 10.77±2.24 for females were selected for the study. Dental age was calculated by Demirjian method and skeletal age by modified Middle Phalanx of left hand third finger (MP3) method. Pearson’s and Spearman’s correlation tests were done to estimate the correlation between chronological, dental and skeletal ages among study population. Results There was a significant positive correlation between chronological age, dental age and all stages of MP3 among males. Similar results were observed in females, except for a non-significant moderate correlation between chronological age and dental age in the H stage of the MP3 region. Conclusion The results of the present study revealed correlation with statistical significance (p<0.05) between chronological, dental and skeletal ages among all the subjects (48 males and 52 females) and females attained maturity earlier than males in the present study population. PMID:29207822
A diagnostic for determining the quality of single-reference electron correlation methods
NASA Technical Reports Server (NTRS)
Lee, Timothy J.; Taylor, Peter R.
1989-01-01
It was recently proposed that the Euclidean norm of the t₁ vector of the coupled cluster wave function (normalized by the number of electrons included in the correlation procedure) could be used to determine whether a single-reference-based electron correlation procedure is appropriate. This diagnostic, T₁, is defined for use with self-consistent-field molecular orbitals and is invariant to the same orbital rotations as the coupled cluster energy. T₁ is investigated for several different chemical systems which exhibit a range of multireference behavior, and is shown to be an excellent measure of the importance of non-dynamical electron correlation and is far superior to C₀ from a singles and doubles configuration interaction wave function. It is further suggested that when the aim is to recover a large fraction of the dynamical electron correlation energy, a large T₁ (i.e., greater than 0.02) probably indicates the need for a multireference electron correlation procedure.
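The diagnostic itself is straightforward to compute: the Euclidean norm of the t₁ amplitude vector divided by the square root of the number of correlated electrons. A minimal sketch with hypothetical amplitudes (real calculations obtain these from a converged coupled cluster solution):

```python
import math

def t1_diagnostic(t1_amplitudes, n_correlated_electrons):
    """T1 diagnostic: Euclidean norm of the coupled-cluster t1 amplitude
    vector divided by sqrt(number of correlated electrons)."""
    norm = math.sqrt(sum(t * t for t in t1_amplitudes))
    return norm / math.sqrt(n_correlated_electrons)

# Hypothetical amplitudes; values above ~0.02 suggest that a
# multireference treatment may be needed
t1 = [0.01, -0.02, 0.015, 0.005]
diag = t1_diagnostic(t1, 10)
```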
The time-frequency method of signal analysis in internal combustion engine diagnostics
NASA Astrophysics Data System (ADS)
Avramchuk, V. S.; Kazmin, V. P.; Faerman, V. A.; Le, V. T.
2017-01-01
The paper presents the results of the study of applicability of time-frequency correlation functions to solving the problems of internal combustion engine fault diagnostics. The proposed methods are theoretically justified and experimentally tested. In particular, the method’s applicability is illustrated by the example of specially generated signals that simulate the vibration of an engine both during the normal operation and in the case of a malfunction in the system supplying fuel to the cylinders. This method was confirmed during an experiment with an automobile internal combustion engine. The study offers the main findings of the simulation and the experiment and highlights certain characteristic features of time-frequency autocorrelation functions that allow one to identify malfunctions in an engine’s cylinder. The possibility in principle of using time-frequency correlation functions in function testing of the internal combustion engine is demonstrated. The paper’s conclusion proposes further research directions including the application of the method to diagnosing automobile gearboxes.
Li, Dan; Jiang, Jia; Han, Dandan; Yu, Xinyu; Wang, Kun; Zang, Shuang; Lu, Dayong; Yu, Aimin; Zhang, Ziwei
2016-04-05
A new method is proposed for measuring antioxidant capacity by electron spin resonance spectroscopy, based on the loss of the electron spin resonance signal after Cu²⁺ is reduced to Cu⁺ by an antioxidant. Cu⁺ was removed by precipitation in the presence of SCN⁻. The remaining Cu²⁺ was coordinated with diethyldithiocarbamate, extracted into n-butanol and determined by electron spin resonance spectrometry. Eight standards widely used in antioxidant capacity determination, including Trolox, ascorbic acid, ferulic acid, rutin, caffeic acid, quercetin, chlorogenic acid, and gallic acid, were investigated. Standard curves for the eight standards were plotted, and the linear regression correlation coefficients were all sufficiently high (r > 0.99). Trolox equivalent antioxidant capacity values for the antioxidant standards were calculated, and a good correlation (r > 0.94) was observed between the values obtained by the present method and by the cupric reducing antioxidant capacity method. The present method was applied to the analysis of real fruit samples and the evaluation of the antioxidant capacity of these fruits.
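The Trolox-equivalent values rest on a linear standard curve fitted to the calibration standards by ordinary least squares. A minimal sketch with hypothetical calibration points (not the paper's measurements):

```python
def linear_fit(x, y):
    """Ordinary least-squares slope and intercept for a standard curve."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

# Hypothetical calibration: assay response vs Trolox concentration (a.u.)
conc = [0.0, 0.5, 1.0, 1.5, 2.0]
signal = [0.02, 0.51, 1.01, 1.49, 2.0]
m, b = linear_fit(conc, signal)
# A sample's Trolox-equivalent value is then (sample_signal - b) / m
```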
Popović, Boris M; Stajner, Dubravka; Slavko, Kevrešan; Sandra, Bijelić
2012-09-15
Ethanol extracts (80% in water) of 10 cornelian cherry (Cornus mas L.) genotypes were studied for antioxidant properties, using methods including DPPH•, •NO, O₂•⁻ and •OH antiradical powers, FRAP, total phenolic and anthocyanin content (TPC and ACC), and a relatively new permanganate method (permanganate reducing antioxidant capacity, PRAC). Lipid peroxidation (LP) was also determined as an indicator of oxidative stress. The data from the different procedures were compared and analysed by multivariate techniques (correlation matrix calculation and principal component analysis (PCA)). Significant positive correlations were obtained between TPC, ACC and the DPPH•, •NO, O₂•⁻, and •OH antiradical powers, and also between PRAC and TPC, ACC and FRAP. PCA found two major clusters of cornelian cherry, based on antiradical power, FRAP and PRAC and also on chemical composition. Chemometric evaluation showed close interdependence between the PRAC method and FRAP and ACC. There was considerable variation among C. mas genotypes in terms of antioxidant activity. Copyright © 2012 Elsevier Ltd. All rights reserved.
Quantum imaging with incoherently scattered light from a free-electron laser
NASA Astrophysics Data System (ADS)
Schneider, Raimund; Mehringer, Thomas; Mercurio, Giuseppe; Wenthaus, Lukas; Classen, Anton; Brenner, Günter; Gorobtsov, Oleg; Benz, Adrian; Bhatti, Daniel; Bocklage, Lars; Fischer, Birgit; Lazarev, Sergey; Obukhov, Yuri; Schlage, Kai; Skopintsev, Petr; Wagner, Jochen; Waldmann, Felix; Willing, Svenja; Zaluzhnyy, Ivan; Wurth, Wilfried; Vartanyants, Ivan A.; Röhlsberger, Ralf; von Zanthier, Joachim
2018-02-01
The advent of accelerator-driven free-electron lasers (FEL) has opened new avenues for high-resolution structure determination via diffraction methods that go far beyond conventional X-ray crystallography methods. These techniques rely on coherent scattering processes that require the maintenance of first-order coherence of the radiation field throughout the imaging procedure. Here we show that higher-order degrees of coherence, displayed in the intensity correlations of incoherently scattered X-rays from an FEL, can be used to image two-dimensional objects with a spatial resolution close to or even below the Abbe limit. This constitutes a new approach towards structure determination based on incoherent processes, including fluorescence emission or wavefront distortions, generally considered detrimental for imaging applications. Our method is an extension of the landmark intensity correlation measurements of Hanbury Brown and Twiss to higher than second order, paving the way towards determination of structure and dynamics of matter in regimes where coherent imaging methods have intrinsic limitations.
Li, Yuelin; Root, James C; Atkinson, Thomas M; Ahles, Tim A
2016-06-01
Patient-reported cognition generally exhibits poor concordance with objectively assessed cognitive performance. In this article, we introduce latent regression Rasch modeling and provide a step-by-step tutorial for applying Rasch methods as an alternative to traditional correlation, to better clarify the relationship between self-report and objective cognitive performance. An example analysis using these methods is also included. An introduction to latent regression Rasch modeling is provided, together with a tutorial on implementing it in the JAGS programming language for Bayesian posterior parameter estimates. In the example analysis, data from a longitudinal neurocognitive outcomes study of 132 breast cancer patients and 45 non-cancer matched controls, which included self-report and objective performance measures pre- and post-treatment, were analyzed using both conventional and latent regression Rasch model approaches. Consistent with previous research, conventional correlations between neurocognitive decline and self-reported problems were generally near zero. In contrast, latent regression Rasch modeling found statistically reliable associations between objective attention and processing-speed measures and self-reported Attention and Memory scores. Latent regression Rasch modeling, together with correlation of specific self-reported cognitive domains with neurocognitive measures, helps to clarify the relationship of self-report with objective performance. While the majority of patients attribute their cognitive difficulties to memory decline, the Rasch modeling suggests the importance of processing speed and initial learning. To encourage the use of this method, a step-by-step guide and programming code for implementation are provided. Implications of this method for cognitive outcomes research are discussed. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Ulloa, Alvaro; Jingyu Liu; Vergara, Victor; Jiayu Chen; Calhoun, Vince; Pattichis, Marios
2014-01-01
In the biomedical field, current technology allows for the collection of multiple data modalities from the same subject. In consequence, there is an increasing interest in methods to analyze multi-modal data sets. Methods based on independent component analysis have proven to be effective in jointly analyzing multiple modalities, including brain imaging and genetic data. This paper describes a new algorithm, three-way parallel independent component analysis (3pICA), for jointly identifying genomic loci associated with brain function and structure. The proposed algorithm relies on the use of multi-objective optimization methods to identify correlations among the modalities and maximally independent sources within each modality. We test the robustness of the proposed approach by varying the effect size, cross-modality correlation, noise level, and dimensionality of the data. Simulation results suggest that 3pICA is robust to data with SNR levels from 0 to 10 dB and effect sizes from 0 to 3, while performing best with high cross-modality correlations and more than one subject per 1,000 variables. In an experimental study with 112 human subjects, the method identified links between a genetic component (pointing to brain function and mental disorder associated genes, including PPP3CC, KCNQ5, and CYP7B1), a functional component related to signal decreases in the default mode network during the task, and a brain structure component indicating increases of gray matter in brain regions of the default mode region. Although such findings need further replication, the simulation and in-vivo results validate the three-way parallel ICA algorithm presented here as a useful tool in biomedical data decomposition applications.
THESEUS: maximum likelihood superpositioning and analysis of macromolecular structures
Theobald, Douglas L.; Wuttke, Deborah S.
2008-01-01
THESEUS is a command-line program for performing maximum likelihood (ML) superpositions and analysis of macromolecular structures. While conventional superpositioning methods use ordinary least-squares (LS) as the optimization criterion, ML superpositions provide substantially improved accuracy by down-weighting variable structural regions and by correcting for correlations among atoms. ML superpositioning is robust and insensitive to the specific atoms included in the analysis, and thus it does not require subjective pruning of selected variable atomic coordinates. Output includes both likelihood-based and frequentist statistics for accurate evaluation of the adequacy of a superposition and for reliable analysis of structural similarities and differences. THESEUS performs principal components analysis for analyzing the complex correlations found among atoms within a structural ensemble. PMID:16777907
Trifasciatosides A-J, Steroidal Saponins from Sansevieria trifasciata.
Teponno, Rémy Bertrand; Tanaka, Chiaki; Jie, Bai; Tapondjou, Léon Azefack; Miyamoto, Tomofumi
2016-01-01
Four previously unreported steroidal saponins, trifasciatosides A-D (1-4), three pairs of previously undescribed steroidal saponins, trifasciatosides E-J (5a, b-7a, b) including acetylated ones, together with twelve known compounds were isolated from the n-butanol soluble fraction of the methanol extract of Sansevieria trifasciata. Their structures were elucidated on the basis of detailed spectroscopic analysis, including ¹H-NMR, ¹³C-NMR, ¹H-¹H correlated spectroscopy (COSY), heteronuclear single quantum coherence (HSQC), heteronuclear multiple bond connectivity (HMBC), total correlated spectroscopy (TOCSY), nuclear Overhauser enhancement and exchange spectroscopy (NOESY), electrospray ionization-time of flight (ESI-TOF) MS, and chemical methods. Compounds 2, 4, and 7a, b exhibited moderate antiproliferative activity against HeLa cells.
Beliefs, Experience, and Interest in Pharmacotherapy among Smokers with HIV
McQueen, Amy; Shacham, Enbal; Sumner, Walton; Overton, E. Turner
2014-01-01
Objectives: To examine beliefs, prior use, and interest in using pharmacotherapy among people living with HIV/AIDS (PLWHA). Methods: Cross-sectional survey of smokers in a midwestern HIV clinic. Results: The sample (N = 146) was 69% men and 82% African American; 45% were in precontemplation for quitting, and 46% were interested in using pharmacotherapy. Primary reasons for non-use included cost and a belief that they would be able to quit on their own. Physician's assistance was the strongest correlate of prior use. Perceived benefits and self-efficacy were the strongest correlates of willingness to use pharmacotherapy. Conclusions: Future interventions should address misconceptions, perceived benefits, and self-efficacy for using cessation aids. Physicians should offer pharmacotherapy to all smokers. PMID:24629557
Zhang, Chengfei; Wang, Tieshan; Guo, Xuan; Wu, Lili; Qin, Lingling; Liu, Tonghua
2017-01-01
Objective: This study presents a systematic meta-analysis of the correlation between Helicobacter pylori (H. pylori) infection and autoimmune thyroid diseases (AITD). Materials and Methods: Fifteen articles including 3,046 cases were selected (1,716 observational and 1,330 control cases). These data were analyzed using Stata 12.0 meta-analysis software. Results: H. pylori infection was positively correlated with the occurrence of AITD (OR = 2.25, 95% CI: 1.72–2.93). Infection with H. pylori strains positive for the cytotoxin-associated gene A (CagA) was positively correlated with AITD (OR = 1.99, 95% CI: 1.07–3.70). There was no significant difference between infections detected using enzyme-linked immunosorbent assay (ELISA) and other methods (χ2 = 2.151, p = 0.143). Patients with Graves' disease (GD) and Hashimoto's thyroiditis (HT) were more susceptible to H. pylori infection (GD: OR = 2.78, 95% CI: 1.68–4.61; HT: OR = 2.16, 95% CI: 1.44–3.23), while the rate of H. pylori infection did not differ between GD and HT (χ2 = 3.113, p = 0.078). Conclusions: H. pylori infection correlated with GD and HT, and the eradication of H. pylori infection could reduce thyroid autoantibodies. PMID:29383192
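The abstract reports a Stata-based meta-analysis; as an illustration of the underlying pooling step, here is a generic fixed-effect (inverse-variance) odds-ratio combination. The 2x2 counts below are invented for illustration and this is not the authors' actual model:

```python
import math

def pooled_or(studies):
    """Fixed-effect (inverse-variance) pooling of study odds ratios.
    Each study is a 2x2 table (a, b, c, d):
    a = infected cases, b = uninfected cases,
    c = infected controls, d = uninfected controls."""
    num = den = 0.0
    for a, b, c, d in studies:
        log_or = math.log((a * d) / (b * c))
        w = 1.0 / (1/a + 1/b + 1/c + 1/d)   # inverse of the log-OR variance
        num += w * log_or
        den += w
    log_pooled, se = num / den, math.sqrt(1.0 / den)
    lo, hi = math.exp(log_pooled - 1.96 * se), math.exp(log_pooled + 1.96 * se)
    return math.exp(log_pooled), (lo, hi)

# Two hypothetical studies
or_hat, ci = pooled_or([(60, 40, 30, 70), (80, 45, 50, 75)])
```

The pooled OR lands between the individual study ORs, weighted toward the more precise study.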
Petascale Many Body Methods for Complex Correlated Systems
NASA Astrophysics Data System (ADS)
Pruschke, Thomas
2012-02-01
Correlated systems constitute an important class of materials in modern condensed matter physics. Correlations among electrons are at the heart of all ordering phenomena and of many intriguing novel aspects, such as quantum phase transitions or topological insulators, observed in a variety of compounds. Yet theoretically describing these phenomena is still a formidable task, even if one restricts the models used to the smallest possible set of degrees of freedom. Here, modern computer architectures play an essential role, and the joint effort to devise efficient algorithms and implement them on state-of-the-art hardware has become an extremely active field in condensed-matter research. To tackle this task single-handedly is quite obviously not possible. The NSF-OISE funded PIRE collaboration ``Graduate Education and Research in Petascale Many Body Methods for Complex Correlated Systems'' is a successful initiative to bring together leading experts around the world to form a virtual international organization for addressing these emerging challenges and educating the next generation of computational condensed matter physicists. The collaboration includes research groups developing novel theoretical tools to reliably and systematically study correlated solids, experts in the efficient computational algorithms needed to solve the emerging equations, and those able to use modern heterogeneous computer architectures to make them working tools for the growing community.
On the Feynman-Hellmann theorem in quantum field theory and the calculation of matrix elements
Bouchard, Chris; Chang, Chia Cheng; Kurth, Thorsten; ...
2017-07-12
In this paper, we show that the Feynman-Hellmann theorem can be derived from the long Euclidean-time limit of correlation functions determined with functional derivatives of the partition function. Using this insight, we fully develop an improved method for computing matrix elements of external currents utilizing only two-point correlation functions. Our method applies to matrix elements of any external bilinear current, including nonzero momentum transfer, flavor-changing, and two or more current insertion matrix elements. The ability to identify and control all the systematic uncertainties in the analysis of the correlation functions stems from the unique time dependence of the ground-state matrix elements and the fact that all excited states and contact terms are Euclidean-time dependent. We demonstrate the utility of our method with a calculation of the nucleon axial charge using gradient-flowed domain-wall valence quarks on the $N_f=2+1+1$ MILC highly improved staggered quark ensemble with lattice spacing and pion mass of approximately 0.15 fm and 310 MeV, respectively. We show full control over excited-state systematics with the new method and obtain a value of $g_A = 1.213(26)$ with a quark-mass-dependent renormalization coefficient.
Accuracy Evaluation of the Unified P-Value from Combining Correlated P-Values
Alves, Gelio; Yu, Yi-Kuo
2014-01-01
Meta-analysis methods that combine p-values into a single unified p-value are frequently employed to improve confidence in hypothesis testing. An assumption made by most meta-analysis methods is that the p-values to be combined are independent, which may not always be true. To investigate the accuracy of the unified p-value from combining correlated p-values, we have evaluated a family of statistical methods that combine: independent, weighted independent, correlated, and weighted correlated p-values. Statistical accuracy evaluation by combining simulated correlated p-values showed that correlation among p-values can have a significant effect on the accuracy of the combined p-value obtained. Among the statistical methods evaluated, those that weight p-values compute more accurate combined p-values than those that do not. Also, statistical methods that utilize the correlation information have the best performance, producing significantly more accurate combined p-values. In our study we have demonstrated that statistical methods that combine p-values based on the assumption of independence can produce inaccurate p-values when combining correlated p-values, even when the p-values are only weakly correlated. Therefore, to prevent drawing false conclusions during hypothesis testing, our study advises caution when interpreting the p-value obtained from combining p-values of unknown correlation. However, when the correlation information is available, the weighting-capable statistical method, first introduced by Brown and recently modified by Hou, seems to perform the best amongst the methods investigated. PMID:24663491
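The Brown-style combination mentioned above can be sketched as follows. This is a generic implementation of Brown's scaled-chi-square correction to Fisher's method, not the authors' evaluated code, and it assumes the correlation matrix of the underlying test statistics is known:

```python
import numpy as np
from scipy import stats

def browns_method(pvals, corr):
    """Fisher's method with Brown's scaled-chi-square correction for
    correlated p-values. corr is the (assumed known) correlation matrix of
    the underlying statistics; the covariance of the -2*ln(p) terms is
    approximated with Brown's polynomial in r."""
    pvals = np.asarray(pvals, dtype=float)
    k = len(pvals)
    x = -2.0 * np.log(pvals).sum()      # Fisher's statistic
    cov_sum = 0.0
    for i in range(k):
        for j in range(i + 1, k):
            r = corr[i][j]
            cov_sum += 3.263 * r + 0.710 * r**2 + 0.027 * r**3
    mean, var = 2.0 * k, 4.0 * k + 2.0 * cov_sum
    c = var / (2.0 * mean)              # scale factor
    df = 2.0 * mean**2 / var            # effective degrees of freedom
    return stats.chi2.sf(x / c, df)
```

With an identity correlation matrix the result reduces to Fisher's method; positive correlation shrinks the effective degrees of freedom and yields a more conservative combined p-value.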
[Bioinorganic chemical composition of the lens and methods of its investigation].
Avetisov, S E; Novikov, I A; Pakhomova, N A; Motalov, V G
2018-01-01
The bioinorganic chemical composition of the lens of humans and experimental animals (cows, dogs, rats, rabbits) has been analyzed in various studies. In most cases, the studies employed different methods to determine the gross (total) composition of chemical elements and their concentrations in the examined samples. Less frequently, they included an assessment of the distribution of chemical elements in the lens and the correlation of their concentrations with its morphological changes. Chemical elements from all groups (series) of the periodic system were discovered in the lens substance. Despite similar investigation methods, different authors obtained contradictory results on the chemical composition of the lens. This article presents data suggesting a possible correlation between inorganic chemical elements in the lens substance and the development and formation of lenticular opacities. All currently employed methods analyze only a limited number of selected chemical elements in the tissues and do not consider the whole range of elements that can be analyzed with existing technology; furthermore, the majority of studies are conducted on animal model lenses. It is therefore reasonable to continue the development of the chemical microanalysis method by increasing the sensitivity of scanning electron microscopy with energy-dispersive spectroscopy (SEM/EDS), with the purpose of assessing the gross chemical composition and distribution of the elements in the lens substance, as well as revealing possible correlations between element concentrations and morphological changes in the lens.
Kiani, Zahra; Simbar, Masuomeh; Dolatian, Mahrokh; Zayeri, Farid
2016-01-01
Background and Objectives: Women's empowerment is one of the Millennium Development Goals and affects fertility, population stability, and wellbeing. The influence of social determinants of health (SDH) on women's empowerment is documented; however, the correlation between SDH and women's empowerment in fertility has not yet been established. This study was conducted to assess the correlation between social determinants of health and women's empowerment in reproductive decisions. Materials and Methods: This was a descriptive-correlational study of 400 women who attended health centers affiliated with Shahid Beheshti University of Medical Sciences, Tehran, Iran, recruited using a multistage cluster sampling method. The tools for data collection were 6 questionnaires: 1) socio-demographic characteristics, 2) women's empowerment in reproductive decision-making, 3) perceived social support, 4) self-esteem, 5) marital satisfaction, and 6) access to health services. Data were analyzed with SPSS-17 using Pearson and Spearman correlation tests. Results: The mean score of women's empowerment in reproductive decision making was 82.54 ± 14.00 (mean ± SD) out of a maximum of 152. All structural and intermediate variables were correlated with women's empowerment in reproductive decisions. The highest correlations were between education (among structural determinants; r = 0.44, P < 0.001) and self-esteem (among intermediate determinants; r = 0.34, P < 0.001) with women's empowerment in fertility decision making. Conclusion: Social determinants of health have a significant correlation with women's empowerment in reproductive decision-making. PMID:27157184
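The Pearson and Spearman tests used in the analysis can be reproduced with standard tools; the data below are simulated stand-ins, not the study's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical predictor (years of schooling) and noisy outcome score
education = rng.integers(0, 20, size=400).astype(float)
empowerment = 3.0 * education + rng.normal(0.0, 10.0, size=400)

r, p_pearson = stats.pearsonr(education, empowerment)       # linear association
rho, p_spearman = stats.spearmanr(education, empowerment)   # rank (monotonic) association
```

Pearson captures linear association; Spearman, being rank-based, is the safer choice for ordinal questionnaire scores.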
Feasibility study of parallel optical correlation-decoding analysis of lightning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Descour, M.R.; Sweatt, W.C.; Elliott, G.R.
The optical correlator described in this report is intended to serve as an attention-focusing processor. The objective is to narrowly bracket the range of a parameter value that characterizes the correlator input. The input is a waveform collected by a satellite-borne receiver. In the correlator, this waveform is simultaneously correlated with an ensemble of ionosphere impulse-response functions, each corresponding to a different total-electron-count (TEC) value. We have found that correlation is an effective method of bracketing the range of TEC values likely to be represented by the input waveform. High accuracy in a computational sense is not required of the correlator. Binarization of the impulse-response functions and the input waveforms prior to correlation results in a lower correlation-peak-to-background-fluctuation (signal-to-noise) ratio than the peak that is obtained when all waveforms retain their grayscale values. The results presented in this report were obtained by means of an acousto-optic correlator previously developed at SNL as well as by simulation. An optical-processor architecture optimized for 1D correlation of long waveforms characteristic of this application is described. Discussions of correlator components, such as optics, acousto-optic cells, digital micromirror devices, laser diodes, and VCSELs, are included.
Zhou, Yunyi; Tao, Chenyang; Lu, Wenlian; Feng, Jianfeng
2018-04-20
Functional connectivity is among the most important tools for studying the brain. The correlation coefficient between time series of different brain areas is the most popular method to quantify functional connectivity. In practice, the correlation coefficient assumes the data to be temporally independent; however, brain time series can exhibit significant temporal auto-correlation. We propose a widely applicable method for correcting for temporal auto-correlation. We considered two types of time series models: (1) the auto-regressive-moving-average model, and (2) a nonlinear dynamical system model with noisy fluctuations, and derived their respective asymptotic distributions of the correlation coefficient. These two types of models are the most commonly used in neuroscience studies, and we show that their asymptotic distributions share a unified expression. We verified the validity of our method and showed that it has sufficient statistical power for detecting true correlations in numerical experiments. Employing our method on a real dataset yields a more robust functional network and higher classification accuracy than conventional methods. Our method robustly controls the type I error while maintaining sufficient statistical power for detecting true correlations in numerical experiments where existing methods measuring association (linear and nonlinear) fail. In this work, we proposed a widely applicable approach for correcting the effect of temporal auto-correlation on functional connectivity. Empirical results favor the use of our method in functional network analysis. Copyright © 2018. Published by Elsevier B.V.
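The paper derives model-specific asymptotic distributions; as a simpler illustration of the same idea (shrinking the effective sample size when both series are auto-correlated), here is a Bartlett/Quenouille-style AR(1) correction. It is a stand-in, not the authors' method:

```python
import numpy as np
from scipy import stats

def ar1_coeff(x):
    """Lag-1 autocorrelation estimate."""
    x = x - x.mean()
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

def corrected_corr_test(x, y):
    """Pearson r tested against an AR(1)-based effective sample size."""
    r = float(np.corrcoef(x, y)[0, 1])
    a, b = ar1_coeff(x), ar1_coeff(y)
    n_eff = len(x) * (1.0 - a * b) / (1.0 + a * b)   # effective sample size
    dof = max(n_eff - 2.0, 1.0)
    t = r * np.sqrt(dof / (1.0 - r**2))
    p = 2.0 * stats.t.sf(abs(t), dof)
    return r, p

# Two independent but strongly auto-correlated AR(1) series
rng = np.random.default_rng(1)
nx, ny = rng.normal(size=500), rng.normal(size=500)
x, y = np.empty(500), np.empty(500)
x[0], y[0] = nx[0], ny[0]
for i in range(1, 500):
    x[i] = 0.9 * x[i - 1] + nx[i]
    y[i] = 0.9 * y[i - 1] + ny[i]
r, p = corrected_corr_test(x, y)
```

With smooth series like these, the naive test (using the full sample size) overstates significance; the corrected p-value is larger, which is the type-I-error control the paper is after.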
NASA Astrophysics Data System (ADS)
Li, Shaohong L.; Truhlar, Donald G.
2017-02-01
Analytic potential energy surfaces (PESs) and state couplings of the ground and two lowest singlet excited states of thioanisole (C6H5SCH3) are constructed in a diabatic representation based on electronic structure calculations including dynamic correlation. They cover all 42 internal degrees of freedom and a wide range of geometries including the Franck-Condon region and the reaction valley along the breaking S-CH3 bond with the full ranges of the torsion angles. The parameters in the PESs and couplings are fitted to the results of smooth diabatic electronic structure calculations including dynamic electron correlation by the extended multi-configurational quasi-degenerate perturbation theory method for the adiabatic state energies followed by diabatization by the fourfold way. The fit is accomplished by the anchor points reactive potential method with two reactive coordinates and 40 nonreactive degrees of freedom, where the anchor-point force fields are obtained with a locally modified version of the QuickFF package. The PESs and couplings are suitable for study of the topography of the trilayer potential energy landscape and for electronically nonadiabatic molecular dynamics simulations of the photodissociation of the S-CH3 bond.
Boyen, Peter; Van Dyck, Dries; Neven, Frank; van Ham, Roeland C H J; van Dijk, Aalt D J
2011-01-01
Correlated motif mining (CMM) is the problem of finding overrepresented pairs of patterns, called motifs, in sequences of interacting proteins. Algorithmic solutions for CMM thereby provide a computational method for predicting binding sites for protein interaction. In this paper, we adopt a motif-driven approach where the support of candidate motif pairs is evaluated in the network. We experimentally establish the superiority of the Chi-square-based support measure over other support measures. Furthermore, we show that CMM is an NP-hard problem for a large class of support measures (including Chi-square) and reformulate the search for correlated motifs as a combinatorial optimization problem. We then present the generic metaheuristic SLIDER, which uses steepest ascent with a neighborhood function based on sliding motifs and employs the Chi-square-based support measure. We show that SLIDER outperforms existing motif-driven CMM methods and scales to large protein-protein interaction networks. The SLIDER implementation and the data used in the experiments are available at http://bioinformatics.uhasselt.be.
Brambila, Danilo S; Harvey, Alex G; Houfek, Karel; Mašín, Zdeněk; Smirnova, Olga
2017-08-02
We present the first ab initio multi-channel photoionization calculations for NO₂ in the vicinity of the ²A₁/²B₂ conical intersection, for a range of nuclear geometries, using our newly developed set of tools based on the ab initio multichannel R-matrix method. Electronic correlation is included in both the neutral and the scattering states of the molecule via configuration interaction. Configuration mixing is especially important around conical intersections and avoided crossings, both pertinent for NO₂, and manifests itself via significant variations in photoelectron angular distributions. The method allows for a balanced and accurate description of photoionization/photorecombination for a number of different ionic channels in a wide range of photoelectron energies up to 100 eV. A proper account of electron correlations is crucial for interpreting time-resolved signals in photoelectron spectroscopy and high harmonic generation (HHG) from polyatomic molecules.
Chang, Hong; Wang, Xiaojuan; Yang, Xin; Song, Haiqing; Qiao, Yuchen; Liu, Jia
2017-02-01
Objective: Intravenous thrombolysis with recombinant tissue plasminogen activator (rt-PA) is considered the most effective treatment method for AIS; however, it is associated with a risk of hemorrhage. We analyzed the risk factors for digestive and urologic hemorrhage during rt-PA therapy. Methods: We retrospectively analyzed patients with AIS who underwent intravenous thrombolysis with rt-PA during a 5-year period in a Chinese stroke center. Data on the demographics, medical history, laboratory test results, and clinical outcomes were collected. Results: 338 patients with AIS were eligible and included. Logistic regression multivariate analysis showed that gastric catheter was significantly correlated with digestive hemorrhage, while age and urinary catheter were significantly correlated with urologic hemorrhage. Most hemorrhagic events were associated with catheterization after 1 to 24 hours of rt-PA therapy. Conclusions: In summary, gastric and urinary catheters were correlated with digestive and urologic hemorrhage in patients with AIS undergoing rt-PA therapy. Well-designed controlled studies with large samples are required to confirm our findings.
Haze variation in a valley region and its affecting factors
NASA Astrophysics Data System (ADS)
Liu, Yinge; Zhao, Aling; Wang, Yan; Wang, Shaoxiong; Dang, Caoni
2018-02-01
Haze does great harm to the environment and human health. Based on daily meteorological observation data, including visibility, relative humidity, wind speed, temperature, air pollution index, and weather records for the Baoji region of China, and using the least squares method, wavelet analysis, and correlation analysis, the temporal and spatial characteristics of haze were analyzed, and the factors affecting haze change were discussed. The results showed that haze mainly occurs in plain areas, while in hilly and mountainous areas its frequency is relatively low. Overall, the annual average haze is decreasing; the reduction is most obvious in winter and spring, whereas in summer haze is increasing. Haze exhibits a 5-year short period and 10-year and 15-year long-term cycles. Moreover, temperature and wind speed were significantly negatively correlated with haze, while relative humidity was significantly positively correlated with it. These studies provide a basis for atmospheric environmental monitoring and management.
Node synchronization schemes for the Big Viterbi Decoder
NASA Technical Reports Server (NTRS)
Cheung, K.-M.; Swanson, L.; Arnold, S.
1992-01-01
The Big Viterbi Decoder (BVD), currently under development for the DSN, includes three separate algorithms to acquire and maintain node and frame synchronization. The first measures the number of decoded bits between two consecutive renormalization operations (renorm rate), the second detects the presence of the frame marker in the decoded bit stream (bit correlation), while the third searches for an encoded version of the frame marker in the encoded input stream (symbol correlation). A detailed account of the operation of the three methods is given, along with a performance comparison.
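The bit-correlation step, searching the decoded bit stream for the frame marker within a mismatch budget, can be sketched as follows (the marker and stream are toy values, not the DSN frame marker):

```python
import numpy as np

def marker_offsets(bits, marker, max_mismatches=0):
    """Slide the frame marker along the decoded bit stream and return the
    offsets where the Hamming distance is within the mismatch budget."""
    bits = np.asarray(bits)
    marker = np.asarray(marker)
    m = len(marker)
    hits = []
    for off in range(len(bits) - m + 1):
        if int(np.sum(bits[off:off + m] != marker)) <= max_mismatches:
            hits.append(off)
    return hits

stream = [0, 0, 0, 1, 0, 1, 1, 0]
offsets = marker_offsets(stream, [1, 0, 1, 1])  # → [3]
```

Allowing a small mismatch budget makes the search tolerant of residual decoder bit errors, at the cost of occasional false locks on marker-like data.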
Novel developments and applications of two-dimensional correlation spectroscopy
NASA Astrophysics Data System (ADS)
Park, Yeonju; Noda, Isao; Jung, Young Mee
2016-11-01
A comprehensive survey review of new and noteworthy developments of 2D correlation spectroscopy (2DCOS) and its applications for the last two years is compiled. This review covers not only journal articles and book chapters but also books, proceedings, and review articles published on 2DCOS, numerous significant new concepts of 2DCOS, patents and publication trends. Noteworthy experimental practices in the field of 2DCOS, including types of analytical probes employed, various perturbation methods used in experiments, and pertinent examples of fundamental and practical applications, are also reviewed.
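The synchronous and asynchronous 2D correlation spectra at the core of 2DCOS can be computed directly from a set of perturbation-dependent spectra via Noda's formulation; a minimal sketch with simulated spectra (the array sizes are arbitrary):

```python
import numpy as np

def noda_2dcos(dynamic):
    """Synchronous and asynchronous generalized 2D correlation spectra
    (Noda's formulation). dynamic: (m perturbations, n channels) array of
    mean-centered spectra."""
    m = dynamic.shape[0]
    sync = dynamic.T @ dynamic / (m - 1)
    # Hilbert-Noda transformation matrix: N[j,k] = 1/(pi*(k-j)) for j != k, else 0
    j, k = np.meshgrid(np.arange(m), np.arange(m), indexing="ij")
    noda = np.where(j == k, 0.0, 1.0 / (np.pi * np.where(j == k, 1, k - j)))
    asyn = dynamic.T @ noda @ dynamic / (m - 1)
    return sync, asyn

# Simulated perturbation series: 8 spectra, 30 spectral channels
rng = np.random.default_rng(2)
spectra = rng.normal(size=(8, 30))
dyn = spectra - spectra.mean(axis=0)   # subtract the reference (mean) spectrum
sync, asyn = noda_2dcos(dyn)
```

By construction the synchronous map is symmetric and the asynchronous map antisymmetric; cross-peaks in the latter indicate sequential (out-of-phase) intensity changes.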
Optical coherence tomography for the quantitative study of cerebrovascular physiology
Srinivasan, Vivek J; Atochin, Dmitriy N; Radhakrishnan, Harsha; Jiang, James Y; Ruvinskaya, Svetlana; Wu, Weicheng; Barry, Scott; Cable, Alex E; Ayata, Cenk; Huang, Paul L; Boas, David A
2011-01-01
Doppler optical coherence tomography (DOCT) and OCT angiography are novel methods to investigate cerebrovascular physiology. In the rodent cortex, DOCT flow displays features characteristic of cerebral blood flow, including conservation along nonbranching vascular segments and at branch points. Moreover, DOCT flow values correlate with hydrogen clearance flow values when both are measured simultaneously. These data validate DOCT as a noninvasive quantitative method to measure tissue perfusion over a physiologic range. PMID:21364599
38 CFR 1.17 - Evaluation of studies relating to health effects of radiation exposure.
Code of Federal Regulations, 2012 CFR
2012-07-01
... health effects of radiation exposure. (a) From time to time, the Secretary shall publish evaluations of... paragraph a valid study is one which: (i) Has adequately described the study design and methods of data... studies affecting epidemiological assessments including case series, correlational studies and studies...
Gambling as an Emerging Health Problem on Campus
ERIC Educational Resources Information Center
Stuhldreher, Wendy L.; Stuhldreher, Thomas J.; Forrest, Kimberly Y-Z
2007-01-01
Objective: The authors documented the prevalence of gambling and correlates to health among undergraduates. Methods: The authors analyzed data from a health-habit questionnaire (gambling questions included) given to students enrolled in a university-required course. Results: Gambling and problems with gambling were more frequent among men than…
Long-Term Outcome in Pyridoxine-Dependent Epilepsy
ERIC Educational Resources Information Center
Bok, Levinus A.; Halbertsma, Feico J.; Houterman, Saskia; Wevers, Ron A.; Vreeswijk, Charlotte; Jakobs, Cornelis; Struys, Eduard; van der Hoeven, Johan H.; Sival, Deborah A.; Willemsen, Michel A.
2012-01-01
Aim: The long-term outcome of the Dutch pyridoxine-dependent epilepsy cohort and correlations between patient characteristics and follow-up data were retrospectively studied. Method: Fourteen patients recruited from a national reference laboratory were included (four males, 10 females, from 11 families; median age at assessment 6y; range 2y…
Reaching More Students through Thinking in Physics
ERIC Educational Resources Information Center
Coletta, Vincent P.
2017-01-01
Thinking in Physics (TIP) is a new curriculum that is more effective than commonly used interactive engagement methods for students who have the greatest difficulty learning physics. Research has shown a correlation between learning in physics and other factors, including scientific reasoning ability. The TIP curriculum addresses those factors.…
Effect of rotational alignment on outcome of total knee arthroplasty
Breugem, Stefan J; van den Bekerom, Michel PJ; Tuinebreijer, Willem E; van Geenen, Rutger C I
2015-01-01
Background and purpose: Poor outcomes have been linked to errors in rotational alignment of total knee arthroplasty components. The aims of this study were to determine the correlation between rotational alignment and outcome, to review the success of revision for malrotated total knee arthroplasty, and to determine whether evidence-based guidelines for malrotated total knee arthroplasty can be proposed. Patients and methods: We conducted a systematic review including all studies reporting on both rotational alignment and functional outcome. Comparable studies were used in a correlation analysis and results of revision were analyzed separately. Results: 846 studies were identified, 25 of which met the inclusion criteria. From this selection, 11 studies could be included in the correlation analysis. A medium positive correlation (ρ = 0.44, 95% CI: 0.27–0.59) and a large positive correlation (ρ = 0.68, 95% CI: 0.64–0.73) were found between external rotation of the tibial component and the femoral component, respectively, and the Knee Society score. Revision for malrotation gave positive results in all 6 studies in this field. Interpretation: Medium and large positive correlations were found between tibial and femoral component rotational alignment on the one hand and better functional outcome on the other. Revision of malrotated total knee arthroplasty may be successful. However, a clear cutoff point for revision for malrotated total knee arthroplasty components could not be identified. PMID:25708694
Nonlinear analysis of structures. [within framework of finite element method
NASA Technical Reports Server (NTRS)
Armen, H., Jr.; Levine, H.; Pifko, A.; Levy, A.
1974-01-01
The development of nonlinear analysis techniques within the framework of the finite-element method is reported. Although the emphasis is on nonlinearities associated with material behavior, a general treatment of geometric nonlinearity, alone or in combination with plasticity, is included, and applications are presented for a class of problems categorized as axisymmetric shells of revolution. The scope of the nonlinear analysis capabilities includes: (1) a membrane stress analysis, (2) bending and membrane stress analysis, (3) analysis of thick and thin axisymmetric bodies of revolution, (4) a general three-dimensional analysis, and (5) analysis of laminated composites. Applications of the methods are made to a number of sample structures. Correlation with available analytic or experimental data ranges from good to excellent.
Application of Fourier transforms for microwave radiometric inversions
NASA Technical Reports Server (NTRS)
Holmes, J. J.; Balanis, C. A.; Truman, W. M.
1975-01-01
Existing microwave radiometer technology now provides a suitable method for remote determination of the ocean surface's absolute brightness temperature. To extract the brightness temperature of the water from the antenna temperature, an unstable Fredholm integral equation of the first kind is solved. Fourier transform techniques are used to invert the integral after it is placed into a cross correlation form. Application and verification of the methods to a two-dimensional modeling of a laboratory wave tank system are included. The instability of the ill-posed Fredholm equation is examined and a restoration procedure is included which smooths the resulting oscillations. With the recent availability and advances of fast Fourier transform (FFT) techniques, the method presented becomes very attractive in the evaluation of large quantities of data.
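The inversion strategy described above, recasting the ill-posed integral in correlation form, inverting with the FFT, and then smoothing the resulting oscillations, can be sketched in a few lines. This is an illustrative regularized FFT deconvolution, not the paper's actual restoration procedure; the function name and the Tikhonov-style damping term `eps` are assumptions:

```python
import numpy as np

def fft_deconvolve(measured, kernel, eps=1e-2):
    """Invert a convolution-type integral equation in the Fourier domain.
    The damping term eps plays the role of the smoothing/restoration step:
    it tames the oscillations that a naive division by the kernel spectrum
    would amplify in an ill-posed Fredholm problem."""
    M = np.fft.fft(measured)
    K = np.fft.fft(kernel, n=len(measured))
    # Regularized division: conj(K) * M / (|K|^2 + eps) instead of M / K
    est = np.conj(K) * M / (np.abs(K) ** 2 + eps)
    return np.real(np.fft.ifft(est))
```

With a smooth (Gaussian-like) kernel, small `eps` recovers the low-frequency content of the true signal while suppressing noise blow-up at frequencies where the kernel spectrum is tiny.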
NASA Technical Reports Server (NTRS)
Ioup, George E.; Ioup, Juliette W.
1988-01-01
This thesis reviews the technique established to clear channels in the power spectral estimate by applying linear combinations of well-known window functions to the autocorrelation function. Windowing the autocorrelation function is needed because the true autocorrelation is generally not available when the power spectral estimate is formed. When applied, the windows reduce the effect that truncating the data, and possibly the autocorrelation, has on the power spectral estimate. Previous work showed that a single channel can be cleared, allowing detection of a small peak in the presence of a large peak in the power spectral estimate. The utility of the method depends on its robustness across different input situations. In this paper we extend the analysis to include clearing up to three channels. We examine the relative positions of the spikes to each other and also the effect of taking different percentages of lags of the autocorrelation in the power spectral estimate. This method could have application wherever the power spectrum is used; an example is beamforming for source location, where a small target can be located next to a large target. Other possibilities extend into seismic data processing. As the method becomes more automated, other applications may present themselves.
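The windowed-autocorrelation route to the power spectral estimate that this work builds on can be sketched as a basic Blackman-Tukey estimator. This is a minimal illustration only; the channel-clearing linear combinations of windows described in the abstract are not implemented here:

```python
import numpy as np

def blackman_tukey_psd(x, max_lag, window=np.hanning):
    """Power spectral estimate from the windowed autocorrelation
    (Blackman-Tukey).  Tapering the estimated autocorrelation reduces
    the leakage caused by truncating the lags."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    # Biased autocorrelation estimate for lags 0 .. max_lag-1
    acf = np.correlate(x, x, mode="full")[n - 1:n - 1 + max_lag] / n
    w = window(2 * max_lag)[max_lag:]          # one-sided taper, peak at lag 0
    aw = acf * w
    r = np.concatenate([aw, aw[:0:-1]])        # symmetric extension of lags
    psd = np.real(np.fft.fft(r))               # real for a symmetric sequence
    return psd[: len(psd) // 2]                # positive frequencies only
```

Taking different percentages of lags corresponds to varying `max_lag`, trading frequency resolution against variance of the estimate.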
Meyer, Georg F; Spray, Amy; Fairlie, Jo E; Uomini, Natalie T
2014-01-01
Current neuroimaging techniques with high spatial resolution constrain participant motion so that many natural tasks cannot be carried out. The aim of this paper is to show how a time-locked correlation-analysis of cerebral blood flow velocity (CBFV) lateralization data, obtained with functional TransCranial Doppler (fTCD) ultrasound, can be used to infer cerebral activation patterns across tasks. In a first experiment we demonstrate that the proposed analysis method results in data that are comparable with the standard Lateralization Index (LI) for within-task comparisons of CBFV patterns, recorded during cued word generation (CWG) at two difficulty levels. In the main experiment we demonstrate that the proposed analysis method shows correlated blood-flow patterns for two different cognitive tasks that are known to draw on common brain areas, CWG, and Music Synthesis. We show that CBFV patterns for Music and CWG are correlated only for participants with prior musical training. CBFV patterns for tasks that draw on distinct brain areas, the Tower of London and CWG, are not correlated. The proposed methodology extends conventional fTCD analysis by including temporal information in the analysis of cerebral blood-flow patterns to provide a robust, non-invasive method to infer whether common brain areas are used in different cognitive tasks. It complements conventional high resolution imaging techniques.
Personality assessment in snow leopards (Uncia uncia).
Gartner, Marieke Cassia; Powell, David
2012-01-01
Knowledge of individual personality is a useful tool in animal husbandry and can be used effectively to improve welfare. This study assessed personality in snow leopards (Uncia uncia) by examining their reactions to six novel objects and comparing them to personality assessments based on a survey completed by zookeepers. The objectives were to determine whether these methods could detect differences in personality, including age and sex differences, and to assess whether the two methods yielded comparable results. Both keeper assessments and novel object tests identified age, sex, and individual differences in snow leopards. Five dimensions of personality were found based on keepers' ratings: Active/Vigilant, Curious/Playful, Calm/Self-Assured, Timid/Anxious, and Friendly to Humans. The dimension Active/Vigilant was significantly positively correlated with the number of visits to the object, time spent locomoting, and time spent in exploratory behaviors. Curious/Playful was significantly positively correlated with the number of visits to the object, time spent locomoting, and time spent in exploratory behaviors. However, other dimensions (Calm/Self-Assured, Friendly to Humans, and Timid/Anxious) did not correlate with novel-object test variables and possible explanations for this are discussed. Thus, some of the traits and behaviors were correlated between assessment methods, showing the novel-object test to be useful in assessing an animal's personality should a keeper be unable to, or to support a keeper's assessment. © 2011 Wiley Periodicals, Inc.
Kelly, Mary T; Blaise, Alain; Larroque, Michel
2010-11-19
This paper reports a new, simple, rapid and economical method for routine determination of 24 amino acids and biogenic amines in grapes and wine. No sample clean-up is required and total run time including column re-equilibration is less than 40 min. Following automated in-loop pre-column derivatisation with an o-phthaldialdehyde, N-acetyl-l-cysteine reagent, compounds were separated on a 3 mm × 25 cm C(18) column using a binary mobile phase. The method was validated in the range 0.25-10 mg/l; repeatability was less than 3% RSD and the intermediate precision ranged from 2 to 7% RSD. The method was shown to be linear by the 'lack of fit' test and the accuracy was between 97 and 101%. The LLOQ varied between 10 μg/l for aspartic and glutamic acids, ethanolamine and GABA, and 100 μg/l for tyrosine, phenylalanine, putrescine and cadaverine. The method was applied to grapes, white wine, red wine, honey and three species of physalis fruit. Grapes and physalis fruit were crushed, sieved, centrifuged and diluted 1/20 and 1/100, respectively, for analysis; wines and honeys were simply diluted 10-fold. It was shown using this method that the amino acid content of grapes was strongly correlated with berry volume, moderately correlated with sugar concentration and inversely correlated with total acidity. Copyright © 2010 Elsevier B.V. All rights reserved.
Motamedzade, Majid; Ashuri, Mohammad Reza; Golmohammadi, Rostam; Mahjub, Hossein
2011-06-13
Over recent decades, numerous observational methods have been developed to assess the risk factors of work-related musculoskeletal disorders (WMSDs). Rapid Entire Body Assessment (REBA) and Quick Exposure Check (QEC) are two general methods in this field. This study aimed to compare ergonomic risk assessment outputs from QEC and REBA in terms of agreement in the distribution of postural loading scores based on analysis of working postures. This cross-sectional study was conducted in an engine oil company in which 40 jobs were studied. All jobs were observed by a trained occupational health practitioner. Job information was collected to ensure the completion of the ergonomic risk assessment tools, including QEC and REBA. The results revealed a significant correlation between the final scores (r = 0.731) and the action levels (r = 0.893) of the two applied methods. Comparison between the action levels and final scores of the two methods showed no significant difference among working departments. Most of the studied postures fell into the low and moderate risk levels in the QEC assessment (low risk = 20%, moderate risk = 50%, high risk = 30%) and in the REBA assessment (low risk = 15%, moderate risk = 60%, high risk = 25%). There is a significant correlation between the two methods. They agree strongly in identifying risky jobs and determining the potential risk for incidence of WMSDs. Therefore, researchers may apply both methods interchangeably for postural risk assessment in appropriate working environments.
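The agreement statistic used here, a correlation between the paired final scores of the two assessment methods, can be reproduced on two score lists with a rank correlation. The sketch below is a minimal Spearman implementation assuming no tied ranks; with ties, averaged ranks would be needed, as scipy.stats.spearmanr provides:

```python
import numpy as np

def spearman_rho(a, b):
    """Spearman rank correlation between two paired score lists,
    e.g. REBA vs. QEC final scores for the same set of jobs.
    Sketch only: ties are not averaged."""
    def rank(v):
        order = np.argsort(v)
        r = np.empty(len(v))
        r[order] = np.arange(len(v))
        return r
    ra, rb = rank(np.asarray(a)), rank(np.asarray(b))
    # Pearson correlation of the ranks is the Spearman coefficient
    return np.corrcoef(ra, rb)[0, 1]
```

Perfectly monotone agreement yields rho = 1, perfect disagreement rho = -1; values near the r = 0.731 reported above indicate strong but imperfect agreement.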
Buckling analysis and test correlation of hat stiffened panels for hypersonic vehicles
NASA Technical Reports Server (NTRS)
Percy, Wendy C.; Fields, Roger A.
1990-01-01
The paper discusses the design, analysis, and test of hat stiffened panels subjected to a variety of thermal and mechanical load conditions. The panels were designed using data from structural optimization computer codes and finite element analysis. Test methods included the grid shadow moire method and a single gage force stiffness method. The agreement between the test data and analysis provides confidence in the methods that are currently being used to design structures for hypersonic vehicles. The agreement also indicates that post buckled strength may potentially be used to reduce the vehicle weight.
Method for Determining Optimum Injector Inlet Geometry
NASA Technical Reports Server (NTRS)
Myers, W. Neill (Inventor); Trinh, Huu P. (Inventor)
2015-01-01
A method for determining the optimum inlet geometry of a liquid rocket engine swirl injector includes obtaining a throttleable level phase value, volume flow rate, chamber pressure, liquid propellant density, inlet injector pressure, desired target spray angle and desired target optimum delta pressure value between an inlet and a chamber for a plurality of engine stages. The method calculates the tangential inlet area for each throttleable stage. The method also uses correlation between the tangential inlet areas and delta pressure values to calculate the spring displacement and variable inlet geometry of a liquid rocket engine swirl injector.
Ab initio method for calculating total cross sections
NASA Technical Reports Server (NTRS)
Bhatia, A. K.; Schneider, B. I.; Temkin, A.
1993-01-01
A method for calculating total cross sections without formally including nonelastic channels is presented. The idea is to use a one-channel T-matrix variational principle with a complex correlation function. The derived T matrix is therefore not unitary. Elastic scattering is calculated from |T|^2, but total scattering is derived from the imaginary part of T using the optical theorem. The method is applied to the spherically symmetric model of electron-hydrogen scattering. No spurious structure arises; results for sigma(el) and sigma(total) are in excellent agreement with calculations of Callaway and Oza (1984). The method has wide potential applicability.
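The optical theorem invoked here is a textbook relation, not specific to this paper: the elastic cross section is built from the squared modulus of the scattering amplitude, while the total cross section follows from its imaginary part in the forward direction,

```latex
\sigma_{\mathrm{el}} = \int |f(\theta)|^{2}\, d\Omega ,
\qquad
\sigma_{\mathrm{tot}} = \frac{4\pi}{k}\,\operatorname{Im} f(0),
```

with $f$ the scattering amplitude and $k$ the incident wave number. Because the derived T matrix is not unitary, $\sigma_{\mathrm{tot}}$ obtained this way can exceed $\sigma_{\mathrm{el}}$, the difference accounting for the nonelastic channels that are never treated explicitly.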
NASA Astrophysics Data System (ADS)
Huang, Xinchuan; Schwenke, David W.; Lee, Timothy J.
2008-12-01
A global potential energy surface (PES) that includes short and long range terms has been determined for the NH3 molecule. The singles and doubles coupled-cluster method that includes a perturbational estimate of connected triple excitations and the internally contracted averaged coupled-pair functional electronic structure methods have been used in conjunction with very large correlation-consistent basis sets, including diffuse functions. Extrapolation to the one-particle basis set limit was performed and core correlation and scalar relativistic contributions were included directly, while the diagonal Born-Oppenheimer correction was added. Our best purely ab initio PES, denoted "mixed," is constructed from two PESs which differ in whether the ic-ACPF higher-order correlation correction was added or not. Rovibrational transition energies computed from the mixed PES agree well with experiment and the best previous theoretical studies, but most importantly the quality does not deteriorate even up to 10,300 cm-1 above the zero-point energy (ZPE). The mixed PES was improved further by empirical refinement using the most reliable J = 0-2 rovibrational transitions in the HITRAN 2004 database. Agreement between high-resolution experiment and rovibrational transition energies computed from our refined PES for J = 0-6 is excellent. Indeed, the root mean square (rms) error for 13 HITRAN 2004 bands for J = 0-2 is 0.023 cm-1 and that for each band is always ⩽0.06 cm-1. For J = 3-5 the rms error is always ⩽0.15 cm-1. This agreement means that transition energies computed with our refined PES should be useful in the assignment of new high-resolution NH3 spectra and in correcting mistakes in previous assignments. Ideas for further improvements to our refined PES and for extension to other isotopologues are discussed.
Band structures in coupled-cluster singles-and-doubles Green's function (GFCCSD)
NASA Astrophysics Data System (ADS)
Furukawa, Yoritaka; Kosugi, Taichi; Nishi, Hirofumi; Matsushita, Yu-ichiro
2018-05-01
We demonstrate that the coupled-cluster singles-and-doubles Green's function (GFCCSD) method is a powerful tool for calculating electronic band structures and total energies, quantities that many theoretical techniques struggle to reproduce. We have calculated single-electron energy spectra via the GFCCSD method, for the first time, for various kinds of systems ranging from ionic to covalent and van der Waals: the one-dimensional LiH chain, the one-dimensional C chain, and the one-dimensional Be chain. We have found that the bandgap becomes narrower than in HF due to the correlation effect. We also show that the band structures obtained from the GFCCSD method successfully include both quasiparticle and satellite peaks. Besides, taking one-dimensional LiH as an example, we discuss the validity of restricting the active space to suppress the computational cost of the GFCCSD method. We show that results calculated without bands that do not contribute to the chemical bonds are in good agreement with full-band calculations. With the GFCCSD method, we can calculate the total energies and spectral functions for periodic systems in an explicitly correlated manner.
Radhakrishnan, Ravi; Yu, Hsiu-Yu; Eckmann, David M.; Ayyaswamy, Portonovo S.
2017-01-01
Traditionally, the numerical computation of particle motion in a fluid is resolved through computational fluid dynamics (CFD). However, resolving the motion of nanoparticles poses additional challenges due to the coupling between the Brownian and hydrodynamic forces. Here, we focus on the Brownian motion of a nanoparticle coupled to adhesive interactions and confining-wall-mediated hydrodynamic interactions. We discuss several techniques that combine CFD methods with the theory of nonequilibrium statistical mechanics in order to simultaneously conserve thermal equipartition and reproduce correct hydrodynamic correlations. These include the fluctuating hydrodynamics (FHD) method, the generalized Langevin method, the hybrid method, and the deterministic method. Through the examples discussed, we also show a top-down multiscale progression of temporal dynamics from the colloidal scales to the molecular scales, together with the associated fluctuations and hydrodynamic correlations. While the motivation and the examples discussed here pertain to nanoscale fluid dynamics and mass transport, the methodologies presented are rather general and can easily be adapted to applications in convective heat transfer. PMID:28035168
3D Simulation of Multiple Simultaneous Hydraulic Fractures with Different Initial Lengths in Rock
NASA Astrophysics Data System (ADS)
Tang, X.; Rayudu, N. M.; Singh, G.
2017-12-01
Hydraulic fracturing is a widely used technique for extracting shale gas. During this process, fractures with various initial lengths are induced in the rock mass by hydraulic pressure. Understanding the mechanism of propagation and the interaction between these induced hydraulic cracks is critical for optimizing the fracking process. In this work, numerical results are presented investigating the effect of in-situ parameters and fluid properties on the growth and interaction of multiple simultaneous hydraulic fractures. A fully coupled 3D fracture simulator, TOUGH-GFEM, is used to simulate the effect of vital parameters, including in-situ stress, initial fracture length, fracture spacing, fluid viscosity, and flow rate, on induced hydraulic fracture growth. The TOUGH-GFEM simulator is based on the 3D finite volume method (FVM) and the partition of unity element method (PUM). The displacement correlation method (DCM) is used to calculate multi-mode (mode I, II, III) stress intensity factors. The maximum principal stress criterion is used for crack propagation. Key words: hydraulic fracturing, TOUGH, partition of unity element method, displacement correlation method, 3D fracturing simulator
Short-range second order screened exchange correction to RPA correlation energies
NASA Astrophysics Data System (ADS)
Beuerle, Matthias; Ochsenfeld, Christian
2017-11-01
Direct random phase approximation (RPA) correlation energies have become increasingly popular as a post-Kohn-Sham correction, due to significant improvements over DFT calculations for properties such as long-range dispersion effects, which are problematic in conventional density functional theory. On the other hand, RPA still has various weaknesses, such as unsatisfactory results for non-isogyric processes. This can in parts be attributed to the self-correlation present in RPA correlation energies, leading to significant self-interaction errors. Therefore a variety of schemes have been devised to include exchange in the calculation of RPA correlation energies in order to correct this shortcoming. One of the most popular RPA plus exchange schemes is the second order screened exchange (SOSEX) correction. RPA + SOSEX delivers more accurate absolute correlation energies and also improves upon RPA for non-isogyric processes. On the other hand, RPA + SOSEX barrier heights are worse than those obtained from plain RPA calculations. To combine the benefits of RPA correlation energies and the SOSEX correction, we introduce a short-range RPA + SOSEX correction. Proof of concept calculations and benchmarks showing the advantages of our method are presented.
Inference for High-dimensional Differential Correlation Matrices *
Cai, T. Tony; Zhang, Anru
2015-01-01
Motivated by differential co-expression analysis in genomics, we consider in this paper estimation and testing of high-dimensional differential correlation matrices. An adaptive thresholding procedure is introduced and theoretical guarantees are given. The minimax rate of convergence is established and the proposed estimator is shown to be adaptively rate-optimal over collections of paired correlation matrices with approximately sparse differences. Simulation results show that the procedure significantly outperforms two other natural methods that are based on separate estimation of the individual correlation matrices. The procedure is also illustrated through an analysis of a breast cancer dataset, which provides evidence at the gene co-expression level that several genes, of which a subset has been previously verified, are associated with breast cancer. Hypothesis testing on the differential correlation matrices is also considered. A test, which is particularly well suited for testing against sparse alternatives, is introduced. In addition, other related problems, including estimation of a single sparse correlation matrix, estimation of the differential covariance matrices, and estimation of the differential cross-correlation matrices, are also discussed. PMID:26500380
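The core object of study, the difference of two sample correlation matrices with small entries suppressed, can be illustrated as follows. This is a simplified sketch with a fixed threshold; the paper's adaptive procedure chooses an entrywise, data-driven threshold and comes with minimax guarantees that this toy version does not:

```python
import numpy as np

def differential_correlation(X1, X2, thresh=0.3):
    """Hard-threshold estimate of the differential correlation matrix
    D = corr(X1) - corr(X2) for two samples-by-features data matrices.
    Entries smaller than `thresh` in magnitude are set to zero,
    reflecting the assumption that D is approximately sparse."""
    R1 = np.corrcoef(X1, rowvar=False)
    R2 = np.corrcoef(X2, rowvar=False)
    D = R1 - R2
    D[np.abs(D) < thresh] = 0.0   # keep only the large differences
    return D
```

Feature pairs whose co-expression genuinely changes between the two conditions survive the thresholding; pairs that merely fluctuate around the same correlation are zeroed out.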
Understanding Zeeman EIT Noise Correlation Spectra in Buffered Rb Vapor
NASA Astrophysics Data System (ADS)
O'Leary, Shannon; Zheng, Aojie; Crescimanno, Michael
2014-05-01
Noise correlation spectroscopy on systems manifesting Electromagnetically Induced Transparency (EIT) holds promise as a simple, robust method for performing high-resolution spectroscopy used in applications such as EIT-based atomic magnetometry and clocks. During laser light's propagation through a resonant medium, interaction with the medium converts laser phase noise into intensity noise. While this noise conversion can diminish the precision of EIT applications, noise correlation techniques transform the noise into a useful spectroscopic tool that can improve the application's precision. Using a single diode laser with large phase noise, we examine laser intensity noise and noise correlations from Zeeman EIT in a buffered Rb vapor. Of particular interest is a narrow noise correlation feature, resonant with EIT, that has been shown in earlier work to be power-broadening resistant at low powers. We report here on our recent experimental work and complementary theoretical modeling on EIT noise spectra, including a study of power broadening of the narrow noise correlation feature. Understanding the nature of the noise correlation spectrum is essential for optimizing EIT-noise applications.
Communication: Time-dependent optimized coupled-cluster method for multielectron dynamics
NASA Astrophysics Data System (ADS)
Sato, Takeshi; Pathak, Himadri; Orimo, Yuki; Ishikawa, Kenichi L.
2018-02-01
Time-dependent coupled-cluster method with time-varying orbital functions, called the time-dependent optimized coupled-cluster (TD-OCC) method, is formulated for multielectron dynamics in an intense laser field. We have successfully derived the equations of motion for CC amplitudes and orthonormal orbital functions based on the real action functional, and implemented the method including double excitations (TD-OCCD) and double and triple excitations (TD-OCCDT) within the optimized active orbitals. The present method is size extensive and gauge invariant, a polynomial cost-scaling alternative to the time-dependent multiconfiguration self-consistent-field method. The first application of the TD-OCC method to intense-laser-driven correlated electron dynamics in the Ar atom is reported.
Estimating consumer familiarity with health terminology: a context-based approach.
Zeng-Treitler, Qing; Goryachev, Sergey; Tse, Tony; Keselman, Alla; Boxwala, Aziz
2008-01-01
Effective health communication is often hindered by a "vocabulary gap" between language familiar to consumers and jargon used in medical practice and research. To present health information to consumers in a comprehensible fashion, we need to develop a mechanism to quantify health terms as being more likely or less likely to be understood by typical members of the lay public. Prior research has used approaches including syllable count, easy word lists, and frequency count, all of which have significant limitations. In this article, we present a new method that predicts consumer familiarity using contextual information. The method was applied to a large query log data set and validated using results from two previously conducted consumer surveys. We measured the correlation between the survey result and the context-based prediction, syllable count, frequency count, and log-normalized frequency count. The correlation coefficient between the context-based prediction and the survey result was 0.773 (p < 0.001), which was higher than the correlation coefficients between the survey result and the syllable count, frequency count, and log-normalized frequency count (p ≤ 0.012). The context-based approach provides a good alternative to the existing term familiarity assessment methods.
Bourget, Philippe; Amin, Alexandre; Vidal, Fabrice; Merlette, Christophe; Troude, Pénélope; Baillet-Guffroy, Arlette
2014-08-15
The purpose of the study was to perform a comparative analysis of the technical performance, respective costs and environmental effect of two invasive analytical methods (HPLC and UV/visible-FTIR) as compared to a new non-invasive analytical technique (Raman spectroscopy). Three pharmacotherapeutic models were used to compare the analytical performances of the three techniques. Statistical inter-method correlation analysis was performed using non-parametric correlation rank tests. The study's economic component combined calculations relative to the depreciation of the equipment and the estimated cost of an AQC unit of work. In all cases, the analytical validation parameters of the three techniques were satisfactory, and strong correlations between the two spectroscopic techniques vs. HPLC were found. In addition, Raman spectroscopy was found to be superior to the other techniques on numerous key criteria, including complete safety for operators and their occupational environment, a non-invasive procedure, no need for consumables, and a low operating cost. Finally, Raman spectroscopy appears superior for technical, economic and environmental objectives, as compared with the other, invasive analytical methods. Copyright © 2014 Elsevier B.V. All rights reserved.
Noar, Seth M; Mehrotra, Purnima
2011-03-01
Traditional theory testing commonly applies cross-sectional (and occasionally longitudinal) survey research to test health behavior theory. Since such correlational research cannot demonstrate causality, a number of researchers have called for the increased use of experimental methods for theory testing. We introduce the multi-methodological theory-testing (MMTT) framework for testing health behavior theory. The MMTT framework introduces a set of principles that broaden the perspective of how we view evidence for health behavior theory. It suggests that while correlational survey research designs represent one method of testing theory, the weaknesses of this approach demand that complementary approaches be applied. Such approaches include randomized lab and field experiments, mediation analysis of theory-based interventions, and meta-analysis. These alternative approaches to theory testing can demonstrate causality in a much more robust way than is possible with correlational survey research methods. Such approaches should thus be increasingly applied in order to more completely and rigorously test health behavior theory. Greater application of research derived from the MMTT may lead researchers to refine and modify theory and ultimately make theory more valuable to practitioners. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
Impact of Self-concept on Preschoolers’ Dental Anxiety and Behavior
Erfanparast, Leila; Vafaei, Ali; Sohrabi, Azin; Ranjkesh, Bahram; Bahadori, Zahra; Pourkazemi, Maryam; Dadashi, Shabnam; Shirazi, Sajjad
2015-01-01
Background and aims. Different factors affect children’s behavior during dental treatment, including psychological and behavioral characteristics. The aim of this study was to evaluate the correlation of self-concept on child’s anxiety and behavior during dental treatment in 4 to 6-year-old children. Materials and methods. A total of 235 preschoolers aged 4 to 6 years were included in this descriptive analytic study. Total self-concept score for each child was assessed according to Primary Self-concept Scale before dental treatment. Child’s anxiety and child’s behavior were assessed, during the restoration of mandibular primary molar, using clinical anxiety rating scale and Frankl Scale, respectively. Spearman’s correlation coefficient was used to evaluate the correlation between the total self-concept score with the results of clinical anxiety rating scale and Frankl Scale. Results. There was a moderate inverse correlation between the self-concept and clinical anxiety rating scale scores (r = -0.545, P < 0.001), and a moderate correlation between the self-concept and child’s behavior scores (r = 0.491, P < 0.001). A strong inverse relation was also found between the anxiety and behavior scores (r = -0.91, P < 0.001). Conclusion. Children with higher self-concept had lower anxiety level and better behavioral feedback during dental treatment. PMID:26697152
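A minimal pure-Python sketch of the statistic used above, Spearman's rank correlation (the scores below are invented toy values, not the study's data):

```python
# Spearman's rho = Pearson correlation computed on the ranks of the data.
# Toy data only; the scale ranges and values are invented for illustration.

def rankdata(values):
    """Assign 1-based average ranks, handling ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # average rank for the tied group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    rx, ry = rankdata(x), rankdata(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

self_concept = [12, 15, 9, 20, 17, 11, 18, 14]   # toy self-concept scores
anxiety      = [ 4,  2, 5,  1,  2,  4,  1,  3]   # toy anxiety ratings
rho = spearman(self_concept, anxiety)             # negative: inverse relation
```

Because the toy anxiety ratings fall as self-concept rises, rho comes out negative, mirroring the inverse correlation the study reports.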
King, Andrew W; Baskerville, Adam L; Cox, Hazel
2018-03-13
An implementation of the Hartree-Fock (HF) method using a Laguerre-based wave function is described and used to accurately study the ground state of two-electron atoms in the fixed nucleus approximation, and, by comparison with fully correlated (FC) energies, to determine accurate electron correlation energies. A variational parameter A is included in the wave function and is shown to rapidly increase the convergence of the energy. The one-electron integrals are solved by series solution and an analytical form is found for the two-electron integrals. This methodology is used to produce accurate wave functions, energies and expectation values for the helium isoelectronic sequence, including at low nuclear charge just prior to electron detachment. Additionally, the critical nuclear charge for binding two electrons within the HF approach is calculated and determined to be Z_C^HF = 1.031 177 528. This article is part of the theme issue 'Modern theoretical chemistry'. © 2018 The Author(s).
Wagner, Glenn J.; Goggin, Kathy; Mindry, Deborah; Beyeza-Kashesya, Jolly; Finocchario-Kessler, Sarah; Woldetsadik, Mahlet Atakilt; Khanakwa, Sarah; Wanyenze, Rhoda K.
2014-01-01
We examined the correlates of use of safer conception methods (SCM) in a sample of 400 Ugandan HIV clients (75% female; 61% on antiretroviral therapy; 61% with HIV-negative or unknown status partners) in heterosexual relationships with fertility intentions. SCM assessed included timed unprotected intercourse, manual self-insemination, sperm washing, and pre-exposure prophylaxis (PrEP). In the 6 months prior to baseline, 47 (12%) reported using timed unprotected intercourse to reduce risk of HIV infection (or re-infection), none had used manual self-insemination or sperm washing, and 2 had used PrEP. In multiple regression analysis, correlates of use of timed unprotected intercourse included greater perceptions of partner’s willingness to use SCM and providers’ stigma of childbearing among people living with HIV, higher SCM knowledge, and desire for a child within the next 6 months. These findings highlight the need for policy and provider training regarding integration of couples’ safer conception counselling into HIV care. PMID:25280448
Zhao, W; Busto, R; Truettner, J; Ginsberg, M D
2001-07-30
The analysis of pixel-based relationships between local cerebral blood flow (LCBF) and mRNA expression can reveal important insights into brain function. Traditionally, LCBF and in situ hybridization studies for genes of interest have been analyzed in separate series. To overcome this limitation and to increase the power of statistical analysis, this study focused on developing a double-label method to measure LCBF and gene expression simultaneously by means of a dual-autoradiography procedure. A 14C-iodoantipyrine autoradiographic LCBF study was first performed. Serial brain sections (12 in this study) were obtained at multiple coronal levels and were processed in the conventional manner to yield quantitative LCBF images. Two replicate sections at each bregma level were then used for in situ hybridization. To eliminate the 14C-iodoantipyrine from these sections, a chloroform-washout procedure was first performed. The sections were then processed for in situ hybridization autoradiography for the probes of interest. This method was tested in Wistar rats subjected to 12 min of global forebrain ischemia by two-vessel occlusion plus hypotension, followed by 2 or 6 h of reperfusion (n=4-6 per group). LCBF and in situ hybridization images for heat shock protein 70 (HSP70) were generated for each rat, aligned by disparity analysis, and analyzed on a pixel-by-pixel basis. This method yielded detailed inter-modality correlation between LCBF and HSP70 mRNA expression. The advantages of this method include reducing the number of experimental animals by one-half, and providing accurate pixel-based correlations between different modalities in the same animals, thus enabling paired statistical analyses. This method can be extended to permit correlation of LCBF with the expression of multiple genes of interest.
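The pixel-by-pixel analysis can be sketched with synthetic co-registered maps (toy numbers and an invented mask rule, not autoradiography data):

```python
import numpy as np

# Sketch of a pixel-wise inter-modality correlation: flatten two aligned
# maps and correlate them over pixels inside a shared mask. The "flow" and
# "expression" maps below are synthetic stand-ins with a built-in linear
# relationship plus noise.

rng = np.random.default_rng(3)
shape = (64, 64)
lcbf = rng.gamma(4.0, 25.0, size=shape)              # toy flow map
hsp70 = 0.8 * lcbf + rng.normal(0, 20, size=shape)   # toy expression map
mask = lcbf > 40                                     # toy tissue mask

r = np.corrcoef(lcbf[mask], hsp70[mask])[0, 1]       # pixel-based correlation
```

Restricting the correlation to a mask of valid pixels mirrors how aligned image pairs from the same animal support paired analyses.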
Method of determining pH by the alkaline absorption of carbon dioxide
Hobbs, D.T.
1992-10-06
A method is described for measuring the concentration of hydroxides in alkaline solutions at a remote location, using the tendency of hydroxides to absorb carbon dioxide. The method involves passing carbon dioxide over the surface of an alkaline solution in a remote tank, with the carbon dioxide concentration measured before and after exposure to the solution. A comparison of the measurements yields the absorption fraction, from which the hydroxide concentration can be calculated using a correlation of hydroxide concentration or pH to absorption fraction. 2 figs.
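The final calculation step, mapping a measured absorption fraction to hydroxide concentration through an empirical correlation, might be sketched as a calibration-table lookup with linear interpolation. The calibration points below are entirely hypothetical placeholders; the patent determines the real correlation empirically.

```python
from bisect import bisect_left

# Hypothetical calibration: (CO2 absorption fraction, hydroxide molarity).
# These numbers are invented for illustration only.
CALIBRATION = [
    (0.05, 0.1), (0.15, 0.5), (0.30, 1.0), (0.50, 2.0), (0.70, 4.0),
]

def hydroxide_from_absorption(frac):
    """Linearly interpolate hydroxide molarity from the absorption fraction."""
    xs = [p[0] for p in CALIBRATION]
    ys = [p[1] for p in CALIBRATION]
    if frac <= xs[0]:
        return ys[0]                     # clamp below the table
    if frac >= xs[-1]:
        return ys[-1]                    # clamp above the table
    i = bisect_left(xs, frac)
    x0, x1, y0, y1 = xs[i - 1], xs[i], ys[i - 1], ys[i]
    return y0 + (y1 - y0) * (frac - x0) / (x1 - x0)
```

A monotone table is enough here because absorption increases with hydroxide concentration, which is the premise of the method.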
NASA Astrophysics Data System (ADS)
Lee, Taesam
2018-05-01
Multisite stochastic simulations of daily precipitation have been widely employed in hydrologic analyses for climate change assessment and agricultural model inputs. Recently, a copula model with a gamma marginal distribution has become one of the common approaches for simulating precipitation at multiple sites. Here, we tested the correlation structure of the copula modeling. The results indicate that there is a significant underestimation of the correlation in the simulated data compared to the observed data. Therefore, we proposed an indirect method for estimating the cross-correlations when simulating precipitation at multiple stations, using the full relationship between the correlation of the observed data and that of the normally transformed data. Although this indirect method offers certain improvements in preserving the cross-correlations between sites in the original domain, it was not reliable in application. Therefore, we further improved a simulation-based method (SBM) that was developed to model multisite precipitation occurrence. The SBM preserved the cross-correlations of the original domain well, providing cross-correlations around 0.2 higher than the direct method and around 0.1 higher than the indirect method. The three models were applied to stations in the Nakdong River basin, and the SBM was the best alternative for reproducing the historical cross-correlation. The direct method significantly underestimates the correlations among the observed data, and the indirect method appeared to be unreliable.
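The correlation shrinkage described above can be illustrated with a minimal numpy sketch (invented parameters, a lognormal stand-in for the gamma marginal, not the paper's model): correlated normal variates pushed through a skewed marginal transform show a lower Pearson correlation in the original domain than in the normal domain.

```python
import numpy as np

# Minimal sketch of correlation attenuation under a skewed marginal
# transform. exp() stands in for a gamma-like precipitation marginal;
# all parameters are invented for illustration.

rng = np.random.default_rng(0)
rho_normal = 0.8
cov = [[1.0, rho_normal], [rho_normal, 1.0]]
z = rng.multivariate_normal([0.0, 0.0], cov, size=200_000)

x, y = np.exp(z[:, 0]), np.exp(z[:, 1])      # skewed "precipitation" margins
r_normal = np.corrcoef(z[:, 0], z[:, 1])[0, 1]
r_original = np.corrcoef(x, y)[0, 1]          # noticeably below r_normal
```

For lognormal margins the attenuation is analytic, corr(X, Y) = (exp(rho*sigma^2) - 1)/(exp(sigma^2) - 1), about 0.71 here for rho = 0.8 and sigma = 1; an indirect method of the kind the abstract describes inverts such a relationship to pick the normal-domain correlation that reproduces the target in the original domain.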
[The links between neuropsychology and neurophysiology].
Stolarska-Weryńska, Urszula; Biedroń, Agnieszka; Kaciński, Marek
2016-01-01
The aim of the study was to establish the current scope of knowledge regarding associations between neurophysiological functioning, neuropsychology and psychotherapy. A systematic review was performed including 93 publications from Science Server, which contains the collections of Elsevier, Springer Journals, SCI-Ex/ICM, MEDLINE/PubMed, and SCOPUS. The papers, published between 2004 and 2015, were selected using the following key words: 'neuropsychology, neurocognitive correlates, electrodermal response, event related potential, EEG, pupillography, electromyography'. Current reports on the use of neurophysiological methods in psychology can be divided into two areas: experimental research, and research on the practical use of conditioning techniques and biofeedback in the treatment of somatic disease. Within the experimental research, the following can be distinguished: research based on the startle reflex, physiological reactions to novelty, stress, the type and amount of cognitive load, and physiological correlates of emotion; research on the neurophysiological correlates of mental disorders, mostly mood and anxiety disorders; and research on the neurocognitive correlates of memory, attention, learning and intelligence. Among papers on the use of neurophysiological methods in psychology, two types are the most frequent: studies of the mechanisms of biofeedback, related mainly to neurofeedback, a quickly expanding method for treating various attention and mental disorders; and research on the use of conditioning techniques in the treatment of mental disorders, especially depression and anxiety. A special place among these is occupied by research on the electrophysiological correlates of psychotherapy, which aims to differentiate the efficacy of various psychotherapeutic schools (the largest number of publications concern the efficacy of cognitive-behavioral psychotherapy) in patients of different age groups and with different diagnoses.
Is specific gravity a good estimate of urine osmolality?
Imran, Sethi; Eva, Goldwater; Christopher, Shutty; Flynn, Ethan; Henner, David
2010-01-01
Urine specific gravity (USG) is often used by clinicians to estimate urine osmolality. USG is measured either by refractometry or by reagent strip. We studied the correlation of USG obtained by either method with a concurrently obtained osmolality. Using our laboratory's records, we retrospectively gathered data on 504 urine specimens from patients for whom a simultaneously drawn USG and osmolality were available. Of these, 253 USGs were measured by automated refractometry and 251 USGs were measured by reagent strip. Urinalysis data on these subjects were used to determine the correlation between USG and osmolality, adjusting for other variables that may impact the relationship. The other variables considered were pH, protein, glucose, ketones, nitrates, bilirubin, urobilinogen, hemoglobin, and leukocyte esterase. The relationships were analyzed by linear regression. This study demonstrated that USG obtained by both reagent strip and refractometry had a correlation of approximately 0.75 with urine osmolality. The variables affecting the correlation included pH, ketones, bilirubin, urobilinogen, glucose, and protein for the reagent strip, and ketones, bilirubin, and hemoglobin for the refractometry method. At a pH of 7 and with a USG of 1.010, predicted osmolality is approximately 300 mOsm/kg H2O for either method. For an increase in USG of 0.010, predicted osmolality increases by 182 mOsm/kg H2O for the reagent strip and 203 mOsm/kg H2O for refractometry. Pathological urines had significantly poorer correlation between USG and osmolality than "clean" urines. In pathological urines, direct measurement of urine osmolality should be used. © 2010 Wiley-Liss, Inc.
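The fitted relationships reported above can be written as a small helper (approximate coefficients transcribed from the abstract, valid near pH 7; not a clinical tool):

```python
# Approximate linear predictions from the abstract: ~300 mOsm/kg H2O at
# USG 1.010, rising by 182 (reagent strip) or 203 (refractometry) per
# 0.010 increase in USG. Illustrative only.

SLOPES = {"strip": 182.0, "refractometry": 203.0}  # mOsm/kg H2O per 0.010 USG

def predicted_osmolality(usg, method):
    """Predicted urine osmolality (mOsm/kg H2O) from USG, near pH 7."""
    return 300.0 + (usg - 1.010) / 0.010 * SLOPES[method]
```

For example, a USG of 1.020 by reagent strip predicts roughly 482 mOsm/kg H2O, versus about 503 by refractometry, showing how the two measurement methods diverge away from the anchor point.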
NASA Technical Reports Server (NTRS)
Kim, Sang-Wook
1987-01-01
Various experimental, analytical, and numerical analysis methods for the flow-solid interaction of a nest of cylinders subjected to cross flows are reviewed. A nest of cylinders subjected to cross flows can be found in numerous engineering applications, including the Space Shuttle Main Engine main injector assembly (SSME-MIA) and nuclear reactor heat exchangers. Despite its extreme importance in engineering applications, understanding of the flow-solid interaction process is quite limited, and the design of tube banks is mostly dependent on experiments and/or experimental correlation equations. For future development of major numerical analysis methods for the flow-solid interaction of a nest of cylinders subjected to cross flow, various turbulence models, nonlinear structural dynamics, and existing laminar flow-solid interaction analysis methods are included.
NASA Technical Reports Server (NTRS)
Lindh, Roland; Rice, Julia E.; Lee, Timothy J.
1991-01-01
The energy separation between the classical and nonclassical forms of protonated acetylene has been reinvestigated in light of the recent experimentally deduced lower bound to this value of 6.0 kcal/mol. The objective of the present study is to use state-of-the-art ab initio quantum mechanical methods to establish this energy difference to within chemical accuracy (i.e., about 1 kcal/mol). The one-particle basis sets include up to g-type functions and the electron correlation methods include single and double excitation coupled-cluster (CCSD), the CCSD(T) extension, multireference configuration interaction, and the averaged coupled-pair functional methods. A correction for zero-point vibrational energies has also been included, yielding a best estimate for the energy difference between the classical and nonclassical forms of 3.7 ± 1.3 kcal/mol.
Multimodality medical image database for temporal lobe epilepsy
NASA Astrophysics Data System (ADS)
Siadat, Mohammad-Reza; Soltanian-Zadeh, Hamid; Fotouhi, Farshad A.; Elisevich, Kost
2003-05-01
This paper presents the development of a human brain multi-modality database for surgical candidacy determination in temporal lobe epilepsy. The focus of the paper is on content-based image management, navigation and retrieval. Several medical image-processing methods, including our newly developed segmentation method, are utilized for information extraction/correlation and indexing. The input data include T1- and T2-weighted and FLAIR MRI and ictal/interictal SPECT modalities, with associated clinical data and EEG data analysis. The database can answer queries regarding issues such as the correlation between the attribute X of the entity Y and the outcome of a temporal lobe epilepsy surgery. The entity Y can be a brain anatomical structure such as the hippocampus. The attribute X can be either a functionality feature of the anatomical structure Y, calculated with SPECT modalities, such as signal average, or a volumetric/morphological feature of the entity Y, such as volume or average curvature. The outcome of the surgery can be any surgery assessment, such as the non-verbal Wechsler memory quotient. A determination is made regarding surgical candidacy by analysis of both textual and image data. The current database system suggests a surgical determination for cases with a relatively small hippocampus and high signal intensity average on FLAIR images within the hippocampus. This indication matches the neurosurgeons' expectations/observations. Moreover, as the database becomes more populated with patient profiles and individual surgical outcomes, data mining methods may reveal partially invisible correlations between the contents of different modalities of data and the outcome of the surgery.
NASA Astrophysics Data System (ADS)
Siadat, Mohammad-Reza; Soltanian-Zadeh, Hamid; Fotouhi, Farshad A.; Elisevich, Kost
2003-01-01
This paper presents the development of a human brain multimedia database for surgical candidacy determination in temporal lobe epilepsy. The focus of the paper is on content-based image management, navigation and retrieval. Several medical image-processing methods, including our newly developed segmentation method, are utilized for information extraction/correlation and indexing. The input data include T1- and T2-weighted MRI, FLAIR MRI, and ictal and interictal SPECT modalities, with associated clinical data and EEG data analysis. The database can answer queries regarding issues such as the correlation between the attribute X of the entity Y and the outcome of a temporal lobe epilepsy surgery. The entity Y can be a brain anatomical structure such as the hippocampus. The attribute X can be either a functionality feature of the anatomical structure Y, calculated with SPECT modalities, such as signal average, or a volumetric/morphological feature of the entity Y, such as volume or average curvature. The outcome of the surgery can be any surgery assessment, such as memory quotient. A determination is made regarding surgical candidacy by analysis of both textual and image data. The current database system suggests a surgical determination for cases with a relatively small hippocampus and high signal intensity average on FLAIR images within the hippocampus. This indication largely matches the surgeons' expectations/observations. Moreover, as the database becomes more populated with patient profiles and individual surgical outcomes, data mining methods may reveal partially invisible correlations between the contents of different modalities of data and the outcome of the surgery.
NASA Astrophysics Data System (ADS)
Yu, Lingyu; Bao, Jingjing; Giurgiutiu, Victor
2004-07-01
The embedded ultrasonic structural radar (EUSR) algorithm is developed for using a piezoelectric wafer active sensor (PWAS) array to detect defects within a large area of a thin-plate specimen. Signal processing techniques are used to extract the time of flight of the wave packets, and thereby to determine the location of the defects with the EUSR algorithm. In our research, the transient tone-burst wave propagation signals are generated and collected by the embedded PWAS. Then, with signal processing, the frequency contents of the signals and the time of flight of individual frequencies are determined. This paper starts with an introduction to the embedded ultrasonic structural radar algorithm. We then describe the signal processing methods used to extract the time of flight of the wave packets. The signal processing methods used include wavelet denoising, cross-correlation, and the Hilbert transform. Though the hardware can provide an averaging function to eliminate noise arising in the signal collection process, wavelet denoising is included to ensure better signal quality for application in real, severe environments. For better recognition of the time of flight, the cross-correlation method is used. The Hilbert transform is applied to the signals after cross-correlation in order to extract their envelope. Signal processing and EUSR are both implemented in a user-friendly graphical interface program developed in LabVIEW. We conclude with a description of our vision for applying EUSR signal analysis to structural health monitoring and embedded nondestructive evaluation. To this end, we envisage an automatic damage detection application utilizing embedded PWAS, EUSR, and advanced signal processing.
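The cross-correlation and Hilbert-envelope steps described above can be sketched on toy 1-D signals (not EUSR data or the authors' LabVIEW code): cross-correlate the received trace with the transmitted tone burst, then take the envelope of the correlation to locate the wave-packet arrival.

```python
import numpy as np

# Toy time-of-flight extraction: tone burst, delayed noisy echo,
# cross-correlation, FFT-based Hilbert envelope, peak pick.
# All signal parameters are invented for illustration.

def analytic_signal(x):
    """Analytic signal via the FFT-based discrete Hilbert transform."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

rng = np.random.default_rng(1)
n_burst, delay = 50, 80
burst = np.sin(2 * np.pi * 0.2 * np.arange(n_burst)) * np.hanning(n_burst)

received = np.zeros(400)                                # toy received trace
received[delay:delay + n_burst] = burst                 # delayed echo
received += 0.05 * rng.standard_normal(received.size)   # measurement noise

corr = np.correlate(received, burst, mode="full")
envelope = np.abs(analytic_signal(corr))
lag = int(np.argmax(envelope)) - (n_burst - 1)          # estimated arrival
```

Peaking on the envelope rather than the oscillatory correlation itself avoids the cycle-to-cycle ambiguity of a narrowband wave packet.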
New insights into time series analysis. II - Non-correlated observations
NASA Astrophysics Data System (ADS)
Ferreira Lopes, C. E.; Cross, N. J. G.
2017-08-01
Context. Statistical parameters are used to draw conclusions in a vast number of fields such as finance, weather, industry, and science. These parameters are also used to identify variability patterns in photometric data in order to select non-stochastic variations that are indicative of astrophysical effects. New, more efficient selection methods are mandatory to analyze the huge amount of astronomical data. Aims: We seek to improve the current methods used to select non-stochastic variations in non-correlated data. Methods: We used standard and new data-mining parameters to analyze non-correlated data to find the best way to discriminate between stochastic and non-stochastic variations. A new approach that includes a modified Strateva function was used to select non-stochastic variations. Monte Carlo simulations and public time-domain data were used to estimate its accuracy and performance. Results: We introduce 16 modified statistical parameters covering different features of statistical distributions such as average, dispersion, and shape. Many dispersion and shape parameters are unbound parameters, i.e. equations that do not require the calculation of the average. Unbound parameters are computed in a single loop, hence decreasing running time. Moreover, the majority of these parameters have lower errors than previous parameters, which is mainly observed for distributions with few measurements. A set of non-correlated variability indices, sample size corrections, and a new noise model, along with tests of different apertures and cut-offs on the data (BAS approach), are introduced. The number of mis-selections is reduced by about 520% using a single waveband and 1200% combining all wavebands. On the other hand, the even-mean also improves the correlated indices introduced in Paper I. The mis-selection rate is reduced by about 18% if the even-mean is used instead of the mean to compute the correlated indices in the WFCAM database.
Even-statistics allow us to improve the effectiveness of both correlated and non-correlated indices. Conclusions: The selection of non-stochastic variations is improved by non-correlated indices. The even-averages provide a better estimation of the mean and median for almost all statistical distributions analyzed. The correlated variability indices proposed in the first paper of this series are also improved if the even-mean is used. The even-parameters will also be useful for classifying light curves in the last step of this project. We consider that the first step of this project, in which we set out new techniques and methods that provide a huge improvement in the efficiency of selection of variable stars, is now complete. Many of these techniques may be useful for a large number of fields. Next, we will commence a new step of this project concerning the analysis of period-search methods.
Trends and Correlation Estimation in Climate Sciences: Effects of Timescale Errors
NASA Astrophysics Data System (ADS)
Mudelsee, M.; Bermejo, M. A.; Bickert, T.; Chirila, D.; Fohlmeister, J.; Köhler, P.; Lohmann, G.; Olafsdottir, K.; Scholz, D.
2012-12-01
Trend describes time-dependence in the first moment of a stochastic process, and correlation measures the linear relation between two random variables. Accurately estimating the trend and correlation, including uncertainties, from climate time series data in the uni- and bivariate domain, respectively, allows first-order insights into the geophysical process that generated the data. Timescale errors, ubiquitous in paleoclimatology, where archives are sampled for proxy measurements and dated, pose a problem for the estimation. Statistical science and the various applied research fields, including geophysics, have almost completely ignored this problem owing to its near theoretical intractability. However, computational adaptations or replacements of traditional error formulas have become technically feasible. This contribution gives a short overview of such an adaptation package: bootstrap resampling combined with parametric timescale simulation. We study linear regression, parametric change-point models and nonparametric smoothing for trend estimation. We introduce pairwise moving-block bootstrap resampling for correlation estimation. Both methods share robustness against autocorrelation and non-Gaussian distributional shape. We briefly touch on computing-intensive calibration of bootstrap confidence intervals and consider options to parallelize the related computer code. The following examples serve not only to illustrate the methods but also to tell their own climate stories: (1) the search for climate drivers of the Agulhas Current on recent timescales, (2) the comparison of three stalagmite-based proxy series of regional, western German climate over the later part of the Holocene, and (3) trends and transitions in benthic oxygen isotope time series from the Cenozoic. Financial support by Deutsche Forschungsgemeinschaft (FOR 668, FOR 1070, MU 1595/4-1) and the European Commission (MC ITN 238512, MC ITN 289447) is acknowledged.
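A pairwise moving-block bootstrap of the kind mentioned above can be sketched as follows (toy AR(1) series with invented parameters, not the authors' implementation): blocks of consecutive (x, y) pairs are resampled together so within-series autocorrelation survives, and a percentile interval is read off the bootstrap correlations.

```python
import numpy as np

# Sketch of a pairwise moving-block bootstrap for correlation. Resampling
# PAIRS in blocks preserves autocorrelation within each series, which is
# what makes the resulting confidence interval honest for serial data.

def block_bootstrap_corr_ci(x, y, block_len=25, n_boot=1000, seed=0):
    rng = np.random.default_rng(seed)
    n = len(x)
    n_blocks = -(-n // block_len)                     # ceil division
    reps = np.empty(n_boot)
    for b in range(n_boot):
        starts = rng.integers(0, n - block_len + 1, size=n_blocks)
        idx = np.concatenate(
            [np.arange(s, s + block_len) for s in starts])[:n]
        reps[b] = np.corrcoef(x[idx], y[idx])[0, 1]
    return np.percentile(reps, [2.5, 97.5])           # 95% percentile CI

# Two autocorrelated toy series driven by a common forcing
rng = np.random.default_rng(42)
n = 500
f = rng.standard_normal(n)
x, y = np.zeros(n), np.zeros(n)
for i in range(1, n):
    x[i] = 0.5 * x[i - 1] + f[i] + 0.3 * rng.standard_normal()
    y[i] = 0.5 * y[i - 1] + f[i] + 0.3 * rng.standard_normal()

lo, hi = block_bootstrap_corr_ci(x, y)
```

A naive i.i.d. bootstrap would shuffle away the serial dependence and understate the interval width; the block length trades bias against variance.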
NASA Astrophysics Data System (ADS)
Chen, Qi-Xiang; Yuan, Yuan; Huang, Xing; Jiang, Yan-Qiu; Tan, He-Ping
2017-06-01
Surface-level particulate matter is closely related to column aerosol optical thickness (AOT). Previous studies have successfully used column AOT and different meteorological parameters to estimate surface-level PM concentration. In this study, the performance of a selected linear model that estimates surface-level PM2.5 concentration was evaluated following the aerosol type analysis method (ATAM) for the first time. We utilized 443 daily averages for Xuzhou, Jiangsu province, collected by the Aerosol Robotic Network (AERONET) during the period October 2013 to April 2016. Several parameters, including atmospheric boundary layer height (BLH), relative humidity (RH), and the effective radius of the aerosol size distribution (Ref), were used to assess the relationship between column AOT and PM2.5 concentration. By including the BLH, ambient RH, and effective radius, the correlation (R2) increased from 0.084 to 0.250 at Xuzhou, and with the use of ATAM, the correlation increased further to 0.335. For comparison, 450 daily averages for Beijing over the same period were utilized. The study found that model correlations improved by varying degrees in different seasons and at different sites following ATAM. The average urban industry (UI) aerosol ratios at Xuzhou and Beijing were 0.792 and 0.451, respectively, demonstrating poorer air conditions at Xuzhou. PM2.5 estimation at Xuzhou showed lower correlation (R2 = 0.335) compared to Beijing (R2 = 0.407), and the increases of R2 at the Xuzhou and Beijing sites following use of ATAM were 33.8% and 12.4%, respectively.
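The kind of multiple linear model evaluated above can be sketched with synthetic data (toy relationships and an engineered AOT/BLH feature chosen for illustration, not the study's model): adding informative covariates such as BLH and RH raises R2 over an AOT-only regression.

```python
import numpy as np

# Sketch: regress a synthetic PM2.5 on AOT alone, then on AOT plus
# covariates, and compare in-sample R^2. The data-generating relation
# below is invented; it only serves to show the R^2 improvement pattern.

rng = np.random.default_rng(7)
n = 300
aot = rng.uniform(0.1, 1.5, n)
blh = rng.uniform(0.3, 2.0, n)           # toy boundary-layer height, km
rh = rng.uniform(30, 90, n)              # toy relative humidity, %
pm25 = 80 * aot / blh + 0.3 * rh + rng.normal(0, 8, n)  # synthetic truth

def r_squared(X, y):
    """In-sample R^2 of an ordinary least-squares fit with intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

r2_aot = r_squared(aot[:, None], pm25)                       # AOT only
r2_full = r_squared(np.column_stack([aot, aot / blh, rh]), pm25)
```

Dividing AOT by BLH approximates converting a column quantity to a near-surface one, which is why BLH is such a common covariate in these models.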
A descriptive systematic review of salivary therapeutic drug monitoring in neonates and infants.
Hutchinson, Laura; Sinclair, Marlene; Reid, Bernadette; Burnett, Kathryn; Callan, Bridgeen
2018-06-01
Saliva, as a matrix, offers many benefits over blood in therapeutic drug monitoring (TDM), in particular for infantile TDM. However, the accuracy of salivary TDM in infants remains an area of debate. This review explored the accuracy, applicability and advantages of using saliva TDM in infants and neonates. Databases were searched up to and including September 2016. Studies were included based on PICO as follows: P: infants and neonates being treated with any medication, I: salivary TDM vs. C: traditional methods and O: accuracy, advantages/disadvantages and applicability to practice. Compounds were assessed by their physicochemical and pharmacokinetic properties, as well as published quantitative saliva monitoring data. Twenty-four studies and their respective 13 compounds were investigated. Four neutral and two acidic compounds, oxcarbazepine, primidone, fluconazole, busulfan, theophylline and phenytoin displayed excellent/very good correlation between blood plasma and saliva. Lamotrigine was the only basic compound to show excellent correlation with morphine exhibiting no correlation between saliva and blood plasma. Any compound with an acid dissociation constant (pKa) within physiological range (pH 6-8) gave a more varied response. There is significant potential for infantile saliva testing and in particular for neutral and weakly acidic compounds. Of the properties investigated, pKa was the most influential with both logP and protein binding having little effect on this correlation. To conclude, any compound with a pKa within physiological range (pH 6-8) should be considered with extra care, with the extraction and analysis method examined and optimized on a case-by-case basis. © 2018 The British Pharmacological Society.
Lex-SVM: exploring the potential of exon expression profiling for disease classification.
Yuan, Xiongying; Zhao, Yi; Liu, Changning; Bu, Dongbo
2011-04-01
Exon expression profiling technologies, including exon arrays and RNA-Seq, measure the abundance of every exon in a gene. Compared with gene expression profiling technologies like the 3' array, exon expression profiling technologies can detect alterations in both transcription and alternative splicing, and are therefore expected to be more sensitive in diagnosis. However, exon expression profiling also brings higher dimensionality, more redundancy, and significant correlation among features. Ignoring the correlation structure among the exons of a gene, a popular classification method like L1-SVM selects exons individually from each gene and is thus vulnerable to noise. To overcome this limitation, we present in this paper a new variant of SVM named Lex-SVM that incorporates the correlation structure among exons and known splicing patterns to improve classification performance. Specifically, we construct a new norm, ex-norm, encoding our prior knowledge of the exon correlation structure, to regularize the coefficients of a linear SVM. Lex-SVM can be solved efficiently using standard linear programming techniques. The advantage of Lex-SVM is that it can select features group-wise, force features in a subgroup to take equal weights, and exclude features that contradict the majority in the subgroup. Experimental results suggest that on exon expression profiles, Lex-SVM is more accurate than existing methods. Lex-SVM also generates a more compact model and selects genes more consistently in cross-validation. Unlike L1-SVM, which selects only one exon in a gene, Lex-SVM assigns equal weights to as many exons in a gene as possible, making the model easier to interpret.
Cubero Gómez, Jose M; Navarro Puerto, María A; Acosta Martínez, Juan; De Mier Barragán, María I; Pérez Santigosa, Pastor L; Sánchez Burguillos, Francisco; Molano Casimiro, Francisco; Pastor Torres, Luis
2014-07-01
Impaired response to antiplatelet therapy in diabetic patients results in a higher incidence of drug-eluting stent thrombosis. This study determined the prevalence of high on-aspirin (AS) platelet reactivity in type 2 diabetic patients treated with percutaneous coronary intervention (PCI) using the VerifyNow Aspirin Assay (VN) and the platelet function analyzer PFA-100 (PFA-100), and analyzed the correlation between the two methods. Type 2 diabetic patients (n = 100) with non-ST-elevation acute coronary syndrome who underwent PCI and Xience V drug-eluting stent implantation were included in this study. After PCI, platelet antiaggregation mediated by acetylsalicylic acid was assessed by VN and PFA-100. The degree of correlation and concordance was then determined. When assayed with VN, 7% of the patients were nonresponders to aspirin (aspirin reaction units >550), and when assayed with PFA-100, 41% were nonresponders (closure time <193 seconds). Of the patients, 4% were nonresponders to aspirin according to VN but were sensitive to aspirin according to PFA-100, and 38% were sensitive to aspirin according to VN and nonresponders according to PFA-100. Overall, 55% of the patients were aspirin-sensitive by both methods. The Spearman's coefficient between VN and PFA-100 results was r = 0.09 (P = 0.35). The kappa index value was 0.0062 (P = 0.91). There is no concordance or correlation between the VN and PFA-100 results. Therefore, the use of these analyses should be restricted to clinical research, which limits their application in clinical practice.
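As a generic illustration of the two agreement statistics the abstract reports (Spearman's rank correlation between continuous assay readings, and the kappa index between dichotomized responder classifications), here is a small stdlib-only Python sketch. The patient values and the 550/193 cut-offs applied to them are made up for the example; only the cut-off thresholds themselves come from the abstract.

```python
# Illustrative sketch (not the study's code): Spearman's rank correlation and
# Cohen's kappa between two hypothetical platelet-function assays.

def ranks(xs):
    """Average ranks (1-based), ties sharing the mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of positions i..j, 1-based
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the ranks."""
    return pearson(ranks(x), ranks(y))

def cohen_kappa(a, b):
    """Agreement between two binary classifications beyond chance."""
    n = len(a)
    po = sum(1 for u, v in zip(a, b) if u == v) / n
    pa1, pb1 = sum(a) / n, sum(b) / n
    pe = pa1 * pb1 + (1 - pa1) * (1 - pb1)
    return (po - pe) / (1 - pe)

# Hypothetical assay readings for 8 patients
vn = [420, 510, 560, 460, 430, 580, 440, 500]   # aspirin reaction units
pfa = [250, 120, 100, 300, 180, 90, 280, 150]   # closure time, seconds

rho = spearman(vn, pfa)
# Dichotomize: nonresponder if VN > 550 ARU or PFA-100 closure time < 193 s
vn_nr = [1 if v > 550 else 0 for v in vn]
pfa_nr = [1 if c < 193 else 0 for c in pfa]
kappa = cohen_kappa(vn_nr, pfa_nr)
print(round(rho, 2), round(kappa, 2))
```

In practice one would reach for `scipy.stats.spearmanr` and `sklearn.metrics.cohen_kappa_score`; the hand-rolled versions above just make the two formulas explicit.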
NASA Astrophysics Data System (ADS)
Solovjov, Vladimir P.; Webb, Brent W.; Andre, Frederic
2018-07-01
Following previous theoretical development based on the assumption of a rank correlated spectrum, the Rank Correlated Full Spectrum k-distribution (RC-FSK) method is proposed. The method proves advantageous in modeling radiation transfer in high temperature gases in non-uniform media in two important ways. First, and perhaps most importantly, the method requires no specification of a reference gas thermodynamic state. Second, the spectral construction of the RC-FSK model is simpler than original correlated FSK models, requiring only two cumulative k-distributions. Further, although not exhaustive, example problems presented here suggest that the method may also yield improved accuracy relative to prior methods, and may exhibit less sensitivity to the blackbody source temperature used in the model predictions. This paper outlines the theoretical development of the RC-FSK method, comparing the spectral construction with prior correlated spectrum FSK method formulations. Further the RC-FSK model's relationship to the Rank Correlated Spectral Line Weighted-sum-of-gray-gases (RC-SLW) model is defined. The work presents predictions using the Rank Correlated FSK method and previous FSK methods in three different example problems. Line-by-line benchmark predictions are used to assess the accuracy.
Raghunandhan, S; Ravikumar, A; Kameswaran, Mohan; Mandke, Kalyani; Ranjith, R
2014-05-01
Indications for cochlear implantation have expanded today to include very young children and those with syndromes/multiple handicaps. Programming the implant based on behavioural responses may be tedious for audiologists in such cases, wherein setting an effective and appropriate MAP (Measurable Auditory Percept) becomes the key issue in the habilitation program. In 'Difficult to MAP' scenarios, objective measures become paramount to predict the optimal current levels to be set in the MAP. We aimed to (a) study the trends in multi-modal electrophysiological tests and behavioural responses sequentially over the first year of implant use; (b) generate normative data from the above; (c) correlate the multi-modal electrophysiological threshold levels with behavioural comfort levels; and (d) create predictive formulae for deriving optimal comfort levels (if unknown), using linear and multiple regression analysis. This prospective study included 10 profoundly hearing-impaired children aged between 2 and 7 years with normal inner ear anatomy and no additional handicaps. They received the Advanced Bionics HiRes 90 K implant with the Harmony speech processor and used the HiRes-P with Fidelity 120 strategy. They underwent impedance telemetry, neural response imaging, electrically evoked stapedial response telemetry (ESRT), and electrically evoked auditory brainstem response (EABR) tests at 1, 4, 8, and 12 months of implant use, in conjunction with behavioural mapping. Trends in electrophysiological and behavioural responses were analyzed using the paired t-test. By Karl Pearson's correlation method, electrode-wise correlations were derived for neural response imaging (NRI) thresholds versus most comfortable levels (M-levels), and offset-based (apical, mid-array, and basal array) correlations for EABR and ESRT thresholds versus M-levels were calculated over time. These were used to derive predictive formulae by linear and multiple regression analysis.
Such statistically predicted M-levels were compared with the behaviourally recorded M-levels in the cohort, using Cronbach's alpha reliability test to confirm the efficacy of this method. NRI, ESRT, and EABR thresholds showed statistically significant positive correlations with behavioural M-levels, which improved with implant use over time. These correlations were used to derive predicted M-levels using regression analysis. On average, predicted M-levels were found to be statistically reliable and a fair match to the actual behavioural M-levels. When applied in clinical practice, the predicted values were found to be useful for programming members of the study group. However, individuals showed considerable deviations in behavioural M-levels, above and below the electrophysiologically predicted values, due to various factors. While the current method appears helpful as a reference for predicting initial maps in 'Difficult to MAP' subjects, behavioural measures remain mandatory to further optimize the maps for these individuals. The study explores the trends, correlations and individual variabilities that occur between electrophysiological tests and behavioural responses, recorded over time among a cohort of cochlear implantees. The statistical method shown may be used as a guideline to predict optimal behavioural levels in difficult situations among future implantees, bearing in mind that optimal M-levels for individuals can vary from predicted values. In 'Difficult to MAP' scenarios, following a protocol of sequential behavioural programming in conjunction with electrophysiological correlates will provide the best outcomes.
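The prediction step the abstract describes, deriving a formula for behavioural M-levels from electrophysiological thresholds by linear regression, can be sketched generically. The threshold and M-level values below are invented for illustration and are not the study's data or units.

```python
# Hedged sketch (not the study's actual model): ordinary least squares to
# predict behavioural most-comfortable levels (M-levels) from an
# electrophysiological threshold, using made-up values.

def linreg(x, y):
    """OLS slope and intercept for y ~ a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx
    return a, my - a * mx

# Hypothetical NRI thresholds and recorded behavioural M-levels
nri = [180, 200, 220, 240, 260]
m_lvl = [150, 165, 175, 195, 205]

slope, intercept = linreg(nri, m_lvl)
predicted = [slope * t + intercept for t in nri]
print(round(slope, 3), round(intercept, 1))
```

The fitted formula (here M ≈ slope·NRI + intercept) would then seed an initial map, with behavioural fine-tuning afterwards, matching the abstract's recommendation.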
Ozdemir, Gülsün; Kaya, Hatice
2013-06-01
Skills learnt by nursing and midwifery students, such as communication, optimism and coping with stress, will be used in their professional lives. It is very important to promote their positive thinking and communication skills to strengthen coping with stress. This cross-sectional study was performed to examine nursing and midwifery students' communication skills and optimistic life orientation and their correlation with strategies for coping with stress. The study population included 2,572 students who were studying in departments of nursing and midwifery in Istanbul. The sample included 1,419 students. Three questionnaires, the Communication Skills Test, the Life Orientation Test and the Ways of Coping Inventory, were used for data collection. The data were evaluated by calculating frequency, percentage, arithmetic mean, standard deviation and Pearson correlation coefficient. Students' total mean score on the Communication Skills Scale was 165.27 ± 15.39 and on the Life Orientation Test was 18.51 ± 4.54. There was a positive correlation between their Life Orientation scores and the scores for self-confidence (r = 0.34, P < 0.001), optimistic approach (r = 0.42, P < 0.001), and seeking social help (r = 0.17, P < 0.001). There was also a significant positive correlation between communication skill scores and self-confidence (r = 0.46, P < 0.001), optimistic (r = 0.37, P < 0.001) and social-help-seeking approaches (r = 0.29, P < 0.001), but a significant negative correlation between communication skill scores and scores for helpless (r = -0.29, P < 0.001) and submissive approaches (r = -0.36, P < 0.001). As students' scores in optimistic life orientation and communication skills increased, self-confidence, optimistic, and social-support-seeking scores increased, whereas helpless and submissive scores decreased.
Estimation of tunnel blockage from wall pressure signatures: A review and data correlation
NASA Technical Reports Server (NTRS)
Hackett, J. E.; Wilsden, D. J.; Lilley, D. E.
1979-01-01
A method is described for estimating low-speed wind tunnel blockage, including model volume, bubble separation and viscous wake effects. A tunnel-centerline source/sink distribution is derived from measured wall pressure signatures, using fast algorithms to solve the inverse problem in three dimensions. Blockage may then be computed throughout the test volume. Correlations using scaled models or tests in two tunnels were made in all cases. In many cases model reference area exceeded 10% of the tunnel cross-sectional area. Good correlations were obtained for model surface pressures, lift, drag and pitching moment. It is shown that blockage-induced velocity variations across the test section are relatively unimportant, but axial gradients should be considered when model size is determined.
Efficient parameter estimation in longitudinal data analysis using a hybrid GEE method.
Leung, Denis H Y; Wang, You-Gan; Zhu, Min
2009-07-01
The method of generalized estimating equations (GEEs) provides consistent estimates of the regression parameters in a marginal regression model for longitudinal data, even when the working correlation model is misspecified (Liang and Zeger, 1986). However, the efficiency of a GEE estimate can be seriously affected by the choice of the working correlation model. This study addresses this problem by proposing a hybrid method that combines multiple GEEs based on different working correlation models, using the empirical likelihood method (Qin and Lawless, 1994). Analyses show that this hybrid method is more efficient than a GEE using a misspecified working correlation model. Furthermore, if one of the working correlation structures correctly models the within-subject correlations, then this hybrid method provides the most efficient parameter estimates. In simulations, the hybrid method's finite-sample performance is superior to a GEE under any of the commonly used working correlation models and is almost fully efficient in all scenarios studied. The hybrid method is illustrated using data from a longitudinal study of the respiratory infection rates in 275 Indonesian children.
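The "working correlation model" whose choice the abstract says drives GEE efficiency is just a parametric structure for within-subject correlation. As a generic illustration (not from the paper), the two structures most commonly supplied to a GEE, exchangeable and AR(1), can be built as plain matrices; the dimension and parameter value here are arbitrary.

```python
# Illustration only: two standard GEE working correlation structures.
# Parameter rho is arbitrary here, not estimated from data.

def exchangeable(n, rho):
    """Corr(y_ij, y_ik) = rho for all j != k within a subject."""
    return [[1.0 if i == j else rho for j in range(n)] for i in range(n)]

def ar1(n, rho):
    """Corr(y_ij, y_ik) = rho ** |j - k|: correlation decays with lag."""
    return [[rho ** abs(i - j) for j in range(n)] for i in range(n)]

R_ex = exchangeable(4, 0.3)
R_ar = ar1(4, 0.3)
# Under exchangeable, lag-3 correlation equals rho; under AR(1) it is rho**3
print(R_ex[0][3], round(R_ar[0][3], 3))
```

The hybrid method the abstract proposes combines estimating equations built from several such candidate structures, so misspecifying any single one is less costly; in Python, `statsmodels`'s GEE implementation exposes these structures as `Exchangeable` and `Autoregressive` covariance classes.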
New applications of renormalization group methods in nuclear physics.
Furnstahl, R J; Hebeler, K
2013-12-01
We review recent developments in the use of renormalization group (RG) methods in low-energy nuclear physics. These advances include enhanced RG technology, particularly for three-nucleon forces, which greatly extends the reach and accuracy of microscopic calculations. We discuss new results for the nucleonic equation of state with applications to astrophysical systems such as neutron stars, new calculations of the structure and reactions of finite nuclei, and new explorations of correlations in nuclear systems.
Estimating Local and Near-Regional Velocity and Attenuation Structure from Seismic Noise
2008-09-30
Mean-phase velocity-dispersion curves are calculated for the TUCAN seismic array in Costa Rica and Nicaragua from ambient seismic noise using two independent methods, noise cross-correlation and beamforming. … curves are measured between stations of the TUCAN seismic array (Figure 4c) using a method similar to Harmon et al. (2007); variations from Harmon et al. (2007) include removing the …
Power-law behaviour evaluation from foreign exchange market data using a wavelet transform method
NASA Astrophysics Data System (ADS)
Wei, H. L.; Billings, S. A.
2009-09-01
Numerous studies in the literature have shown that the dynamics of many time series including observations in foreign exchange markets exhibit scaling behaviours. A simple new statistical approach, derived from the concept of the continuous wavelet transform correlation function (WTCF), is proposed for the evaluation of power-law properties from observed data. The new method reveals that foreign exchange rates obey power-laws and thus belong to the class of self-similarity processes.
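As a generic sketch of power-law evaluation (not the paper's wavelet-transform correlation function, which is a more refined estimator), a scaling exponent can be read off as the slope of a log-log regression: if a statistic S(τ) behaves as τ^H, then log S is linear in log τ with slope H. The synthetic data below are constructed to have H = 0.5.

```python
# Generic power-law exponent estimation by log-log least squares.
# Synthetic self-similar data with known exponent H = 0.5 (assumption
# of this example, not a market result).
import math

def scaling_exponent(taus, s_vals):
    """Least-squares slope of log(s) versus log(tau)."""
    lx = [math.log(t) for t in taus]
    ly = [math.log(s) for s in s_vals]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    num = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    den = sum((a - mx) ** 2 for a in lx)
    return num / den

taus = [1, 2, 4, 8, 16, 32]          # time scales
s_vals = [t ** 0.5 for t in taus]    # S(tau) = tau**0.5
H = scaling_exponent(taus, s_vals)
print(round(H, 3))
```

A wavelet-based estimator replaces S(τ) with the wavelet transform correlation function at scale τ, which is less sensitive to nonstationarity than this plain regression.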
Prediction of sound fields in acoustical cavities using the boundary element method. M.S. Thesis
NASA Technical Reports Server (NTRS)
Kipp, C. R.; Bernhard, R. J.
1985-01-01
A method was developed to predict sound fields in acoustical cavities. The method is based on the indirect boundary element method. An isoparametric quadratic boundary element is incorporated. Pressure, velocity and/or impedance boundary conditions may be applied to a cavity by using this method. The capability to include acoustic point sources within the cavity is implemented. The method is applied to the prediction of sound fields in spherical and rectangular cavities. All three boundary condition types are verified. Cases with a point source within the cavity domain are also studied. Numerically determined cavity pressure distributions and responses are presented. The numerical results correlate well with available analytical results.
NASA Technical Reports Server (NTRS)
Kuhn, Richard E.; Bellavia, David C.; Corsiglia, Victor R.; Wardwell, Douglas A.
1991-01-01
Currently available methods for estimating the net suckdown induced on jet V/STOL aircraft hovering in ground effect are based on a correlation of available force data and are, therefore, limited to configurations similar to those in the data base. Experience with some of these configurations has shown that both the fountain lift and additional suckdown are overestimated but these effects cancel each other for configurations within the data base. For other configurations, these effects may not cancel and the net suckdown could be grossly overestimated or underestimated. Also, present methods do not include the prediction of the pitching moments associated with the suckdown induced in ground effect. An attempt to develop a more logically based method for estimating the fountain lift and suckdown based on the jet-induced pressures is initiated. The analysis is based primarily on the data from a related family of three two-jet configurations (all using the same jet spacing) and limited data from two other two-jet configurations. The current status of the method, which includes expressions for estimating the maximum pressure induced in the fountain regions, and the sizes of the fountain and suckdown regions is presented. Correlating factors are developed to be used with these areas and pressures to estimate the fountain lift, the suckdown, and the related pitching moment increments.
Structural Genomics: Correlation Blocks, Population Structure, and Genome Architecture
Hu, Xin-Sheng; Yeh, Francis C.; Wang, Zhiquan
2011-01-01
An integration of the pattern of genome-wide inter-site associations with evolutionary forces is important for gaining insights into genomic evolution in natural or artificial populations. Here, we assess inter-site correlation blocks and their distributions along chromosomes. A correlation block is broadly defined as the DNA segment within which strong correlations exist between genetic diversities at any two sites. We bring together the population genetic structure and the genomic diversity structure that have been independently built on different scales, and synthesize the existing theories and methods for characterizing genomic structure at the population level. We discuss how population structure could shape correlation blocks and their patterns within and between populations. Effects of evolutionary forces (selection, migration, genetic drift, and mutation) on the pattern of genome-wide correlation blocks are discussed. In eukaryote organisms, we briefly discuss the associations between the pattern of correlation blocks and genome assembly features, including the impacts of multigene families, the perturbation of transposable elements, and the repetitive nongenic sequences and GC-rich isochores. Our review suggests that the observable pattern of correlation blocks can refine our understanding of the ecological and evolutionary processes underlying genomic evolution at the population level. PMID:21886455
Signatures of van der Waals binding: A coupling-constant scaling analysis
NASA Astrophysics Data System (ADS)
Jiao, Yang; Schröder, Elsebeth; Hyldgaard, Per
2018-02-01
The van der Waals (vdW) density functional (vdW-DF) method [Rep. Prog. Phys. 78, 066501 (2015), 10.1088/0034-4885/78/6/066501] describes dispersion or vdW binding by tracking the effects of an electrodynamic coupling among pairs of electrons and their associated exchange-correlation holes. This is done in a nonlocal-correlation energy term Ecnl, which permits density functional theory calculation in the Kohn-Sham scheme. However, to map the nature of vdW forces in a fully interacting materials system, it is necessary to also account for associated kinetic-correlation energy effects. Here, we present a coupling-constant scaling analysis, which permits us to compute the kinetic-correlation energy Tcnl that is specific to the vdW-DF account of nonlocal correlations. We thus provide a more complete spatially resolved analysis of the electrodynamical-coupling nature of nonlocal-correlation binding, including vdW attraction, in both covalently and noncovalently bonded systems. We find that kinetic-correlation energy effects play a significant role in the account of vdW or dispersion interactions among molecules. Furthermore, our mapping shows that the total nonlocal-correlation binding is concentrated to pockets in the sparse electron distribution located between the material fragments.
Zarkovic, Andrea; Mora, Justin; McKelvie, James; Gamble, Greg
2007-12-01
The aim of the study was to establish the correlation between visual field loss as shown by second-generation Frequency Doubling Technology (Humphrey Matrix) and Standard Automated Perimetry (Humphrey Field Analyser) in patients with glaucoma. Test duration and reliability were also compared. Forty right eyes from glaucoma patients from a private ophthalmology practice were included in this prospective study. All participants had tests within an 8-month period. Pattern deviation plots and mean deviation were compared to establish the correlation between the two perimetry tests. Overall correlation and correlation between hemifields, quadrants and individual test locations were assessed. Humphrey Field Analyser tests were slightly more reliable (37/40 vs. 34/40 for Matrix) but overall of longer duration. There was good correlation (0.69) between mean deviations. Superior hemifields and superonasal quadrants had the highest correlation (0.88 [95% CI 0.79, 0.94]). Correlation between individual points was independent of distance from the macula. Generally, the Matrix and Humphrey Field Analyser perimetry correlate well; however, each machine utilizes a different method of analysing data, so direct comparisons should be made with caution.
Modal energy analysis for mechanical systems excited by spatially correlated loads
NASA Astrophysics Data System (ADS)
Zhang, Peng; Fei, Qingguo; Li, Yanbin; Wu, Shaoqing; Chen, Qiang
2018-10-01
MODal ENergy Analysis (MODENA) is an energy-based method proposed to deal with vibroacoustic problems. The performance of MODENA in the energy analysis of a mechanical system under spatially correlated excitation is investigated. A plate/cavity coupling system excited by a pressure field is studied in a numerical example involving four kinds of pressure field: a purely random pressure field, a perfectly correlated pressure field, an incident diffuse field, and turbulent boundary layer pressure fluctuation. The total energies of the subsystems differ from the reference solution only in the case of the purely random pressure field, and only for the non-excited subsystem (the cavity). A deeper analysis at the scale of modal energy is further conducted via another numerical example, in which two structural modes excited by correlated forces are coupled with one acoustic mode. A dimensionless correlation strength factor is proposed to quantify the correlation strength between modal forces. Results show that the error in modal energy increases with the correlation strength factor. A criterion is proposed to establish a link between the error and the correlation strength factor: according to the criterion, the error is negligible when the correlation strength is weak, that is, when the correlation strength factor is less than a critical value.
Genomic-Enabled Prediction Kernel Models with Random Intercepts for Multi-environment Trials.
Cuevas, Jaime; Granato, Italo; Fritsche-Neto, Roberto; Montesinos-Lopez, Osval A; Burgueño, Juan; Bandeira E Sousa, Massaine; Crossa, José
2018-03-28
In this study, we compared the prediction accuracy of the main genotypic effect model (MM) without G×E interactions, the multi-environment single variance G×E deviation model (MDs), and the multi-environment environment-specific variance G×E deviation model (MDe), where the random genetic effects of the lines are modeled with the markers (or pedigree). With the objective of further modeling the genetic residual of the lines, we incorporated the random intercepts of the lines ([Formula: see text]) and generated another three models. Each of these 6 models was fitted with a linear kernel method (Genomic Best Linear Unbiased Predictor, GB) and a Gaussian kernel (GK) method. We compared these 12 model-method combinations with another two multi-environment G×E interaction models with unstructured variance-covariances (MUC) using GB and GK kernels (4 model-method combinations). Thus, we compared the genomic-enabled prediction accuracy of a total of 16 model-method combinations on two maize data sets with positive phenotypic correlations among environments, and on two wheat data sets with complex G×E that includes some negative and close-to-zero phenotypic correlations among environments. The two models (MDs and MDe with the random intercept of the lines and the GK method) were computationally efficient and gave high prediction accuracy in the two maize data sets. For the more complex G×E wheat data sets, the model-method combinations with G×E (MDs and MDe) including the random intercepts of the lines with the GK method gave important savings in computing time compared with the G×E interaction multi-environment models with unstructured variance-covariances, but lower genomic prediction accuracy. Copyright © 2018 Cuevas et al.
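The Gaussian kernel (GK) contrasted with the linear GB kernel above can be illustrated on a toy marker matrix. This is a sketch under stated assumptions, not the paper's code: bandwidth conventions vary between studies, and here the squared Euclidean marker distances are simply scaled by their median before exponentiating.

```python
# Hedged sketch: a Gaussian kernel K_ij = exp(-d_ij^2 / (h * median(d^2)))
# over a toy genotype matrix (lines x markers, coded 0/1/2). The
# median-scaling bandwidth rule is an assumption of this example.
import math

def gaussian_kernel(X, h=1.0):
    n = len(X)
    d2 = [[sum((a - b) ** 2 for a, b in zip(X[i], X[j])) for j in range(n)]
          for i in range(n)]
    # scale by the median off-diagonal squared distance
    flat = sorted(d2[i][j] for i in range(n) for j in range(i + 1, n))
    med = flat[len(flat) // 2]
    return [[math.exp(-d2[i][j] / (h * med)) for j in range(n)]
            for i in range(n)]

# Toy genotype matrix: 3 lines x 4 markers
X = [[0, 1, 2, 1],
     [0, 1, 1, 1],
     [2, 0, 0, 1]]
K = gaussian_kernel(X)
print(K[0][0], round(K[0][1], 3))
```

In the genomic-prediction setting this kernel plays the role of the genetic covariance among lines, replacing the linear (GBLUP-style) kernel XXᵀ; the nonlinearity is what lets GK capture non-additive similarity between lines.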
Turbutt, Claire; Richardson, Janet; Pettinger, Clare
2018-03-24
Obesity is the greatest health issue for this generation; schools have improved the food offered within their grounds, but the built environment surrounding schools and pupils' journeys home have not received the same level of attention. This review identified papers on the impacts of hot food takeaways surrounding schools in the UK. Methods were informed by the PRISMA (QUORUM) guidelines for systematic reviews. Searches were completed in 12 databases. A total of 14 papers were included and quality-assured before data extraction. Three descriptive themes were found: descriptions of hot food takeaways' geography and impacts concerning schools, strategic food policy, and pupils' reported food behaviour. Most included studies compared anthropometric measures with the geographical location of hot food takeaways to find correlations between environment and childhood obesity. There was good evidence of more hot food takeaways in deprived areas, and children who spend time in deprived neighbourhoods tend to eat more fast food and have higher BMIs. Few studies were able to quantify the correlation between a school's environment and obesity amongst its pupils. This lack of evidence is likely a factor of the studies' ability to identify the correlation rather than the absence of a correlation between the two variables.
Television Time among Brazilian Adolescents: Correlated Factors are Different between Boys and Girls
Tremblay, Mark Stephen; Gonçalves, Eliane Cristina de Andrade; Silva, Roberto Jerônimo dos Santos
2014-01-01
Objective. The aim of this study was to identify the prevalence of excess television time and verify correlated factors in adolescent males and females. Methods. This cross-sectional study included 2,105 adolescents aged from 13 to 18 years from the city of Aracaju, Northeastern Brazil. Television time was self-reported, corresponding to the time spent watching television in a typical week. Several correlates were examined, including age, skin color, socioeconomic status, parental education, physical activity level, consumption of fruits and vegetables, smoking status, alcohol use, and sports team participation. Results. The prevalence of excess television time (≥2 hours/day) in girls and boys was 70.9% and 66.2%, respectively. Girls with low socioeconomic status or inadequate consumption of fruits and vegetables were more likely to have excess television time. Among boys, those >16 years of age or with black skin color were more likely to have excess television time. Conclusions. Excess television time was observed in more than two-thirds of adolescents, and was more evident in girls. Correlated factors differed according to sex. Efforts to reduce television time among Brazilian adolescents, replacing it with more active pursuits, may yield desirable public health benefits. PMID:24723826
Bhattacharya, Anindya; De, Rajat K
2010-08-01
Distance-based clustering algorithms can group genes that show similar expression values under multiple experimental conditions, but they are unable to identify a group of genes that have a similar pattern of variation in their expression values. We previously developed an algorithm called the divisive correlation clustering algorithm (DCCA), based on the concept of correlation clustering, to tackle this situation; however, DCCA may also fail in certain cases. To overcome these situations, we propose a new clustering algorithm, called the average correlation clustering algorithm (ACCA), which is able to produce better clustering solutions than several existing methods. ACCA is able to find groups of genes having more common transcription factors and similar patterns of variation in their expression values. Moreover, ACCA is more efficient than DCCA with respect to execution time. Like DCCA, ACCA uses the concept of correlation clustering introduced by Bansal et al. ACCA uses the correlation matrix in such a way that all genes in a cluster have the highest average correlation values with the genes in that cluster. We have applied ACCA and some well-known conventional methods, including DCCA, to two artificial and nine gene expression datasets, and compared the performance of the algorithms. The clustering results of ACCA are found to be more significantly relevant to the biological annotations than those of the other methods. Analysis of the results shows the superiority of ACCA over the other methods in determining a group of genes having more common transcription factors and similar patterns of variation in their expression profiles. Availability of the software: The software has been developed using C and Visual Basic languages, and can be executed on Microsoft Windows platforms. The software may be downloaded as a zip file from http://www.isical.ac.in/~rajat. Then it needs to be installed.
Two word files (included in the zip file) need to be consulted before installation and execution of the software. Copyright 2010 Elsevier Inc. All rights reserved.
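The assignment rule the abstract describes for ACCA, every gene ends up in the cluster with which it has the highest average correlation, can be sketched as a simple iterative reassignment. This is an assumed minimal reading of that rule, not the published C/Visual Basic implementation, and the toy correlation matrix is invented.

```python
# Minimal sketch of an average-correlation reassignment loop (assumed
# interpretation of ACCA's cluster criterion, not the authors' code).

def acca_like(corr, labels, max_iter=50):
    """Move each gene to the cluster with the highest average correlation."""
    n = len(corr)
    for _ in range(max_iter):
        changed = False
        for g in range(n):
            best, best_avg = labels[g], -2.0
            for c in set(labels):
                members = [i for i in range(n) if labels[i] == c and i != g]
                if not members:
                    continue
                avg = sum(corr[g][i] for i in members) / len(members)
                if avg > best_avg:
                    best, best_avg = c, avg
            if best != labels[g]:
                labels[g] = best
                changed = True
        if not changed:   # stable assignment reached
            break
    return labels

# Toy correlation matrix: genes 0-1 co-vary, genes 2-3 co-vary
corr = [[1.0, 0.9, -0.2, -0.1],
        [0.9, 1.0, -0.3, -0.2],
        [-0.2, -0.3, 1.0, 0.8],
        [-0.1, -0.2, 0.8, 1.0]]
labels = acca_like(corr, [0, 0, 0, 1])
print(labels)
```

Starting from the deliberately wrong assignment [0, 0, 0, 1], the loop moves gene 2 into gene 3's cluster, recovering the two co-varying pairs; using correlation rather than distance is what lets the grouping follow the pattern of variation rather than the magnitude of expression.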
Anharmonicity Raises the Thermal Conductivity in Amorphous Silicon
NASA Astrophysics Data System (ADS)
Lv, Wei; Henry, Asegun
We recently proposed a new method called direct Green-Kubo Modal Analysis (GKMA), which has been shown to calculate the thermal conductivity (TC) of several amorphous materials accurately. The Allen-Feldman (A-F) method has been widely used for amorphous materials; however, it has been found to fail for several different materials. The A-F method's limitations are its harmonic approximation and its restriction to interactions of modes with similar frequencies, which neglects interactions of modes with large frequency differences. By contrast, the GKMA method, being based on molecular dynamics, intrinsically includes all types of phonon interactions. In GKMA, each mode's TC contribution comes from both mode self-correlations (autocorrelations) and mode-mode correlations (cross-correlations). We have demonstrated that the GKMA-predicted TC of a-Si from the Tersoff potential is in excellent agreement with experimental results. In this work, we present GKMA applications to a-Si using multiple potentials, giving more insight into the effect of anharmonicity on the TC of amorphous silicon. This research was supported by Intel grant AGMT DTD 1-15-13 and by computational resources from NSF-supported XSEDE resources under allocations DMR130105 and TG-PHY130049.
NASA Astrophysics Data System (ADS)
Zhao, Feng; Huang, Qingming; Wang, Hao; Gao, Wen
2010-12-01
Similarity measures based on correlation have been used extensively for matching tasks. However, traditional correlation-based image matching methods are sensitive to rotation and scale changes. This paper presents a fast correlation-based method for matching two images with large rotation and significant scale changes. Multiscale oriented corner correlation (MOCC) is used to evaluate the degree of similarity between the feature points. The method is rotation invariant and capable of matching image pairs with scale changes up to a factor of 7. Moreover, MOCC is much faster in comparison with the state-of-the-art matching methods. Experimental results on real images show the robustness and effectiveness of the proposed method.
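Correlation-based matching of the kind MOCC refines starts from a plain similarity score between image patches. Below is a sketch of zero-mean normalized cross-correlation, the baseline such methods build on; it is not MOCC itself, which adds multiscale oriented corner handling for rotation and scale invariance.

```python
import numpy as np

def ncc(patch_a, patch_b):
    """Zero-mean normalized cross-correlation between two equal-size
    patches; returns a value in [-1, 1]. Plain NCC, i.e. the baseline
    similarity that MOCC-style methods build on, not MOCC itself."""
    a = np.asarray(patch_a, float).ravel()
    b = np.asarray(patch_b, float).ravel()
    a -= a.mean()  # zero-mean makes the score invariant to brightness offsets
    b -= b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0
```

Identical patches score 1, and a contrast-inverted patch scores -1, which is exactly why raw NCC breaks down under rotation and scale changes: the pixels no longer line up.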
NASA Technical Reports Server (NTRS)
Guiot, R.; Wunnenberg, H.
1980-01-01
The methods by which aerodynamic coefficients are determined are discussed. These include calculations, wind tunnel experiments, and in-flight experiments for various prototypes of the Alpha Jet. A comparison of the results obtained shows good correlation between predictions and in-flight test results.
Strategies for Dealing with Stress: Taking Care of Yourself.
ERIC Educational Resources Information Center
Gmelch, Walter H.
University department chairs need to manage stress to their advantage. Myths pertaining to stress include: (1) stress is harmful; (2) stress should be avoided; (3) stress correlates with level of responsibility; (4) stress is predominantly a male phenomenon; and (5) there is one appropriate coping method. The Chair Stress Cycle provides a broad…
Ten-Year Trends in Physical Dating Violence Victimization?among?US?Adolescent?Females
ERIC Educational Resources Information Center
Howard, Donna E.; Debnam, Katrina J.; Wang, Min Q.
2013-01-01
Background: The study provides 10-year trend data on the psychosocial correlates of physical dating violence (PDV) victimization among females who participated in the national Youth Risk Behavior Surveys of US high school students between 1999 and 2009. Methods: The dependent variable was PDV. Independent variables included 4 dimensions: violence,…
ERIC Educational Resources Information Center
Choi, Daisi; Tolova, Vera; Socha, Edward; Samenow, Charles P.
2013-01-01
Objective: This study sought to examine how specific substance-use behavior, including nonmedical prescription stimulant (NPS) use, among U.S. medical students correlates with their attitudes and beliefs toward professionalism. Method: An anonymous survey was distributed to all medical students at a private medical university (46% response rate).…
2013-08-01
cost due to potential warranty costs, repairs, and loss of market share. Reliability is the probability that the system will perform its intended...MCMC and splitting sampling schemes. Our proposed SS/STP method is presented in Section 4, including accuracy bounds and computational effort
A More Powerful Test in Three-Level Cluster Randomized Designs
ERIC Educational Resources Information Center
Konstantopoulos, Spyros
2011-01-01
Field experiments that involve nested structures frequently assign treatment conditions to entire groups (such as schools). A key aspect of the design of such experiments includes knowledge of the clustering effects that are often expressed via intraclass correlation. This study provides methods for constructing a more powerful test for the…
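The clustering effect mentioned above is usually quantified by the intraclass correlation. A sketch of the textbook one-way ANOVA estimator, ICC(1) = (MSB − MSW)/(MSB + (n − 1)·MSW) for k balanced groups of size n, is shown below; it only illustrates the quantity that drives power in such designs, not the paper's test construction.

```python
import numpy as np

def icc_oneway(groups):
    """One-way ANOVA intraclass correlation, ICC(1):
    (MSB - MSW) / (MSB + (n - 1) * MSW) for k groups of equal size n.
    A textbook estimator, shown only to illustrate the clustering
    effect behind power calculations in multilevel designs."""
    g = np.asarray(groups, float)  # shape (k, n): k clusters, n members each
    k, n = g.shape
    grand = g.mean()
    # between-group mean square
    msb = n * ((g.mean(axis=1) - grand) ** 2).sum() / (k - 1)
    # within-group mean square
    msw = ((g - g.mean(axis=1, keepdims=True)) ** 2).sum() / (k * (n - 1))
    return (msb - msw) / (msb + (n - 1) * msw)
```

When all variation lies between clusters the ICC is 1; when clusters are identical in mean, the estimator goes negative (its floor is −1/(n − 1)).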
NASA Astrophysics Data System (ADS)
Ni, X. Y.; Huang, H.; Du, W. P.
2017-02-01
The PM2.5 problem has become a major public crisis of great public concern, requiring an urgent response. Information about, and prediction of, PM2.5 from the perspective of atmospheric dynamic theory is still limited due to the complexity of the formation and development of PM2.5. In this paper, we attempted relevance analysis and short-term prediction of PM2.5 concentrations in Beijing, China, using multi-source data mining. A correlation analysis model relating PM2.5 to physical data (meteorological data, including regional average rainfall, daily mean temperature, average relative humidity, average wind speed, and maximum wind speed; and other pollutant concentration data, including CO, NO2, SO2, and PM10) and social media data (microblog data) was proposed, based on the multivariate statistical analysis method. The study found that, among these factors, average wind speed, the concentrations of CO, NO2, and PM10, and the daily number of microblog entries with the keywords 'Beijing; Air pollution' show high correlation with PM2.5 concentrations. The correlation analysis was further studied with a machine learning model, the back-propagation neural network (BPNN), which was found to perform better at correlation mining. Finally, an autoregressive integrated moving average (ARIMA) time-series model was applied to explore short-term prediction of PM2.5. The predicted results were in good agreement with the observed data. This study is useful for helping realize real-time monitoring, analysis, and pre-warning of PM2.5, and it also helps to broaden the application of big data and multi-source data mining methods.
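The first, screening stage of such a correlation analysis can be sketched as a plain Pearson ranking of candidate predictors against the PM2.5 series. The variable names below are illustrative, not the paper's exact predictor list, and the BPNN and ARIMA stages are separate models not shown here.

```python
import numpy as np

def screen_predictors(y, X, names):
    """Rank candidate predictors by absolute Pearson correlation with the
    target series (e.g. daily PM2.5). A sketch of the screening stage only;
    column names are illustrative, not the paper's variable list."""
    y = np.asarray(y, float)
    X = np.asarray(X, float)
    # Pearson r of each column of X against y
    r = [float(np.corrcoef(y, X[:, j])[0, 1]) for j in range(X.shape[1])]
    # strongest (in magnitude) correlations first
    return sorted(zip(names, r), key=lambda t: -abs(t[1]))
```

The ranked list then suggests which predictors (wind speed, co-pollutants, microblog counts, ...) are worth feeding into a downstream model.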
Fotina, I; Lütgendorf-Caucig, C; Stock, M; Pötter, R; Georg, D
2012-02-01
Inter-observer studies represent a valid method for the evaluation of target definition uncertainties and contouring guidelines. However, data from the literature do not yet give clear guidelines for reporting contouring variability. Thus, the purpose of this work was to compare and discuss various methods to determine variability on the basis of clinical cases and a literature review. In this study, 7 prostate and 8 lung cases were contoured on CT images by 8 experienced observers. Analysis of variability included descriptive statistics, calculation of overlap measures, and statistical measures of agreement. Cross tables with ratios and correlations were established for overlap parameters. It was shown that the minimal set of parameters to be reported should include at least one of three volume overlap measures (i.e., generalized conformity index, Jaccard coefficient, or conformation number). High correlation between these parameters and scatter of the results was observed. A combination of descriptive statistics, overlap measure, and statistical measure of agreement or reliability analysis is required to fully report the interrater variability in delineation.
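Two of the volume overlap measures named above, the Jaccard coefficient and the closely related Dice coefficient, can be computed directly from rasterized contour masks. This is a minimal sketch, not the study's analysis code; it assumes the delineated contours have been converted to boolean voxel arrays.

```python
import numpy as np

def overlap_measures(mask_a, mask_b):
    """Jaccard coefficient and Dice similarity for two binary contour
    masks of the same shape. Contours are treated as rasterized boolean
    arrays; both measures are 1 for identical masks, 0 for disjoint ones."""
    a = np.asarray(mask_a, bool)
    b = np.asarray(mask_b, bool)
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    jaccard = inter / union if union else 1.0
    dice = 2 * inter / (a.sum() + b.sum()) if (a.sum() + b.sum()) else 1.0
    return float(jaccard), float(dice)
```

Dice and Jaccard are monotonically related (D = 2J/(1 + J)), which is one reason reporting a single well-chosen overlap measure, plus a statistical agreement measure, suffices.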
Load-embedded inertial measurement unit reveals lifting performance.
Tammana, Aditya; McKay, Cody; Cain, Stephen M; Davidson, Steven P; Vitali, Rachel V; Ojeda, Lauro; Stirling, Leia; Perkins, Noel C
2018-07-01
Manual lifting of loads arises in many occupations as well as in activities of daily living. Prior studies explore lifting biomechanics and conditions implicated in lifting-induced injuries through laboratory-based experimental methods. This study introduces a new measurement method using load-embedded inertial measurement units (IMUs) to evaluate lifting tasks in varied environments outside of the laboratory. An example vertical load-lifting task, included in an outdoor obstacle course, is considered. The IMU data, in the form of the load acceleration and angular velocity, are used to estimate load vertical velocity and three lifting performance metrics: lifting time (speed), power, and motion smoothness. Large qualitative differences in these parameters distinguish exemplar high- and low-performance trials. These differences are further supported by subsequent statistical analyses of twenty-three trials (comprising 115 lift/lower cycles in total) from fourteen healthy participants. Results reveal that lifting time is strongly correlated with lifting power (as expected) but also correlated with motion smoothness. Thus, participants who lift rapidly do so with significantly greater power, using motions that minimize jerk. Copyright © 2018 Elsevier Ltd. All rights reserved.
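Estimating vertical velocity and a jerk-based smoothness score from the load's vertical acceleration can be sketched as below. This is a generic illustration under the assumption that gravity has already been removed from the signal; it is not the authors' processing pipeline.

```python
import numpy as np

def lift_metrics(accel_z, dt):
    """From a load-mounted IMU's vertical acceleration (gravity already
    removed, assumed), estimate vertical velocity by trapezoidal
    integration and a jerk-based (non-)smoothness score. A sketch of the
    kind of metrics described above, not the authors' exact pipeline."""
    a = np.asarray(accel_z, float)
    # trapezoidal integration from rest: v[0] = 0
    v = np.concatenate(([0.0], np.cumsum((a[1:] + a[:-1]) / 2.0 * dt)))
    # jerk = time derivative of acceleration; RMS jerk penalizes unsmooth motion
    jerk = np.diff(a) / dt
    smoothness_penalty = float(np.sqrt(np.mean(jerk ** 2)))  # higher = jerkier
    return v, smoothness_penalty
```

Real IMU processing also has to handle sensor bias and integration drift (e.g. with zero-velocity updates), which this sketch ignores.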
Kinjo, Masataka
2018-01-01
Neurodegenerative diseases, including amyotrophic lateral sclerosis (ALS), Alzheimer’s disease, Parkinson’s disease, and Huntington’s disease, are devastating proteinopathies with misfolded protein aggregates accumulating in neuronal cells. Inclusion bodies of protein aggregates are frequently observed in the neuronal cells of patients. Investigation of the underlying causes of neurodegeneration requires the establishment and selection of appropriate methodologies for detailed investigation of the state and conformation of protein aggregates. In the current review, we present an overview of the principles and application of several methodologies used for the elucidation of protein aggregation, specifically ones based on determination of fluctuations of fluorescence. The discussed methods include fluorescence correlation spectroscopy (FCS), imaging FCS, image correlation spectroscopy (ICS), photobleaching ICS (pbICS), number and brightness (N&B) analysis, super-resolution optical fluctuation imaging (SOFI), and transient state (TRAST) monitoring spectroscopy. Some of these methodologies are classical protein aggregation analyses, while others are not yet widely used. Collectively, the methods presented here should help the future development of research not only into protein aggregation but also neurodegenerative diseases. PMID:29570669
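At the heart of FCS is the normalized intensity autocorrelation G(τ) = ⟨δF(t)·δF(t+τ)⟩/⟨F⟩², with δF = F − ⟨F⟩. The direct estimator below is a sketch only; real FCS hardware and software typically use multi-tau correlators with logarithmic lag spacing.

```python
import numpy as np

def fcs_autocorrelation(F, max_lag):
    """Normalized FCS autocorrelation G(tau) = <dF(t) dF(t+tau)> / <F>^2
    computed from a fluorescence-intensity trace F. A direct-estimator
    sketch; multi-tau binning used in practice is omitted."""
    F = np.asarray(F, float)
    mean = F.mean()
    dF = F - mean  # intensity fluctuations about the mean
    n = len(F)
    return np.array([np.mean(dF[:n - lag] * dF[lag:])
                     for lag in range(max_lag)]) / mean**2
```

For aggregation studies, the amplitude G(0) scales inversely with the number of independent diffusers in the focal volume, and the decay time reflects their diffusion, which is what makes the fluctuation methods above sensitive to aggregate size.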
Structure and Dynamics Analysis on Plexin-B1 Rho GTPase Binding Domain as a Monomer and Dimer
2015-01-01
Plexin-B1 is a single-pass transmembrane receptor. Its Rho GTPase binding domain (RBD) can associate with small Rho GTPases and can also self-bind to form a dimer. In total, more than 400 ns of NAMD molecular dynamics simulations were performed on RBD monomer and dimer. Different analysis methods, such as root mean squared fluctuation (RMSF), order parameters (S2), dihedral angle correlation, transfer entropy, principal component analysis, and dynamical network analysis, were carried out to characterize the motions seen in the trajectories. RMSF results show that after binding, the L4 loop becomes more rigid, but the L2 loop and a number of residues in other regions become slightly more flexible. Calculating order parameters (S2) for CH, NH, and CO bonds on both backbone and side chain shows that the L4 loop becomes essentially rigid after binding, but part of the L1 loop becomes slightly more flexible. Backbone dihedral angle cross-correlation results show that loop regions such as the L1 loop including residues Q25 and G26, the L2 loop including residue R61, and the L4 loop including residues L89–R91, are highly correlated compared to other regions in the monomer form. Analysis of the correlated motions at these residues, such as Q25 and R61, indicate two signal pathways. Transfer entropy calculations on the RBD monomer and dimer forms suggest that the binding process should be driven by the L4 loop and C-terminal. However, after binding, the L4 loop functions as the motion responder. The signal pathways in RBD were predicted based on a dynamical network analysis method using the pathways predicted from the dihedral angle cross-correlation calculations as input. It is found that the shortest pathways predicted from both inputs can overlap, but signal pathway 2 (from F90 to R61) is more dominant and overlaps all of the routes of pathway 1 (from F90 to P111). 
This project confirms the allosteric mechanism in signal transmission inside the RBD network, which was in part proposed in the previous experimental study. PMID:24901636
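The RMSF analysis mentioned above reduces, for an aligned trajectory, to the per-atom root mean squared deviation from the average structure. A minimal sketch follows (not the study's NAMD analysis scripts; it assumes frames are already superposed on a reference).

```python
import numpy as np

def rmsf(traj):
    """Root mean squared fluctuation per atom from a trajectory of shape
    (n_frames, n_atoms, 3), assuming frames are already aligned.
    One of the flexibility measures used above; a minimal sketch."""
    x = np.asarray(traj, float)
    mean = x.mean(axis=0)  # average structure over all frames
    # squared displacement from the mean, summed over x/y/z, averaged over frames
    return np.sqrt(((x - mean) ** 2).sum(axis=2).mean(axis=0))
```

Rigid regions (like the L4 loop after binding) show low RMSF, flexible loops show high RMSF.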
Numerical evaluation of the bispectrum in multiple field inflation—the transport approach with code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dias, Mafalda; Frazer, Jonathan; Mulryne, David J.; Seery, David
2016-12-01
We present a complete framework for numerical calculation of the power spectrum and bispectrum in canonical inflation with an arbitrary number of light or heavy fields. Our method includes all relevant effects at tree-level in the loop expansion, including (i) interference between growing and decaying modes near horizon exit; (ii) correlation and coupling between species near horizon exit and on superhorizon scales; (iii) contributions from mass terms; and (iv) all contributions from coupling to gravity. We track the evolution of each correlation function from the vacuum state through horizon exit and the superhorizon regime, with no need to match quantum and classical parts of the calculation; when integrated, our approach corresponds exactly with the tree-level Schwinger or 'in-in' formulation of quantum field theory. In this paper we give the equations necessary to evolve all two- and three-point correlation functions together with suitable initial conditions. The final formalism is suitable to compute the amplitude, shape, and scale dependence of the bispectrum in models with |f_NL| of order unity or less, which are a target for future galaxy surveys such as Euclid, DESI and LSST. As an illustration we apply our framework to a number of examples, obtaining quantitatively accurate predictions for their bispectra for the first time. Two accompanying reports describe publicly-available software packages that implement the method.
Observations of strong ion-ion correlations in dense plasmas
Ma, T.; Fletcher, L.; Pak, A.; ...
2014-04-24
Using simultaneous spectrally, angularly, and temporally resolved x-ray scattering, we measure the pronounced ion-ion correlation peak in a strongly coupled plasma. Laser-driven shock-compressed aluminum at ~3× solid density is probed with high-energy photons at 17.9 keV created by molybdenum He-α emission in a laser-driven plasma source. The measured elastic scattering feature shows a well-pronounced correlation peak at a wave vector of k = 4 Å⁻¹. The magnitude of this correlation peak cannot be described by standard plasma theories employing a linear screened Coulomb potential. Advanced models, including a strong short-range repulsion due to the inner structure of the aluminum ions, are however in good agreement with the scattering data. These studies have demonstrated a new, highly accurate diagnostic technique to directly measure the state of compression and the ion-ion correlations. Furthermore, we have since applied this new method in single-shot wave-number resolved S(k) measurements to characterize the physical properties of dense plasmas.
Subsurface characterization with localized ensemble Kalman filter employing adaptive thresholding
NASA Astrophysics Data System (ADS)
Delijani, Ebrahim Biniaz; Pishvaie, Mahmoud Reza; Boozarjomehry, Ramin Bozorgmehry
2014-07-01
Ensemble Kalman filter (EnKF), a Monte Carlo sequential data assimilation method, has emerged as promising for subsurface media characterization during the past decade. Due to the high computational cost of large ensemble sizes, EnKF is limited to small ensemble sets in practice. This results in the appearance of spurious correlations in the covariance structure, leading to incorrect updates or probable divergence of the updated realizations. In this paper, a universal/adaptive thresholding method is presented to remove and/or mitigate the spurious correlation problem in the forecast covariance matrix. This method is then extended to regularize the Kalman gain directly. Four different thresholding functions are considered for thresholding the forecast covariance and gain matrices: hard, soft, lasso, and smoothly clipped absolute deviation (SCAD) functions. Three benchmarks are used to evaluate the performance of these methods: a small 1D linear model and two 2D water-flooding cases (in petroleum reservoirs) with different levels of heterogeneity/nonlinearity. It should be noted that besides the adaptive thresholding, standard distance-dependent localization and bootstrap Kalman gain are also implemented for comparison purposes. We assessed each setup with different ensemble sets to investigate the sensitivity of each method to ensemble size. The results indicate that thresholding the forecast covariance yields more reliable performance than thresholding the Kalman gain. Among the thresholding functions, SCAD is the most robust for both covariance and gain estimation. Our analyses emphasize that not all assimilation cycles require thresholding, and that it should be performed judiciously during the early assimilation cycles. The proposed adaptive thresholding scheme outperforms the other methods for subsurface characterization of the underlying benchmarks.
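The hard and soft thresholding rules applied to a forecast covariance matrix can be sketched as follows. This is illustrative only: the lasso and SCAD variants and the adaptive selection of the threshold itself are omitted.

```python
import numpy as np

def threshold_covariance(C, thresh, kind="soft"):
    """Hard/soft thresholding of the off-diagonal entries of a forecast
    covariance matrix, as used to suppress spurious long-range correlations
    in small-ensemble EnKF. A sketch of the hard and soft rules only;
    lasso and SCAD variants are analogous."""
    C = np.asarray(C, float).copy()
    off = ~np.eye(C.shape[0], dtype=bool)  # leave variances (diagonal) untouched
    if kind == "hard":
        # zero out small off-diagonal entries, keep large ones unchanged
        C[off & (np.abs(C) < thresh)] = 0.0
    else:
        # soft: shrink every off-diagonal magnitude toward zero by thresh
        C[off] = np.sign(C[off]) * np.maximum(np.abs(C[off]) - thresh, 0.0)
    return C
```

Hard thresholding keeps surviving covariances at full strength; soft thresholding additionally shrinks them, trading bias for stability.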
Effectiveness of feature and classifier algorithms in character recognition systems
NASA Astrophysics Data System (ADS)
Wilson, Charles L.
1993-04-01
At the first Census Optical Character Recognition Systems Conference, NIST generated accuracy data for more than character recognition systems. Most systems were tested on the recognition of isolated digits and upper- and lowercase alphabetic characters. The recognition experiments were performed on sample sizes of 58,000 digits and 12,000 upper- and lowercase alphabetic characters. The algorithms used by the 26 conference participants included rule-based methods, image-based methods, statistical methods, and neural networks. The neural network methods included multi-layer perceptrons, learned vector quantization, neocognitrons, and cascaded neural networks. In this paper, 11 different systems are compared using correlations between the answers of different systems, the decrease in error rate as a function of recognition confidence, and the writer dependence of recognition. This comparison shows that methods that used different algorithms for feature extraction and recognition performed with very high levels of correlation. This is true for neural network systems, hybrid systems, and statistically based systems, and leads to the conclusion that neural networks have not yet demonstrated a clear superiority over more conventional statistical methods. Comparison of these results with the models of Vapnik (for estimation problems), MacKay (for Bayesian statistical models), Moody (for effective parameterization), and Boltzmann models (for information content) demonstrates that as the limits of training data variance are approached, all classifier systems have similar statistical properties. The limiting condition can only be approached for sufficiently rich feature sets, because the accuracy limit is controlled by the available information content of the training set, which must pass through the feature extraction process prior to classification.
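The inter-system comparison above rests on correlating the per-sample error indicators of two recognizers; a minimal sketch:

```python
import numpy as np

def error_correlation(errors_a, errors_b):
    """Pearson correlation between the per-sample error indicators
    (1 = misrecognized, 0 = correct) of two recognition systems.
    A minimal sketch of the comparison statistic, not NIST's code."""
    a = np.asarray(errors_a, float)
    b = np.asarray(errors_b, float)
    return float(np.corrcoef(a, b)[0, 1])
```

High correlation means the two systems fail on the same samples, which is the observation that suggested a common, data-limited accuracy ceiling rather than an algorithmic advantage.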
Σ--antihyperon correlations in Z0 decay and investigation of the baryon production mechanism
NASA Astrophysics Data System (ADS)
Abbiendi, G.; Ainsley, C.; Åkesson, P. F.; Alexander, G.; Anagnostou, G.; Anderson, K. J.; Asai, S.; Axen, D.; Bailey, I.; Barberio, E.; Barillari, T.; Barlow, R. J.; Batley, R. J.; Bechtle, P.; Behnke, T.; Bell, K. W.; Bell, P. J.; Bella, G.; Bellerive, A.; Benelli, G.; Bethke, S.; Biebel, O.; Boeriu, O.; Bock, P.; Boutemeur, M.; Braibant, S.; Brown, R. M.; Burckhart, H. J.; Campana, S.; Capiluppi, P.; Carnegie, R. K.; Carter, A. A.; Carter, J. R.; Chang, C. Y.; Charlton, D. G.; Ciocca, C.; Csilling, A.; Cuffiani, M.; Dado, S.; Dallavalle, M.; de Roeck, A.; de Wolf, E. A.; Desch, K.; Dienes, B.; Dubbert, J.; Duchovni, E.; Duckeck, G.; Duerdoth, I. P.; Etzion, E.; Fabbri, F.; Ferrari, P.; Fiedler, F.; Fleck, I.; Ford, M.; Frey, A.; Gagnon, P.; Gary, J. W.; Geich-Gimbel, C.; Giacomelli, G.; Giacomelli, P.; Giunta, M.; Goldberg, J.; Gross, E.; Grunhaus, J.; Gruwé, M.; Gupta, A.; Hajdu, C.; Hamann, M.; Hanson, G. G.; Harel, A.; Hauschild, M.; Hawkes, C. M.; Hawkings, R.; Herten, G.; Heuer, R. D.; Hill, J. C.; Horváth, D.; Igo-Kemenes, P.; Ishii, K.; Jeremie, H.; Jovanovic, P.; Junk, T. R.; Kanzaki, J.; Karlen, D.; Kawagoe, K.; Kawamoto, T.; Keeler, R. K.; Kellogg, R. G.; Kennedy, B. W.; Kluth, S.; Kobayashi, T.; Kobel, M.; Komamiya, S.; Krämer, T.; Krasznahorkay, A.; Krieger, P.; von Krogh, J.; Kuhl, T.; Kupper, M.; Lafferty, G. D.; Landsman, H.; Lanske, D.; Lellouch, D.; Letts, J.; Levinson, L.; Lillich, J.; Lloyd, S. L.; Loebinger, F. K.; Lu, J.; Ludwig, A.; Ludwig, J.; Mader, W.; Marcellini, S.; Martin, A. J.; Mashimo, T.; Mättig, P.; McKenna, J.; McPherson, R. A.; Meijers, F.; Menges, W.; Merritt, F. S.; Mes, H.; Meyer, N.; Michelini, A.; Mihara, S.; Mikenberg, G.; Miller, D. J.; Mohr, W.; Mori, T.; Mutter, A.; Nagai, K.; Nakamura, I.; Nanjo, H.; Neal, H. A.; O'Neale, S. W.; Oh, A.; Oreglia, M. J.; Orito, S.; Pahl, C.; Pásztor, G.; Pater, J. R.; Pilcher, J. E.; Pinfold, J.; Plane, D. 
E.; Pooth, O.; Przybycień, M.; Quadt, A.; Rabbertz, K.; Rembser, C.; Renkel, P.; Roney, J. M.; Rossi, A. M.; Rozen, Y.; Runge, K.; Sachs, K.; Saeki, T.; Sarkisyan, E. K. G.; Schaile, A. D.; Schaile, O.; Scharff-Hansen, P.; Schieck, J.; Schörner-Sadenius, T.; Schröder, M.; Schumacher, M.; Seuster, R.; Shears, T. G.; Shen, B. C.; Sherwood, P.; Skuja, A.; Smith, A. M.; Sobie, R.; Söldner-Rembold, S.; Spano, F.; Stahl, A.; Strom, D.; Ströhmer, R.; Tarem, S.; Tasevsky, M.; Teuscher, R.; Thomson, M. A.; Torrence, E.; Toya, D.; Trigger, I.; Trócsányi, Z.; Tsur, E.; Turner-Watson, M. F.; Ueda, I.; Ujvári, B.; Vollmer, C. F.; Vannerem, P.; Vértesi, R.; Verzocchi, M.; Voss, H.; Vossebeld, J.; Ward, C. P.; Ward, D. R.; Watkins, P. M.; Watson, A. T.; Watson, N. K.; Wells, P. S.; Wengler, T.; Wermes, N.; Wetterling, D.; Wilson, G. W.; Wilson, J. A.; Wolf, G.; Wyatt, T. R.; Yamashita, S.; Zer-Zion, D.; Zivkovic, L.
2009-12-01
Data collected around √s = 91 GeV by the OPAL experiment at the LEP e+e- collider are used to study the mechanism of baryon formation. As the signature, the fraction of Σ- hyperons whose baryon number is compensated by the production of a overline{Σ-}, overline{Λ} or overline{Ξ-} antihyperon is determined. The method relies entirely on quantum number correlations of the baryons, and not rapidity correlations, making it more model independent than previous studies. Within the context of the JETSET implementation of the string hadronization model, the diquark baryon production model without the popcorn mechanism is strongly disfavored with a significance of 3.8 standard deviations including systematic uncertainties. It is shown that previous studies of the popcorn mechanism with Λoverline{Λ} and pπoverline{p} correlations are not conclusive, if parameter uncertainties are considered.
Dalmolin, Graziele de Lima; Lunardi, Valéria Lerch; Lunardi, Guilherme Lerch; Barlem, Edison Luiz Devos; da Silveira, Rosemary Silva
2014-01-01
Objective: to identify relationships between moral distress and Burnout in professional performance, from the perceptions of the experiences of nursing workers. Methods: this is a survey-type study of 375 nursing workers in three different hospitals of southern Rio Grande do Sul, applying adaptations of the Moral Distress Scale and the Maslach Burnout Inventory, validated and standardized for use in Brazil. Data validation occurred through factor analysis and Cronbach's alpha. For the data analysis, bivariate analysis using Pearson's correlation and multivariate analysis using multiple regression were performed. Results: the existence of a weak correlation between moral distress and Burnout was verified. A possible positive correlation between Burnout and therapeutic obstinacy, and a negative correlation between professional fulfillment and moral distress, were identified. Conclusion: further studies are needed that include mediating and moderating variables that may explain more clearly the models studied. PMID:24553701
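Cronbach's alpha, used above for data validation, follows the standard formula α = k/(k − 1)·(1 − Σ item variances / variance of the total score). A sketch of that formula (not the study's statistical software):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score).
    The standard internal-consistency formula; a minimal sketch."""
    x = np.asarray(items, float)
    k = x.shape[1]
    item_var = x.var(axis=0, ddof=1).sum()      # sum of per-item variances
    total_var = x.sum(axis=1).var(ddof=1)       # variance of each respondent's total
    return k / (k - 1) * (1 - item_var / total_var)
```

Perfectly parallel items (every respondent scores the same on all items, up to a shift) give α = 1; uncorrelated items drive α toward 0.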
Assessment of Communications-related Admissions Criteria in a Three-year Pharmacy Program
Tejada, Frederick R.; Lang, Lynn A.; Purnell, Miriam; Acedera, Lisa; Ngonga, Ferdinand
2015-01-01
Objective. To determine if there is a correlation between TOEFL and other admissions criteria that assess communications skills (ie, PCAT variables: verbal, reading, essay, and composite), interview, and observational scores and to evaluate TOEFL and these admissions criteria as predictors of academic performance. Methods. Statistical analyses included two sample t tests, multiple regression and Pearson’s correlations for parametric variables, and Mann-Whitney U for nonparametric variables, which were conducted on the retrospective data of 162 students, 57 of whom were foreign-born. Results. The multiple regression model of the other admissions criteria on TOEFL was significant. There was no significant correlation between TOEFL scores and academic performance. However, significant correlations were found between the other admissions criteria and academic performance. Conclusion. Since TOEFL is not a significant predictor of either communication skills or academic success of foreign-born PharmD students in the program, it may be eliminated as an admissions criterion. PMID:26430273
New method for identifying features of an image on a digital video display
NASA Astrophysics Data System (ADS)
Doyle, Michael D.
1991-04-01
The MetaMap process extends the concept of direct-manipulation human-computer interfaces to new limits. Its specific capabilities include the correlation of discrete image elements to relevant text information, and the correlation of these image features to other images as well as to program control mechanisms. The correlation is accomplished through reprogramming of both the color map and the image so that discrete image elements comprise unique sets of color indices. This process allows the correlation to be accomplished with very efficient data storage and program execution times. Image databases adapted to this process become object-oriented as a result. Very sophisticated interrelationships can be set up between images, text, and program control mechanisms using this process. An application of this interfacing process to the design of an interactive atlas of medical histology is described, as well as other possible applications. The MetaMap process is protected by U. S. patent #4
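The MetaMap idea of making palette indices carry object identity can be illustrated with a toy lookup: each image object is assigned a unique band of color-map indices, so the palette index of the pixel under the cursor alone identifies the object. The index bands and labels below are hypothetical, not taken from the patent.

```python
# Toy illustration of the MetaMap idea: each object owns a unique band of
# color-map indices, so a pixel's palette index identifies the object.
# Bands and labels are hypothetical examples, not the patented scheme.
OBJECT_BANDS = {
    range(0, 64): "background",
    range(64, 128): "nucleus",
    range(128, 192): "cytoplasm",
}

def object_at(palette_index):
    """Return the label of the object whose color-index band contains the
    pixel's palette index; None if the index is unassigned."""
    for band, label in OBJECT_BANDS.items():
        if palette_index in band:  # O(1) membership test on range objects
            return label
    return None
```

Because the lookup needs only the palette index, hit-testing costs no extra image storage, which is the efficiency claim made above.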
Dimensions of Attention Associated With the Microstructure of Corona Radiata White Matter.
Stave, Elise A; De Bellis, Michael D; Hooper, Steven R; Woolley, Donald P; Chang, Suk Ki; Chen, Steven D
2017-04-01
Mirsky proposed a model of attention that included these dimensions: focus/execute, sustain, stabilize, encode, and shift. The neural correlates of these dimensions were investigated within corona radiata subregions in healthy youth. Diffusion tensor imaging and neuropsychological assessments were conducted in 79 healthy, right-handed youth aged 4-17 years. Diffusion tensor imaging maps were analyzed using standardized parcellation methods. Partial Pearson correlations between neuropsychological standardized scores, representing these attention dimensions, and diffusion tensor imaging measures of corona radiata subregions were calculated after adjusting for gender and IQ. Significant correlations were found between the focus/execute, sustain, stabilize, and shift dimensions and imaging metrics in hypothesized corona radiata subregions. Results suggest that greater microstructural white matter integrity of the corona radiata is partly associated with attention across 4 attention dimensions. Findings suggest that white matter microstructure of the corona radiata is a neural correlate of several, but not all, attention dimensions.
Correlates of Incarceration Among Young Methamphetamine Users in Chiang Mai, Thailand
Thomson, Nicholas; Sutcliffe, Catherine G.; Sirirojn, Bangorn; Keawvichit, Rassamee; Wongworapat, Kanlaya; Sintupat, Kamolrawee; Aramrattana, Apinun
2009-01-01
Objectives. We examined correlates of incarceration among young methamphetamine users in Chiang Mai, Thailand in 2005 to 2006. Methods. We conducted a cross-sectional study among 1189 young methamphetamine users. Participants were surveyed about their recent drug use, sexual behaviors, and incarceration. Biological samples were obtained to test for sexually transmitted and viral infections. Results. Twenty-two percent of participants reported ever having been incarcerated. In multivariate analysis, risk behaviors including frequent public drunkenness, starting to use illicit drugs at an early age, involvement in the drug economy, tattooing, injecting drugs, and unprotected sex were correlated with a history of incarceration. HIV, HCV, and herpes simplex virus type 2 (HSV-2) infection were also correlated with incarceration. Conclusions. Incarcerated methamphetamine users are engaging in behaviors and being exposed to environments that put them at increased risk of infection and harmful practices. Alternatives to incarceration need to be explored for youths. PMID:18923109
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tokár, K.; Derian, R.; Mitas, L.
Using explicitly correlated fixed-node quantum Monte Carlo and density functional theory (DFT) methods, we study electronic properties, ground-state multiplets, ionization potentials, electron affinities, and low-energy fragmentation channels of charged half-sandwich and multidecker vanadium-benzene systems with up to 3 vanadium atoms, including both anions and cations. It is shown that, particularly in anions, electronic correlations play a crucial role; these effects are not systematically captured with any commonly used DFT functionals such as gradient corrected, hybrids, and range-separated hybrids. On the other hand, tightly bound cations can be described qualitatively by DFT. A comparison of DFT and quantum Monte Carlo provides an in-depth understanding of the electronic structure and properties of these correlated systems. The calculations also serve as a benchmark study of 3d molecular anions that require a balanced many-body description of correlations at both short- and long-range distances.
NASA Astrophysics Data System (ADS)
Champagne, Benoît; Botek, Edith; Nakano, Masayoshi; Nitta, Tomoshige; Yamaguchi, Kizashi
2005-03-01
The basis set and electron correlation effects on the static polarizability (α) and second hyperpolarizability (γ) are investigated ab initio for two model open-shell π-conjugated systems, the C5H7 radical and the C6H8 radical cation in their doublet state. Basis set investigations show that the linear and nonlinear responses of the radical cation require a less extended basis set than its neutral analog. Indeed, double-zeta-type basis sets supplemented by a set of d polarization functions but no diffuse functions already provide accurate (hyper)polarizabilities for C6H8, whereas diffuse functions, in particular p diffuse functions, are compulsory for C5H7. In addition to the 6-31G*+pd basis set, basis sets obtained by removing unnecessary diffuse functions from the augmented correlation consistent polarized valence double zeta basis set have been shown to provide (hyper)polarizability values of similar quality to more extended basis sets such as augmented correlation consistent polarized valence triple zeta and doubly augmented correlation consistent polarized valence double zeta. Using the selected atomic basis sets, the (hyper)polarizabilities of these two model compounds are calculated at different levels of approximation in order to assess the impact of including electron correlation. As a function of the calculation method, antiparallel and parallel variations have been demonstrated for α and γ of the two model compounds, respectively.
For the polarizability, the unrestricted Hartree-Fock and unrestricted second-order Møller-Plesset methods bracket the reference value obtained at the unrestricted coupled cluster singles and doubles with a perturbative inclusion of the triples level, whereas the projected unrestricted second-order Møller-Plesset results are in much closer agreement with the unrestricted coupled cluster singles and doubles with a perturbative inclusion of the triples values than the projected unrestricted Hartree-Fock results. Moreover, the differences between the restricted open-shell Hartree-Fock and restricted open-shell second-order Møller-Plesset methods are small. Regarding the second hyperpolarizability, the unrestricted Hartree-Fock and unrestricted second-order Møller-Plesset values remain of similar quality, while using spin-projected schemes fails for the charged system but performs well for the neutral one. The restricted open-shell schemes, and especially the restricted open-shell second-order Møller-Plesset method, provide for both compounds γ values close to the results obtained at the unrestricted coupled cluster level including singles and doubles with a perturbative inclusion of the triples. Thus, to obtain well-converged α and γ values at low-order electron correlation levels, the removal of spin contamination is a necessary but not a sufficient condition. Density-functional theory calculations of α and γ have also been carried out using several exchange-correlation functionals. Those employing hybrid exchange-correlation functionals have been shown to reproduce fairly well the reference coupled cluster polarizability and second hyperpolarizability values. In addition, inclusion of Hartree-Fock exchange is of major importance for determining accurate polarizabilities, whereas for the second hyperpolarizability the gradient corrections are large.
Erchinger, Friedemann; Engjom, Trond; Gudbrandsen, Oddrun Anita; Tjora, Erling; Gilja, Odd H; Dimcevski, Georg
2016-01-01
We have recently evaluated a short endoscopic secretin test for exocrine pancreatic function. Bicarbonate concentration in duodenal juice is an important parameter in this test. Measurement of bicarbonate by back titration, the gold standard method, is time consuming, expensive, and technically difficult, so a simplified method is warranted. We aimed to evaluate an automated spectrophotometric method in samples spanning the effective range of bicarbonate concentrations in duodenal juice. We also evaluated whether freezing of samples before analysis would affect the results. Patients routinely examined with the short endoscopic secretin test and suspected of decreased pancreatic function for various reasons were included. Bicarbonate in duodenal juice was quantified by back titration and automated spectrophotometry. Both fresh and thawed samples were analysed spectrophotometrically. 177 samples from 71 patients were analysed. The correlation coefficient of all measurements was r = 0.98 (p < 0.001); the correlation coefficient of fresh versus frozen samples analysed with automated spectrophotometry (n = 25) was r = 0.96 (p < 0.001). Conclusions: The measurement of bicarbonate in fresh and thawed samples by automated spectrophotometric analysis correlates excellently with the back titration gold standard. This is a major simplification of direct pancreatic function testing and allows a wider distribution of bicarbonate testing in duodenal juice. Extreme values of bicarbonate concentration obtained by the autoanalyser method have to be interpreted with caution. Copyright © 2016 IAP and EPC. Published by Elsevier India Pvt Ltd. All rights reserved.
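As an illustration of the kind of method-agreement check reported above, the sketch below computes a Pearson correlation between paired measurements from two hypothetical assays; the data are synthetic stand-ins, not the study's bicarbonate values, and the r = 0.98 in the abstract refers to the real measurements.

```python
# Synthetic method-comparison sketch: a "reference" assay and a "new"
# assay measuring the same quantity, correlated via Pearson's r.
import numpy as np

rng = np.random.default_rng(2)
titration = rng.uniform(20, 120, size=60)            # hypothetical mmol/L
spectro = titration + rng.normal(scale=3, size=60)   # second method + noise
r = np.corrcoef(titration, spectro)[0, 1]            # Pearson correlation
```

Note that a high r shows association, not agreement in absolute values; a Bland-Altman analysis is the usual complement when calibrating one method against another.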
Assessment of Weighted Quantile Sum Regression for Modeling Chemical Mixtures and Cancer Risk
Czarnota, Jenna; Gennings, Chris; Wheeler, David C
2015-01-01
In evaluation of cancer risk related to environmental chemical exposures, the effect of many chemicals on disease is ultimately of interest. However, because of potentially strong correlations among chemicals that occur together, traditional regression methods suffer from collinearity effects, including regression coefficient sign reversal and variance inflation. In addition, penalized regression methods designed to remediate collinearity may have limitations in selecting the truly bad actors among many correlated components. The recently proposed method of weighted quantile sum (WQS) regression attempts to overcome these problems by estimating a body burden index, which identifies important chemicals in a mixture of correlated environmental chemicals. Our focus was on assessing through simulation studies the accuracy of WQS regression in detecting subsets of chemicals associated with health outcomes (binary and continuous) in site-specific analyses and in non-site-specific analyses. We also evaluated the performance of the penalized regression methods of lasso, adaptive lasso, and elastic net in correctly classifying chemicals as bad actors or unrelated to the outcome. We based the simulation study on data from the National Cancer Institute Surveillance Epidemiology and End Results Program (NCI-SEER) case–control study of non-Hodgkin lymphoma (NHL) to achieve realistic exposure situations. Our results showed that WQS regression had good sensitivity and specificity across a variety of conditions considered in this study. The shrinkage methods had a tendency to incorrectly identify a large number of components, especially in the case of strong association with the outcome. PMID:26005323
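The WQS index construction described above can be sketched as follows. The data, the quartile scoring, and the equal weights are illustrative assumptions: in practice the weights are estimated (e.g., by bootstrap) subject to non-negativity and sum-to-one constraints, which is what lets the index identify the important chemicals.

```python
# Hypothetical sketch of a weighted quantile sum (WQS) index: score each
# chemical into quartiles (0-3), combine scores with non-negative weights
# summing to 1, and regress the outcome on the resulting index.
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 5                      # subjects, chemicals (hypothetical)
X = rng.lognormal(size=(n, p))     # exposures; correlated in real data

# Quartile-score each chemical: count how many quartile cuts are exceeded
q = np.quantile(X, [0.25, 0.5, 0.75], axis=0)             # shape (3, p)
scores = np.sum(X[:, None, :] >= q[None, :, :], axis=1)   # shape (n, p)

w = np.full(p, 1.0 / p)            # illustrative equal weights, sum to 1
wqs_index = scores @ w             # body-burden index per subject

# Linear regression of a simulated continuous outcome on the index
y = 0.5 * wqs_index + rng.normal(size=n)
beta, intercept = np.polyfit(wqs_index, y, 1)
```

The single index sidesteps the collinearity problems noted above because only one regression coefficient (for the index) is estimated, rather than one per correlated chemical.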
NASA Technical Reports Server (NTRS)
Corrigan, J. C.; Cronkhite, J. D.; Dompka, R. V.; Perry, K. S.; Rogers, J. P.; Sadler, S. G.
1989-01-01
Under a research program designated Design Analysis Methods for VIBrationS (DAMVIBS), existing analytical methods are used for calculating coupled rotor-fuselage vibrations of the AH-1G helicopter for correlation with flight test data from an AH-1G Operational Load Survey (OLS) test program. The analytical representation of the fuselage structure is based on a NASTRAN finite element model (FEM), which has been developed, extensively documented, and correlated with ground vibration tests. One procedure that was used for predicting coupled rotor-fuselage vibrations using the advanced Rotorcraft Flight Simulation Program C81 and NASTRAN is summarized. Detailed descriptions of the analytical formulation of rotor dynamics equations, fuselage dynamic equations, coupling between the rotor and fuselage, and solutions to the total system of equations in C81 are included. Analytical predictions of hub shears for main rotor harmonics 2p, 4p, and 6p generated by C81 are used in conjunction with 2p OLS measured control loads and a 2p lateral tail rotor gearbox force, representing downwash impingement on the vertical fin, to excite the NASTRAN model. NASTRAN is then used to correlate with measured OLS flight test vibrations. Blade load comparisons predicted by C81 showed good agreement. In general, the fuselage vibration correlations show good agreement between analysis and test in vibration response through 15 to 20 Hz.
NASA Astrophysics Data System (ADS)
Gaede, Stewart; Carnes, Gregory; Yu, Edward; Van Dyk, Jake; Battista, Jerry; Lee, Ting-Yim
2009-01-01
The purpose of this paper is to describe a non-invasive method to monitor the motion of internal organs affected by respiration without using external markers or spirometry, to test the correlation with external markers, and to calculate any time shift between the datasets. Ten lung cancer patients were CT scanned with a GE LightSpeed Plus 4-Slice CT scanner operating in a ciné mode. We retrospectively reconstructed the raw CT data to obtain consecutive 0.5 s reconstructions at 0.1 s intervals to increase image sampling. We defined regions of interest containing tissue interfaces, including tumour/lung interfaces that move due to breathing on multiple axial slices and measured the mean CT number versus respiratory phase. Tumour motion was directly correlated with external marker motion, acquired simultaneously, using the sample coefficient of determination, r2. Only three of the ten patients showed correlation higher than r2 = 0.80 between tumour motion and external marker position. However, after taking into account time shifts (ranging between 0 s and 0.4 s) between the two data sets, all ten patients showed correlation better than r2 = 0.8. This non-invasive method for monitoring the motion of internal organs is an effective tool that can assess the use of external markers for 4D-CT imaging and respiratory-gated radiotherapy on a patient-specific basis.
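The lag search described above, finding the time shift that maximizes r² between the internal signal and the external marker, can be sketched with synthetic breathing traces. The sinusoids and the 0.3 s shift are illustrative assumptions, not patient data; the 0.1 s step matches the paper's image sampling interval.

```python
# Slide one trace against the other in 0.1 s steps (0.0-0.4 s, the range
# of shifts reported above) and keep the lag that maximizes r^2.
import numpy as np

dt = 0.1                                         # sampling interval (s)
t = np.arange(0, 30, dt)                         # 30 s of synthetic data
marker = np.sin(2 * np.pi * t / 4.0)             # external marker trace
internal = np.sin(2 * np.pi * (t - 0.3) / 4.0)   # internal signal, 0.3 s late

best_lag, best_r2 = 0.0, -1.0
for k in range(0, 5):                            # candidate shifts k*dt
    if k == 0:
        a, b = internal, marker
    else:
        a, b = internal[k:], marker[:-k]         # advance internal by k*dt
    r = np.corrcoef(a, b)[0, 1]
    if r * r > best_r2:
        best_lag, best_r2 = k * dt, r * r
```

With the 0.3 s lag built into the synthetic data, the search recovers best_lag = 0.3 and an r² near 1, mirroring how the paper's weakly correlated patients became well correlated once the time shift was accounted for.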
Scaling analysis of stock markets
NASA Astrophysics Data System (ADS)
Bu, Luping; Shang, Pengjian
2014-06-01
In this paper, we apply the detrended fluctuation analysis (DFA), local scaling detrended fluctuation analysis (LSDFA), and detrended cross-correlation analysis (DCCA) to investigate correlations of several stock markets. The DFA method detects long-range correlations in time series. The LSDFA method reveals more local properties by using local scaling exponents. The DCCA method quantifies the cross-correlation of two non-stationary time series. We report the auto-correlation and cross-correlation behaviors of three western and three Chinese stock markets in the periods 2004-2006 (before the global financial crisis), 2007-2009 (during the global financial crisis), and 2010-2012 (after the global financial crisis) using the DFA, LSDFA, and DCCA methods. We find that correlations of stocks are influenced by the economic systems of different countries and by the financial crisis. The results indicate that auto-correlations are stronger in Chinese stocks than in western stocks in every period, and stronger after the global financial crisis for every stock except Shen Cheng; the LSDFA shows more comprehensive and detailed features than the traditional DFA method, as well as the economic integration of China with the world after the global financial crisis. Turning to cross-correlations, the six stock markets show different properties, while the three Chinese stocks reach their weakest cross-correlations during the global financial crisis.
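A minimal sketch of the DFA procedure named above, assuming first-order (linear) detrending and synthetic input: integrate the mean-subtracted series, split the profile into windows, detrend each window, and regress log F(s) on log s to estimate the scaling exponent. For uncorrelated noise the exponent should come out near 0.5; long-range correlated series give larger values.

```python
# DFA-1 sketch: fluctuation function F(s) over window sizes s, then a
# log-log fit whose slope is the scaling exponent alpha.
import numpy as np

def dfa(x, scales):
    y = np.cumsum(x - np.mean(x))          # integrated profile
    F = []
    for s in scales:
        n_win = len(y) // s
        segs = y[: n_win * s].reshape(n_win, s)
        t = np.arange(s)
        ms = []
        for seg in segs:
            coef = np.polyfit(t, seg, 1)   # local linear trend
            ms.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(ms)))     # RMS fluctuation at scale s
    alpha, _ = np.polyfit(np.log(scales), np.log(F), 1)
    return alpha

rng = np.random.default_rng(1)
alpha = dfa(rng.normal(size=4096), scales=[16, 32, 64, 128, 256])
```

LSDFA and DCCA extend this same skeleton: LSDFA fits the exponent locally over subranges of scales, and DCCA replaces the squared residuals with products of detrended residuals from two series.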
Long-term follow-up of acute isolated accommodation insufficiency.
Lee, Jung Jin; Baek, Seung-Hee; Kim, Ungsoo Samuel
2013-04-01
To define the long-term results of accommodation insufficiency and to investigate the correlation between accommodation insufficiency and other factors including near point of convergence (NPC), age, and refractive errors. From January 2008 to December 2009, 11 patients with acute near vision disturbance and a remote near point of accommodation (NPA) were evaluated. Full ophthalmologic examinations, including best corrected visual acuity, manifest refraction, and prism cover tests, were performed. Accommodation ability was measured by NPA using the push-up method. We compared accommodation insufficiency with factors including age, refractive errors, and NPC. We also investigated the patients' recovery from loss of accommodation. The mean age of the patients was 20 years (range, 9 to 34 years). Five of the 11 patients were female. The mean refractive error was -0.6 diopters (range, -3.5 to +0.25 diopters), and 8 of 11 patients (73%) had emmetropia (+0.50 to -0.50 diopters). No abnormalities were found in brain imaging tests. Refractive errors were not correlated with NPA or NPC (rho = 0.148, p = 0.511; rho = 0.319, p = 0.339, respectively). The correlation between age and NPA was not significant (rho = -0.395, p = 0.069). However, the correlation between age and NPC was negative (rho = -0.508, p = 0.016). Three of the 11 patients were lost to follow-up, and 6 of 8 patients had permanent insufficiency of accommodation. Accommodation insufficiency is most common in emmetropia; however, refractive errors and age are not correlated with accommodation insufficiency. Dysfunction of accommodation can be permanent in isolated accommodation insufficiency.
Tondelli, Manuela; Barbarulo, Anna M.; Vinceti, Giulia; Vincenzi, Chiara; Chiari, Annalisa; Nichelli, Paolo F.; Zamboni, Giovanna
2018-01-01
Patients with Alzheimer's Disease (AD) and Mild Cognitive Impairment (MCI) may present with anosognosia for their cognitive deficits. Three different methods have commonly been used to measure anosognosia in patients with AD and MCI, but no studies have established whether they share similar neuroanatomical correlates. The purpose of this study was to investigate whether anosognosia scores obtained with the three most commonly used assessment methods relate to focal atrophy in AD and MCI patients, in order to improve understanding of the neural basis of anosognosia in dementia. Anosognosia was evaluated in 27 patients (15 MCI and 12 AD) through clinical rating (Clinical Insight Rating Scale, CIRS), patient-informant discrepancy (Anosognosia Questionnaire Dementia, AQ-D), and performance discrepancy on different cognitive domains (self-appraisal discrepancies, SADs). Voxel-based morphometry correlational analyses were performed on magnetic resonance imaging (MRI) data with each anosognosia score. Increasing anosognosia on any measurement (CIRS, AQ-D, SADs) was associated with increasing gray matter atrophy in the medial temporal lobe, including the right hippocampus. Our results support a unitary mechanism of anosognosia in AD and MCI, in which the medial temporal lobes play a key role, irrespective of the assessment method used. This is in accordance with models suggesting that anosognosia in AD is primarily caused by a decline in mnemonic processes. PMID:29867398
Comparison of scoring approaches for the NEI VFQ-25 in low vision.
Dougherty, Bradley E; Bullimore, Mark A
2010-08-01
The aim of this study was to evaluate different approaches to scoring the National Eye Institute Visual Functioning Questionnaire-25 (NEI VFQ-25) in patients with low vision, including scoring by the standard method, by Rasch analysis, and by use of an algorithm created by Massof to approximate the Rasch person measure. Subscale validity and the use of a 7-item short-form instrument proposed by Ryan et al. were also investigated. NEI VFQ-25 data from 50 patients with low vision were analyzed using the standard method of summing Likert-type scores and calculating an overall average, Rasch analysis using Winsteps software, and the Massof algorithm in Excel. Correlations between scores were calculated. Rasch person separation reliability and other indicators were calculated to determine the validity of the subscales and of the 7-item instrument. Scores calculated using all three methods were highly correlated, but evidence of floor and ceiling effects was found with the standard scoring method. None of the subscales investigated proved valid. The 7-item instrument showed acceptable person separation reliability and good targeting and item performance. Although standard scores and Rasch scores are highly correlated, Rasch analysis has the advantages of eliminating floor and ceiling effects and producing interval-scaled data. The Massof algorithm for approximation of the Rasch person measure performed well in this group of low-vision patients. The validity of the VFQ-25 subscales should be reconsidered.
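The "standard method" contrasted above can be sketched as a linear rescaling of Likert responses to a 0-100 range followed by averaging. The item coding below is illustrative, not the actual NEI VFQ-25 scoring rules; it shows where the floor and ceiling effects come from, since a respondent at the extreme category of every item is pinned to exactly 0 or 100.

```python
# Hypothetical Likert scoring sketch: responses coded 1 (best) through
# n_levels (worst) are rescaled so 1 -> 100 and n_levels -> 0, then averaged.
def standard_score(responses, n_levels=5):
    """Average of Likert responses rescaled to 0-100."""
    rescaled = [100.0 * (n_levels - r) / (n_levels - 1) for r in responses]
    return sum(rescaled) / len(rescaled)

# Intermediate response patterns average linearly between the extremes.
overall = standard_score([1, 2, 3, 2, 1])   # -> 80.0
```

Rasch analysis, by contrast, models the probability of each response category as a function of person ability and item difficulty, which is what yields interval-scaled measures without the hard 0/100 bounds.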
Frequency-resolved Monte Carlo.
López Carreño, Juan Camilo; Del Valle, Elena; Laussy, Fabrice P
2018-05-03
We adapt the Quantum Monte Carlo method to the cascaded formalism of quantum optics, allowing us to simulate the emission of photons of known energy. Statistical processing of the photon clicks thus collected agrees with the theory of frequency-resolved photon correlations, extending the range of applications based on correlations of photons of prescribed energy, in particular those of a photon-counting character. We apply the technique to autocorrelations of photon streams from a two-level system under coherent and incoherent pumping, including the Mollow triplet regime where we demonstrate the direct manifestation of leapfrog processes in producing an increased rate of two-photon emission events.
NASA Astrophysics Data System (ADS)
Bermúdez, Vicente; Pastor, José V.; López, J. Javier; Campos, Daniel
2014-06-01
A study of soot measurement deviation using a diffusion charger sensor with three dilution ratios was conducted in order to obtain an optimum setting that yields accurate measurements of the soot mass emitted by a light-duty diesel engine under transient operating conditions. The paper includes three experimental phases: an experimental validation of the measurement settings in steady-state operating conditions; evaluation of the proposed setting under the New European Driving Cycle; and a study of correlations for different measurement techniques. These correlations provide a reliable tool for estimating soot emission from light extinction measurements or from the accumulation particle mode concentration. There are several methods and correlations for estimating soot concentration in the literature, but most of them were assessed for steady-state operating points. In this case, the correlations are obtained from more than 4000 points measured in transient conditions. The two new correlations, which deviate by less than 4% from the reference measurement, are presented in this paper.