Sample records for correlated sampling methods

  1. Measures and models for angular correlation and angular-linear correlation [correlation of random variables]

    NASA Technical Reports Server (NTRS)

    Johnson, R. A.; Wehrly, T.

    1976-01-01

    Population models for dependence between two angular measurements and for dependence between an angular and a linear observation are proposed. The method of canonical correlations first leads to new population and sample measures of dependence in this latter situation. An example relating wind direction to the level of a pollutant is given. Next, applied to pairs of angular measurements, the method yields previously proposed sample measures in some special cases and a new sample measure in general.

  2. Estimating population size with correlated sampling unit estimates

    Treesearch

    David C. Bowden; Gary C. White; Alan B. Franklin; Joseph L. Ganey

    2003-01-01

    Finite population sampling theory is useful in estimating total population size (abundance) from abundance estimates of each sampled unit (quadrat). We develop estimators that allow correlated quadrat abundance estimates, even for quadrats in different sampling strata. Correlated quadrat abundance estimates based on mark–recapture or distance sampling methods occur...

  3. Time-dependent importance sampling in semiclassical initial value representation calculations for time correlation functions.

    PubMed

    Tao, Guohua; Miller, William H

    2011-07-14

    An efficient time-dependent importance sampling method is developed for the Monte Carlo calculation of time correlation functions via the initial value representation (IVR) of semiclassical (SC) theory. A prefactor-free time-dependent sampling function weights the importance of a trajectory based on the magnitude of its contribution to the time correlation function, and global trial moves are used to facilitate efficient sampling of the phase space of initial conditions. The method can be applied generally to sample rare events efficiently while avoiding becoming trapped in a local region of phase space. Results presented in the paper for two system-bath models demonstrate the efficiency of this new importance sampling method for full SC-IVR calculations.
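
    As a self-contained illustration of the rare-event point (a generic importance-sampling sketch, not the paper's SC-IVR machinery), consider estimating the small Gaussian tail probability P(X > 4): shifting the proposal to where the contributions are large and reweighting each draw by the density ratio gives an accurate answer where plain Monte Carlo sees almost no hits.

```python
import math
import random
from statistics import NormalDist

def naive_tail(n, a=4.0, seed=0):
    """Plain Monte Carlo estimate of P(X > a) for X ~ N(0, 1)."""
    rnd = random.Random(seed)
    return sum(rnd.gauss(0.0, 1.0) > a for _ in range(n)) / n

def importance_tail(n, a=4.0, seed=0):
    """Importance sampling with proposal N(a, 1); each draw x carries the
    weight phi(x) / phi(x - a) = exp(-a*x + a*a/2)."""
    rnd = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rnd.gauss(a, 1.0)
        if x > a:
            total += math.exp(-a * x + a * a / 2.0)
    return total / n

exact = 1.0 - NormalDist().cdf(4.0)   # roughly 3.2e-05
```

    With n = 20000 the naive estimator typically returns zero or a single lucky hit, while the importance-sampled estimate lands within a few percent of the exact value.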

  4. A combined method for correlative 3D imaging of biological samples from macro to nano scale

    NASA Astrophysics Data System (ADS)

    Kellner, Manuela; Heidrich, Marko; Lorbeer, Raoul-Amadeus; Antonopoulos, Georgios C.; Knudsen, Lars; Wrede, Christoph; Izykowski, Nicole; Grothausmann, Roman; Jonigk, Danny; Ochs, Matthias; Ripken, Tammo; Kühnel, Mark P.; Meyer, Heiko

    2016-10-01

    Correlative analysis requires examination of a specimen from macro to nano scale as well as applicability of analytical methods ranging from morphological to molecular. Accomplishing this with one and the same sample is laborious at best, due to deformation and biodegradation during measurements or intermediary preparation steps. Furthermore, data alignment using differing imaging techniques turns out to be a complex task, which considerably complicates the interconnection of results. We present correlative imaging of the accessory rat lung lobe by combining a modified Scanning Laser Optical Tomography (SLOT) setup with a specially developed sample preparation method (CRISTAL). CRISTAL is a resin-based embedding method that optically clears the specimen while allowing sectioning and preventing degradation. We applied and correlated SLOT with Multi Photon Microscopy, histological and immunofluorescence analysis as well as Transmission Electron Microscopy, all in the same sample. Thus, combining CRISTAL with SLOT enables the correlative utilization of a vast variety of imaging techniques.

  5. Sample size determination for equivalence assessment with multiple endpoints.

    PubMed

    Sun, Anna; Dong, Xiaoyu; Tsong, Yi

    2014-01-01

    Equivalence assessment between a reference and test treatment is often conducted by two one-sided tests (TOST). The corresponding power function and sample size determination can be derived from a joint distribution of the sample mean and sample variance. When an equivalence trial is designed with multiple endpoints, it often involves several sets of two one-sided tests. A naive approach to sample size determination in this case would select the largest sample size required among the endpoints. However, such a method ignores the correlation among endpoints. When the objective is to reject all endpoints and the endpoints are uncorrelated, the power function is the product of the power functions for the individual endpoints. With correlated endpoints, the sample size and power should be adjusted for the correlation. In this article, we propose the exact power function for the equivalence test with multiple endpoints adjusted for correlation under both crossover and parallel designs. We further discuss the differences in sample size between the naive method and the correlation-adjusted method, and illustrate with an in vivo bioequivalence crossover study with area under the curve (AUC) and maximum concentration (Cmax) as the two endpoints.
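
    The effect of endpoint correlation on joint TOST power can be sketched with a small Monte Carlo. The code below is an illustrative, known-variance (z-test) simplification for a parallel two-group design, not the paper's exact power function; all parameter values are hypothetical.

```python
import numpy as np
from statistics import NormalDist

def joint_tost_power(n, delta=(0.05, 0.05), sd=(0.3, 0.3), rho=0.6,
                     margin=float(np.log(1.25)), alpha=0.05,
                     n_sim=20000, seed=1):
    """Probability that TOST passes on BOTH endpoints (e.g. log-AUC and
    log-Cmax) with n subjects per arm and endpoint correlation rho."""
    rng = np.random.default_rng(seed)
    z = NormalDist().inv_cdf(1.0 - alpha)
    cov = np.array([[sd[0] ** 2, rho * sd[0] * sd[1]],
                    [rho * sd[0] * sd[1], sd[1] ** 2]])
    # sampling distribution of the two mean differences (parallel design)
    dhat = rng.multivariate_normal(delta, 2.0 * cov / n, size=n_sim)
    se = np.sqrt(2.0 * np.diag(cov) / n)
    # per-endpoint TOST: one-sided (1 - alpha) bounds inside the margins
    passed = (dhat + z * se < margin) & (dhat - z * se > -margin)
    return passed.all(axis=1).mean()       # all endpoints must pass
```

    Raising rho raises the joint power, which is why the naive "largest single-endpoint n" rule is conservative for positively correlated endpoints.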

  6. Relations between Brain Structure and Attentional Function in Spina Bifida: Utilization of Robust Statistical Approaches

    PubMed Central

    Kulesz, Paulina A.; Tian, Siva; Juranek, Jenifer; Fletcher, Jack M.; Francis, David J.

    2015-01-01

    Objective: Weak structure-function relations for brain and behavior may stem from problems in estimating these relations in small clinical samples with frequently occurring outliers. In the current project, we focused on the utility of using alternative statistics to estimate these relations. Method: Fifty-four children with spina bifida meningomyelocele performed attention tasks and received MRI of the brain. Using a bootstrap sampling process, the Pearson product-moment correlation was compared with four robust correlations: the percentage bend correlation, the Winsorized correlation, the skipped correlation using the Donoho-Gasko median, and the skipped correlation using the minimum volume ellipsoid estimator. Results: All methods yielded similar estimates of the relations between measures of brain volume and attention performance. The similarity of estimates across correlation methods suggested that the weak structure-function relations previously found in many studies are not readily attributable to the presence of outlying observations and other factors that violate the assumptions behind the Pearson correlation. Conclusions: Given the difficulty of assembling large samples for brain-behavior studies, estimating correlations using multiple, robust methods may enhance the statistical conclusion validity of studies yielding small, but often clinically significant, correlations. PMID:25495830
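
    Of the robust statistics listed, the Winsorized correlation is the simplest to sketch: clamp each variable at its lower and upper order statistics, then take the ordinary Pearson correlation of the clamped values (illustrative code, not the authors' implementation):

```python
import numpy as np

def winsorize(x, g=0.2):
    """Clamp the k = int(g*n) smallest and largest values to the nearest
    retained order statistics, preserving the original pairing."""
    x = np.asarray(x, dtype=float)
    k = int(g * len(x))
    xs = np.sort(x)
    return np.clip(x, xs[k], xs[-k - 1])

def winsorized_corr(x, y, g=0.2):
    """Winsorized correlation: Pearson correlation of the clamped data."""
    return np.corrcoef(winsorize(x, g), winsorize(y, g))[0, 1]

# one gross outlier pair flips the sign of the Pearson correlation,
# but barely moves the Winsorized version
x = np.array(list(range(30)) + [1000.0])
y = np.array(list(range(30)) + [-1000.0])
```

    On this toy data the Pearson correlation is strongly negative because of the single outlier pair, while the Winsorized correlation stays strongly positive.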

  7. A method for the estimation of the significance of cross-correlations in unevenly sampled red-noise time series

    NASA Astrophysics Data System (ADS)

    Max-Moerbeck, W.; Richards, J. L.; Hovatta, T.; Pavlidou, V.; Pearson, T. J.; Readhead, A. C. S.

    2014-11-01

    We present a practical implementation of a Monte Carlo method to estimate the significance of cross-correlations in unevenly sampled time series of data, whose statistical properties are modelled with a simple power-law power spectral density. This implementation builds on published methods; we introduce a number of improvements in the normalization of the cross-correlation function estimate and a bootstrap method for estimating the significance of the cross-correlations. A closely related matter is the estimation of a model for the light curves, which is critical for the significance estimates. We present a graphical and quantitative demonstration that uses simulations to show how common it is to get high cross-correlations for unrelated light curves with steep power spectral densities. This demonstration highlights the dangers of interpreting them as signs of a physical connection. We show that by using interpolation and the Hanning sampling window function we are able to reduce the effects of red-noise leakage and to recover steep simple power-law power spectral densities. We also introduce the use of a Neyman construction for the estimation of the errors in the power-law index of the power spectral density. This method provides a consistent way to estimate the significance of cross-correlations in unevenly sampled time series of data.
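
    The central warning, that unrelated red-noise light curves frequently show large peak cross-correlations, is easy to reproduce. The sketch below uses evenly sampled, Timmer and Koenig style frequency-domain simulations with PSD proportional to f**(-beta), rather than the paper's full uneven-sampling treatment:

```python
import numpy as np

def rednoise(n, beta, rng):
    """Gaussian time series with power-law PSD P(f) ~ f**(-beta),
    simulated in the frequency domain (Timmer & Koenig style)."""
    f = np.fft.rfftfreq(n, d=1.0)
    amp = np.zeros_like(f)
    amp[1:] = f[1:] ** (-beta / 2.0)
    spec = amp * (rng.standard_normal(f.size) + 1j * rng.standard_normal(f.size))
    x = np.fft.irfft(spec, n)
    return (x - x.mean()) / x.std()

def max_abs_ccf(x, y, max_lag=20):
    """Largest absolute cross-correlation over lags -max_lag..max_lag."""
    n = len(x)
    best = 0.0
    for lag in range(-max_lag, max_lag + 1):
        a = x[max(lag, 0):n + min(lag, 0)]
        b = y[max(-lag, 0):n + min(-lag, 0)]
        best = max(best, abs(np.corrcoef(a, b)[0, 1]))
    return best

def spurious_fraction(beta, n_pairs=150, n=256, thresh=0.5, seed=0):
    """Fraction of UNRELATED pairs whose peak |CCF| exceeds thresh."""
    rng = np.random.default_rng(seed)
    hits = sum(max_abs_ccf(rednoise(n, beta, rng), rednoise(n, beta, rng)) > thresh
               for _ in range(n_pairs))
    return hits / n_pairs
```

    For beta around 2 to 2.5 a large fraction of unrelated pairs exceeds a peak |CCF| of 0.5, whereas white noise (beta = 0) essentially never does; hence the need for a Monte Carlo significance estimate.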

  8. Elongation measurement using 1-dimensional image correlation method

    NASA Astrophysics Data System (ADS)

    Phongwisit, Phachara; Kamoldilok, Surachart; Buranasiri, Prathan

    2016-11-01

    The aim of this paper was to design, set up, and calibrate an elongation measurement based on the 1-dimensional image correlation (1-DIC) method. To verify the correctness of our method and setup, we calibrated it against another method. In this paper, we used a small spring as the sample and expressed the result in terms of the spring constant. Following the fundamentals of the image correlation method, images of the undeformed and deformed sample were compared to characterize the deformation process. By comparing the pixel locations of reference points in both images, the spring's elongation was calculated. The results were then compared with the spring constant obtained from Hooke's law, yielding an error of about 5 percent. This DIC method could then be applied to measure the elongation of various kinds of small fiber samples.
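
    The core 1-DIC step, locating the deformed intensity profile relative to the reference by maximizing the normalized cross-correlation, can be sketched at integer-pixel resolution as follows (a hypothetical minimal version; the paper's optics and calibration details are not reproduced). The elongation in physical units is then the pixel shift multiplied by the mm-per-pixel calibration factor.

```python
import numpy as np

def shift_1d(ref, deformed, search=20):
    """Integer-pixel displacement s maximizing the normalized correlation,
    with the convention deformed[i] ~ ref[i - s] (positive s: the pattern
    moved toward higher pixel indices)."""
    n = len(ref)
    best_s, best_c = 0, -np.inf
    for s in range(-search, search + 1):
        if s >= 0:
            a, b = deformed[s:], ref[:n - s]
        else:
            a, b = deformed[:n + s], ref[-s:]
        c = np.corrcoef(a, b)[0, 1]
        if c > best_c:
            best_s, best_c = s, c
    return best_s

# synthetic speckle-like line profile, shifted by 8 pixels
rng = np.random.default_rng(3)
profile = np.cumsum(rng.standard_normal(600))
ref = profile[100:500]
deformed = profile[92:492]
```

    Subpixel accuracy, as used in practice, would add interpolation around the correlation peak; the integer version above shows only the matching principle.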

  9. Relations between volumetric measures of brain structure and attentional function in spina bifida: utilization of robust statistical approaches.

    PubMed

    Kulesz, Paulina A; Tian, Siva; Juranek, Jenifer; Fletcher, Jack M; Francis, David J

    2015-03-01

    Weak structure-function relations for brain and behavior may stem from problems in estimating these relations in small clinical samples with frequently occurring outliers. In the current project, we focused on the utility of using alternative statistics to estimate these relations. Fifty-four children with spina bifida meningomyelocele performed attention tasks and received MRI of the brain. Using a bootstrap sampling process, the Pearson product-moment correlation was compared with 4 robust correlations: the percentage bend correlation, the Winsorized correlation, the skipped correlation using the Donoho-Gasko median, and the skipped correlation using the minimum volume ellipsoid estimator. All methods yielded similar estimates of the relations between measures of brain volume and attention performance. The similarity of estimates across correlation methods suggested that the weak structure-function relations previously found in many studies are not readily attributable to the presence of outlying observations and other factors that violate the assumptions behind the Pearson correlation. Given the difficulty of assembling large samples for brain-behavior studies, estimating correlations using multiple, robust methods may enhance the statistical conclusion validity of studies yielding small, but often clinically significant, correlations. PsycINFO Database Record (c) 2015 APA, all rights reserved.

  10. Two-dimensional correlation spectroscopy — Biannual survey 2007-2009

    NASA Astrophysics Data System (ADS)

    Noda, Isao

    2010-06-01

    The publication activities in the field of 2D correlation spectroscopy are surveyed with the emphasis on papers published during the last two years. Pertinent review articles and conference proceedings are discussed first, followed by the examination of noteworthy developments in the theory and applications of 2D correlation spectroscopy. Specific topics of interest include Pareto scaling, analysis of randomly sampled spectra, 2D analysis of data obtained under multiple perturbations, evolution of 2D spectra along additional variables, comparison and quantitative analysis of multiple 2D spectra, orthogonal sample design to eliminate interfering cross peaks, quadrature orthogonal signal correction and other data transformation techniques, data pretreatment methods, moving window analysis, extension of kernel and global phase angle analysis, covariance and correlation coefficient mapping, variant forms of sample-sample correlation, and different display methods. Various static and dynamic perturbation methods used in 2D correlation spectroscopy, e.g., temperature, composition, chemical reactions, H/D exchange, physical phenomena like sorption, diffusion and phase transitions, optical and biological processes, are reviewed. Analytical probes used in 2D correlation spectroscopy include IR, Raman, NIR, NMR, X-ray, mass spectrometry, chromatography, and others. Application areas of 2D correlation spectroscopy are diverse, encompassing synthetic and natural polymers, liquid crystals, proteins and peptides, biomaterials, pharmaceuticals, food and agricultural products, solutions, colloids, surfaces, and the like.

  11. Evaluation of home allergen sampling devices.

    PubMed

    Sercombe, J K; Liu-Brennan, D; Garcia, M L; Tovey, E R

    2005-04-01

    Simple, inexpensive methods of sampling from allergen reservoirs are necessary for large-scale studies or low-cost householder-operated allergen measurement. We tested two commercial devices, the Indoor Biotechnologies Mitest Dust Collector and the Drager Bio-Check Allergen Control; two devices of our own design, the Electrostatic Cloth Sampler (ECS) and the Press Tape Sampler (PTS); and a Vacuum Sampler as used in many allergen studies (our Reference Method). The devices were used to collect dust mite allergen samples from 16 domestic carpets, and the results were examined for correlations between the sampling methods. With mite allergen concentration expressed as microg/g, the Mitest, the ECS and the PTS correlated with the Reference Method but not with each other. When mite allergen concentration was expressed as microg/m2, the Mitest and the ECS correlated with the Reference Method but the PTS did not. Under the high-allergen conditions of this study, the Drager Bio-Check did not correlate with any of the other methods. The Mitest Dust Collector, the ECS and the PTS show performance consistent with the Reference Method. Many techniques can be used to collect dust mite allergen samples; more investigation is needed to establish whether any method is superior for estimating allergen exposure.

  12. Estimation of the biserial correlation and its sampling variance for use in meta-analysis.

    PubMed

    Jacobs, Perke; Viechtbauer, Wolfgang

    2017-06-01

    Meta-analyses are often used to synthesize the findings of studies examining the correlational relationship between two continuous variables. When only dichotomous measurements are available for one of the two variables, the biserial correlation coefficient can be used to estimate the product-moment correlation between the two underlying continuous variables. Unlike the point-biserial correlation coefficient, biserial correlation coefficients can therefore be integrated with product-moment correlation coefficients in the same meta-analysis. The present article describes the estimation of the biserial correlation coefficient for meta-analytic purposes and reports simulation results comparing different methods for estimating the coefficient's sampling variance. The findings indicate that commonly employed methods yield inconsistent estimates of the sampling variance across a broad range of research situations. In contrast, consistent estimates can be obtained using two methods that appear to be unknown in the meta-analytic literature. A variance-stabilizing transformation for the biserial correlation coefficient is described that allows for the construction of confidence intervals for individual coefficients with close to nominal coverage probabilities in most of the examined conditions. Copyright © 2016 John Wiley & Sons, Ltd.
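
    The conversion at the heart of the article is the classical psychometric one: divide the point-biserial correlation by the factor that dichotomization at proportion p costs, phi(z_p)/sqrt(p*q). A minimal sketch follows (the paper's sampling-variance estimators and variance-stabilizing transformation are not reproduced):

```python
import math
import random
from statistics import NormalDist

def biserial_from_point_biserial(r_pb, p):
    """Estimate the biserial correlation from the point-biserial r_pb,
    where p is the proportion of cases in the 'high' group and
    h = phi(Phi^-1(p)) is the normal ordinate at the cut point."""
    q = 1.0 - p
    z = NormalDist().inv_cdf(p)
    h = math.exp(-z * z / 2.0) / math.sqrt(2.0 * math.pi)
    return r_pb * math.sqrt(p * q) / h

# check: dichotomize one variable of a bivariate normal sample with
# correlation 0.6 and recover the underlying correlation
random.seed(2)
rho, n = 0.6, 40000
xs, ds = [], []
for _ in range(n):
    x = random.gauss(0.0, 1.0)
    y = rho * x + math.sqrt(1.0 - rho * rho) * random.gauss(0.0, 1.0)
    xs.append(x)
    ds.append(1.0 if y > 0.0 else 0.0)
mx, md = sum(xs) / n, sum(ds) / n
sx = math.sqrt(sum((v - mx) ** 2 for v in xs) / n)
sd = math.sqrt(sum((v - md) ** 2 for v in ds) / n)
r_pb = sum((a - mx) * (b - md) for a, b in zip(xs, ds)) / (n * sx * sd)
r_b = biserial_from_point_biserial(r_pb, p=md)
```

    Here r_pb comes out near 0.48 while r_b recovers roughly 0.6, which is why biserial coefficients can be pooled with product-moment coefficients in the same meta-analysis.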

  13. A Rapid Identification Method for Calamine Using Near-Infrared Spectroscopy Based on Multi-Reference Correlation Coefficient Method and Back Propagation Artificial Neural Network.

    PubMed

    Sun, Yangbo; Chen, Long; Huang, Bisheng; Chen, Keli

    2017-07-01

    As a mineral, the traditional Chinese medicine calamine is similar in appearance to many other minerals, and investigations of commercially available calamine samples have shown that many fake and inferior calamine goods are sold on the market. The conventional identification method for calamine is complicated, so given the large number of calamine samples, a rapid identification method is needed. To establish a qualitative model using near-infrared (NIR) spectroscopy for rapid identification of various calamine samples, large quantities of calamine samples, including crude products, counterfeits and processed products, were collected and correctly identified using physicochemical and powder X-ray diffraction methods. The NIR spectra of these samples were analyzed by combining the multi-reference correlation coefficient (MRCC) method with the error back-propagation artificial neural network (BP-ANN) algorithm, so as to achieve qualitative identification of calamine samples. The accuracy rate of the model based on the NIR and MRCC methods was 85%; moreover, the model, which takes multiple factors into consideration, can be used to identify crude calamine products, counterfeits and processed products. Furthermore, by inputting the correlation coefficients against multiple references as the spectral feature data of the samples into the BP-ANN, a qualitative identification model was established whose accuracy rate increased to 95%. The MRCC method can thus serve as a NIR-based feature-extraction step in BP-ANN modeling.

  14. Time-dependent importance sampling in semiclassical initial value representation calculations for time correlation functions. II. A simplified implementation.

    PubMed

    Tao, Guohua; Miller, William H

    2012-09-28

    An efficient time-dependent (TD) Monte Carlo (MC) importance sampling method has recently been developed [G. Tao and W. H. Miller, J. Chem. Phys. 135, 024104 (2011)] for the evaluation of time correlation functions using the semiclassical (SC) initial value representation (IVR) methodology. In this TD-SC-IVR method, the MC sampling uses information from both time-evolved phase points as well as their initial values, and only the "important" trajectories are sampled frequently. Even though the TD-SC-IVR was shown in some benchmark examples to be much more efficient than the traditional time-independent sampling method (which uses only initial conditions), the calculation of the SC prefactor-which is computationally expensive, especially for large systems-is still required for accepted trajectories. In the present work, we present an approximate implementation of the TD-SC-IVR method that is completely prefactor-free; it gives the time correlation function as a classical-like magnitude function multiplied by a phase function. Application of this approach to flux-flux correlation functions (which yield reaction rate constants) for the benchmark H + H(2) system shows very good agreement with exact quantum results. Limitations of the approximate approach are also discussed.

  15. The study of combining Latin Hypercube Sampling method and LU decomposition method (LULHS method) for constructing spatial random field

    NASA Astrophysics Data System (ADS)

    WANG, P. T.

    2015-12-01

    Groundwater modeling requires assigning hydrogeological properties to every numerical grid cell. Owing to the lack of detailed information and the inherent spatial heterogeneity, geological properties can be treated as random variables. A hydrogeological property is assumed to follow a multivariate distribution with spatial correlations. By sampling random numbers from a given statistical distribution and assigning a value to each grid cell, a random field for modeling can be constructed. Statistical sampling therefore plays an important role in the efficiency of the modeling procedure. Latin Hypercube Sampling (LHS) is a stratified random sampling procedure that provides an efficient way to sample variables from their multivariate distributions. This study combines the stratified random procedure of LHS with simulation by LU decomposition to form LULHS. Both conditional and unconditional LULHS simulations were developed. The simulation efficiency and spatial correlation of LULHS are compared with three other simulation methods. The results show that, for both conditional and unconditional simulation, the LULHS method is more efficient in terms of computational effort: fewer realizations are required to achieve the required statistical accuracy and spatial correlation.
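
    A minimal unconditional sketch of the LULHS idea (illustrative only, assuming a 1-D grid and an exponential covariance model; the conditional variant is not reproduced): Latin Hypercube Sampling supplies stratified standard-normal marginals, and a triangular factor of the covariance (here Cholesky, a symmetric special case of LU) imposes the spatial correlation.

```python
import numpy as np
from statistics import NormalDist

def lulhs_field(coords, corr_len, n_real, seed=0):
    """Generate n_real realizations of a correlated Gaussian random field
    on a 1-D grid via LHS marginals + LU/Cholesky correlation."""
    rng = np.random.default_rng(seed)
    k = len(coords)
    d = np.abs(coords[:, None] - coords[None, :])
    # exponential covariance model, with a tiny jitter for stability
    L = np.linalg.cholesky(np.exp(-d / corr_len) + 1e-10 * np.eye(k))
    # LHS: per dimension, one uniform draw from each of n_real strata
    strata = rng.permuted(np.tile(np.arange(n_real), (k, 1)), axis=1).T
    u = (strata + rng.random((n_real, k))) / n_real
    u = np.clip(u, 1e-12, 1.0 - 1e-12)
    z = np.vectorize(NormalDist().inv_cdf)(u)   # stratified N(0,1) marginals
    return z @ L.T                              # rows are field realizations
```

    The empirical correlation between two grid nodes then reproduces the target exp(-distance/corr_len), while the stratified marginals reduce the number of realizations needed for stable statistics.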

  16. Asymptotic confidence intervals for the Pearson correlation via skewness and kurtosis.

    PubMed

    Bishara, Anthony J; Li, Jiexiang; Nash, Thomas

    2018-02-01

    When bivariate normality is violated, the default confidence interval of the Pearson correlation can be inaccurate. Two new methods were developed based on the asymptotic sampling distribution of Fisher's z' under the general case where bivariate normality need not be assumed. In Monte Carlo simulations, the most successful of these methods relied on the Vale and Maurelli (1983, Psychometrika, 48, 465) family to approximate a distribution via the marginal skewness and kurtosis of the sample data. In Simulation 1, this method provided more accurate confidence intervals of the correlation in non-normal data, at least as compared to no adjustment of the Fisher z' interval, or to adjustment via the sample joint moments. In Simulation 2, this approximate distribution method performed favourably relative to common non-parametric bootstrap methods, but its performance was mixed relative to an observed imposed bootstrap and two other robust methods (PM1 and HC4). No method was completely satisfactory. An advantage of the approximate distribution method, though, is that it can be implemented even without access to raw data if sample skewness and kurtosis are reported, making the method particularly useful for meta-analysis. Supporting information includes R code. © 2017 The British Psychological Society.
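
    For reference, the default interval that the adjusted methods modify is the textbook Fisher z' interval, which assumes bivariate normality (a sketch of the baseline only; the skewness- and kurtosis-adjusted versions from the paper are not reproduced here):

```python
import math
from statistics import NormalDist

def fisher_z_ci(r, n, conf=0.95):
    """Default Fisher z' confidence interval for a Pearson correlation r
    from a sample of size n, assuming bivariate normality."""
    z = math.atanh(r)                     # Fisher transform
    se = 1.0 / math.sqrt(n - 3)           # large-sample standard error
    q = NormalDist().inv_cdf(0.5 + conf / 2.0)
    return math.tanh(z - q * se), math.tanh(z + q * se)
```

    For r = 0.5 and n = 50 this gives roughly (0.26, 0.68); the paper's point is that under non-normality the coverage of this interval can drift far from the nominal level.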

  17. Correlative multiple porosimetries for reservoir sandstones with adoption of a new reference-sample-guided computed-tomographic method.

    PubMed

    Jin, Jae Hwa; Kim, Junho; Lee, Jeong-Yil; Oh, Young Min

    2016-07-22

    One of the main interests in petroleum geology and reservoir engineering is to quantify the porosity of reservoir beds as accurately as possible. A variety of direct measurements, including methods of mercury intrusion, helium injection and petrographic image analysis, have been developed; however, their application frequently yields equivocal results because these methods differ in theoretical basis, means of measurement, and causes of measurement error. Here, we present a set of porosities measured in Berea Sandstone samples by the multiple methods, in particular with adoption of a new method using computed tomography and reference samples. The multiple porosimetric data show a marked correlativeness among different methods, suggesting that these methods are compatible with each other. The new method of reference-sample-guided computed tomography is more effective than the previous methods when accompanying merits such as experimental convenience are taken into account.

  18. Correlative multiple porosimetries for reservoir sandstones with adoption of a new reference-sample-guided computed-tomographic method

    PubMed Central

    Jin, Jae Hwa; Kim, Junho; Lee, Jeong-Yil; Oh, Young Min

    2016-01-01

    One of the main interests in petroleum geology and reservoir engineering is to quantify the porosity of reservoir beds as accurately as possible. A variety of direct measurements, including methods of mercury intrusion, helium injection and petrographic image analysis, have been developed; however, their application frequently yields equivocal results because these methods differ in theoretical basis, means of measurement, and causes of measurement error. Here, we present a set of porosities measured in Berea Sandstone samples by the multiple methods, in particular with adoption of a new method using computed tomography and reference samples. The multiple porosimetric data show a marked correlativeness among different methods, suggesting that these methods are compatible with each other. The new method of reference-sample-guided computed tomography is more effective than the previous methods when accompanying merits such as experimental convenience are taken into account. PMID:27445105

  19. Equilibrium Molecular Thermodynamics from Kirkwood Sampling

    PubMed Central

    2015-01-01

    We present two methods for barrierless equilibrium sampling of molecular systems based on the recently proposed Kirkwood method (J. Chem. Phys. 2009, 130, 134102). Kirkwood sampling employs low-order correlations among internal coordinates of a molecule for random (or non-Markovian) sampling of the high-dimensional conformational space. This is a geometrical sampling method independent of the potential energy surface. The first method is a variant of biased Monte Carlo, where Kirkwood sampling is used for generating trial Monte Carlo moves. Using this method, equilibrium distributions corresponding to different temperatures and potential energy functions can be generated from a given set of low-order correlations. Since Kirkwood samples are generated independently, this method is ideally suited for massively parallel distributed computing. The second approach is a variant of reservoir replica exchange, where Kirkwood sampling is used to construct a reservoir of conformations, which exchanges conformations with the replicas performing equilibrium sampling corresponding to different thermodynamic states. Coupling with the Kirkwood reservoir enhances sampling by facilitating global jumps in the conformational space. The efficiency of both methods depends on the overlap of the Kirkwood distribution with the target equilibrium distribution. We present proof-of-concept results for a model nine-atom linear molecule and alanine dipeptide. PMID:25915525

  20. Effect of Malmquist bias on correlation studies with IRAS data base

    NASA Technical Reports Server (NTRS)

    Verter, Frances

    1993-01-01

    The relationships between galaxy properties in the sample of Trinchieri et al. (1989) are reexamined with corrections for Malmquist bias. The linear correlations are tested and linear regressions are fit for log-log plots of L(FIR), L(H-alpha), and L(B), as well as ratios of these quantities. The linear correlations are corrected for Malmquist bias using the method of Verter (1988), in which each galaxy observation is weighted by the inverse of its sampling volume. The linear regressions are corrected for Malmquist bias by a new method introduced here, in which each galaxy observation is weighted by its sampling volume. The results of correlations and regressions among the sample change significantly in the anticipated sense: the corrected correlation confidences are lower, and the corrected slopes of the linear regressions are lower. The elimination of Malmquist bias removes the nonlinear rise in luminosity that has led some authors to hypothesize additional components of FIR emission.
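
    The Verter (1988)-style correction amounts to a weighted Pearson correlation, with each object's weight tied to its sampling volume. A generic weighted-correlation sketch (the survey-specific 1/V_max weights themselves are not reproduced):

```python
import numpy as np

def weighted_corr(x, y, w):
    """Pearson correlation with nonnegative per-object weights w
    (e.g. w_i proportional to 1/V_max,i for a flux-limited sample)."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    w = np.asarray(w, dtype=float)
    w = w / w.sum()
    mx, my = w @ x, w @ y
    cov = w @ ((x - mx) * (y - my))
    return cov / np.sqrt((w @ (x - mx) ** 2) * (w @ (y - my) ** 2))
```

    With equal weights this reduces exactly to the ordinary Pearson correlation, and a zero weight removes an object entirely, so the usual unweighted statistics are recovered as a special case.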

  1. Method for measuring recovery of catalytic elements from fuel cells

    DOEpatents

    Shore, Lawrence [Edison, NJ]; Matlin, Ramail [Berkeley, NJ]

    2011-03-08

    A method is provided for measuring the concentration of a catalytic element in a fuel cell powder. The method includes depositing on a porous substrate at least one layer of a powder mixture comprising the fuel cell powder and an internal standard material, ablating a sample of the powder mixture using a laser, and vaporizing the sample using an inductively coupled plasma. A normalized concentration of catalytic element in the sample is determined by quantifying the intensity of a first signal correlated to the amount of catalytic element in the sample, quantifying the intensity of a second signal correlated to the amount of internal standard material in the sample, and using the ratio of the first signal intensity to the second signal intensity to cancel out the effects of sample size.

  2. Advancing Research on Racial–Ethnic Health Disparities: Improving Measurement Equivalence in Studies with Diverse Samples

    PubMed Central

    Landrine, Hope; Corral, Irma

    2014-01-01

    To conduct meaningful, epidemiologic research on racial–ethnic health disparities, racial–ethnic samples must be rendered equivalent on other social status and contextual variables via statistical controls of those extraneous factors. The racial–ethnic groups must also be equally familiar with and have similar responses to the methods and measures used to collect health data, must have equal opportunity to participate in the research, and must be equally representative of their respective populations. In the absence of such measurement equivalence, studies of racial–ethnic health disparities are confounded by a plethora of unmeasured, uncontrolled correlates of race–ethnicity. Those correlates render the samples, methods, and measures incomparable across racial–ethnic groups, and diminish the ability to attribute health differences discovered to race–ethnicity vs. to its correlates. This paper reviews the non-equivalent yet normative samples, methodologies and measures used in epidemiologic studies of racial–ethnic health disparities, and provides concrete suggestions for improving sample, method, and scalar measurement equivalence. PMID:25566524

  3. Treating Sample Covariances for Use in Strongly Coupled Atmosphere-Ocean Data Assimilation

    NASA Astrophysics Data System (ADS)

    Smith, Polly J.; Lawless, Amos S.; Nichols, Nancy K.

    2018-01-01

    Strongly coupled data assimilation requires cross-domain forecast error covariances; information from ensembles can be used, but limited sampling means that ensemble-derived error covariances are routinely rank deficient and/or ill-conditioned and marred by noise. Thus, they require modification before they can be incorporated into a standard assimilation framework. Here we compare methods for improving the rank and conditioning of multivariate sample error covariance matrices for coupled atmosphere-ocean data assimilation. The first method, reconditioning, alters the matrix eigenvalues directly; this preserves the correlation structures but does not remove sampling noise. We show that it is better to recondition the correlation matrix rather than the covariance matrix, as this prevents small but dynamically important modes from being lost. The second method, model state-space localization via the Schur product, effectively removes sample noise but can dampen small cross-correlation signals. A combination that exploits the merits of each is found to offer an effective alternative.
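
    A minimal sketch of reconditioning applied to the correlation matrix (an eigenvalue-flooring, minimum-eigenvalue style variant; the authors' exact recipe may differ): floor the spectrum so the condition number is capped, then rescale the diagonal back to 1 so the result remains a correlation matrix.

```python
import numpy as np

def recondition_corr(R, kappa_max=100.0):
    """Recondition a sample correlation matrix by flooring its eigenvalues
    so the condition number does not exceed kappa_max, then restoring the
    unit diagonal."""
    vals, vecs = np.linalg.eigh(R)
    floor = vals.max() / kappa_max
    vals = np.maximum(vals, floor)       # cap the condition number
    R2 = (vecs * vals) @ vecs.T
    d = np.sqrt(np.diag(R2))
    return R2 / np.outer(d, d)           # rescale diagonal back to 1
```

    Applied to a rank-deficient sample correlation matrix (more variables than ensemble members), this turns an effectively singular matrix into a well-conditioned, positive-definite one while preserving the leading correlation structure.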

  4. Empirical Bayes method for reducing false discovery rates of correlation matrices with block diagonal structure.

    PubMed

    Pacini, Clare; Ajioka, James W; Micklem, Gos

    2017-04-12

    Correlation matrices are important in inferring relationships and networks between regulatory or signalling elements in biological systems. With currently available technology, sample sizes for experiments are typically small, meaning that these correlations can be difficult to estimate. At a genome-wide scale, estimation of correlation matrices can also be computationally demanding. We develop an empirical Bayes approach to improve covariance estimates for gene expression, where we assume the covariance matrix takes a block-diagonal form. Our method shows lower false discovery rates than existing methods on simulated data. Applied to a real data set from Bacillus subtilis, we demonstrate its ability to detect known regulatory units and interactions between them. We demonstrate that, compared to existing methods, our method is able to find significant covariances and also to control false discovery rates, even when the sample size is small (n=10). The method can be used to find potential regulatory networks, and it may also be used as a pre-processing step for methods that calculate, for example, partial correlations, so enabling the inference of the causal and hierarchical structure of the networks.

  5. A Multilevel, Hierarchical Sampling Technique for Spatially Correlated Random Fields

    DOE PAGES

    Osborn, Sarah; Vassilevski, Panayot S.; Villa, Umberto

    2017-10-26

    In this paper, we propose an alternative method to generate samples of a spatially correlated random field with applications to large-scale problems for forward propagation of uncertainty. A classical approach for generating these samples is the Karhunen--Loève (KL) decomposition. However, the KL expansion requires solving a dense eigenvalue problem and is therefore computationally infeasible for large-scale problems. Sampling methods based on stochastic partial differential equations provide a highly scalable way to sample Gaussian fields, but the resulting parametrization is mesh dependent. We propose a multilevel decomposition of the stochastic field to allow for scalable, hierarchical sampling based on solving a mixed finite element formulation of a stochastic reaction-diffusion equation with a random, white noise source function. Lastly, numerical experiments are presented to demonstrate the scalability of the sampling method as well as numerical results of multilevel Monte Carlo simulations for a subsurface porous media flow application using the proposed sampling method.
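
    The KL decomposition mentioned as the classical baseline can be sketched in a few lines; the dense eigendecomposition below is exactly the step that becomes infeasible at large scale, which motivates the SPDE-based samplers (the exponential covariance and correlation length are assumptions for illustration):

```python
import numpy as np

# Karhunen-Loeve sampling of a Gaussian field with exponential covariance
# on a 1-D grid of n points.
n = 200
x = np.linspace(0.0, 1.0, n)
C = np.exp(-np.abs(np.subtract.outer(x, x)) / 0.2)  # assumed corr. length 0.2

w, V = np.linalg.eigh(C)                 # dense eigenproblem: O(n^3) work
w = np.clip(w, 0.0, None)                # guard against tiny negative eigs

rng = np.random.default_rng(2)
xi = rng.standard_normal(n)              # independent standard normal modes
sample = V @ (np.sqrt(w) * xi)           # one realization of the field
```

    In practice the expansion is truncated to the largest eigenvalues, but even forming and factorizing the dense n-by-n covariance is the bottleneck the paper's hierarchical sampler avoids.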

  6. A Multilevel, Hierarchical Sampling Technique for Spatially Correlated Random Fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Osborn, Sarah; Vassilevski, Panayot S.; Villa, Umberto

    In this paper, we propose an alternative method to generate samples of a spatially correlated random field with applications to large-scale problems for forward propagation of uncertainty. A classical approach for generating these samples is the Karhunen--Loève (KL) decomposition. However, the KL expansion requires solving a dense eigenvalue problem and is therefore computationally infeasible for large-scale problems. Sampling methods based on stochastic partial differential equations provide a highly scalable way to sample Gaussian fields, but the resulting parametrization is mesh dependent. We propose a multilevel decomposition of the stochastic field to allow for scalable, hierarchical sampling based on solving a mixed finite element formulation of a stochastic reaction-diffusion equation with a random, white noise source function. Lastly, numerical experiments are presented to demonstrate the scalability of the sampling method as well as numerical results of multilevel Monte Carlo simulations for a subsurface porous media flow application using the proposed sampling method.

  7. Analyzing Baryon Acoustic Oscillations in Sparse Spectroscopic Samples via Cross-Correlation with Dense Photometry

    NASA Astrophysics Data System (ADS)

    Patej, Anna; Eisenstein, Daniel J.

    2018-04-01

    We develop a formalism for measuring the cosmological distance scale from baryon acoustic oscillations (BAO) using the cross-correlation of a sparse redshift survey with a denser photometric sample. This reduces the shot noise that would otherwise affect the auto-correlation of the sparse spectroscopic map. As a proof of principle, we make the first on-sky application of this method to a sparse sample defined as the z > 0.6 tail of the Sloan Digital Sky Survey's (SDSS) BOSS/CMASS sample of galaxies and a dense photometric sample from SDSS DR9. We find a 2.8σ preference for the BAO peak in the cross-correlation at an effective z = 0.64, from which we measure the angular diameter distance D_M(z = 0.64) = (2418 ± 73 Mpc)(r_s/r_s,fid). Accordingly, we expect that using this method to combine sparse spectroscopy with the deep, high-quality imaging that is just now becoming available will enable higher precision BAO measurements than possible with the spectroscopy alone.

  8. Comparison of indoor air sampling and dust collection methods for fungal exposure assessment using quantitative PCR.

    PubMed

    Cox, Jennie; Indugula, Reshmi; Vesper, Stephen; Zhu, Zheng; Jandarov, Roman; Reponen, Tiina

    2017-10-18

    Evaluating fungal contamination indoors is complicated because of the many different sampling methods utilized. In this study, fungal contamination was evaluated using five sampling methods and four matrices for results. The five sampling methods were a 48-hour indoor air sample collected with a Button™ inhalable aerosol sampler and four types of dust samples: a vacuumed floor dust sample, newly settled dust collected for four weeks onto two types of electrostatic dust cloths (EDCs) in trays, and a wipe sample of dust from above-floor surfaces. The samples were obtained in the bedrooms of asthmatic children (n = 14). Quantitative polymerase chain reaction (qPCR) was used to analyze the dust and air samples for the 36 fungal species that make up the Environmental Relative Moldiness Index (ERMI). The results from the samples were compared by four matrices: total concentration of fungal cells, concentration of fungal species associated with indoor environments, concentration of fungal species associated with outdoor environments, and ERMI values (or ERMI-like values for air samples). The ERMI values for the dust samples and the ERMI-like values for the 48-hour air samples were not significantly different. The total cell concentrations of the 36 species obtained with the four dust collection methods correlated significantly (r = 0.64-0.79, p < 0.05), with the exception of the vacuumed floor dust and newly settled dust. In addition, fungal cell concentrations of indoor associated species correlated well between all four dust sampling methods (r = 0.68-0.86, p < 0.01). No correlation was found between the fungal concentrations in the air and dust samples, primarily because of differences in concentrations of Cladosporium cladosporioides Type 1 and Epicoccum nigrum. A representative type of dust sample and a 48-hour air sample might both provide useful information about fungal exposures.

  9. Correlative Stochastic Optical Reconstruction Microscopy and Electron Microscopy

    PubMed Central

    Kim, Doory; Deerinck, Thomas J.; Sigal, Yaron M.; Babcock, Hazen P.; Ellisman, Mark H.; Zhuang, Xiaowei

    2015-01-01

    Correlative fluorescence light microscopy and electron microscopy allows the imaging of spatial distributions of specific biomolecules in the context of cellular ultrastructure. Recent development of super-resolution fluorescence microscopy allows the location of molecules to be determined with nanometer-scale spatial resolution. However, correlative super-resolution fluorescence microscopy and electron microscopy (EM) still remains challenging because the optimal specimen preparation and imaging conditions for super-resolution fluorescence microscopy and EM are often not compatible. Here, we have developed several experimental protocols for correlative stochastic optical reconstruction microscopy (STORM) and EM methods, both for un-embedded samples by applying EM-specific sample preparations after STORM imaging and for embedded and sectioned samples by optimizing the fluorescence under EM fixation, staining and embedding conditions. We demonstrated these methods using a variety of cellular targets. PMID:25874453

  10. Learning Bayesian Networks from Correlated Data

    NASA Astrophysics Data System (ADS)

    Bae, Harold; Monti, Stefano; Montano, Monty; Steinberg, Martin H.; Perls, Thomas T.; Sebastiani, Paola

    2016-05-01

    Bayesian networks are probabilistic models that represent complex distributions in a modular way and have become very popular in many fields. There are many methods to build Bayesian networks from a random sample of independent and identically distributed observations. However, many observational studies are designed using some form of clustered sampling that introduces correlations between observations within the same cluster and ignoring this correlation typically inflates the rate of false positive associations. We describe a novel parameterization of Bayesian networks that uses random effects to model the correlation within sample units and can be used for structure and parameter learning from correlated data without inflating the Type I error rate. We compare different learning metrics using simulations and illustrate the method in two real examples: an analysis of genetic and non-genetic factors associated with human longevity from a family-based study, and an example of risk factors for complications of sickle cell anemia from a longitudinal study with repeated measures.

  11. Kolmogorov-Smirnov test for spatially correlated data

    USGS Publications Warehouse

    Olea, R.A.; Pawlowsky-Glahn, V.

    2009-01-01

    The Kolmogorov-Smirnov test is a convenient method for investigating whether two underlying univariate probability distributions can be regarded as indistinguishable from each other or whether an underlying probability distribution differs from a hypothesized distribution. Application of the test requires that the sample be unbiased and the outcomes be independent and identically distributed, conditions that are violated in several degrees by spatially continuous attributes, such as topographical elevation. A generalized form of the bootstrap method is used here for the purpose of modeling the distribution of the statistic D of the Kolmogorov-Smirnov test. The innovation is in the resampling, which in the traditional formulation of bootstrap is done by drawing from the empirical sample with replacement presuming independence. The generalization consists of preparing resamplings with the same spatial correlation as the empirical sample. This is accomplished by reading the value of unconditional stochastic realizations at the sampling locations, realizations that are generated by simulated annealing. The new approach was tested by two empirical samples taken from an exhaustive sample closely following a lognormal distribution. One sample was a regular, unbiased sample while the other one was a clustered, preferential sample that had to be preprocessed. Our results show that the p-value for the spatially correlated case is always larger than the p-value of the statistic in the absence of spatial correlation, which is in agreement with the fact that the information content of an uncorrelated sample is larger than that of a spatially correlated sample of the same size. © Springer-Verlag 2008.
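
    A simplified version of the correlated-bootstrap idea can be sketched as follows; for brevity, fresh Cholesky-based Gaussian realizations stand in for the paper's simulated-annealing realizations, and the AR(1) spatial covariance and hypothesized normal distribution are assumptions:

```python
import numpy as np
from math import erf, sqrt

def std_normal_cdf(z):
    """CDF of the standard normal distribution, elementwise."""
    return np.array([0.5 * (1.0 + erf(v / sqrt(2.0))) for v in z])

def ks_statistic(sample, cdf):
    """One-sample Kolmogorov-Smirnov statistic D against a hypothesized CDF."""
    s = np.sort(sample)
    n = len(s)
    F = cdf(s)
    i = np.arange(1, n + 1)
    return max(np.max(i / n - F), np.max(F - (i - 1) / n))

# A spatially correlated Gaussian sample along a transect (AR(1) covariance).
rng = np.random.default_rng(3)
n, rho = 100, 0.8
cov = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
L = np.linalg.cholesky(cov)
data = L @ rng.standard_normal(n)
D_obs = ks_statistic(data, std_normal_cdf)

# Reference distribution of D built from resamples that SHARE the spatial
# correlation of the data, rather than iid draws from the empirical sample.
D_ref = np.array([ks_statistic(L @ rng.standard_normal(n), std_normal_cdf)
                  for _ in range(500)])
p_value = np.mean(D_ref >= D_obs)
```

    Because the correlated resamples produce larger typical D values than iid resamples would, the resulting p-value is larger, which is the paper's central observation.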

  12. MRI-determined liver proton density fat fraction, with MRS validation: Comparison of regions of interest sampling methods in patients with type 2 diabetes.

    PubMed

    Vu, Kim-Nhien; Gilbert, Guillaume; Chalut, Marianne; Chagnon, Miguel; Chartrand, Gabriel; Tang, An

    2016-05-01

    To assess the agreement between published magnetic resonance imaging (MRI)-based regions of interest (ROI) sampling methods using liver mean proton density fat fraction (PDFF) as the reference standard. This retrospective, internal review board-approved study was conducted in 35 patients with type 2 diabetes. Liver PDFF was measured by magnetic resonance spectroscopy (MRS) using a stimulated-echo acquisition mode sequence and MRI using a multiecho spoiled gradient-recalled echo sequence at 3.0T. ROI sampling methods reported in the literature were reproduced and liver mean PDFF obtained by whole-liver segmentation was used as the reference standard. Intraclass correlation coefficients (ICCs), Bland-Altman analysis, repeated-measures analysis of variance (ANOVA), and paired t-tests were performed. ICC between MRS and MRI-PDFF was 0.916. Bland-Altman analysis showed excellent intermethod agreement with a bias of -1.5 ± 2.8%. The repeated-measures ANOVA found no systematic variation of PDFF among the nine liver segments. The correlation between liver mean PDFF and ROI sampling methods was very good to excellent (0.873 to 0.975). Paired t-tests revealed significant differences (P < 0.05) with ROI sampling methods that exclusively or predominantly sampled the right lobe. Significant correlations with mean PDFF were found with sampling methods that included a higher number of segments, a total area equal to or larger than 5 cm², or sampled both lobes (P = 0.001, 0.023, and 0.002, respectively). MRI-PDFF quantification methods should sample each liver segment in both lobes and include a total surface area equal to or larger than 5 cm² to provide a close estimate of the liver mean PDFF. © 2015 Wiley Periodicals, Inc.
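
    The Bland-Altman analysis used above to compare measurement methods reduces to a short computation. The sketch below uses synthetic PDFF readings; the bias and noise levels are assumptions, not the study's data:

```python
import numpy as np

def bland_altman(a, b):
    """Bias (mean difference) and 95% limits of agreement between two
    measurement methods applied to the same subjects."""
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Synthetic PDFF readings (%) from two hypothetical ROI strategies in
# 35 patients, matching the study's sample size.
rng = np.random.default_rng(4)
truth = rng.uniform(2.0, 30.0, 35)
roi_a = truth + rng.normal(0.0, 1.0, 35)
roi_b = truth + rng.normal(-1.5, 1.0, 35)   # strategy B reads ~1.5% lower

bias, (lo, hi) = bland_altman(roi_a, roi_b)
```

    A nonzero bias with narrow limits of agreement indicates a systematic offset between ROI strategies, which is what the paired t-tests in the abstract detect for right-lobe-only sampling.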

  13. Preliminary study of radioactive limonite localities in Colorado, Utah, and Wyoming

    USGS Publications Warehouse

    Lovering, T.G.; Beroni, E.P.

    1956-01-01

    Nine radioactive limonite localities of different types were sampled during the spring and fall of 1953 in an effort to establish criteria for differentiating limonite outcrops associated with uranium or thorium deposits from limonite outcrops not associated with such deposits. The samples were analyzed for uranium and thorium by standard chemical methods, for equivalent uranium by the radiometric method, and for a number of common metals by semiquantitative geochemical methods. Correlation coefficients were then calculated for each of the metals with respect to equivalent uranium, and to uranium where present, for all of the samples from each locality. The correlation coefficients may indicate a significant association between uranium or thorium and certain metals. Occurrences of specific associations that are interpreted as significant vary considerably for different uranium localities but are more consistent for the thorium localities. Samples taken from radioactive outcrops in the vicinity of uranium or thorium deposits can be quickly analyzed by geochemical methods for various elements. Correlation coefficients can then be determined for the various elements with respect to uranium or thorium; if any significant correlations are obtained, the elements showing such correlation may be indicators of uranium or thorium. Soil samples of covered areas in the vicinity of the radioactive outcrop may then be analyzed for the indicator elements and any resulting anomalies used as a guide for prospecting where the depth of overburden is too great to allow the use of radiation-detecting instruments. Correlation coefficients of the associated indicator elements, used in conjunction with petrographic evidence, may also be useful in interpreting the origin and paragenesis of radioactive deposits. Changes in color of limonite stains on the outcrop may also be a useful guide to ore in some areas.

  14. Improving regression-model-based streamwater constituent load estimates derived from serially correlated data

    USGS Publications Warehouse

    Aulenbach, Brent T.

    2013-01-01

    A regression-model-based approach is a commonly used, efficient method for estimating streamwater constituent load when there is a relationship between streamwater constituent concentration and continuous variables such as streamwater discharge, season, and time. A subsetting experiment using a 30-year dataset of daily suspended sediment observations from the Mississippi River at Thebes, Illinois, was performed to determine optimal sampling frequency, model calibration period length, and regression model methodology, as well as to determine the effect of serial correlation of model residuals on load estimate precision. Two regression-based methods were used to estimate streamwater loads, the Adjusted Maximum Likelihood Estimator (AMLE), and the composite method, a hybrid load estimation approach. While both methods accurately and precisely estimated loads at the model’s calibration period time scale, precisions were progressively worse at shorter reporting periods, from annually to monthly. Serial correlation in model residuals caused observed AMLE precision to be significantly worse than the model-calculated standard errors of prediction. The composite method effectively improved upon AMLE loads for shorter reporting periods, but required a sampling interval of 15 days or shorter when the serial correlations in the observed load residuals were greater than 0.15. AMLE precision was better at shorter sampling intervals and when using the shortest model calibration periods, such that the regression models better fit the temporal changes in the concentration–discharge relationship. The models with the largest errors typically had poor high-flow sampling coverage, resulting in unrepresentative models. Increasing sampling frequency and/or targeted high-flow sampling are more efficient approaches to ensure sufficient sampling and to avoid poorly performing models than increasing calibration period length.

  15. Estimating statistical uncertainty of Monte Carlo efficiency-gain in the context of a correlated sampling Monte Carlo code for brachytherapy treatment planning with non-normal dose distribution.

    PubMed

    Mukhopadhyay, Nitai D; Sampson, Andrew J; Deniz, Daniel; Alm Carlsson, Gudrun; Williamson, Jeffrey; Malusek, Alexandr

    2012-01-01

    Correlated sampling Monte Carlo methods can shorten computing times in brachytherapy treatment planning. Monte Carlo efficiency is typically estimated via efficiency gain, defined as the reduction in computing time by correlated sampling relative to conventional Monte Carlo methods when equal statistical uncertainties have been achieved. The determination of the efficiency gain uncertainty arising from random effects, however, is not a straightforward task, especially when the error distribution is non-normal. The purpose of this study is to evaluate the applicability of the F distribution and standardized uncertainty propagation methods (widely used in metrology to estimate uncertainty of physical measurements) for predicting confidence intervals about efficiency gain estimates derived from single Monte Carlo runs using fixed-collision correlated sampling in a simplified brachytherapy geometry. A bootstrap-based algorithm was used to simulate the probability distribution of the efficiency gain estimates and the shortest 95% confidence interval was estimated from this distribution. It was found that the corresponding relative uncertainty was as large as 37% for this particular problem. The uncertainty propagation framework predicted confidence intervals reasonably well; however its main disadvantage was that uncertainties of input quantities had to be calculated in a separate run via a Monte Carlo method. The F distribution noticeably underestimated the confidence interval. These discrepancies were influenced by several photons with large statistical weights which made extremely large contributions to the scored absorbed dose difference. The mechanism of acquiring high statistical weights in the fixed-collision correlated sampling method was explained and a mitigation strategy was proposed. Copyright © 2011 Elsevier Ltd. All rights reserved.
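
    The bootstrap approach to a confidence interval for the efficiency gain can be sketched generically from per-history scores. This is a hedged illustration with synthetic scores and a plain percentile interval, not the paper's shortest-95%-interval algorithm:

```python
import numpy as np

def efficiency_gain(conv, corr, t_conv=1.0, t_corr=1.0):
    """Gain = (variance x time) of conventional MC over correlated sampling;
    equal per-history times are assumed by default."""
    return (np.var(conv, ddof=1) * t_conv) / (np.var(corr, ddof=1) * t_corr)

rng = np.random.default_rng(5)
conv = rng.normal(1.0, 1.0, 2000)          # per-history scores, conventional MC
corr = rng.normal(1.0, 0.2, 2000)          # correlated sampling: smaller spread

gain = efficiency_gain(conv, corr)

# Percentile bootstrap of the gain: resample both score sets with
# replacement and recompute the variance ratio.
boot = np.array([
    efficiency_gain(rng.choice(conv, conv.size), rng.choice(corr, corr.size))
    for _ in range(1000)
])
ci = np.percentile(boot, [2.5, 97.5])
```

    With heavy-tailed scores, such as the high-weight photons described in the abstract, the bootstrap distribution becomes skewed and the interval widens, which normal-theory (F distribution) intervals fail to capture.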

  16. Environmental DNA sampling is more sensitive than a traditional survey technique for detecting an aquatic invader.

    PubMed

    Smart, Adam S; Tingley, Reid; Weeks, Andrew R; van Rooyen, Anthony R; McCarthy, Michael A

    2015-10-01

    Effective management of alien species requires detecting populations in the early stages of invasion. Environmental DNA (eDNA) sampling can detect aquatic species at relatively low densities, but few studies have directly compared detection probabilities of eDNA sampling with those of traditional sampling methods. We compare the ability of a traditional sampling technique (bottle trapping) and eDNA to detect a recently established invader, the smooth newt Lissotriton vulgaris vulgaris, at seven field sites in Melbourne, Australia. Over a four-month period, per-trap detection probabilities ranged from 0.01 to 0.26 among sites where L. v. vulgaris was detected, whereas per-sample eDNA estimates were much higher (0.29-1.0). Detection probabilities of both methods varied temporally (across days and months), but temporal variation appeared to be uncorrelated between methods. Only estimates of spatial variation were strongly correlated across the two sampling techniques. Environmental variables (water depth, rainfall, ambient temperature) were not clearly correlated with detection probabilities estimated via trapping, whereas eDNA detection probabilities were negatively correlated with water depth, possibly reflecting higher eDNA concentrations at lower water levels. Our findings demonstrate that eDNA sampling can be an order of magnitude more sensitive than traditional methods, and illustrate that traditional- and eDNA-based surveys can provide independent information on species distributions when occupancy surveys are conducted over short timescales.
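
    The headline comparison of per-trap and per-eDNA-sample detection probabilities translates directly into survey effort. A small sketch using the abstract's estimates (the 95% cumulative-detection target is an assumption):

```python
import numpy as np

def p_at_least_one(p, n):
    """Probability of at least one detection in n independent samples,
    each with per-sample detection probability p."""
    return 1.0 - (1.0 - p) ** n

# Abstract's estimates: per-trap detection up to 0.26; per-eDNA-sample
# detection at least 0.29. Samples needed for a 95% chance of at least
# one detection at an occupied site:
p_trap, p_edna = 0.26, 0.29
n_trap = int(np.ceil(np.log(0.05) / np.log(1.0 - p_trap)))
n_edna = int(np.ceil(np.log(0.05) / np.log(1.0 - p_edna)))
```

    At the best-case trap rate the two methods look similar, but at the upper end of the eDNA range (per-sample probability near 1.0) a single eDNA sample suffices, which is the order-of-magnitude sensitivity difference the abstract reports.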

  17. Consistently Sampled Correlation Filters with Space Anisotropic Regularization for Visual Tracking

    PubMed Central

    Shi, Guokai; Xu, Tingfa; Luo, Jiqiang; Li, Yuankun

    2017-01-01

    Most existing correlation filter-based tracking algorithms, which use fixed patches and cyclic shifts as training and detection measures, assume that the training samples are reliable and ignore the inconsistencies between training samples and detection samples. We propose to construct and study a consistently sampled correlation filter with space anisotropic regularization (CSSAR) to solve these two problems simultaneously. Our approach constructs a spatiotemporally consistent sample strategy to alleviate the redundancies in training samples caused by the cyclical shifts, eliminate the inconsistencies between training samples and detection samples, and introduce space anisotropic regularization to constrain the correlation filter for alleviating drift caused by occlusion. Moreover, an optimization strategy based on the Gauss-Seidel method was developed for obtaining robust and efficient online learning. Both qualitative and quantitative evaluations demonstrate that our tracker outperforms state-of-the-art trackers in object tracking benchmarks (OTBs). PMID:29231876

  18. Sample size determination for mediation analysis of longitudinal data.

    PubMed

    Pan, Haitao; Liu, Suyu; Miao, Danmin; Yuan, Ying

    2018-03-27

    Sample size planning for longitudinal data is crucial when designing mediation studies because sufficient statistical power is not only required in grant applications and peer-reviewed publications, but is essential to reliable research results. However, sample size determination is not straightforward for mediation analysis of longitudinal design. To facilitate planning the sample size for longitudinal mediation studies with a multilevel mediation model, this article provides the sample size required to achieve 80% power by simulations under various sizes of the mediation effect, within-subject correlations and numbers of repeated measures. The sample size calculation is based on three commonly used mediation tests: Sobel's method, the distribution of the product method and the bootstrap method. Among the three methods of testing the mediation effects, Sobel's method required the largest sample size to achieve 80% power. Bootstrapping and the distribution of the product method performed similarly and were more powerful than Sobel's method, as reflected by the relatively smaller sample sizes. For all three methods, the sample size required to achieve 80% power depended on the value of the ICC (i.e., within-subject correlation). A larger value of ICC typically required a larger sample size to achieve 80% power. Simulation results also illustrated the advantage of the longitudinal study design. Sample size tables for the most commonly encountered scenarios in practice have also been published for convenient use. An extensive simulation study showed that the distribution of the product method and the bootstrapping method have superior performance to Sobel's method, but the distribution of the product method is recommended in practice because of its lower computational load compared to the bootstrapping method. An R package has been developed for the product method of sample size determination in longitudinal mediation study design.
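
    A power-by-simulation calculation of the kind described can be sketched for the simplest single-level mediation model with Sobel's test (the effect sizes, error variances, and simulation settings below are assumptions; the paper's multilevel longitudinal model is not reproduced):

```python
import numpy as np
from math import sqrt

def sobel_power(n, a=0.3, b=0.3, sims=2000, seed=6):
    """Empirical power of Sobel's test for the mediated effect a*b in the
    single-level model M = a*X + e1, Y = b*M + e2."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(sims):
        X = rng.standard_normal(n)
        M = a * X + rng.standard_normal(n)
        Y = b * M + rng.standard_normal(n)
        a_hat = (X @ M) / (X @ X)            # OLS slope of M on X
        sa2 = np.sum((M - a_hat * X) ** 2) / (n - 2) / (X @ X)
        b_hat = (M @ Y) / (M @ M)            # OLS slope of Y on M
        sb2 = np.sum((Y - b_hat * M) ** 2) / (n - 2) / (M @ M)
        z = (a_hat * b_hat) / sqrt(a_hat**2 * sb2 + b_hat**2 * sa2)
        hits += abs(z) > 1.96                # two-sided 5% test
    return hits / sims

power_100 = sobel_power(100)
power_200 = sobel_power(200)                 # more subjects -> higher power
```

    Sample size determination then amounts to increasing n until the simulated power crosses the 80% target; the same loop structure applies to the product and bootstrap tests, at higher computational cost for the latter.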

  19. Cross-Informant Symptoms from CBCL, TRF, and YSR : Trait and Method Variance in a Normative Sample of Russian Youths

    ERIC Educational Resources Information Center

    Grigorenko, Elena L.; Geiser, Christian; Slobodskaya, Helena R.; Francis, David J.

    2010-01-01

    A large community-based sample of Russian youths (n = 841, age M = 13.17 years, SD = 2.51) was assessed with the Child Behavior Checklist (mothers and fathers separately), Teacher's Report Form, and Youth Self-Report. The multiple indicator-version of the correlated trait-correlated method minus one, or CT-C(M-1), model was applied to analyze (a)…

  20. High-resolution correlation

    NASA Astrophysics Data System (ADS)

    Nelson, D. J.

    2007-09-01

    In the basic correlation process, a sequence of time-lag-indexed correlation coefficients is computed as the inner or dot product of segments of two signals. The time-lag(s) for which the magnitude of the correlation coefficient sequence is maximized is the estimated relative time delay of the two signals. For discrete sampled signals, the delay estimated in this manner is quantized with the same relative accuracy as the clock used in sampling the signals. In addition, the correlation coefficients are real if the input signals are real. There have been many methods proposed to estimate signal delay to more accuracy than the sample interval of the digitizer clock, with some success. These methods include interpolation of the correlation coefficients, estimation of the signal delay from the group delay function, and beam forming techniques, such as the MUSIC algorithm. For spectral estimation, techniques based on phase differentiation have been popular, but these techniques have apparently not been applied to the correlation problem. We propose a phase-based delay estimation method (PBDEM) based on the phase of the correlation function that provides a significant improvement of the accuracy of time delay estimation. In the process, the standard correlation function is first calculated. A time lag error function is then calculated from the correlation phase and is used to interpolate the correlation function. The signal delay is shown to be accurately estimated as the zero crossing of the correlation phase near the index of the peak correlation magnitude. This process is nearly as fast as the conventional correlation function on which it is based. For real valued signals, a simple modification is provided, which results in the same correlation accuracy as is obtained for complex valued signals.
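
    A closely related phase-based refinement, estimating the delay from the slope of the cross-spectrum phase, can be sketched as follows; it illustrates the sub-sample accuracy available from correlation phase but is not the paper's exact PBDEM algorithm:

```python
import numpy as np

N, tau = 1024, 0.3                          # true delay: 0.3 samples
rng = np.random.default_rng(7)
x = rng.standard_normal(N)
f = np.fft.rfftfreq(N)                      # frequency in cycles per sample
X = np.fft.rfft(x)
y = np.fft.irfft(X * np.exp(-2j * np.pi * f * tau), N)  # delayed copy of x

# For y(t) = x(t - tau) the cross-spectrum X(f) Y*(f) has phase
# 2*pi*f*tau, so the delay is the slope of the phase versus angular
# frequency -- not quantized to the sample interval.
cross = np.fft.rfft(x) * np.conj(np.fft.rfft(y))
phase = np.unwrap(np.angle(cross))
omega = 2.0 * np.pi * f
tau_hat = np.polyfit(omega[1:-1], phase[1:-1], 1)[0]    # drop DC/Nyquist bins
```

    With noise-free signals the recovered delay matches the true fractional delay to near machine precision; with noise, the fit would be weighted by cross-spectral magnitude.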

  1. Daily sodium and potassium excretion can be estimated by scheduled spot urine collections.

    PubMed

    Doenyas-Barak, Keren; Beberashvili, Ilia; Bar-Chaim, Adina; Averbukh, Zhan; Vogel, Ofir; Efrati, Shai

    2015-01-01

    The evaluation of sodium and potassium intake is part of the optimal management of hypertension, metabolic syndrome, renal stones, and other conditions. To date, no convenient method for its evaluation exists, as the gold standard method of 24-hour urine collection is cumbersome and often incorrectly performed, and methods that use spot or shorter collections are not accurate enough to replace the gold standard. The aim of this study was to evaluate the correlation and agreement between a new method that uses multiple-scheduled spot urine collection and the gold standard method of 24-hour urine collection. The urine sodium or potassium to creatinine ratios were determined for four scheduled spot urine samples. The mean ratios of the four spot samples and the ratios of each of the single spot samples were corrected for estimated creatinine excretion and compared to the gold standard. A significant linear correlation was demonstrated between the 24-hour urinary solute excretions and estimated excretion evaluated by any of the scheduled spot urine samples. The correlation of the mean of the four spots was better than for any of the single spots. Bland-Altman plots showed that the differences between these measurements were within the limits of agreement. Four scheduled spot urine samples can be used as a convenient method for estimation of 24-hour sodium or potassium excretion. © 2015 S. Karger AG, Basel.
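
    The estimation scheme described, averaging spot solute-to-creatinine ratios and scaling by estimated daily creatinine excretion, reduces to simple arithmetic. The ratios and the 12 mmol/day creatinine figure below are illustrative assumptions, not the study's correction:

```python
def estimated_daily_excretion(spot_ratios, creatinine_mmol_per_day):
    """Estimate 24-hour solute excretion from scheduled spot samples: the
    mean solute/creatinine ratio scaled by the estimated daily creatinine
    excretion (an illustrative reading of the abstract's method)."""
    mean_ratio = sum(spot_ratios) / len(spot_ratios)
    return mean_ratio * creatinine_mmol_per_day

# Four scheduled spot sodium/creatinine ratios (mmol/mmol) and an assumed
# estimated creatinine excretion of 12 mmol/day:
spots = [14.0, 16.5, 15.2, 13.8]
na_24h = estimated_daily_excretion(spots, 12.0)  # estimated mmol sodium/day
```

    Averaging four scheduled spots damps the within-day variability of any single spot, which is why the abstract reports a better correlation for the mean than for individual samples.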

  2. Evaluation of the performance of a point-of-care method for total and differential white blood cell count in clozapine users.

    PubMed

    Bui, H N; Bogers, J P A M; Cohen, D; Njo, T; Herruer, M H

    2016-12-01

    We evaluated the performance of the HemoCue WBC DIFF, a point-of-care device for total and differential white cell count, primarily to test its suitability for the mandatory white blood cell monitoring in clozapine use. Leukocyte count and 5-part differentiation were performed by the point-of-care device and by a routine laboratory method in venous EDTA-blood samples from 20 clozapine users, 20 neutropenic patients, and 20 healthy volunteers. A capillary sample was also drawn from the volunteers. Intra-assay reproducibility and drop-to-drop variation were tested. The correlation between both methods in venous samples was r > 0.95 for leukocyte, neutrophil, and lymphocyte counts. The correlation between point-of-care (capillary sample) and routine (venous sample) methods for these cells was 0.772, 0.817 and 0.798, respectively. The intra-assay reproducibility was sufficient only for leukocyte and neutrophil counts. The point-of-care device can be used to screen for leukocyte and neutrophil counts. Because of the relatively high measurement uncertainty and poor correlation with venous samples, we recommend repeating the measurement with a venous sample if cell counts are in the lower reference range. In case of clozapine therapy, neutropenia can probably be excluded if high neutrophil counts are found, and patients can continue their therapy. © 2016 John Wiley & Sons Ltd.

  3. A computer program to obtain time-correlated gust loads for nonlinear aircraft using the matched-filter-based method

    NASA Technical Reports Server (NTRS)

    Scott, Robert C.; Pototzky, Anthony S.; Perry, Boyd, III

    1994-01-01

    NASA Langley Research Center has, for several years, conducted research in the area of time-correlated gust loads for linear and nonlinear aircraft. The results of this work led NASA to recommend that the Matched-Filter-Based One-Dimensional Search Method be used for gust load analyses of nonlinear aircraft. This manual describes this method, describes a FORTRAN code which performs this method, and presents example calculations for a sample nonlinear aircraft model. The name of the code is MFD1DS (Matched-Filter-Based One-Dimensional Search). The program source code, the example aircraft equations of motion, a sample input file, and a sample program output are all listed in the appendices.

  4. Chemical profiling and adulteration screening of Aquilariae Lignum Resinatum by Fourier transform infrared (FT-IR) spectroscopy and two-dimensional correlation infrared (2D-IR) spectroscopy.

    PubMed

    Qu, Lei; Chen, Jian-Bo; Zhang, Gui-Jun; Sun, Su-Qin; Zheng, Jing

    2017-03-05

    As a kind of expensive perfume and valuable herb, Aquilariae Lignum Resinatum (ALR) is often adulterated for economic motivations. In this research, Fourier transform infrared (FT-IR) spectroscopy is employed to establish a simple and quick method for the adulteration screening of ALR. First, the principal chemical constituents of ALR are characterized by FT-IR spectroscopy at room temperature and two-dimensional correlation infrared (2D-IR) spectroscopy with thermal perturbation. Besides the common cellulose and lignin compounds, a certain amount of resin is the characteristic constituent of ALR. Synchronous and asynchronous 2D-IR spectra indicate that the resin (an unstable secondary metabolite) is more sensitive than cellulose and lignin (stable structural constituents) to the thermal perturbation. Using a certified ALR sample as the reference, the infrared spectral correlation threshold is determined by 30 authentic samples and 6 adulterated samples. The spectral correlation coefficient of an authentic ALR sample to the standard reference should be not less than 0.9886 (p = 0.01). Three commercial adulterated ALR samples are identified by the correlation threshold. Further interpretation of the infrared spectra of the adulterated samples indicates the common adulterating methods: counterfeiting with other kinds of wood, adding ingredients such as sand to increase the weight, and adding cheap resins such as rosin to increase the content of resin compounds. Results of this research prove that FT-IR spectroscopy can be used as a simple and accurate quality control method of ALR. Copyright © 2016 Elsevier B.V. All rights reserved.
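
    The screening rule, comparing a spectral correlation coefficient against the 0.9886 threshold, can be sketched with synthetic spectra (the Gaussian band shapes, positions, and noise level below are assumptions for illustration, not ALR band assignments):

```python
import numpy as np

def spectral_correlation(spectrum, reference):
    """Pearson correlation between a test IR spectrum and the certified
    reference spectrum, sampled on the same wavenumber grid."""
    return np.corrcoef(spectrum, reference)[0, 1]

THRESHOLD = 0.9886   # authenticity cutoff reported in the abstract (p = 0.01)

# Synthetic illustration: an authentic-like spectrum (reference plus small
# noise) versus an adulterated-like one (an added band shifts the profile).
wn = np.linspace(400, 4000, 1800)
reference = (np.exp(-((wn - 1600) / 150) ** 2)
             + 0.6 * np.exp(-((wn - 2900) / 100) ** 2))
rng = np.random.default_rng(8)
authentic = reference + rng.normal(0.0, 0.005, wn.size)
adulterated = reference + 0.5 * np.exp(-((wn - 1050) / 200) ** 2)

r_auth = spectral_correlation(authentic, reference)
r_adul = spectral_correlation(adulterated, reference)
```

    An added band, such as a cheap resin's absorption, pulls the correlation below the threshold even when the genuine bands are still present.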

  5. Chemical profiling and adulteration screening of Aquilariae Lignum Resinatum by Fourier transform infrared (FT-IR) spectroscopy and two-dimensional correlation infrared (2D-IR) spectroscopy

    NASA Astrophysics Data System (ADS)

    Qu, Lei; Chen, Jian-bo; Zhang, Gui-Jun; Sun, Su-qin; Zheng, Jing

    2017-03-01

    As a kind of expensive perfume and valuable herb, Aquilariae Lignum Resinatum (ALR) is often adulterated for economic reasons. In this research, Fourier transform infrared (FT-IR) spectroscopy is employed to establish a simple and quick method for the adulteration screening of ALR. First, the principal chemical constituents of ALR are characterized by FT-IR spectroscopy at room temperature and by two-dimensional correlation infrared (2D-IR) spectroscopy under thermal perturbation. Besides the common cellulose and lignin compounds, a certain amount of resin is the characteristic constituent of ALR. Synchronous and asynchronous 2D-IR spectra indicate that the resin (an unstable secondary metabolite) is more sensitive than cellulose and lignin (stable structural constituents) to the thermal perturbation. Using a certified ALR sample as the reference, the infrared spectral correlation threshold is determined from 30 authentic samples and 6 adulterated samples: the spectral correlation coefficient of an authentic ALR sample to the standard reference should be not less than 0.9886 (p = 0.01). Three commercial adulterated ALR samples are identified by this correlation threshold. Further interpretation of the infrared spectra of the adulterated samples reveals the common adulteration methods: counterfeiting with other kinds of wood, adding ingredients such as sand to increase the weight, and adding cheap resins such as rosin to increase the apparent resin content. Results of this research prove that FT-IR spectroscopy can be used as a simple and accurate quality control method for ALR.

  6. Evaluation of magnetic nanoparticle samples made from biocompatible ferucarbotran by time-correlation magnetic particle imaging reconstruction method

    PubMed Central

    2013-01-01

    Background Molecular imaging using magnetic nanoparticles (MNPs)—magnetic particle imaging (MPI)—has attracted interest for the early diagnosis of cancer and cardiovascular disease. However, because a steep local magnetic field distribution is required to obtain a defined image, sophisticated hardware is needed, and it is therefore desirable to achieve excellent image quality even with low-performance hardware. In this study, the spatial resolution of MPI was evaluated using an image reconstruction method based on the correlation information of the magnetization signal in the time domain, applied to MNP samples made from biocompatible ferucarbotran with adjusted particle diameters. Methods The magnetization characteristics and particle diameters of four types of MNP samples made from ferucarbotran were evaluated. A numerical analysis based on our proposed method, which calculates the image intensity from the correlation between the magnetization signal generated from the MNPs and the system function, was performed, and the resulting image quality was compared with that of the prototype in terms of image resolution and artifacts. Results The MNP samples obtained by adjusting ferucarbotran showed properties superior to conventional ferucarbotran samples, and the numerical analysis showed that the same image quality could be obtained using a gradient magnetic field generator with 0.6 times the performance. However, because the proposed method theoretically introduces image blurring, an algorithm will be required to improve performance. Conclusions MNP samples obtained by adjusting ferucarbotran showed magnetization properties superior to conventional ferucarbotran samples, and by using such samples, comparable image quality (spatial resolution) could be obtained with a lower gradient magnetic field intensity. PMID:23734917
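    The core idea of the time-correlation reconstruction, as the abstract describes it, is to score each image position by the correlation between the measured magnetization signal and that position's system function. The following is only a schematic sketch of that idea using a normalized inner product; the authors' actual algorithm is more elaborate, and all names and data here are assumptions:

    ```python
    import numpy as np

    def reconstruct_intensity(signal, system_functions):
        # Score each pixel by the normalized time-domain correlation between
        # the measured magnetization signal and that pixel's system function.
        s = np.asarray(signal, float)
        s = (s - s.mean()) / np.linalg.norm(s - s.mean())
        out = []
        for g in system_functions:
            g = np.asarray(g, float) - np.mean(g)
            out.append(float(s @ (g / np.linalg.norm(g))))
        return np.array(out)
    ```

    A pixel whose system function matches the measured signal scores near 1; unrelated pixels score near 0, which is the blurring-prone contrast the abstract alludes to.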

  7. Chemical compositions, chromatographic fingerprints and antioxidant activities of Andrographis Herba.

    PubMed

    Zhao, Yang; Kao, Chun-Pin; Wu, Kun-Chang; Liao, Chi-Ren; Ho, Yu-Ling; Chang, Yuan-Shiun

    2014-11-10

    This paper describes the development of an HPLC-UV-MS method for the quantitative determination of andrographolide and dehydroandrographolide in Andrographis Herba and the establishment of its chromatographic fingerprint. The method was validated for linearity, limits of detection and quantification, inter- and intra-day precision, repeatability, stability and recovery. All the validation results of the quantitative determination and fingerprinting methods were satisfactory. The developed method was then applied to assay the contents of andrographolide and dehydroandrographolide and to acquire the fingerprints of all the collected Andrographis Herba samples. Furthermore, similarity analysis and principal component analysis were used to reveal the similarities and differences between the samples on the basis of the characteristic peaks. More importantly, the DPPH free radical-scavenging and ferric reducing capacities of the Andrographis Herba samples were assayed. By bivariate correlation analysis, we found that six compounds are positively correlated with the DPPH free radical-scavenging and ferric reducing capacities, and four compounds are negatively correlated with them.

  8. Application of the correlation constrained multivariate curve resolution alternating least-squares method for analyte quantitation in the presence of unexpected interferences using first-order instrumental data.

    PubMed

    Goicoechea, Héctor C; Olivieri, Alejandro C; Tauler, Romà

    2010-03-01

    Correlation constrained multivariate curve resolution-alternating least-squares is shown to be a feasible method for processing first-order instrumental data and achieving analyte quantitation in the presence of unexpected interferences. For both simulated and experimental data sets, the proposed method could correctly retrieve the analyte and interference spectral profiles and perform accurate estimations of analyte concentrations in test samples. Since no information concerning the interferences was present in the calibration samples, the proposed multivariate calibration approach including the correlation constraint achieves the so-called second-order advantage for the analyte of interest, an advantage otherwise associated with richer, higher-order instrumental data. The proposed method is tested using a simulated data set and two experimental data systems, one for the determination of ascorbic acid in powder juices using UV-visible absorption spectral data, and another for the determination of tetracycline in serum samples using fluorescence emission spectroscopy.

  9. Estimation of Rank Correlation for Clustered Data

    PubMed Central

    Rosner, Bernard; Glynn, Robert

    2017-01-01

    It is well known that the sample correlation coefficient (Rxy) is the maximum likelihood estimator (MLE) of the Pearson correlation (ρxy) for i.i.d. bivariate normal data. However, this is not true for ophthalmologic data where X (e.g., visual acuity) and Y (e.g., visual field) are available for each eye and there is positive intraclass correlation for both X and Y in fellow eyes. In this paper, we provide a regression-based approach for obtaining the MLE of ρxy for clustered data, which can be implemented using standard mixed effects model software. This method is also extended to allow for estimation of partial correlation by controlling both X and Y for a vector U of other covariates. In addition, these methods can be extended to allow for estimation of rank correlation for clustered data by (a) converting ranks of both X and Y to the probit scale, (b) estimating the Pearson correlation between probit scores for X and Y, and (c) using the relationship between Pearson and rank correlation for bivariate normally distributed data. The validity of the methods in finite-sized samples is supported by simulation studies. Finally, two examples from ophthalmology and analgesic abuse are used to illustrate the methods. PMID:28399615
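    Steps (a)-(c) for the i.i.d. (unclustered) case can be sketched as follows; the clustered MLE itself requires the mixed-effects regression described in the abstract, which is not reproduced here, and the function name is an assumption:

    ```python
    import numpy as np
    from scipy.stats import norm, rankdata

    def rank_correlation_via_probits(x, y):
        n = len(x)
        # (a) ranks -> uniform scores -> probit (standard normal quantile) scale
        px = norm.ppf(rankdata(x) / (n + 1))
        py = norm.ppf(rankdata(y) / (n + 1))
        # (b) Pearson correlation between the probit scores
        rho = np.corrcoef(px, py)[0, 1]
        # (c) Pearson -> rank correlation under bivariate normality:
        #     r_s = (6 / pi) * arcsin(rho / 2)
        return (6.0 / np.pi) * np.arcsin(rho / 2.0)
    ```

    For perfectly monotone data the probit scores coincide, rho = 1, and the mapping in step (c) returns a rank correlation of exactly 1.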

  10. Monitoring the chemical production of citrus-derived bioactive 5-demethylnobiletin using surface enhanced Raman spectroscopy

    PubMed Central

    Zheng, Jinkai; Fang, Xiang; Cao, Yong; Xiao, Hang; He, Lili

    2013-01-01

    To develop an accurate and convenient method for monitoring the production of the citrus-derived bioactive 5-demethylnobiletin from the demethylation reaction of nobiletin, we compared surface enhanced Raman spectroscopy (SERS) methods with a conventional HPLC method. Our results show that both the substrate-based and solution-based SERS methods correlated very well with the HPLC method. The solution method produced a lower root mean square error of calibration and a higher correlation coefficient than the substrate method. The solution method utilized an ‘affinity chromatography’-like procedure to separate the reactant nobiletin from the product 5-demethylnobiletin based on their different binding affinities to the silver dendrites. The substrate method was simpler and faster for collecting the SERS ‘fingerprint’ spectra of the samples, as no incubation between samples and silver was needed and only trace amounts of sample were required. Our results demonstrate that the SERS methods were superior to the HPLC method in conveniently and rapidly characterizing and quantifying 5-demethylnobiletin production. PMID:23885986
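    The two figures of merit compared above, root mean square error of calibration (RMSEC) and the correlation coefficient, follow from a simple linear calibration fit. A generic sketch with made-up data, not the authors' processing pipeline:

    ```python
    import numpy as np

    def calibration_metrics(known_conc, signal):
        # Fit a linear calibration of signal vs. known concentration,
        # then report RMSEC of back-predicted concentrations and Pearson r.
        known_conc = np.asarray(known_conc, float)
        signal = np.asarray(signal, float)
        slope, intercept = np.polyfit(known_conc, signal, 1)
        predicted = (signal - intercept) / slope
        rmsec = float(np.sqrt(np.mean((predicted - known_conc) ** 2)))
        r = float(np.corrcoef(known_conc, signal)[0, 1])
        return rmsec, r
    ```

    A lower RMSEC and a higher r, as reported for the solution-based method, indicate the tighter calibration.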

  11. Do sampling methods differ in their utility for ecological monitoring? Comparison of line-point intercept, grid-point intercept, and ocular estimate methods

    USDA-ARS?s Scientific Manuscript database

    This study compared the utility of three sampling methods for ecological monitoring based on: interchangeability of data (rank correlations), precision (coefficient of variation), cost (minutes/transect), and potential of each method to generate multiple indicators. Species richness and foliar cover...

  12. Automated spectrophotometric bicarbonate analysis in duodenal juice compared to the back titration method.

    PubMed

    Erchinger, Friedemann; Engjom, Trond; Gudbrandsen, Oddrun Anita; Tjora, Erling; Gilja, Odd H; Dimcevski, Georg

    2016-01-01

    We have recently evaluated a short endoscopic secretin test for exocrine pancreatic function. Bicarbonate concentration in duodenal juice is an important parameter in this test. Measurement of bicarbonate by back titration, the gold standard method, is time consuming, expensive and technically difficult, so a simplified method is warranted. We aimed to evaluate an automated spectrophotometric method in samples spanning the effective range of bicarbonate concentrations in duodenal juice. We also evaluated whether freezing samples before analysis would affect the results. Patients routinely examined with the short endoscopic secretin test, suspected for various reasons to have decreased pancreatic function, were included. Bicarbonate in duodenal juice was quantified by back titration and by automated spectrophotometry. Both fresh and thawed samples were analysed spectrophotometrically. In total, 177 samples from 71 patients were analysed. The correlation coefficient of all measurements was r = 0.98 (p < 0.001); the correlation coefficient of fresh versus frozen samples measured by automated spectrophotometry (n = 25) was r = 0.96 (p < 0.001). In conclusion, measurement of bicarbonate in fresh and thawed samples by automated spectrophotometric analysis correlates excellently with the back-titration gold standard. This is a major simplification of direct pancreatic function testing, and allows a wider distribution of bicarbonate testing in duodenal juice. Extreme bicarbonate concentrations obtained by the autoanalyser method must, however, be interpreted with caution. Copyright © 2016 IAP and EPC. Published by Elsevier India Pvt Ltd. All rights reserved.

  13. Damage evolution analysis of coal samples under cyclic loading based on single-link cluster method

    NASA Astrophysics Data System (ADS)

    Zhang, Zhibo; Wang, Enyuan; Li, Nan; Li, Xuelong; Wang, Xiaoran; Li, Zhonghui

    2018-05-01

    In this paper, the acoustic emission (AE) response of coal samples under cyclic loading is measured. The results show a good positive relation between AE parameters and stress, and the AE signal of coal samples under cyclic loading exhibits an obvious Kaiser effect. The single-link cluster (SLC) method is applied to analyze the spatial evolution characteristics of AE events and the damage evolution process of the coal samples. It is found that the subset scale of the SLC structure becomes smaller and smaller as the number of loading cycles increases, and there is a negative linear relationship between the subset scale and the degree of damage. The spatial correlation length ξ of the SLC structure is also calculated: ξ fluctuates around a certain value from the second to the fifth loading cycle, but clearly increases in the sixth. Based on the criterion of microcrack density, the failure process of a coal sample is a transformation from small-scale to large-scale damage, which explains the change in the spatial correlation length. This systematic analysis shows that the SLC method is an effective way to study the damage evolution of coal samples under cyclic loading, and it provides an important reference for studying coal bursts.
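    Single-link clustering of AE event locations, the core of the SLC analysis, is available off the shelf. A sketch with hypothetical event coordinates using scipy's standard single-linkage routine (not the authors' implementation):

    ```python
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    # Hypothetical AE event coordinates (x, y, z) in mm: two spatially
    # distinct groups of events, standing in for microcrack locations.
    events = np.array([
        [0.0, 0.0, 0.0], [1.0, 0.5, 0.2], [0.5, 1.0, 0.1],
        [10.0, 10.0, 10.0], [10.5, 9.5, 10.2], [9.8, 10.4, 9.9],
    ])

    # Single-link (nearest-neighbour) agglomerative clustering of event
    # locations, then cut the dendrogram at a 5 mm link distance to obtain
    # the SLC subsets whose scale the study tracks across loading cycles.
    Z = linkage(events, method="single")
    labels = fcluster(Z, t=5.0, criterion="distance")
    ```

    Tracking how the subset sizes (cluster populations) change between loading cycles gives the subset-scale trend described in the abstract.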

  14. Validation of a quantitative Eimeria spp. PCR for fresh droppings of broiler chickens.

    PubMed

    Peek, H W; Ter Veen, C; Dijkman, R; Landman, W J M

    2017-12-01

    A quantitative polymerase chain reaction (qPCR) for the seven chicken Eimeria spp. was modified and validated for direct use on fresh droppings. The analytical specificity of the qPCR on droppings was 100%. Its analytical sensitivity (non-sporulated oocysts/g droppings) was 41 for E. acervulina, ≤2900 for E. brunetti, 710 for E. praecox, 1500 for E. necatrix, 190 for E. tenella, 640 for E. maxima, and 1100 for E. mitis. Field validation of the qPCR was done using droppings with non-sporulated oocysts from 19 broiler flocks. To reduce the number of qPCR tests, five grams of each pooled sample (consisting of ten fresh droppings) per time point were blended into one mixed sample. Comparison of the oocysts per gram (OPG)-counting method with the qPCR using pooled samples (n = 1180) yielded a Pearson's correlation coefficient of 0.78 (95% CI: 0.76-0.80), and a coefficient of 0.76 (95% CI: 0.70-0.81) using mixed samples (n = 236). Comparison of the average of the OPG-counts of the five pooled samples with the mixed sample per time point (n = 236) showed a Pearson's correlation coefficient of 0.94 (95% CI: 0.92-0.95) for the OPG-counting method and 0.87 (95% CI: 0.84-0.90) for the qPCR, indicating that mixed samples are practically equivalent to the mean of five pooled samples. The good correlation between the OPG-counting method and the qPCR was further confirmed by the visual agreement between the total oocyst/g shedding patterns measured with both techniques in the 19 broiler flocks using the mixed samples.
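    Confidence intervals of the kind quoted for these Pearson coefficients are conventionally obtained with the Fisher z-transformation; a stdlib-only sketch (assuming, plausibly but not certainly, that this is how the study's intervals were computed):

    ```python
    import math
    from statistics import NormalDist

    def pearson_ci(r, n, conf=0.95):
        # Confidence interval for a Pearson correlation via Fisher's
        # z-transformation: z = atanh(r), SE(z) = 1 / sqrt(n - 3).
        z = math.atanh(r)
        se = 1.0 / math.sqrt(n - 3)
        zcrit = NormalDist().inv_cdf(0.5 + conf / 2.0)
        return math.tanh(z - zcrit * se), math.tanh(z + zcrit * se)
    ```

    For r = 0.78 with n = 1180 pooled samples this reproduces roughly the quoted 0.76-0.80 interval.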

  15. Correlations of Apparent Cellulose Crystallinity Determined by XRD, NMR, IR, Raman, and SFG Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, David K; Lee, Christopher; Dazen, Kevin

    2015-07-04

    Although the cellulose crystallinity index (CI) is used widely, its limitations have not been adequately described. In this study, the CI values of a set of reference samples were determined from X-ray diffraction (XRD), nuclear magnetic resonance (NMR), and infrared (IR), Raman, and vibrational sum frequency generation (SFG) spectroscopies. The intensities of certain crystalline peaks in the IR, Raman, and SFG spectra positively correlated with the amount of crystalline cellulose in the sample, but the correlation with XRD was nonlinear as a result of fundamental differences in detection sensitivity to crystalline cellulose and improper baseline corrections for amorphous contributions. It is demonstrated that the intensity and shape of the XRD signal are affected by both the amount of crystalline cellulose and the crystal size, which makes XRD analysis complicated. The methods investigated show the same qualitative trends across samples, but the absolute CI values differ depending on the determination method. This clearly indicates that the CI, as estimated by different methods, is not an absolute value and that for a given set of samples the CI values can be compared only as a qualitative measure.

  16. Correlations of Apparent Cellulose Crystallinity Determined by XRD, NMR, IR, Raman, and SFG Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Christopher M; Dazen, Kevin; Kafle, Kabindra

    2015-01-01

    Although the cellulose crystallinity index (CI) is used widely, its limitations have not been adequately described. In this study, the CI values of a set of reference samples were determined from X-ray diffraction (XRD), nuclear magnetic resonance (NMR), and infrared (IR), Raman, and vibrational sum frequency generation (SFG) spectroscopies. The intensities of certain crystalline peaks in the IR, Raman, and SFG spectra positively correlated with the amount of crystalline cellulose in the sample, but the correlation with XRD was nonlinear as a result of fundamental differences in detection sensitivity to crystalline cellulose and improper baseline corrections for amorphous contributions. It is demonstrated that the intensity and shape of the XRD signal are affected by both the amount of crystalline cellulose and the crystal size, which makes XRD analysis complicated. The methods investigated show the same qualitative trends across samples, but the absolute CI values differ depending on the determination method. This clearly indicates that the CI, as estimated by different methods, is not an absolute value and that for a given set of samples the CI values can be compared only as a qualitative measure.

  17. Analysing baryon acoustic oscillations in sparse spectroscopic samples via cross-correlation with dense photometry

    NASA Astrophysics Data System (ADS)

    Patej, A.; Eisenstein, D. J.

    2018-07-01

    We develop a formalism for measuring the cosmological distance scale from baryon acoustic oscillations (BAO) using the cross-correlation of a sparse redshift survey with a denser photometric sample. This reduces the shot noise that would otherwise affect the autocorrelation of the sparse spectroscopic map. As a proof of principle, we make the first on-sky application of this method to a sparse sample defined as the z > 0.6 tail of the Sloan Digital Sky Survey's (SDSS) BOSS/CMASS sample of galaxies and a dense photometric sample from SDSS DR9. We find a 2.8σ preference for the BAO peak in the cross-correlation at an effective z = 0.64, from which we measure the angular diameter distance DM(z = 0.64) = (2418 ± 73 Mpc)(rs/rs, fid). Accordingly, we expect that using this method to combine sparse spectroscopy with the deep, high-quality imaging that is just now becoming available will enable higher precision BAO measurements than possible with the spectroscopy alone.

  18. The special case of the 2 × 2 table: asymptotic unconditional McNemar test can be used to estimate sample size even for analysis based on GEE.

    PubMed

    Borkhoff, Cornelia M; Johnston, Patrick R; Stephens, Derek; Atenafu, Eshetu

    2015-07-01

    Aligning the method used to estimate sample size with the planned analytic method ensures the sample size needed to achieve the planned power. When using generalized estimating equations (GEE) to analyze a paired binary primary outcome with no covariates, many investigators use an exact McNemar test to calculate sample size. We reviewed the approaches to sample size estimation for paired binary data and compared the sample size estimates on the same numerical examples. We used the hypothesized sample proportions for the 2 × 2 table to calculate the correlation between the marginal proportions and thereby estimate sample size based on GEE. We then solved for the inside (cell) proportions from the correlation and the marginal proportions to estimate sample size based on the exact McNemar, asymptotic unconditional McNemar, and asymptotic conditional McNemar tests. The asymptotic unconditional McNemar test is a good approximation of the GEE method of Pan. The exact McNemar test is too conservative and yields unnecessarily large sample size estimates compared with all other methods. In the special case of a 2 × 2 table, even when a GEE approach to binary logistic regression is the planned analytic method, the asymptotic unconditional McNemar test can be used to estimate sample size. We do not recommend using an exact McNemar test. Copyright © 2015 Elsevier Inc. All rights reserved.
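    For reference, the asymptotic unconditional McNemar sample size can be sketched with a Connor-type formula; this is an assumption on my part about the exact variant the authors compare. Here p10 and p01 are the discordant-cell proportions of the 2 × 2 table:

    ```python
    import math
    from statistics import NormalDist

    def mcnemar_pairs(p10, p01, alpha=0.05, power=0.80):
        # Asymptotic unconditional sample size (number of pairs) for
        # McNemar's test: n = [z_a*sqrt(psi) + z_b*sqrt(psi - delta^2)]^2
        # / delta^2, with psi = p10 + p01 and delta = p10 - p01.
        z_a = NormalDist().inv_cdf(1.0 - alpha / 2.0)
        z_b = NormalDist().inv_cdf(power)
        psi, delta = p10 + p01, p10 - p01
        n = (z_a * math.sqrt(psi) + z_b * math.sqrt(psi - delta ** 2)) ** 2 / delta ** 2
        return math.ceil(n)
    ```

    For example, detecting a discordance of p10 = 0.2 versus p01 = 0.1 at two-sided alpha = 0.05 with 80% power requires 234 pairs under this formula.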

  19. A comparison of macroinvertebrate and habitat methods of data collection in the Little Colorado River Watershed, Arizona 2007

    USGS Publications Warehouse

    Spindler, Patrice; Paretti, Nick V.

    2007-01-01

    The Arizona Department of Environmental Quality (ADEQ) and the U.S. Environmental Protection Agency (USEPA) Ecological Monitoring and Assessment Program (EMAP), use different field methods for collecting macroinvertebrate samples and habitat data for bioassessment purposes. Arizona’s Biocriteria index was developed using a riffle habitat sampling methodology, whereas the EMAP method employs a multi-habitat sampling protocol. There was a need to demonstrate comparability of these different bioassessment methodologies to allow use of the EMAP multi-habitat protocol for both statewide probabilistic assessments for integration of the EMAP data into the national (305b) assessment and for targeted in-state bioassessments for 303d determinations of standards violations and impaired aquatic life conditions. The purpose of this study was to evaluate whether the two methods yield similar bioassessment results, such that the data could be used interchangeably in water quality assessments. In this Regional EMAP grant funded project, a probabilistic survey of 30 sites in the Little Colorado River basin was conducted in the spring of 2007. Macroinvertebrate and habitat data were collected using both ADEQ and EMAP sampling methods, from adjacent reaches within these stream channels.


    All analyses indicated that the two macroinvertebrate sampling methods were significantly correlated. ADEQ and EMAP samples were classified into the same scoring categories (meeting, inconclusive, violating the biocriteria standard) 82% of the time. When the ADEQ-IBI was applied to both the ADEQ and EMAP taxa lists, the resulting IBI scores were significantly correlated (r=0.91), even though only 4 of the 7 metrics in the IBI were significantly correlated. The IBI scores from both methods were significantly correlated to the percent of riffle habitat, even though the average percent riffle habitat was only 30% of the stream reach. Multivariate analyses found that the percent riffle was an important attribute for both datasets in classifying IBI scores into assessment categories.


    Habitat measurements generated from EMAP and ADEQ methods were also significantly correlated; 13 of 16 habitat measures were significantly correlated (p<0.01). The visual-based percentage estimates of percent riffle and pool habitats, vegetative cover and percent canopy cover, and substrate measurements of percent fine substrate and embeddedness were all remarkably similar, given the different field methods used. A multivariate analysis identified substrate and flow conditions, as well as canopy cover as important combinations of habitat attributes affecting both IBI scores. These results indicate that similar habitat measures can be obtained using two different field sampling protocols. In addition, similar combinations of these habitat parameters were important to macroinvertebrate community condition in multivariate analyses of both ADEQ and EMAP datasets.


    These results indicate the two sampling methods for macroinvertebrates and habitat data were very similar in terms of bioassessment results and stressors. While the bioassessment category was not identical for all sites, overall the assessments were significantly correlated, providing similar bioassessment results for the cold water streams used in this study. The findings of this study indicate that ADEQ can utilize either a riffle-based sampling methodology or a multi-habitat sampling approach in cold water streams as both yield similar results relative to the macroinvertebrate assemblage. These results will allow for use of either macroinvertebrate dataset to determine water quality standards compliance with the ADEQ Indexes of Biological Integrity, for which threshold values were just recently placed into the Arizona Surface Water Quality Standards. While this survey did not include warm water desert streams of Arizona, we would predict that EMAP and ADEQ sampling methodologies would provide similar bioassessment results and would not be significantly different, as we have found that the percent riffle habitat in cold and warm water perennial, wadeable streams is not significantly different. However, a comparison study of sampling methodologies in warm water streams should be conducted to confirm the predicted similarity of bioassessment results. ADEQ will continue to implement a monitoring strategy that includes probabilistic monitoring for a statewide ecological assessment of stream conditions. Conclusions from this study will guide decisions regarding the most appropriate sampling methods for future probabilistic monitoring sample plans.

  20. Comparison of mucosal lining fluid sampling methods and influenza-specific IgA detection assays for use in human studies of influenza immunity.

    PubMed

    de Silva, Thushan I; Gould, Victoria; Mohammed, Nuredin I; Cope, Alethea; Meijer, Adam; Zutt, Ilse; Reimerink, Johan; Kampmann, Beate; Hoschler, Katja; Zambon, Maria; Tregoning, John S

    2017-10-01

    We need greater understanding of the mechanisms underlying protection against influenza virus to develop more effective vaccines, and to do this we need better, more reproducible methods of sampling the nasal mucosa. The aim of the current study was to compare levels of influenza A virus subtype-specific IgA collected using three different methods of nasal sampling. Samples were collected from healthy adult volunteers before and after LAIV immunization by nasal wash, flocked swabs and Synthetic Absorptive Matrix (SAM) strips. Influenza A virus subtype-specific IgA levels were measured by haemagglutinin binding ELISA or haemagglutinin binding microarray, and the functional response was assessed by microneutralization. Nasosorption using SAM strips led to the recovery of a more concentrated sample of material, with a significantly higher level of total and influenza H1-specific IgA. However, an equivalent percentage of specific IgA was observed with all sampling methods when normalized to the total IgA. Responses measured using a recently developed antibody microarray platform, which allows evaluation of binding to multiple influenza strains simultaneously with small sample volumes, were compared to ELISA, and there was a good correlation between ELISA and microarray values. Material recovered from SAM strips was weakly neutralizing when used in an in vitro assay, with a modest correlation between the level of IgA measured by ELISA and neutralization, but a greater correlation between microarray-measured IgA and neutralizing activity. In conclusion, we tested three different methods of nasal sampling and show that flocked swabs and novel SAM strips are appropriate alternatives to traditional nasal washes for the assessment of mucosal influenza humoral immunity. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Capillary whole blood testing by a new portable monitor. Comparison with standard determination of the international normalized ratio.

    PubMed

    de Miguel, Dunia; Burgaleta, Carmen; Reyes, Eduardo; Pascual, Teresa

    2003-07-01

    We evaluated a new portable monitor (AvoSure PT PRO, Menarini Diagnostics, Firenze, Italy) developed to test the prothrombin time in capillary blood and plasma by comparing it with the standard laboratory determination. We studied 62 patients receiving acenocoumarol therapy. The international normalized ratio (INR) in capillary blood was analyzed by 2 methods: AvoSure PT PRO and Thrombotrack Nycomed Analyzer (Axis-Shield, Dundee, Scotland). Parallel studies were performed in plasma samples by a reference method using the Behring Coagulation Timer (Behring Diagnostics, Marburg, Germany). Plasma samples also were tested with the AvoSure PT PRO. Correlation was good for INR values for capillary blood and plasma samples by AvoSure PT PRO and our reference method (R2 = 0.8596) and for capillary blood samples tested by the AvoSure PT PRO and Thrombotrack Nycomed Analyzer (R2 = 0.8875). The correlation for INR in capillary blood and plasma samples by AvoSure PT PRO was 0.6939 (P < .0004). Capillary blood determinations are rapid and effective for monitoring oral anticoagulation therapy and have a high correlation to plasma determinations. AvoSure PT PRO is accurate for controlling INR in plasma and capillary blood samples, may be used in outpatient clinics, and has advantages over previous portable monitors.

  2. ADHD and Method Variance: A Latent Variable Approach Applied to a Nationally Representative Sample of College Freshmen

    ERIC Educational Resources Information Center

    Konold, Timothy R.; Glutting, Joseph J.

    2008-01-01

    This study employed a correlated trait-correlated method application of confirmatory factor analysis to disentangle trait and method variance from measures of attention-deficit/hyperactivity disorder obtained at the college level. The two trait factors were "Diagnostic and Statistical Manual of Mental Disorders-Fourth Edition" ("DSM-IV")…

  3. Exact tests using two correlated binomial variables in contemporary cancer clinical trials.

    PubMed

    Yu, Jihnhee; Kepner, James L; Iyer, Renuka

    2009-12-01

    New therapy strategies for the treatment of cancer are rapidly emerging because of recent technology advances in genetics and molecular biology. Although newer targeted therapies can improve survival without measurable changes in tumor size, clinical trial conduct has remained nearly unchanged. When potentially efficacious therapies are tested, current clinical trial design and analysis methods may not be suitable for detecting therapeutic effects. We propose an exact method with respect to testing cytostatic cancer treatment using correlated bivariate binomial random variables to simultaneously assess two primary outcomes. The method is easy to implement. It does not increase the sample size over that of the univariate exact test and in most cases reduces the sample size required. Sample size calculations are provided for selected designs.

  4. Treatment of Nuclear Data Covariance Information in Sample Generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swiler, Laura Painton; Adams, Brian M.; Wieselquist, William

    This report summarizes a NEAMS (Nuclear Energy Advanced Modeling and Simulation) project focused on developing a sampling capability that can handle the challenges of generating samples from nuclear cross-section data. The covariance information between energy groups tends to be very ill-conditioned and thus poses a problem for traditional methods of generating correlated samples. This report outlines a method that addresses sample generation from such cross-section covariance matrices.
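    One common workaround for ill-conditioned covariance matrices is an eigendecomposition with slightly negative eigenvalues clipped to zero, used where a plain Cholesky factorization fails outright. This is an illustrative technique, not necessarily the method the report develops:

    ```python
    import numpy as np

    def correlated_samples(mean, cov, n, seed=0):
        # Draw correlated multivariate-normal samples from a possibly
        # singular or ill-conditioned covariance matrix.
        cov = np.asarray(cov, dtype=float)
        # Symmetrize, eigendecompose, and clip negative eigenvalues that
        # arise from round-off; Cholesky would reject such a matrix.
        w, v = np.linalg.eigh((cov + cov.T) / 2.0)
        w = np.clip(w, 0.0, None)
        L = v * np.sqrt(w)               # factor with L @ L.T ≈ cov
        rng = np.random.default_rng(seed)
        z = rng.standard_normal((n, len(mean)))
        return np.asarray(mean) + z @ L.T
    ```

    With a rank-deficient covariance such as [[1, 1], [1, 1]] (where np.linalg.cholesky raises), the sampler still produces draws whose two components are perfectly correlated, as the input demands.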

  5. Automated measurement of carbohydrate-deficient transferrin using the Bio-Rad %CDT by the HPLC test on a Variant HPLC system: evaluation and comparison with other routine procedures.

    PubMed

    Schellenberg, François; Mennetrey, Louise; Girre, Catherine; Nalpas, Bertrand; Pagès, Jean Christophe

    2008-01-01

    In this study, we evaluated the new %CDT by the HPLC method (Bio-Rad, Germany) on a Variant™ HPLC system (Bio-Rad), checked the correlation with well-known methods and calculated the diagnostic value of the test. Intra-run and day-to-day precision values were calculated for samples with extreme serum transferrin concentrations, high trisialotransferrin and interfering conditions (haemolysed, lactescent and icteric samples). The method was compared with two routine procedures, the %CDT TIA (Bio-Rad, Hercules, CA, USA) and the Capillarys™ CDT (Sebia, France). A total of 350 clinical serum samples were used for a case-control study. Precision values were better in high CDT and medium CDT pools than in low CDT pools. The serum transferrin concentration had no effect on CDT measurement, except in samples with serum transferrin <1 g/L. Haemolysis was the only interfering situation. The method showed high correlation (r(2) > 0.95) with the two other methods (%CDT TIA and CZE %CDT). The global predictive value of the test was >0.90 at 1.9% cut-off. These results demonstrate that the %CDT by the HPLC test is suitable for CDT routine measurement; the results from the high-throughput Variant™ system are well correlated with other methods and are of high diagnostic value.

  6. Estimation of rank correlation for clustered data.

    PubMed

    Rosner, Bernard; Glynn, Robert J

    2017-06-30

    It is well known that the sample correlation coefficient (R_xy) is the maximum likelihood estimator of the Pearson correlation (ρ_xy) for independent and identically distributed (i.i.d.) bivariate normal data. However, this is not true for ophthalmologic data where X (e.g., visual acuity) and Y (e.g., visual field) are available for each eye and there is positive intraclass correlation for both X and Y in fellow eyes. In this paper, we provide a regression-based approach for obtaining the maximum likelihood estimator of ρ_xy for clustered data, which can be implemented using standard mixed effects model software. This method is also extended to allow for estimation of partial correlation by controlling both X and Y for a vector U of other covariates. In addition, these methods can be extended to allow for estimation of rank correlation for clustered data by (i) converting ranks of both X and Y to the probit scale, (ii) estimating the Pearson correlation between probit scores for X and Y, and (iii) using the relationship between Pearson and rank correlation for bivariate normally distributed data. The validity of the methods in finite-sized samples is supported by simulation studies. Finally, two examples from ophthalmology and analgesic abuse are used to illustrate the methods. Copyright © 2017 John Wiley & Sons, Ltd.
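
    Steps (i)-(iii) for unclustered data can be sketched with the standard library alone (the clustered/mixed-model machinery of the paper is not reproduced here, and ties are not handled in this sketch):

```python
import math
from statistics import NormalDist

def probit_rank_correlation(x, y):
    """Rank correlation via the probit route: (i) rank each variable,
    (ii) map ranks to normal scores with the inverse normal CDF,
    (iii) take the Pearson correlation of the scores and convert it with
    the bivariate-normal identity rho_s = (6/pi) * asin(r / 2)."""
    def normal_scores(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        ranks = [0] * len(v)
        for r, i in enumerate(order, start=1):
            ranks[i] = r
        n, nd = len(v), NormalDist()
        return [nd.inv_cdf(r / (n + 1)) for r in ranks]  # rank -> probit score

    def pearson(a, b):
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
        sa = math.sqrt(sum((ai - ma) ** 2 for ai in a))
        sb = math.sqrt(sum((bi - mb) ** 2 for bi in b))
        return cov / (sa * sb)

    r = pearson(normal_scores(x), normal_scores(y))
    return (6.0 / math.pi) * math.asin(r / 2.0)
```

    For perfectly monotone data the probit scores are perfectly correlated, so the identity returns exactly 1.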

  7. Noise-immune complex correlation for vasculature imaging based on standard and Jones-matrix optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Makita, Shuichi; Kurokawa, Kazuhiro; Hong, Young-Joo; Li, En; Miura, Masahiro; Yasuno, Yoshiaki

    2016-03-01

    A new optical coherence angiography (OCA) method, called correlation mapping OCA (cmOCA), is presented using an SNR-corrected complex correlation. An SNR-correction theory for the complex correlation calculation is presented. The method also integrates a motion-artifact-removal method for the sample-motion-induced decorrelation artifact. The theory is further extended to compute a more reliable correlation using multi-channel OCT systems, such as Jones-matrix OCT. High-contrast vasculature imaging of the in vivo human posterior eye has been obtained. Composite imaging of cmOCA and degree of polarization uniformity indicates abnormalities of vasculature and pigmented tissues simultaneously.
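
    The quantity at the heart of such methods is the magnitude of the complex correlation between repeated OCT acquisitions. A minimal sketch of the plain (uncorrected) estimator follows; the paper's SNR correction and motion-artifact removal are not reproduced here:

```python
import cmath
import math

def complex_correlation(a, b):
    """Magnitude of the complex correlation coefficient between two
    complex-valued OCT signals; values near 1 indicate static tissue,
    low values indicate decorrelation (e.g. blood flow)."""
    num = sum(ai * bi.conjugate() for ai, bi in zip(a, b))
    den = math.sqrt(sum(abs(ai) ** 2 for ai in a) *
                    sum(abs(bi) ** 2 for bi in b))
    return abs(num) / den
```

    Using the magnitude of the conjugate product makes the estimator insensitive to a global phase shift between the two acquisitions.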

  8. Estimating correlation of prevalence at two locations in the farm-to-table continuum using qualitative test data.

    PubMed

    Williams, Michael S; Ebel, Eric D

    2017-03-20

    The presence or absence of contaminants in food samples changes as a commodity moves along the farm-to-table continuum. Interest lies in the degree to which the prevalence (i.e., infected animals or contaminated sample units) at one location in the continuum, as measured by the proportion of test-positive samples, is correlated with the prevalence at a location later in the continuum. If prevalence of a contaminant at one location in the continuum is strongly correlated with the prevalence of the contaminant later in the continuum, then the effect of changes in contamination on overall food safety can be better understood. Pearson's correlation coefficient is one of the simplest metrics of association between two measurements of prevalence but it is biased when data consisting of presence/absence testing results are used to directly estimate the correlation. This study demonstrates the potential magnitude of this bias and explores the utility of three methods for unbiased estimation of the degree of correlation in prevalence. An example, based on testing broiler chicken carcasses for Salmonella at re-hang and post-chill, is used to demonstrate the methods. Published by Elsevier B.V.
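
    The bias the abstract describes can be demonstrated with a small simulation (an illustration of the attenuation effect, not the paper's correction methods): dichotomizing correlated latent scores yields a Pearson (phi) correlation well below the latent correlation.

```python
import math
import random

def phi_from_latent(rho, n, seed=0):
    """Simulate latent bivariate-normal scores with correlation rho,
    dichotomize at their median (0), and return the Pearson (phi)
    correlation of the resulting presence/absence indicators."""
    rng = random.Random(seed)
    xs, ys = [], []
    for _ in range(n):
        g1, g2 = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
        u = g1
        v = rho * g1 + math.sqrt(1.0 - rho * rho) * g2
        xs.append(1.0 if u > 0 else 0.0)
        ys.append(1.0 if v > 0 else 0.0)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / n
    return cov / math.sqrt(mx * (1 - mx) * my * (1 - my))
```

    For a median split the theoretical attenuation is phi = (2/pi)·asin(rho), so a latent correlation of 0.8 appears as roughly 0.59 in the binary test data.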

  9. Observed intra-cluster correlation coefficients in a cluster survey sample of patient encounters in general practice in Australia

    PubMed Central

    Knox, Stephanie A; Chondros, Patty

    2004-01-01

    Background Cluster sample study designs are cost effective, however cluster samples violate the simple random sample assumption of independence of observations. Failure to account for the intra-cluster correlation of observations when sampling through clusters may lead to an under-powered study. Researchers therefore need estimates of intra-cluster correlation for a range of outcomes to calculate sample size. We report intra-cluster correlation coefficients observed within a large-scale cross-sectional study of general practice in Australia, where the general practitioner (GP) was the primary sampling unit and the patient encounter was the unit of inference. Methods Each year the Bettering the Evaluation and Care of Health (BEACH) study recruits a random sample of approximately 1,000 GPs across Australia. Each GP completes details of 100 consecutive patient encounters. Intra-cluster correlation coefficients were estimated for patient demographics, morbidity managed and treatments received. Intra-cluster correlation coefficients were estimated for descriptive outcomes and for associations between outcomes and predictors and were compared across two independent samples of GPs drawn three years apart. Results Between April 1999 and March 2000, a random sample of 1,047 Australian general practitioners recorded details of 104,700 patient encounters. Intra-cluster correlation coefficients for patient demographics ranged from 0.055 for patient sex to 0.451 for language spoken at home. Intra-cluster correlations for morbidity variables ranged from 0.005 for the management of eye problems to 0.059 for management of psychological problems. Intra-cluster correlation for the association between two variables was smaller than the descriptive intra-cluster correlation of each variable. When compared with the April 2002 to March 2003 sample (1,008 GPs) the estimated intra-cluster correlation coefficients were found to be consistent across samples. 
Conclusions The demonstrated precision and reliability of the estimated intra-cluster correlations indicate that these coefficients will be useful for calculating sample sizes in future general practice surveys that use the GP as the primary sampling unit. PMID:15613248
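
    The sample-size use of these coefficients follows the standard Kish design-effect formula; a minimal sketch (the formula is standard, the worked numbers below simply reuse figures from this abstract):

```python
import math

def design_effect(m, icc):
    """Kish design effect: variance inflation when sampling m observations
    per cluster with intra-cluster correlation icc."""
    return 1.0 + (m - 1) * icc

def required_clusters(n_srs, m, icc):
    """Clusters (e.g. GPs) needed so that m observations per cluster match
    the precision of a simple random sample of size n_srs."""
    return math.ceil(n_srs * design_effect(m, icc) / m)
```

    With the BEACH figures (100 encounters per GP, ICC 0.055 for patient sex) the design effect is 1 + 99 × 0.055 = 6.445, so the cluster sample must be about 6.4 times larger than a simple random sample for equal precision.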

  10. Time-Domain Nuclear Magnetic Resonance (TD-NMR) and Chemometrics for Determination of Fat Content in Commercial Products of Milk Powder.

    PubMed

    Nascimento, Paloma Andrade Martins; Barsanelli, Paulo Lopes; Rebellato, Ana Paula; Pallone, Juliana Azevedo Lima; Colnago, Luiz Alberto; Pereira, Fabíola Manhas Verbi

    2017-03-01

    This study shows the use of time-domain (TD)-NMR transverse relaxation (T2) data and chemometrics in the nondestructive determination of fat content for powdered food samples such as commercial dried milk products. Most proposed NMR spectroscopy methods for measuring fat content correlate free induction decay or echo intensities with the sample's mass. The need for the sample's mass limits the analytical frequency of NMR determination, because weighing the samples is an additional step in this procedure. Therefore, the method proposed here is based on a multivariate model of T2 decay, measured with Carr-Purcell-Meiboom-Gill pulse sequence and reference values of fat content. The TD-NMR spectroscopy method shows high correlation (r = 0.95) with the lipid content, determined by the standard extraction method of Bligh and Dyer. For comparison, fat content determination was also performed using a multivariate model with near-IR (NIR) spectroscopy, which is also a nondestructive method. The advantages of the proposed TD-NMR method are that it (1) minimizes toxic residue generation, (2) performs measurements with high analytical frequency (a few seconds per analysis), and (3) does not require sample preparation (such as pelleting, needed for NIR spectroscopy analyses) or weighing the samples.

  11. Gaussian graphical modeling reveals specific lipid correlations in glioblastoma cells

    NASA Astrophysics Data System (ADS)

    Mueller, Nikola S.; Krumsiek, Jan; Theis, Fabian J.; Böhm, Christian; Meyer-Bäse, Anke

    2011-06-01

    Advances in high-throughput measurements of biological specimens necessitate the development of biologically driven computational techniques. To understand the molecular level of many human diseases, such as cancer, lipid quantifications have been shown to offer an excellent opportunity to reveal disease-specific regulations. The data analysis of the cell lipidome, however, remains a challenging task and cannot be accomplished solely based on intuitive reasoning. We have developed a method to identify a lipid correlation network which is entirely disease-specific. A powerful method to correlate experimentally measured lipid levels across the various samples is a Gaussian Graphical Model (GGM), which is based on partial correlation coefficients. In contrast to regular Pearson correlations, partial correlations aim to identify only direct correlations while eliminating indirect associations. Conventional GGM calculations on the entire dataset cannot, however, provide information on whether a correlation is truly disease-specific with respect to the disease samples rather than a correlation of control samples. Thus, we implemented a novel differential GGM approach unraveling only the disease-specific correlations, and applied it to the lipidome of immortal Glioblastoma tumor cells. A large set of lipid species were measured by mass spectrometry in order to evaluate lipid remodeling in response to a combination of perturbations inducing programmed cell death, while the other perturbations served solely as biological controls. With the differential GGM, we were able to reveal Glioblastoma-specific lipid correlations to advance biomedical research on novel gene therapies.
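
    The partial correlations a GGM is built on can be read directly off the inverse covariance (precision) matrix; a minimal sketch of that textbook computation (the differential-GGM machinery of the paper is not reproduced):

```python
import math

def invert(a):
    """Gauss-Jordan inverse of a small square matrix given as lists of lists."""
    n = len(a)
    aug = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
           for i, row in enumerate(a)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[pivot] = aug[pivot], aug[col]
        p = aug[col][col]
        aug[col] = [v / p for v in aug[col]]
        for r in range(n):
            if r != col:
                f = aug[r][col]
                aug[r] = [v - f * w for v, w in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]

def partial_correlations(cov):
    """Partial correlation matrix from a covariance matrix: invert to get
    the precision matrix P, then r_ij = -P_ij / sqrt(P_ii * P_jj)."""
    P = invert(cov)
    n = len(cov)
    return [[1.0 if i == j else -P[i][j] / math.sqrt(P[i][i] * P[j][j])
             for j in range(n)] for i in range(n)]
```

    In a three-variable chain X-Y-Z where X and Z are associated only through Y, the marginal X-Z correlation is nonzero but the partial correlation vanishes, which is exactly the "direct vs indirect" distinction the abstract describes.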

  12. Sample Size Calculation for Estimating or Testing a Nonzero Squared Multiple Correlation Coefficient

    ERIC Educational Resources Information Center

    Krishnamoorthy, K.; Xia, Yanping

    2008-01-01

    The problems of hypothesis testing and interval estimation of the squared multiple correlation coefficient of a multivariate normal distribution are considered. It is shown that available one-sided tests are uniformly most powerful, and the one-sided confidence intervals are uniformly most accurate. An exact method of calculating sample size to…

  13. Ground-Cover Measurements: Assessing Correlation Among Aerial and Ground-Based Methods

    NASA Astrophysics Data System (ADS)

    Booth, D. Terrance; Cox, Samuel E.; Meikle, Tim; Zuuring, Hans R.

    2008-12-01

    Wyoming’s Green Mountain Common Allotment is public land providing livestock forage, wildlife habitat, and unfenced solitude, among other ecological services. It is also the center of ongoing debate over USDI Bureau of Land Management’s (BLM) adjudication of land uses. Monitoring resource use is a BLM responsibility, but conventional monitoring is inadequate for the vast areas encompassed in this and other public-land units. New monitoring methods are needed that will reduce monitoring costs. An understanding of data-set relationships among old and new methods is also needed. This study compared two conventional methods with two remote sensing methods using images captured from two meters and 100 meters above ground level from a camera stand (a ground, image-based method) and a light airplane (an aerial, image-based method). Image analysis used SamplePoint or VegMeasure software. Aerial methods allowed for increased sampling intensity at low cost relative to the time and travel required by ground methods. Costs to acquire the aerial imagery and measure ground cover on 162 aerial samples representing 9000 ha were less than $3000. The four highest correlations among data sets for bare ground—the ground-cover characteristic yielding the highest correlations (r)—ranged from 0.76 to 0.85 and included ground with ground, ground with aerial, and aerial with aerial data-set associations. We conclude that our aerial surveys are a cost-effective monitoring method, that ground with aerial data-set correlations can be equal to or greater than those among ground-based data sets, and that bare ground should continue to be investigated and tested for use as a key indicator of rangeland health.

  14. Fast, exact k-space sample density compensation for trajectories composed of rotationally symmetric segments, and the SNR-optimized image reconstruction from non-Cartesian samples.

    PubMed

    Mitsouras, Dimitris; Mulkern, Robert V; Rybicki, Frank J

    2008-08-01

    A recently developed method for exact density compensation of nonuniformly arranged samples relies on the analytically known cross-correlations of Fourier basis functions corresponding to the traced k-space trajectory. This method produces a linear system whose solution represents compensated samples that normalize the contribution of each independent element of information that can be expressed by the underlying trajectory. Unfortunately, linear system-based density compensation approaches quickly become computationally demanding with increasing number of samples (i.e., image resolution). Here, it is shown that when a trajectory is composed of rotationally symmetric interleaves, such as spiral and PROPELLER trajectories, this cross-correlations method leads to a highly simplified system of equations. Specifically, it is shown that the system matrix is circulant block-Toeplitz, so that the linear system is easily block-diagonalized. The method is described and demonstrated for 32-way interleaved spiral trajectories designed for 256 image matrices; samples are compensated noniteratively in a few seconds by solving the small independent block-diagonalized linear systems in parallel. Because the method is exact and considers all the interactions between all acquired samples, up to a 10% reduction in reconstruction error concurrently with an up to 30% increase in signal-to-noise ratio are achieved compared to standard density compensation methods. (c) 2008 Wiley-Liss, Inc.
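
    The core idea can be shown on a plain circulant system (the paper's block-Toeplitz structure is richer, but the diagonalization principle is the same): circulant matrices are diagonalized by the DFT, so solving C x = b reduces to independent scalar divisions in Fourier space.

```python
import cmath

def dft(x):
    """Naive O(n^2) discrete Fourier transform (adequate for a small demo)."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n)
                for k in range(n)) for j in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[j] * cmath.exp(2j * cmath.pi * j * k / n)
                for j in range(n)) / n for k in range(n)]

def solve_circulant(c, b):
    """Solve C x = b where C is circulant with first column c: the
    eigenvalues of C are the DFT of c, so the system decouples into
    per-frequency scalar divisions."""
    eigvals = dft(c)
    B = dft(b)
    return idft([Bj / lj for Bj, lj in zip(B, eigvals)])
```

    This is why the cost stays low: instead of one dense solve of size n, there are n (or, for block structure, n small) independent problems that can be handled in parallel.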

  15. Background concentrations of metals in soils from selected regions in the State of Washington

    USGS Publications Warehouse

    Ames, K.C.; Prych, E.A.

    1995-01-01

    Soil samples from 60 sites in the State of Washington were collected and analyzed to determine the magnitude and variability of background concentrations of metals in soils of the State. Samples were collected in areas that were relatively undisturbed by human activity from the most predominant soils in 12 different regions that are representative of large areas of Washington State. Concentrations of metals were determined by five different laboratory methods. Concentrations of mercury and nickel determined by both the total and total-recoverable methods displayed the greatest variability, followed by chromium and copper determined by the total-recoverable method. Concentrations of other metals, such as aluminum and barium determined by the total method, varied less. Most metals concentrations were found to be more nearly log-normally than normally distributed. Total metals concentrations were not significantly different among the different regions. However, total-recoverable metals concentrations were not as similar among different regions. Cluster analysis revealed that sampling sites in three regions encompassing the Puget Sound could be regrouped to form two new regions and sites in three regions in south-central and southeastern Washington State could also be regrouped into two new regions. Concentrations for 7 of 11 total-recoverable metals correlated with total metals concentrations. Concentrations of six total metals also correlated positively with organic carbon. Total-recoverable metals concentrations did not correlate with either organic carbon or particle size. Concentrations of metals determined by the leaching methods did not correlate with total or total-recoverable metals concentrations, nor did they correlate with organic carbon or particle size.

  16. On the analysis of very small samples of Gaussian repeated measurements: an alternative approach.

    PubMed

    Westgate, Philip M; Burchett, Woodrow W

    2017-03-15

    The analysis of very small samples of Gaussian repeated measurements can be challenging. First, due to a very small number of independent subjects contributing outcomes over time, statistical power can be quite small. Second, nuisance covariance parameters must be appropriately accounted for in the analysis in order to maintain the nominal test size. However, available statistical strategies that ensure valid statistical inference may lack power, whereas more powerful methods may have the potential for inflated test sizes. Therefore, we explore an alternative approach to the analysis of very small samples of Gaussian repeated measurements, with the goal of maintaining valid inference while also improving statistical power relative to other valid methods. This approach uses generalized estimating equations with a bias-corrected empirical covariance matrix that accounts for all small-sample aspects of nuisance correlation parameter estimation in order to maintain valid inference. Furthermore, the approach utilizes correlation selection strategies with the goal of choosing the working structure that will result in the greatest power. In our study, we show that when accurate modeling of the nuisance correlation structure impacts the efficiency of regression parameter estimation, this method can improve power relative to existing methods that yield valid inference. Copyright © 2017 John Wiley & Sons, Ltd.

  17. Efficient parameter estimation in longitudinal data analysis using a hybrid GEE method.

    PubMed

    Leung, Denis H Y; Wang, You-Gan; Zhu, Min

    2009-07-01

    The method of generalized estimating equations (GEEs) provides consistent estimates of the regression parameters in a marginal regression model for longitudinal data, even when the working correlation model is misspecified (Liang and Zeger, 1986). However, the efficiency of a GEE estimate can be seriously affected by the choice of the working correlation model. This study addresses this problem by proposing a hybrid method that combines multiple GEEs based on different working correlation models, using the empirical likelihood method (Qin and Lawless, 1994). Analyses show that this hybrid method is more efficient than a GEE using a misspecified working correlation model. Furthermore, if one of the working correlation structures correctly models the within-subject correlations, then this hybrid method provides the most efficient parameter estimates. In simulations, the hybrid method's finite-sample performance is superior to a GEE under any of the commonly used working correlation models and is almost fully efficient in all scenarios studied. The hybrid method is illustrated using data from a longitudinal study of the respiratory infection rates in 275 Indonesian children.

  18. Efficacy of the detection of Legionella in hot and cold water samples by culture and PCR. I. Standardization of methods.

    PubMed

    Wójcik-Fatla, Angelina; Stojek, Nimfa Maria; Dutkiewicz, Jacek

    2012-01-01

    The aim of the present study was: - to compare methods for concentration and isolation of Legionella DNA from water; - to examine the efficacy of various modifications of PCR test (PCR, semi-nested PCR, and real-time PCR) for the detection of known numbers of Legionella pneumophila in water samples artificially contaminated with the strain of this bacterium and in randomly selected samples of environmental water, in parallel with examination by culture. It was found that filtration is much more effective than centrifugation for the concentration of DNA in water samples, and that the Qiamp DNA Mini-Kit is the most efficient for isolation of Legionella DNA from water. The semi-nested PCR and real-time PCR proved to be the most sensitive methods for detection of Legionella DNA in water samples. Both PCR modifications showed a high correlation with recovery of Legionella by culture (p<0.01), while no correlation occurred between the results of one-stage PCR and culture (p>0.1).

  19. Photoacoustic spectroscopy and thermal relaxation method to evaluate corn moisture content

    NASA Astrophysics Data System (ADS)

    Pedrochi, F.; Medina, A. N.; Bento, A. C.; Baesso, M. L.; Luz, M. L. S.; Dalpasquale, V. A.

    2005-06-01

    In this study, samples of popcorn with different degrees of moisture were analyzed. The optical absorption bands in the mid-infrared were measured using photoacoustic spectroscopy and were correlated to the sample moisture. The results were in agreement with moisture data determined by the well-known reference method, Karl Fischer titration. In addition, the thermal relaxation method was used to determine the sample specific heat as a function of the moisture content. The results were also in agreement with the two previously mentioned methods.

  20. Determining the 40K radioactivity in rocks using x-ray spectrometry

    NASA Astrophysics Data System (ADS)

    Pilakouta, M.; Kallithrakas-Kontos, N.; Nikolaou, G.

    2017-09-01

    In this paper we propose an experimental method for the determination of potassium-40 (40K) radioactivity in commercial granite samples using x-ray fluorescence (XRF). The method correlates the total potassium concentration (yield) in samples deduced by XRF analysis with the radioactivity of the sample due to the 40K radionuclide. This method can be used in an undergraduate student laboratory. A brief theoretical background and description of the method, as well as some results and their interpretation, are presented.
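
    The correlation between total potassium and 40K activity rests on simple nuclear arithmetic: A = λN, with the 40K fraction of natural potassium fixed. A sketch follows; the constants are approximate published values and are not taken from the paper.

```python
import math

# Nuclear constants (approximate published values):
AVOGADRO = 6.022e23                    # atoms/mol
K_MOLAR_MASS = 39.10                   # g/mol, natural potassium
K40_ABUNDANCE = 1.17e-4                # isotopic fraction of 40K in natural K
K40_HALF_LIFE_S = 1.248e9 * 3.156e7    # 1.248 Gyr in seconds

def k40_activity_bq(potassium_grams):
    """40K activity (Bq) implied by a total-potassium mass, via A = lambda * N."""
    n_k40 = potassium_grams / K_MOLAR_MASS * AVOGADRO * K40_ABUNDANCE
    lam = math.log(2) / K40_HALF_LIFE_S
    return lam * n_k40

# e.g. 100 g of granite that XRF reports as 4% K by weight:
activity = k40_activity_bq(0.04 * 100.0)
```

    The specific activity of natural potassium works out to roughly 31 Bq per gram of K, which is the conversion factor linking the XRF potassium yield to the sample's 40K radioactivity.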

  1. Carbon isotope ratios and isotopic correlations between components in fruit juices

    NASA Astrophysics Data System (ADS)

    Wierzchnicki, Ryszard

    2013-04-01

    Nowadays food products are defined by geographical origin, method of production and by some regulations concerning terms of their authenticity. Important data for confirm the authenticity of product are providing by isotopic methods of food control. The method checks crucial criteria which characterize the authenticity of inspected product. The European Union Regulations clearly show the tendency for application of the isotopic methods for food authenticity control (wine, honey, juice). The aim of the legislation steps is the protection of European market from possibility of the commercial frauds. Method of isotope ratio mass spectrometry is very effective tool for the use distinguishably the food products of various geographical origin. The basic problem for identification of the sample origin is the lack of databases of isotopic composition of components and information about the correlations of the data. The subject of the work was study the isotopic correlations existing between components of fruits. The chemical and instrumental methods of separation: water, sugars, organic acids and pulp from fruit were implemented. IRMS technique was used to measure isotopic composition of samples. The final results for original samples of fruits (apple, strawberry etc.) will be presented and discussed. Acknowledgement: This work was supported by the Polish Ministry of Science and Higher Education under grant NR12-0043-10/2010.

  2. A method to estimate the effect of deformable image registration uncertainties on daily dose mapping

    PubMed Central

    Murphy, Martin J.; Salguero, Francisco J.; Siebers, Jeffrey V.; Staub, David; Vaman, Constantin

    2012-01-01

    Purpose: To develop a statistical sampling procedure for spatially-correlated uncertainties in deformable image registration and then use it to demonstrate their effect on daily dose mapping. Methods: Sequential daily CT studies are acquired to map anatomical variations prior to fractionated external beam radiotherapy. The CTs are deformably registered to the planning CT to obtain displacement vector fields (DVFs). The DVFs are used to accumulate the dose delivered each day onto the planning CT. Each DVF has spatially-correlated uncertainties associated with it. Principal components analysis (PCA) is applied to measured DVF error maps to produce decorrelated principal component modes of the errors. The modes are sampled independently and reconstructed to produce synthetic registration error maps. The synthetic error maps are convolved with dose mapped via deformable registration to model the resulting uncertainty in the dose mapping. The results are compared to the dose mapping uncertainty that would result from uncorrelated DVF errors that vary randomly from voxel to voxel. Results: The error sampling method is shown to produce synthetic DVF error maps that are statistically indistinguishable from the observed error maps. Spatially-correlated DVF uncertainties modeled by our procedure produce patterns of dose mapping error that are different from that due to randomly distributed uncertainties. Conclusions: Deformable image registration uncertainties have complex spatial distributions. The authors have developed and tested a method to decorrelate the spatial uncertainties and make statistical samples of highly correlated error maps. The sample error maps can be used to investigate the effect of DVF uncertainties on daily dose mapping via deformable image registration. An initial demonstration of this methodology shows that dose mapping uncertainties can be sensitive to spatial patterns in the DVF uncertainties. PMID:22320766
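
    The sampling step can be sketched in miniature: decompose the error covariance into orthogonal modes, draw each mode coefficient independently, and reconstruct a correlated error vector. A two-component closed-form version follows (the paper applies PCA to full DVF error maps with many modes; two components keep the demo self-contained).

```python
import math
import random

def eig2x2_sym(a, b, c):
    """Eigenpairs of the symmetric matrix [[a, b], [b, c]] in closed form."""
    half_tr, d = (a + c) / 2.0, math.hypot((a - c) / 2.0, b)
    l1, l2 = half_tr + d, half_tr - d
    theta = 0.5 * math.atan2(2.0 * b, a - c)
    v1 = (math.cos(theta), math.sin(theta))
    v2 = (-math.sin(theta), math.cos(theta))
    return (l1, v1), (l2, v2)

def sample_error_field(cov2, rng):
    """Draw one synthetic 2-component error vector: sample each principal
    mode independently with variance equal to its eigenvalue, then
    reconstruct in the original coordinates."""
    (l1, v1), (l2, v2) = eig2x2_sym(cov2[0][0], cov2[0][1], cov2[1][1])
    a1 = rng.gauss(0.0, math.sqrt(max(l1, 0.0)))
    a2 = rng.gauss(0.0, math.sqrt(max(l2, 0.0)))
    return (a1 * v1[0] + a2 * v2[0], a1 * v1[1] + a2 * v2[1])
```

    Because the modes are decorrelated, independent sampling of their coefficients reproduces the target covariance, which is the property the authors verify when showing synthetic maps are statistically indistinguishable from measured ones.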

  3. Inter- and intraindividual correlations of background abundances of ²H, ¹⁸O and ¹⁷O in human urine and implications for DLW measurements.

    PubMed

    Berman, E S F; Melanson, E L; Swibas, T; Snaith, S P; Speakman, J R

    2015-10-01

    The method of choice for measuring total energy expenditure in free-living individuals is the doubly labeled water (DLW) method. This experiment examined the behavior of natural background isotope abundance fluctuations within and between individuals over time to assess possible methods of accounting for variations in the background isotope abundances to potentially improve the precision of the DLW measurement. In this work, we measured natural background variations in ²H, ¹⁸O and ¹⁷O in water from urine samples collected from 40 human subjects who resided in the same geographical area. Each subject provided a urine sample for 30 consecutive days. Isotopic abundances in the samples were measured using Off-Axis Integrated Cavity Output Spectroscopy. Autocorrelation analyses demonstrated that the background isotopes in a given individual were not temporally correlated over the time scales of typical DLW studies. Using samples obtained from different individuals on the same calendar day, cross-correlation analyses demonstrated that the background variations of different individuals were not correlated in time. However, the measured ratios of the three isotopes ²H, ¹⁸O and ¹⁷O were highly correlated (R² = 0.89-0.96). Although neither specific timing of DLW water studies nor intraindividual comparisons were found to be avenues for reducing the impact of background isotope abundance fluctuations on DLW studies, strong inter-isotope correlations within an individual confirm that use of a dosing ratio of 8‰:1‰ (0.6 p.p.m.: p.p.m.) optimizes DLW precision. Theoretical implications for the possible use of ¹⁷O measurements within a DLW study require further study.

  4. Correlation of psychomotor skills and didactic performance among dental students in Saudi Arabia

    PubMed Central

    Afify, Ahmed R; Zawawi, Khalid H; Othman, Hisham I; Al-Dharrab, Ayman A

    2013-01-01

    Objectives The objective of this study is to investigate the correlation between the psychomotor skills and the academic performance of dental students. Methods Didactic and preclinical scores were collected for students who graduated from the Faculty of Dentistry, King Abdulaziz University, Jeddah, Saudi Arabia, in 2011. Three courses (Dental Anatomy, Removable Prosthodontic Denture, and Orthodontics) were selected. Correlations comparing didactic and practical scores were done for the total samples, then for the males and females separately. Results There was no significant correlation between the practical and didactic scores for the three courses for the total sample. There was a significant correlation between all three subjects in the didactic scores. For females, the results showed that there was only a significant correlation between the practical and didactic scores for Dental Anatomy. For males, no correlation was observed between the practical and didactic scores for all subjects. Conclusion In the present sample, didactic performance did not correlate well with the students’ psychomotor performance. PMID:24159266

  5. A novel tensile test method to assess texture and gaping in salmon fillets.

    PubMed

    Ashton, Thomas J; Michie, Ian; Johnston, Ian A

    2010-05-01

    A new tensile strength method was developed to quantify the force required to tear a standardized block of Atlantic salmon muscle with the aim of identifying those samples more prone to factory downgrading as a result of softness and fillet gaping. The new method effectively overcomes problems of sample attachment encountered with previous tensile strength tests. The repeatability, sensitivity, and predictability of the new technique were evaluated against other common instrumental texture measurement methods. The relationship between sensory assessments of firmness and parameters from the instrumental texture methods was also determined. Data from the new method were shown to have the strongest correlations with gaping severity (r = -0.514, P < 0.001) and the highest level of repeatability of data when analyzing cold-smoked samples. The Warner Bratzler shear method gave the most repeatable data from fresh samples and had the highest correlations between fresh and smoked product from the same fish (r = 0.811, P < 0.001). A hierarchical cluster analysis placed the tensile test in the top cluster, alongside the Warner Bratzler method, demonstrating that it also yields adequate data with respect to these tests. None of the tested sensory analysis attributes showed significant relationships to mechanical tests except fillet firmness, with correlations (r) of 0.42 for cylinder probe maximum force (P = 0.005) and 0.31 for tensile work (P = 0.04). It was concluded that the tensile test method developed provides an important addition to the available tools for mechanical analysis of salmon quality, particularly with respect to the prediction of gaping during factory processing, which is a serious commercial problem. A novel, reliable method of measuring flesh tensile strength in salmon provides data of relevance to gaping.

  6. Frontiers of Two-Dimensional Correlation Spectroscopy. Part 1. New concepts and noteworthy developments

    NASA Astrophysics Data System (ADS)

    Noda, Isao

    2014-07-01

    A comprehensive survey of the new and noteworthy developments that have advanced the frontiers of 2D correlation spectroscopy during the last four years is compiled. This review covers books, proceedings, and review articles published on 2D correlation spectroscopy; a number of significant conceptual developments in the field; data pretreatment methods and other pertinent topics; and patent and publication trends and citation activities. Developments discussed include projection 2D correlation analysis, concatenated 2D correlation, correlation under multiple perturbation effects, orthogonal sample design, prediction of 2D correlation spectra, manipulation and comparison of 2D spectra, correlation strategies based on segmented data blocks (such as moving-window analysis), features like determination of sequential order and enhanced spectral resolution, statistical 2D spectroscopy using covariance and other statistical metrics, hetero-correlation analysis, and the sample-sample correlation technique. Data pretreatment operations prior to 2D correlation analysis are discussed, including correction for physical effects, background and baseline subtraction, selection of the reference spectrum, normalization and scaling of data, derivative spectra and deconvolution techniques, and smoothing and noise reduction. Other pertinent topics include chemometrics and statistical considerations, peak-position shift phenomena, variable sampling increments, computation and software, and display schemes such as color-coded format, slice and power spectra, and tabulation.

  7. Latin Hypercube Sampling (LHS) UNIX Library/Standalone

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2004-05-13

    The LHS UNIX Library/Standalone software provides the capability to draw random samples from over 30 distribution types. It performs the sampling by a stratified sampling method called Latin Hypercube Sampling (LHS). Multiple distributions can be sampled simultaneously, with user-specified correlations amongst the input distributions; thus, LHS UNIX Library/Standalone provides a way to generate multi-variate samples. The LHS samples can be generated either through a callable library (e.g., from within the DAKOTA software framework) or as a standalone capability. LHS is a constrained Monte Carlo sampling scheme. In LHS, the range of each variable is divided into non-overlapping intervals on the basis of equal probability. A sample is selected at random with respect to the probability density in each interval. If multiple variables are sampled simultaneously, the values obtained for each variable are paired in a random manner with the n values of the other variables. In some cases, the pairing is restricted to obtain specified correlations amongst the input variables. Many simulation codes have input parameters that are uncertain and can be specified by a distribution. To perform uncertainty analysis and sensitivity analysis, random values are drawn from the input parameter distributions, and the simulation is run with these values to obtain output values. If this is done repeatedly, with many input samples drawn, one can build up a distribution of the output as well as examine correlations between input and output variables.
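
    The stratification idea behind LHS can be sketched in a few lines. This is a minimal illustration of plain LHS on the unit hypercube, not the LHS UNIX Library itself; in particular, the restricted pairing used to impose correlations among inputs (e.g., the Iman-Conover method) is omitted, and all names are illustrative.

```python
import numpy as np

def latin_hypercube(n_samples, n_vars, rng=None):
    """Draw a Latin Hypercube sample on the unit hypercube.

    Each variable's range [0, 1) is split into n_samples
    equal-probability intervals; one point is drawn uniformly inside
    each interval, and the intervals are paired across variables in
    random order.
    """
    rng = np.random.default_rng(rng)
    # One uniform draw inside each of the n_samples strata.
    u = rng.random((n_samples, n_vars))
    strata = (np.arange(n_samples)[:, None] + u) / n_samples
    # Randomly permute the strata independently for each variable.
    for j in range(n_vars):
        strata[:, j] = rng.permutation(strata[:, j])
    return strata

samples = latin_hypercube(10, 2, rng=42)
# Exactly one sample falls in each of the 10 strata of each variable.
assert all(sorted((samples[:, j] * 10).astype(int).tolist()) == list(range(10))
           for j in range(2))
```

    Unlike simple Monte Carlo, no stratum is sampled twice and none is missed, which is what makes LHS efficient for propagating input uncertainty through a simulation.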

  8. A Maximum Entropy Test for Evaluating Higher-Order Correlations in Spike Counts

    PubMed Central

    Onken, Arno; Dragoi, Valentin; Obermayer, Klaus

    2012-01-01

    Evaluating the importance of higher-order correlations of neural spike counts has been notoriously hard. A large number of samples is typically required to estimate higher-order correlations and the resulting information-theoretic quantities. In typical electrophysiology data sets with many experimental conditions, however, the number of samples in each condition is rather small. Here we describe a method that makes it possible to quantify evidence for higher-order correlations in exactly these cases. We construct a family of reference distributions: maximum entropy distributions, which are constrained only by the marginals and by linear correlations as quantified by the Pearson correlation coefficient. We devise a Monte Carlo goodness-of-fit test, which tests, for a given divergence measure of interest, whether the experimental data lead to rejection of the null hypothesis that they were generated by one of the reference distributions. Applying our test to artificial data shows that the effects of higher-order correlations on these divergence measures can be detected even when the number of samples is small. Subsequently, we apply our method to spike count data which were recorded with multielectrode arrays from the primary visual cortex of an anesthetized cat during an adaptation experiment. Using mutual information as a divergence measure, we find that there are spike count bin sizes at which the maximum entropy hypothesis can be rejected for a substantial number of neuronal pairs. These results demonstrate that higher-order correlations can matter when estimating information-theoretic quantities in V1. They also show that our test is able to detect their presence in typical in-vivo data sets, where the number of samples is too small to estimate higher-order correlations directly. PMID:22685392
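
    The testing logic can be illustrated with a generic Monte Carlo goodness-of-fit sketch. The reference model below is a pair of independent Poisson counts, a deliberately simplified stand-in for the constrained maximum-entropy family of the paper, and the divergence statistic is just the absolute sample correlation; all names and parameters are illustrative assumptions.

```python
import numpy as np

def mc_gof_pvalue(data, sample_ref, statistic, n_sim=999, rng=None):
    """Monte Carlo goodness-of-fit test: reject the null hypothesis
    'the data were generated by the reference model' when the observed
    statistic is extreme among statistics of simulated datasets."""
    rng = np.random.default_rng(rng)
    t_obs = statistic(data)
    t_sim = [statistic(sample_ref(len(data), rng)) for _ in range(n_sim)]
    # Add-one correction keeps the Monte Carlo p-value strictly positive.
    return (1 + sum(t >= t_obs for t in t_sim)) / (n_sim + 1)

# Reference model: independent Poisson spike counts for a neuron pair.
sample_ref = lambda n, r: r.poisson(5.0, size=(n, 2))
# Divergence statistic: absolute sample correlation of the pair.
statistic = lambda x: abs(np.corrcoef(x[:, 0], x[:, 1])[0, 1])

rng = np.random.default_rng(0)
# Strongly dependent data: both "neurons" emit identical counts.
dependent = np.repeat(rng.poisson(5.0, size=(200, 1)), 2, axis=1)
p = mc_gof_pvalue(dependent, sample_ref, statistic, rng=1)  # small p: reject
```

    The same machinery works for any divergence measure and any simulable reference family, which is the structure the paper exploits.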

  9. Interpretation of correlations in clinical research.

    PubMed

    Hung, Man; Bounsanga, Jerry; Voss, Maren Wright

    2017-11-01

    Critically analyzing research is a key skill in evidence-based practice and requires knowledge of research methods, results interpretation, and applications, all of which rely on a foundation based in statistics. Evidence-based practice makes high demands on trained medical professionals to interpret an ever-expanding array of research evidence. As clinical training emphasizes medical care rather than statistics, it is useful to review the basics of statistical methods and what they mean for interpreting clinical studies. We reviewed the basic concepts of correlational associations, violations of normality, unobserved variable bias, sample size, and alpha inflation. The foundations of causal inference were discussed and sound statistical analyses were examined. We discuss four ways in which correlational analysis is misused, including causal inference overreach, over-reliance on significance, alpha inflation, and sample size bias. Recent published studies in the medical field provide evidence of causal assertion overreach drawn from correlational findings. The findings present a primer on the assumptions and nature of correlational methods of analysis and urge clinicians to exercise appropriate caution as they critically analyze the evidence before them and evaluate evidence that supports practice. Critically analyzing new evidence requires statistical knowledge in addition to clinical knowledge. Studies can overstate relationships, expressing causal assertions when only correlational evidence is available. Failure to account for the effect of sample size in the analyses tends to overstate the importance of predictive variables. It is important not to overemphasize the statistical significance without consideration of effect size and whether differences could be considered clinically meaningful.
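
    The sample-size point can be made concrete with the usual t statistic for a correlation coefficient, t = r * sqrt((n - 2) / (1 - r^2)): a fixed, clinically trivial r crosses the 5% significance threshold simply because n grows. The numbers below are illustrative.

```python
import math

def t_from_r(r, n):
    """t statistic for testing H0: rho = 0 given a sample correlation r."""
    return r * math.sqrt((n - 2) / (1.0 - r * r))

weak_r = 0.10                     # only ~1% of variance explained
t_small = t_from_r(weak_r, 100)   # below the two-sided 5% cutoff (~1.98)
t_large = t_from_r(weak_r, 5000)  # far above it: "significant" yet tiny effect
```

    The effect size is identical in both cases; only the sample size changed, which is why significance alone should not drive clinical interpretation.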

  10. Robust and sparse correlation matrix estimation for the analysis of high-dimensional genomics data.

    PubMed

    Serra, Angela; Coretto, Pietro; Fratello, Michele; Tagliaferri, Roberto; Stegle, Oliver

    2018-02-15

    Microarray technology can be used to study the expression of thousands of genes across a number of different experimental conditions, usually hundreds. The underlying principle is that genes sharing similar expression patterns across different samples can be part of the same co-expression system, or may share the same biological functions. Groups of genes are usually identified by cluster analysis. Clustering methods rely on a similarity matrix between genes; a common choice is the sample correlation matrix. Dimensionality reduction is another popular data analysis task that is also based on covariance/correlation matrix estimates. Unfortunately, covariance/correlation matrix estimation suffers from the intrinsic noise present in high-dimensional data. Sources of noise include sampling variation, the presence of outlying sample units, and the fact that in most cases the number of sample units is much smaller than the number of genes. In this paper, we propose a robust correlation matrix estimator that is regularized based on adaptive thresholding. The resulting method jointly tames the effects of high dimensionality and data contamination. Computations are easy to implement and do not require hand tuning. Both simulated and real data are analyzed. A Monte Carlo experiment shows that the proposed method performs remarkably well. Our correlation metric is more robust to outliers than the existing alternatives on two gene expression datasets. It is also shown how the regularization allows spurious correlations to be detected and filtered automatically. The same regularization is also extended to other, less robust correlation measures. Finally, we apply the ARACNE algorithm to the SyNTreN gene expression data. Sensitivity and specificity of the reconstructed network are compared with the gold standard. We show that ARACNE performs better when it takes the proposed correlation matrix estimator as input.
The R software is available at https://github.com/angy89/RobustSparseCorrelation. Contact: aserra@unisa.it or robtag@unisa.it. Supplementary data are available at Bioinformatics online.
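
    The thresholding step is easy to sketch. Note this shows only the regularization idea with a generic universal threshold of order sqrt(log p / n), not the paper's robust, adaptively thresholded estimator; the function names, cutoff, and toy data are assumptions.

```python
import numpy as np

def thresholded_corr(X, t=None):
    """Correlation matrix estimate with small entries shrunk to zero.

    X: (n_samples, n_genes) data matrix. With t=None, a universal
    threshold of order sqrt(log p / n) is used, which kills entries
    of the size expected from pure sampling noise.
    """
    n, p = X.shape
    R = np.corrcoef(X, rowvar=False)
    if t is None:
        t = 2.0 * np.sqrt(np.log(p) / n)  # universal-threshold heuristic
    R_hat = np.where(np.abs(R) >= t, R, 0.0)
    np.fill_diagonal(R_hat, 1.0)
    return R_hat

rng = np.random.default_rng(0)
# Two genuinely correlated "genes" buried among independent ones.
X = rng.standard_normal((100, 20))
X[:, 1] = X[:, 0] + 0.1 * rng.standard_normal(100)
R_hat = thresholded_corr(X)  # keeps the (0, 1) entry, zeroes the noise
```

    On this toy data the true signal survives the threshold while nearly all spurious correlations are filtered to exactly zero, which is the sparsity effect the paper exploits.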

  11. Graphene-enabled electron microscopy and correlated super-resolution microscopy of wet cells.

    PubMed

    Wojcik, Michal; Hauser, Margaret; Li, Wan; Moon, Seonah; Xu, Ke

    2015-06-11

    The application of electron microscopy to hydrated biological samples has been limited by high-vacuum operating conditions. Traditional methods utilize harsh and laborious sample dehydration procedures, often leading to structural artefacts and creating difficulties for correlating results with high-resolution fluorescence microscopy. Here, we utilize graphene, a single-atom-thick carbon meshwork, as the thinnest possible impermeable and conductive membrane to protect animal cells from vacuum, thus enabling high-resolution electron microscopy of wet and untreated whole cells with exceptional ease. Our approach further allows for facile correlative super-resolution and electron microscopy of wet cells directly on the culturing substrate. In particular, individual cytoskeletal actin filaments are resolved in hydrated samples through electron microscopy and well correlated with super-resolution results.

  12. A Monte-Carlo method which is not based on Markov chain algorithm, used to study electrostatic screening of ion potential

    NASA Astrophysics Data System (ADS)

    Šantić, Branko; Gracin, Davor

    2017-12-01

    A new, simple Monte Carlo method is introduced for the study of electrostatic screening of an ion by surrounding ions. The proposed method is not based on the generally used Markov chain method for sample generation: each sample is pristine, with no correlation to other samples. As the main novelty, pairs of ions are gradually added to a sample, provided that the energy of each ion lies within boundaries determined by the temperature and the size of the ions. The proposed method provides reliable results, as demonstrated by the screening of an ion in plasma and in water.
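
    The contrast with Markov-chain sampling can be illustrated with the simplest sampler that also produces pristine, mutually independent samples: textbook rejection sampling of a Boltzmann weight. This is only an illustration of that shared principle, not the pair-insertion scheme of the paper; the harmonic energy and all parameters are assumptions.

```python
import numpy as np

def boltzmann_rejection(energy, lo, hi, kT, n, rng=None):
    """I.i.d. samples with weight exp(-E(x)/kT) on [lo, hi] via rejection.

    Uniform proposal; a draw is accepted with probability
    exp(-E(x)/kT), valid when energy(x) >= 0 so that probability
    is at most 1. Every accepted sample is independent of the others,
    unlike successive states of a Markov chain.
    """
    rng = np.random.default_rng(rng)
    out = []
    while len(out) < n:
        x = rng.uniform(lo, hi)
        if rng.random() < np.exp(-energy(x) / kT):
            out.append(x)
    return np.array(out)

# Harmonic "ion" energy: samples concentrate near the minimum at 0,
# following a Gaussian of standard deviation sqrt(kT / 2) = 0.5.
samples = boltzmann_rejection(lambda x: x * x, -3.0, 3.0, kT=0.5, n=2000, rng=0)
```

    Because each accepted sample is drawn afresh, there is no burn-in and no autocorrelation time to worry about; the price is a rejection rate that grows with dimensionality.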

  13. Evaluation of the furosine and homoarginine methods for determining reactive lysine in rumen-undegraded protein.

    PubMed

    Boucher, S E; Pedersen, C; Stein, H H; Schwab, C G

    2009-08-01

    Three samples of soybean meal (SBM), 3 samples of expeller SBM (SoyPlus, West Central Cooperative, Ralston, IA), 5 samples of distillers dried grains with solubles (DDGS), and 5 samples of fish meal were used to evaluate the furosine and homoarginine procedures for estimating reactive Lys in the rumen-undegraded protein fraction (RUP-Lys). One sample each of SBM, expeller SBM, and DDGS was subjected to additional heat treatment in the lab to ensure a wide range in reactive RUP-Lys content among the samples. Furosine is a secondary product of the initial stages of the Maillard reaction and can be used to calculate blocked Lys. Homoarginine is formed via the reaction of reactive Lys with O-methylisourea and can be used to calculate the concentration of reactive Lys. In previous experiments, each sample was ruminally incubated in situ for 16 h, and the standardized RUP-Lys digestibility of the samples was determined in cecectomized roosters. All rumen-undegraded residue (RUR) samples were analyzed for furosine and Lys; however, only 9 of the 16 samples contained furosine, and only the 4 unheated DDGS samples contained appreciable amounts of it. Blocked RUP-Lys was calculated from the furosine and Lys concentrations of the RUR. Both the intact feed and RUR samples were evaluated using the homoarginine method: all samples were incubated with an O-methylisourea/BaOH solution for 72 h and analyzed for Lys and homoarginine concentrations, and the reactive Lys concentrations of the intact feeds and RUR were calculated. Results of the experiment indicate that blocked RUP-Lys determined via the furosine method was negatively correlated with standardized RUP-Lys digestibility, and reactive RUP-Lys determined via the guanidination method was positively correlated with standardized RUP-Lys digestibility. Reactive Lys concentrations of the intact samples were also highly correlated with RUP-Lys digestibility.
In conclusion, the furosine assay is useful in predicting RUP-Lys digestibility of DDGS samples, and the guanidination procedure can be used to predict RUP-Lys digestibility of SBM, expeller SBM, DDGS, and fish meal samples.

  14. Quantitative assessment of Naegleria fowleri and fecal indicator bacteria in brackish water of Lake Pontchartrain, Louisiana

    NASA Astrophysics Data System (ADS)

    Xue, J.; Sherchan, S. P.; Lamar, F. G.; Lin, S.; Lamori, J. G.

    2017-12-01

    Brackish water samples from Lake Pontchartrain in Louisiana were assessed for the presence of the pathogenic amoeba Naegleria fowleri, which causes primary amoebic meningoencephalitis (PAM). In our study, quantitative polymerase chain reaction (qPCR) methods were used to determine N. fowleri, E. coli, and Enterococcus in water collected from Lake Pontchartrain. A total of 158 water samples were analyzed over the 10-month sampling period. A statistically significant positive correlation between water temperature and N. fowleri concentration was observed. The N. fowleri target sequence was detected in 35.4% (56/158) of the water samples from ten sites around the lake, with concentrations ranging from 11.6 to 457.8 GC/100 ml water. A single-factor analysis of variance (ANOVA) showed that the average concentration of N. fowleri in summer (119.8 GC/100 ml) was significantly higher than in winter (58.6 GC/100 ml) (p < 0.01). Statistically significant positive correlations were found between N. fowleri and qPCR E. coli results, and between N. fowleri and Colilert E. coli (culture method). A weak positive correlation between E. coli and Enterococcus was observed for both the qPCR (r = 0.27, p < 0.05) and culture-based (r = 0.52, p < 0.05) methods. Meanwhile, significant positive correlations between the qPCR and culture-based methods were observed for both E. coli (r = 0.30, p < 0.05) and Enterococcus (r = 0.26, p < 0.05) concentrations. Future research is needed to determine whether sediment is a source of the N. fowleri found in the water column.

  15. Adulteration detection in milk using infrared spectroscopy combined with two-dimensional correlation analysis

    NASA Astrophysics Data System (ADS)

    He, Bin; Liu, Rong; Yang, Renjie; Xu, Kexin

    2010-02-01

    Adulteration of milk and dairy products has brought serious threats to human health as well as enormous economic losses to the food industry. Considering the diversity of adulterants possibly mixed into milk, such as melamine, urea, tetracycline, and sugar/salt, a rapid, widely available, high-throughput, cost-effective method is needed for detecting each of these components in milk at once. In this paper, a method using Fourier transform infrared (FTIR) spectroscopy combined with two-dimensional (2D) correlation spectroscopy is established for the discriminative analysis of adulteration in milk. First, the characteristic peaks of raw milk are identified in the 4000-400 cm-1 region from its original spectra. Second, the adulterant samples are each measured with the same method to establish a spectral database for subsequent comparison. Then, 2D correlation spectra of the samples are obtained, which have high resolution and provide information about concentration-dependent intensity changes not readily accessible from one-dimensional spectra. The characteristic peaks in the synchronous 2D correlation spectra of the suspected samples are compared with those of raw milk; differences between their synchronous spectra imply that a suspected milk sample contains some kind of adulterant. Melamine, urea, tetracycline, and glucose adulterants in milk are identified in this way. This nondestructive method can correctly discriminate whether milk and dairy products are adulterated with deleterious substances, and it provides a new, simple, and cost-effective alternative for testing the components of milk.
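
    The synchronous 2D correlation map at the core of such an analysis is essentially a covariance of mean-centered ("dynamic") spectra; the asynchronous map, which requires the Hilbert-Noda transform, is omitted here. The toy spectra below are assumptions for illustration.

```python
import numpy as np

def synchronous_2d(spectra):
    """Synchronous 2D correlation spectrum (Noda).

    spectra: (m_perturbations, n_wavenumbers) array of spectra recorded
    under an external perturbation (here, adulterant concentration).
    Returns the (n, n) synchronous map, whose cross peaks mark pairs of
    bands whose intensities change together.
    """
    dynamic = spectra - spectra.mean(axis=0)  # dynamic spectra
    m = spectra.shape[0]
    return dynamic.T @ dynamic / (m - 1)

# Toy spectra on a 64-point axis: two bands (indices 10 and 30) grow
# together with concentration; a third band (index 50) stays constant.
conc = np.linspace(0.0, 1.0, 7)[:, None]
axis = np.arange(64)[None, :]
band = lambda c: np.exp(-0.5 * ((axis - c) / 2.0) ** 2)
spectra = conc * band(10) + conc * band(30) + 0.8 * band(50)

phi = synchronous_2d(spectra)  # positive cross peak at (10, 30)
```

    In a real analysis the cross peaks of a suspect sample's map are compared against those of raw milk; bands that co-vary only in the suspect sample point to an adulterant.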

  16. [Tobacco quality analysis of industrial classification of different years using near-infrared (NIR) spectrum].

    PubMed

    Wang, Yi; Xiang, Ma; Wen, Ya-Dong; Yu, Chun-Xia; Wang, Luo-Ping; Zhao, Long-Lian; Li, Jun-Hui

    2012-11-01

    In this study, tobacco quality for the main industrial classifications across different years was analyzed using spectrum projection and correlation methods. The data were near-infrared (NIR) spectra from Hongta Tobacco (Group) Co., Ltd.: 5730 industrially classified tobacco leaf samples from Yuxi in Yunnan province, collected from 2007 to 2010, covering different stalk positions and colors and all belonging to the HONGDA tobacco variety. When the samples from a single year were divided randomly into analysis and verification sets at a ratio of 2:1, the verification set agreed with the analysis set under spectrum projection, with correlation coefficients above 0.98. The correlation coefficients between different years under spectrum projection were above 0.97; the highest was between 2008 and 2009, and the lowest between 2007 and 2010. The study also presents a method for obtaining quantitative similarity values for different industrial classification samples. These similarity and consistency values are instructive for the combination and replacement of tobacco leaf in blending.

  17. Evaluation of the clinical sensitivity for the quantification of human immunodeficiency virus type 1 RNA in plasma: Comparison of the new COBAS TaqMan HIV-1 with three current HIV-RNA assays--LCx HIV RNA quantitative, VERSANT HIV-1 RNA 3.0 (bDNA) and COBAS AMPLICOR HIV-1 Monitor v1.5.

    PubMed

    Katsoulidou, Antigoni; Petrodaskalaki, Maria; Sypsa, Vana; Papachristou, Eleni; Anastassopoulou, Cleo G; Gargalianos, Panagiotis; Karafoulidou, Anastasia; Lazanas, Marios; Kordossis, Theodoros; Andoniadou, Anastasia; Hatzakis, Angelos

    2006-02-01

    The COBAS TaqMan HIV-1 test (Roche Diagnostics) was compared with the LCx HIV RNA quantitative assay (Abbott Laboratories), the Versant HIV-1 RNA 3.0 (bDNA) assay (Bayer) and the COBAS Amplicor HIV-1 Monitor v1.5 test (Roche Diagnostics), using plasma samples of various viral load levels from HIV-1-infected individuals. In the comparison of TaqMan with LCx, TaqMan identified as positive 77.5% of the 240 samples versus 72.1% identified by LCx assay, while their overall agreement was 94.6% and the quantitative results of samples that were positive by both methods were strongly correlated (r=0.91). Similarly, in the comparison of TaqMan with bDNA 3.0, both methods identified 76.3% of the 177 samples as positive, while their overall agreement was 95.5% and the quantitative results of samples that were positive by both methods were strongly correlated (r=0.95). Finally, in the comparison of TaqMan with Monitor v1.5, TaqMan identified 79.5% of the 156 samples as positive versus 80.1% identified by Monitor v1.5, while their overall agreement was 95.5% and the quantitative results of samples that were positive by both methods were strongly correlated (r=0.96). In conclusion, the new COBAS TaqMan HIV-1 test showed excellent agreement with other widely used commercially available tests for the quantitation of HIV-1 viral load.

  18. Grain reconstruction of porous media: application to a Bentheim sandstone.

    PubMed

    Thovert, J-F; Adler, P M

    2011-05-01

    The two-point correlation measured on a thin section can be used to derive the probability density of the radii of a population of penetrable spheres. The geometrical, transport, and deformation properties of samples derived by this method compare well with the properties of the digitized real sample and of samples generated by the standard grain reconstruction method.

  19. Computing physical properties with quantum Monte Carlo methods with statistical fluctuations independent of system size.

    PubMed

    Assaraf, Roland

    2014-12-01

    We show that the recently proposed correlated sampling without reweighting procedure extends the locality (asymptotic independence of the system size) of a physical property to the statistical fluctuations of its estimator. This makes the approach potentially vastly more efficient for computing space-localized properties in large systems compared with standard correlated methods. A proof is given for a large collection of noninteracting fragments. Calculations on hydrogen chains suggest that this behavior holds not only for systems displaying short-range correlations, but also for systems with long-range correlations.
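
    The basic mechanism, using the same random configurations for two similar systems so that their shared fluctuations cancel in the difference, can be shown in a toy setting. This is generic correlated sampling with common random numbers, not the reweighting-free QMC procedure of the paper; the two "potentials" are assumptions.

```python
import numpy as np

def energy_difference(f_a, f_b, n, correlated=True, rng=None):
    """Monte Carlo estimate of E[f_a(x)] - E[f_b(x)] over a standard normal.

    With correlated=True, both systems are evaluated on the SAME random
    configurations, so shared fluctuations cancel in the difference;
    with correlated=False, independent samples are used for each.
    """
    rng = np.random.default_rng(rng)
    x = rng.standard_normal(n)
    y = x if correlated else rng.standard_normal(n)
    return np.mean(f_a(x)) - np.mean(f_b(y))

# Two nearly identical systems: the exact difference is -0.01, and
# correlated sampling recovers it with far less statistical noise.
f_a = lambda x: x ** 2
f_b = lambda x: 1.01 * x ** 2
est_corr = [energy_difference(f_a, f_b, 1000, True, s) for s in range(200)]
est_ind = [energy_difference(f_a, f_b, 1000, False, s) for s in range(200)]
```

    Repeating the estimate over many seeds shows the correlated estimator's fluctuations are orders of magnitude smaller, which is the variance reduction that makes small energy differences computable.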

  20. The Petersen-Lincoln estimator and its extension to estimate the size of a shared population.

    PubMed

    Chao, Anne; Pan, H-Y; Chiang, Shu-Chuan

    2008-12-01

    The Petersen-Lincoln estimator has been used to estimate the size of a population in a single mark-release experiment. However, the estimator is not valid when the capture sample and recapture sample are not independent. We provide an intuitive interpretation of "independence" between samples based on the 2 x 2 categorical data formed by capture/non-capture in each of the two samples. From this interpretation, we review a general measure of "dependence" and quantify the correlation bias of the Petersen-Lincoln estimator when two types of dependence (local list dependence and heterogeneity of capture probability) exist. An important implication for the census undercount problem is that instead of using a post-enumeration sample to assess the undercount of a census, one should conduct a prior enumeration sample to avoid correlation bias. We extend the Petersen-Lincoln method to the case of two populations. A new estimator of the size of the shared population is proposed and its variance is derived. We discuss a special case where the correlation bias of the proposed estimator due to dependence between samples vanishes. The proposed method is applied to a study of the relapse rate of illicit drug use in Taiwan.
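
    Under independence, the base estimator is the one-line formula N-hat = n1 * n2 / m; Chapman's small-sample variant is shown alongside. The numbers are illustrative, and, as the paper stresses, dependence between the two samples biases both estimators.

```python
def petersen_lincoln(n1, n2, m):
    """Population-size estimate from one mark-release experiment:
    n1 animals marked; a second sample of n2 contains m marked ones.
    Valid only when the two samples are independent."""
    return n1 * n2 / m

def chapman(n1, n2, m):
    """Chapman's variant: nearly unbiased and finite even when m = 0."""
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

# 200 animals marked; a recapture sample of 150 contains 30 marked ones.
n_hat = petersen_lincoln(200, 150, 30)  # -> 1000.0
```

    Intuitively, the marked fraction of the recapture sample (30/150) estimates the marked fraction of the whole population (200/N), and solving for N gives the formula.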

  1. Correlating Intravital Multi-Photon Microscopy to 3D Electron Microscopy of Invading Tumor Cells Using Anatomical Reference Points

    PubMed Central

    Karreman, Matthia A.; Mercier, Luc; Schieber, Nicole L.; Shibue, Tsukasa; Schwab, Yannick; Goetz, Jacky G.

    2014-01-01

    Correlative microscopy combines the advantages of both light and electron microscopy to enable imaging of rare and transient events at high resolution. Performing correlative microscopy in complex and bulky samples such as an entire living organism is a time-consuming and error-prone task. Here, we investigate correlative methods that rely on the use of artificial and endogenous structural features of the sample as reference points for correlating intravital fluorescence microscopy and electron microscopy. To investigate tumor cell behavior in vivo with ultrastructural accuracy, a reliable approach is needed to retrieve single tumor cells imaged deep within the tissue. For this purpose, fluorescently labeled tumor cells were subcutaneously injected into a mouse ear and imaged using two-photon-excitation microscopy. Using near-infrared branding, the position of the imaged area within the sample was labeled at the skin level, allowing it to be precisely relocated. Following sample preparation for electron microscopy, concerted usage of the artificial branding and anatomical landmarks enables targeting and approaching the cells of interest while serial sectioning through the specimen. We describe here three procedures showing how three-dimensional (3D) mapping of structural features in the tissue can be exploited to accurately correlate between the two imaging modalities, without having to rely on the use of artificially introduced markers of the region of interest. The methods employed here facilitate the link between intravital and nanoscale imaging of invasive tumor cells, enabling correlating function to structure in the study of tumor invasion and metastasis. PMID:25479106

  2. Needs of the Learning Effect on Instructional Website for Vocational High School Students

    ERIC Educational Resources Information Center

    Lo, Hung-Jen; Fu, Gwo-Liang; Chuang, Kuei-Chih

    2013-01-01

    The purpose of the study was to understand the correlation between the needs of the learning effect on an instructional website for vocational high school students. Our research applied the statistical methods of product-moment correlation, stepwise regression, and structural equation modeling to analyze the questionnaire, with a sample size of 377…

  3. Tensor-guided fitting of subduction slab depths

    USGS Publications Warehouse

    Bazargani, Farhad; Hayes, Gavin P.

    2013-01-01

    Geophysical measurements are often acquired at scattered locations in space. Therefore, interpolating or fitting the sparsely sampled data as a uniform function of space (a procedure commonly known as gridding) is a ubiquitous problem in geophysics. Most gridding methods require a model of spatial correlation for data. This spatial correlation model can often be inferred from some sort of secondary information, which may also be sparsely sampled in space. In this paper, we present a new method to model the geometry of a subducting slab in which we use a data‐fitting approach to address the problem. Earthquakes and active‐source seismic surveys provide estimates of depths of subducting slabs but only at scattered locations. In addition to estimates of depths from earthquake locations, focal mechanisms of subduction zone earthquakes also provide estimates of the strikes of the subducting slab on which they occur. We use these spatially sparse strike samples and the Earth’s curved surface geometry to infer a model for spatial correlation that guides a blended neighbor interpolation of slab depths. We then modify the interpolation method to account for the uncertainties associated with the depth estimates.

  4. Exact sampling of graphs with prescribed degree correlations

    NASA Astrophysics Data System (ADS)

    Bassler, Kevin E.; Del Genio, Charo I.; Erdős, Péter L.; Miklós, István; Toroczkai, Zoltán

    2015-08-01

    Many real-world networks exhibit correlations between the node degrees. For instance, in social networks nodes tend to connect to nodes of similar degree and conversely, in biological and technological networks, high-degree nodes tend to be linked with low-degree nodes. Degree correlations also affect the dynamics of processes supported by a network structure, such as the spread of opinions or epidemics. The proper modelling of these systems, i.e., without uncontrolled biases, requires the sampling of networks with a specified set of constraints. We present a solution to the sampling problem when the constraints imposed are the degree correlations. In particular, we develop an exact method to construct and sample graphs with a specified joint-degree matrix, which is a matrix providing the number of edges between all the sets of nodes of a given degree, for all degrees, thus completely specifying all pairwise degree correlations, and additionally, the degree sequence itself. Our algorithm always produces independent samples without backtracking. The complexity of the graph construction algorithm is O(NM), where N is the number of nodes and M is the number of edges.
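
    The joint-degree matrix itself, the constraint the algorithm targets, is straightforward to compute from an edge list, which makes the definition concrete (the construction and sampling algorithm of the paper is, of course, much more involved). A sketch:

```python
from collections import Counter

def joint_degree_matrix(edges):
    """Map each degree pair (d1, d2), with d1 <= d2, to the number of
    edges joining a node of degree d1 to a node of degree d2.
    Together with the degree sequence, this fixes all pairwise degree
    correlations of the graph."""
    degree = Counter()
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    jdm = Counter()
    for u, v in edges:
        jdm[tuple(sorted((degree[u], degree[v])))] += 1
    return dict(jdm)

# A star on 4 nodes: every edge joins a degree-1 leaf to the degree-3 hub.
jdm = joint_degree_matrix([(0, 1), (0, 2), (0, 3)])  # -> {(1, 3): 3}
```

    Sampling graphs that realize a prescribed joint-degree matrix exactly, and doing so uniformly without backtracking, is the hard problem the paper solves.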

  5. An evaluation of potential sampling locations in a reservoir with emphasis on conserved spatial correlation structure.

    PubMed

    Yenilmez, Firdes; Düzgün, Sebnem; Aksoy, Aysegül

    2015-01-01

    In this study, kernel density estimation (KDE) was coupled with ordinary two-dimensional kriging (OK) to reduce the number of sampling locations in measurement and kriging of dissolved oxygen (DO) concentrations in Porsuk Dam Reservoir (PDR). Conservation of the spatial correlation structure in the DO distribution was a target. KDE was used as a tool to aid in identification of the sampling locations that would be removed from the sampling network in order to decrease the total number of samples. Accordingly, several networks were generated in which sampling locations were reduced from 65 to 10 in increments of 4 or 5 points at a time based on kernel density maps. DO variograms were constructed, and DO values in PDR were kriged. Performance of the networks in DO estimations were evaluated through various error metrics, standard error maps (SEM), and whether the spatial correlation structure was conserved or not. Results indicated that smaller number of sampling points resulted in loss of information in regard to spatial correlation structure in DO. The minimum representative sampling points for PDR was 35. Efficacy of the sampling location selection method was tested against the networks generated by experts. It was shown that the evaluation approach proposed in this study provided a better sampling network design in which the spatial correlation structure of DO was sustained for kriging.

  6. Spectral and correlation analysis with applications to middle-atmosphere radars

    NASA Technical Reports Server (NTRS)

    Rastogi, Prabhat K.

    1989-01-01

    The correlation and spectral analysis methods for uniformly sampled stationary random signals, the estimation of their spectral moments, and problems arising due to nonstationarity are reviewed. Some of these methods are already in routine use in atmospheric radar experiments. Other methods, based on the maximum entropy principle and time series models, have been used in analyzing data but are only beginning to receive attention in the analysis of radar signals. These methods are also briefly discussed.
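
    For uniformly sampled stationary signals, the standard route to the correlation function goes through the FFT via the Wiener-Khinchin relation (the power spectrum and the autocorrelation are a Fourier pair). A minimal sketch, where the noisy sinusoid is an assumed test signal:

```python
import numpy as np

def autocorrelation(x, max_lag):
    """Biased autocorrelation estimate of a uniformly sampled signal,
    computed via the FFT using the Wiener-Khinchin relation."""
    x = np.asarray(x, float) - np.mean(x)
    n = len(x)
    # Zero-pad to 2n to avoid circular wrap-around in the product.
    spec = np.fft.rfft(x, 2 * n)
    acf = np.fft.irfft(spec * np.conj(spec))[: max_lag + 1] / n
    return acf / acf[0]  # normalize so that acf[0] == 1

# A noisy sinusoid: the ACF dips at the half period and peaks again
# near the 25-sample period.
t = np.arange(400)
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * t / 25) + 0.2 * rng.standard_normal(400)
acf = autocorrelation(x, 50)
```

    The same power spectrum |spec|^2, suitably averaged, is the periodogram-type spectral estimate from which radar signal moments (power, mean Doppler shift, spectral width) are derived.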

  7. Adaptive Distributed Video Coding with Correlation Estimation using Expectation Propagation

    PubMed Central

    Cui, Lijuan; Wang, Shuang; Jiang, Xiaoqian; Cheng, Samuel

    2013-01-01

    Distributed video coding (DVC) is rapidly gaining popularity by shifting complexity from the encoder to the decoder while, at least in theory, sacrificing no compression performance. In contrast with conventional video codecs, the inter-frame correlation in DVC is exploited at the decoder, based on the received syndromes of the Wyner-Ziv (WZ) frame and a side information (SI) frame generated from other frames available only at the decoder. However, the ultimate decoding performance of DVC rests on the assumption that perfect knowledge of the correlation statistics between the WZ and SI frames is available at the decoder. Therefore, the ability to obtain a good statistical correlation estimate is becoming increasingly important in practical DVC implementations. Generally, existing correlation estimation methods in DVC fall into two main types: pre-estimation, where estimation starts before decoding, and on-the-fly (OTF) estimation, where the estimate can be refined iteratively during decoding. Because changes between frames can be unpredictable or dynamic, OTF estimation methods usually outperform pre-estimation techniques, at the cost of increased decoding complexity (e.g., sampling methods). In this paper, we propose a low-complexity adaptive DVC scheme using expectation propagation (EP), where correlation estimation is performed OTF, jointly with decoding of the factor graph-based DVC code. Among approximate inference methods, EP generally offers a better tradeoff between accuracy and complexity. Experimental results show that our proposed scheme outperforms the benchmark state-of-the-art DISCOVER codec and other cases without correlation tracking, and achieves comparable decoding performance with significantly lower complexity than sampling methods. PMID:23750314

  8. Adaptive distributed video coding with correlation estimation using expectation propagation

    NASA Astrophysics Data System (ADS)

    Cui, Lijuan; Wang, Shuang; Jiang, Xiaoqian; Cheng, Samuel

    2012-10-01

    Distributed video coding (DVC) is rapidly gaining popularity by shifting complexity from the encoder to the decoder while, at least in theory, sacrificing no compression performance. In contrast with conventional video codecs, the inter-frame correlation in DVC is exploited at the decoder, based on the received syndromes of the Wyner-Ziv (WZ) frame and a side information (SI) frame generated from other frames available only at the decoder. However, the ultimate decoding performance of DVC rests on the assumption that perfect knowledge of the correlation statistics between the WZ and SI frames is available at the decoder. Therefore, the ability to obtain a good statistical correlation estimate is becoming increasingly important in practical DVC implementations. Generally, existing correlation estimation methods in DVC fall into two main types: pre-estimation, where estimation starts before decoding, and on-the-fly (OTF) estimation, where the estimate can be refined iteratively during decoding. Because changes between frames can be unpredictable or dynamic, OTF estimation methods usually outperform pre-estimation techniques, at the cost of increased decoding complexity (e.g., sampling methods). In this paper, we propose a low-complexity adaptive DVC scheme using expectation propagation (EP), where correlation estimation is performed OTF, jointly with decoding of the factor graph-based DVC code. Among approximate inference methods, EP generally offers a better tradeoff between accuracy and complexity. Experimental results show that our proposed scheme outperforms the benchmark state-of-the-art DISCOVER codec and other cases without correlation tracking, and achieves comparable decoding performance with significantly lower complexity than sampling methods.

  9. Adaptive Distributed Video Coding with Correlation Estimation using Expectation Propagation.

    PubMed

    Cui, Lijuan; Wang, Shuang; Jiang, Xiaoqian; Cheng, Samuel

    2012-10-15

    Distributed video coding (DVC) is rapidly gaining popularity by shifting complexity from the encoder to the decoder while, at least in theory, sacrificing no compression performance. In contrast with conventional video codecs, the inter-frame correlation in DVC is exploited at the decoder, based on the received syndromes of the Wyner-Ziv (WZ) frame and a side information (SI) frame generated from other frames available only at the decoder. However, the ultimate decoding performance of DVC rests on the assumption that perfect knowledge of the correlation statistics between the WZ and SI frames is available at the decoder. Therefore, the ability to obtain a good statistical correlation estimate is becoming increasingly important in practical DVC implementations. Generally, existing correlation estimation methods in DVC fall into two main types: pre-estimation, where estimation starts before decoding, and on-the-fly (OTF) estimation, where the estimate can be refined iteratively during decoding. Because changes between frames can be unpredictable or dynamic, OTF estimation methods usually outperform pre-estimation techniques, at the cost of increased decoding complexity (e.g., sampling methods). In this paper, we propose a low-complexity adaptive DVC scheme using expectation propagation (EP), where correlation estimation is performed OTF, jointly with decoding of the factor graph-based DVC code. Among approximate inference methods, EP generally offers a better tradeoff between accuracy and complexity. Experimental results show that our proposed scheme outperforms the benchmark state-of-the-art DISCOVER codec and other cases without correlation tracking, and achieves comparable decoding performance with significantly lower complexity than sampling methods.

  10. Computing thermal Wigner densities with the phase integration method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beutier, J.; Borgis, D.; Vuilleumier, R.

    2014-08-28

    We discuss how the Phase Integration Method (PIM), recently developed to compute symmetrized time correlation functions [M. Monteferrante, S. Bonella, and G. Ciccotti, Mol. Phys. 109, 3015 (2011)], can be adapted to sampling/generating the thermal Wigner density, a key ingredient, for example, in many approximate schemes for simulating quantum time dependent properties. PIM combines a path integral representation of the density with a cumulant expansion to represent the Wigner function in a form calculable via existing Monte Carlo algorithms for sampling noisy probability densities. The method is able to capture highly non-classical effects such as correlations between the momentum and coordinate parts of the density, or correlations among the momenta themselves. By using alternatives to cumulants, it can also indicate the presence of negative parts of the Wigner density. Both properties are demonstrated by comparing PIM results to those of reference quantum calculations on a set of model problems.

  11. Computing thermal Wigner densities with the phase integration method.

    PubMed

    Beutier, J; Borgis, D; Vuilleumier, R; Bonella, S

    2014-08-28

    We discuss how the Phase Integration Method (PIM), recently developed to compute symmetrized time correlation functions [M. Monteferrante, S. Bonella, and G. Ciccotti, Mol. Phys. 109, 3015 (2011)], can be adapted to sampling/generating the thermal Wigner density, a key ingredient, for example, in many approximate schemes for simulating quantum time dependent properties. PIM combines a path integral representation of the density with a cumulant expansion to represent the Wigner function in a form calculable via existing Monte Carlo algorithms for sampling noisy probability densities. The method is able to capture highly non-classical effects such as correlations between the momentum and coordinate parts of the density, or correlations among the momenta themselves. By using alternatives to cumulants, it can also indicate the presence of negative parts of the Wigner density. Both properties are demonstrated by comparing PIM results to those of reference quantum calculations on a set of model problems.
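For reference, the thermal Wigner density that PIM targets is the Wigner transform of the Boltzmann operator (the standard definition, not a formula taken from this abstract; Z is the canonical partition function and β = 1/kBT):

```latex
W(q,p) \;=\; \frac{1}{2\pi\hbar\, Z} \int_{-\infty}^{\infty} d\Delta\;
  e^{-ip\Delta/\hbar}\,
  \left\langle q + \tfrac{\Delta}{2} \right| e^{-\beta \hat{H}}
  \left| q - \tfrac{\Delta}{2} \right\rangle ,
\qquad Z = \operatorname{Tr} e^{-\beta \hat{H}}
```

Because the off-diagonal matrix element can make W(q,p) negative, it is not a true probability density, which is why PIM must recast it into a sampleable form.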

  12. Information Encoding on a Pseudo Random Noise Radar Waveform

    DTIC Science & Technology

    2013-03-01

    [Abstract not available in this record; the extracted fragments are front-matter and figure text referencing a quadrature mirror filter bank (QMFB) tree analysis of a 7-bit Barker-coded binary phase shift keyed test signal, the FFT accumulation method (FAM) for time-smoothed spectral correlation estimation, and correlator output for a WGN pulse in an AWGN channel at SNR = -10 dB.]

  13. New method for stock-tank oil compositional analysis.

    PubMed

    McAndrews, Kristine; Nighswander, John; Kotzakoulakis, Konstantin; Ross, Paul; Schroeder, Helmut

    2009-01-01

    A new method for accurately determining stock-tank oil composition to normal pentatriacontane using gas chromatography is developed and validated. The new method addresses the potential errors associated with the traditional equipment and technique employed for extended hydrocarbon gas chromatography outside a controlled laboratory environment, such as on an offshore oil platform. In particular, the experimental measurement of stock-tank oil molecular weight with the freezing point depression technique and the use of an internal standard to find the unrecovered sample fraction are replaced with correlations for estimating these properties. The use of correlations reduces the number of necessary experimental steps in completing the required sample preparation and analysis, resulting in reduced uncertainty in the analysis.

  14. Generalized Bootstrap Method for Assessment of Uncertainty in Semivariogram Inference

    USGS Publications Warehouse

    Olea, R.A.; Pardo-Iguzquiza, E.

    2011-01-01

    The semivariogram and its related function, the covariance, play a central role in classical geostatistics for modeling the average continuity of spatially correlated attributes. Whereas all methods are formulated in terms of the true semivariogram, in practice what can be used are estimated semivariograms and models based on samples. A generalized form of the bootstrap method to properly model spatially correlated data is used to advance knowledge about the reliability of empirical semivariograms and semivariogram models based on a single sample. Among the several methods available to generate spatially correlated resamples, we selected a method based on the LU decomposition and used several examples to illustrate the approach. The first is a synthetic, isotropic, exhaustive sample following a normal distribution; the second is also synthetic but follows a non-Gaussian random field; and the third, empirical, sample consists of actual raingauge measurements. Results show wider confidence intervals than those found previously by others with inadequate application of the bootstrap. Also, even for the Gaussian example, distributions for estimated semivariogram values and model parameters are positively skewed. In this sense, bootstrap percentile confidence intervals, which are not centered around the empirical semivariogram and do not require distributional assumptions for their construction, provide an achieved coverage similar to the nominal coverage. The latter cannot be achieved by symmetrical confidence intervals based on the standard error, regardless of whether the standard error is estimated from a parametric equation or from the bootstrap. © 2010 International Association for Mathematical Geosciences.
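The LU-decomposition resampling the authors selected amounts to coloring white noise with a matrix square root of the covariance implied by the fitted model. A minimal sketch, with a Cholesky factor standing in for the LU factor and a toy exponential covariance (all numbers illustrative, not from the paper):

```python
import math
import random

def cholesky(cov):
    """Lower-triangular L with L Lᵀ = cov; this plays the role of the
    LU factor that imposes the spatial correlation on white noise."""
    n = len(cov)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(cov[i][i] - s)
            else:
                L[i][j] = (cov[i][j] - s) / L[j][j]
    return L

def correlated_resample(L, rng):
    """One spatially correlated Gaussian resample: x = L z, z ~ N(0, I)."""
    n = len(L)
    z = [rng.gauss(0.0, 1.0) for _ in range(n)]
    return [sum(L[i][k] * z[k] for k in range(i + 1)) for i in range(n)]

# Toy exponential covariance for three sites spaced one unit apart.
cov = [[math.exp(-abs(i - j) / 2.0) for j in range(3)] for i in range(3)]
L = cholesky(cov)
rng = random.Random(42)
draws = [correlated_resample(L, rng) for _ in range(20000)]

# The empirical correlation of sites 0 and 1 recovers exp(-1/2) ≈ 0.607.
n = len(draws)
m0 = sum(d[0] for d in draws) / n
m1 = sum(d[1] for d in draws) / n
c01 = sum((d[0] - m0) * (d[1] - m1) for d in draws) / n
v0 = sum((d[0] - m0) ** 2 for d in draws) / n
v1 = sum((d[1] - m1) ** 2 for d in draws) / n
print(c01 / math.sqrt(v0 * v1))
```

Re-estimating the semivariogram on each such resample is what yields the bootstrap distributions discussed in the abstract.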

  15. Comparison of Two Different Methods Used for Semen Evaluation: Analysis of Semen Samples from 1,055 Men.

    PubMed

    Dinçer, Murat; Kucukdurmaz, Faruk; Salabas, Emre; Ortac, Mazhar; Aktan, Gulsan; Kadioglu, Ates

    2017-01-01

    The aim of this study was to evaluate whether there is a difference between gravimetrically and volumetrically measured semen samples and to assess the impact of semen volume, density, and sperm count on the discrepancy between the gravimetric and volumetric methods. This study was designed in an andrology laboratory setting and performed on semen samples of 1,055 men receiving infertility treatment. Semen volume was calculated by gravimetric and volumetric methods. The total sperm count, semen density and sperm viability were also examined according to the recent version of the World Health Organization manual. The median values for gravimetric and volumetric measurements were 3.44 g and 2.96 ml, respectively. The numeric difference in semen volume between the 2 methods was 0.48. The mean density of samples was 1.01 ± 0.46 g/ml (range 0.90-2.0 g/ml). The numeric difference between the 2 methods increases as semen volume increases (p < 0.001). Gravimetric and volumetric semen volume measurements were strongly correlated for all samples and for each subgroup of semen volume, semen density and sperm count, with a minimum correlation coefficient of 0.895 (p < 0.001). In conclusion, the gravimetric measurement provides higher results than the volumetric one, and the numeric difference between the 2 methods increases as semen volume increases. However, further studies are needed to support the use of the gravimetric method, which is thought to minimize laboratory errors, particularly for large numbers of semen samples. © 2016 S. Karger AG, Basel.

  16. Evaluating different methods used in ethnobotanical and ecological studies to record plant biodiversity

    PubMed Central

    2014-01-01

    Background This study compares the efficiency of identifying the plants in an area of semi-arid Northeast Brazil by methods that a) access the local knowledge used in ethnobotanical studies using semi-structured interviews conducted within the entire community, an inventory interview conducted with two participants using the previously collected vegetation inventory, and a participatory workshop presenting exsiccates and photographs to 32 people and b) inventory the vegetation (phytosociology) in locations with different histories of disturbance using rectangular plots and quadrant points. Methods The proportion of species identified using each method was then compared with Cochran’s Q test. We calculated the use value (UV) of each species using semi-structured interviews; this quantitative index was correlated against values of the vegetation’s structural importance obtained from the sample plot method and point-centered quarter method applied in two areas with different historical usage. The analysis sought to correlate the relative importance of plants to the local community (use value - UV) with the ecological importance of the plants in the vegetation structure (importance value - IV; relative density - RD) by using different sampling methods to analyze the two areas. Results With regard to the methods used for accessing the local knowledge, a difference was observed among the ethnobotanical methods of surveying species (Q = 13.37, df = 2, p = 0.0013): 44 species were identified in the inventory interview, 38 in the participatory workshop and 33 in the semi-structured interviews with the community. There was either no correlation between the UV, relative density (RD) and importance value (IV) of some species, or this correlation was negative. Conclusion It was concluded that the inventory interview was the most efficient method for recording species and their uses, as it allowed more plants to be identified in their original environment. 
To optimize researchers’ time in future studies, the use of the point-centered quarter method rather than the sample plot method is recommended. PMID:24916833

  17. Graph reconstruction using covariance-based methods.

    PubMed

    Sulaimanov, Nurgazy; Koeppl, Heinz

    2016-12-01

    Methods based on correlation and partial correlation are today employed in the reconstruction of a statistical interaction graph from high-throughput omics data. These dedicated methods work well even when the number of variables exceeds the number of samples. In this study, we investigate how the graphs extracted from covariance and concentration matrix estimates are related, using Neumann series and transitive closure, and by discussing small concrete examples. Considering the ideal case where the true graph is available, we also compare correlation and partial correlation methods for large realistic graphs. In particular, we perform the comparisons with optimally selected parameters based on the true underlying graph and with data-driven approaches where the parameters are directly estimated from the data.

  18. Data representing two separate LC-MS methods for detection and quantification of water-soluble and fat-soluble vitamins in tears and blood serum.

    PubMed

    Khaksari, Maryam; Mazzoleni, Lynn R; Ruan, Chunhai; Kennedy, Robert T; Minerick, Adrienne R

    2017-04-01

    Two separate liquid chromatography (LC)-mass spectrometry (MS) methods were developed for determination and quantification of water-soluble and fat-soluble vitamins in human tear and blood serum samples. The water-soluble vitamin method was originally developed to detect vitamins B1, B2, B3 (nicotinamide), B5, B6 (pyridoxine), B7, B9 and B12, while the fat-soluble vitamin method detected vitamins A, D3, 25(OH)D3, E and K1. These methods were then validated with tear and blood serum samples. In this data in brief article, we provide details on the development of the two LC-MS methods, their sensitivity, and their precision and accuracy for determination of vitamins in human tears and blood serum. These methods were then used to determine the vitamin concentrations in infant and parent samples in a clinical study, as reported in "Determination of Water-Soluble and Fat-Soluble Vitamins in Tears and Blood Serum of Infants and Parents by Liquid Chromatography/Mass Spectrometry" (DOI: 10.1016/j.exer.2016.12.007) [1]. This article provides more details on comparison of vitamin concentrations in the samples with the ranges reported in the literature along with the medically accepted normal ranges. The details on concentrations below the limits of detection (LOD) and limits of quantification (LOQ) are also discussed. Vitamin concentrations were also compared and cross-correlated with clinical data and nutritional information. Significant differences and strongly correlated data were reported in [1]. This article provides comprehensive details on the data with slight differences or slight correlations.

  19. The Effects of Positively and Negatively Worded Items on the Factor Structure of the UCLA Loneliness Scale

    ERIC Educational Resources Information Center

    Dodeen, Hamzeh

    2015-01-01

    The purpose of this study was to evaluate the factor structure of the University of California, Los Angeles (UCLA) Loneliness Scale and examine possible wording effects on a sample of 1,429 students from the United Arab Emirates University. Correlated traits-correlated uniqueness as well as correlated traits-correlated methods were used to examine…

  20. Correlative Super-Resolution Microscopy: New Dimensions and New Opportunities.

    PubMed

    Hauser, Meghan; Wojcik, Michal; Kim, Doory; Mahmoudi, Morteza; Li, Wan; Xu, Ke

    2017-06-14

    Correlative microscopy, the integration of two or more microscopy techniques performed on the same sample, produces results that emphasize the strengths of each technique while offsetting their individual weaknesses. Light microscopy has historically been a central method in correlative microscopy due to its widespread availability, compatibility with hydrated and live biological samples, and excellent molecular specificity through fluorescence labeling. However, conventional light microscopy can only achieve a resolution of ∼300 nm, undercutting its advantages in correlations with higher-resolution methods. The rise of super-resolution microscopy (SRM) over the past decade has drastically improved the resolution of light microscopy to ∼10 nm, thus creating exciting new opportunities and challenges for correlative microscopy. Here we review how these challenges are addressed to effectively correlate SRM with other microscopy techniques, including light microscopy, electron microscopy, cryomicroscopy, atomic force microscopy, and various forms of spectroscopy. Though we emphasize biological studies, we also discuss the application of correlative SRM to materials characterization and single-molecule reactions. Finally, we point out current limitations and discuss possible future improvements and advances. We thus demonstrate how a correlative approach adds new dimensions of information and provides new opportunities in the fast-growing field of SRM.

  1. A novel method for effective diffusion coefficient measurement in gas diffusion media of polymer electrolyte fuel cells

    NASA Astrophysics Data System (ADS)

    Yang, Linlin; Sun, Hai; Fu, Xudong; Wang, Suli; Jiang, Luhua; Sun, Gongquan

    2014-07-01

    A novel method for measuring effective diffusion coefficient of porous materials is developed. The oxygen concentration gradient is established by an air-breathing proton exchange membrane fuel cell (PEMFC). The porous sample is set in a sample holder located in the cathode plate of the PEMFC. At a given oxygen flux, the effective diffusion coefficients are related to the difference of oxygen concentration across the samples, which can be correlated with the differences of the output voltage of the PEMFC with and without inserting the sample in the cathode plate. Compared to the conventional electrical conductivity method, this method is more reliable for measuring non-wetting samples.
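The final step of such a measurement reduces to Fick's first law at steady state. A toy sketch with purely hypothetical numbers (the paper infers the oxygen concentration difference from PEMFC voltage differences; here Δc is simply given):

```python
def effective_diffusion_coefficient(flux, thickness, delta_c):
    """Steady-state Fick's first law, rearranged: D_eff = J * L / Δc."""
    return flux * thickness / delta_c

# Hypothetical numbers: oxygen flux 0.1 mol m^-2 s^-1 through a 200 µm
# thick sample sustaining a 5 mol m^-3 concentration difference.
D_eff = effective_diffusion_coefficient(0.1, 200e-6, 5.0)
print(D_eff)  # ≈ 4e-06 m^2 s^-1
```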

  2. Turbidity threshold sampling: Methods and instrumentation

    Treesearch

    Rand Eads; Jack Lewis

    2001-01-01

    Traditional methods for determining the frequency of suspended sediment sample collection often rely on measurements, such as water discharge, that are not well correlated to sediment concentration. Stream power is generally not a good predictor of sediment concentration for rivers that transport the bulk of their load as fines, due to the highly variable routing of...

  3. Correlates of depression in bipolar disorder

    PubMed Central

    Moore, Paul J.; Little, Max A.; McSharry, Patrick E.; Goodwin, Guy M.; Geddes, John R.

    2014-01-01

    We analyse time series from 100 patients with bipolar disorder for correlates of depression symptoms. As the sampling interval is non-uniform, we quantify the extent of missing and irregular data using new measures of compliance and continuity. We find that uniformity of response is negatively correlated with the standard deviation of sleep ratings (ρ = –0.26, p = 0.01). To investigate the correlation structure of the time series themselves, we apply the Edelson–Krolik method for correlation estimation. We examine the correlation between depression symptoms for a subset of patients and find that self-reported measures of sleep and appetite/weight show a lower average correlation than other symptoms. Using surrogate time series as a reference dataset, we find no evidence that depression is correlated between patients, though we note a possible loss of information from sparse sampling. PMID:24352942
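The Edelson-Krolik estimator used here averages normalized cross products over all observation pairs whose time lag falls in a given bin, so it needs no interpolation of the irregular grid. A minimal sketch (the toy series, bin edges, and one-unit delay are illustrative, not patient data):

```python
def dcf(ta, a, tb, b, lag_lo, lag_hi):
    """Edelson-Krolik discrete correlation function for one lag bin:
    average the unbinned products (a_i - ā)(b_j - b̄)/(σa σb) over
    every pair (i, j) whose lag tb[j] - ta[i] lies in [lag_lo, lag_hi)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    sa = (sum((v - ma) ** 2 for v in a) / na) ** 0.5
    sb = (sum((v - mb) ** 2 for v in b) / nb) ** 0.5
    vals = [(a[i] - ma) * (b[j] - mb) / (sa * sb)
            for i in range(na) for j in range(nb)
            if lag_lo <= tb[j] - ta[i] < lag_hi]
    return sum(vals) / len(vals)

# Irregularly sampled toy series and a copy delayed by one time unit.
t = [0.0, 0.7, 1.9, 2.4, 3.8, 5.1, 6.0]
x = [1.0, -2.0, 0.5, 3.0, -1.0, 2.0, -0.5]
t_delayed = [ti + 1.0 for ti in t]
peak = dcf(t, x, t_delayed, x, 0.5, 1.5)
print(peak)  # close to 1 in the bin containing the true delay
```

Scanning the bin over a range of lags traces out the full correlation function despite the missing and irregular samples.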

  4. Physically motivated global alignment method for electron tomography

    DOE PAGES

    Sanders, Toby; Prange, Micah; Akatay, Cem; ...

    2015-04-08

    Electron tomography is widely used for nanoscale determination of 3-D structures in many areas of science. Determining the 3-D structure of a sample from electron tomography involves three major steps: acquisition of a sequence of 2-D projection images of the sample with the electron microscope, alignment of the images to a common coordinate system, and 3-D reconstruction and segmentation of the sample from the aligned image data. The resolution of the 3-D reconstruction is directly influenced by the accuracy of the alignment, and therefore, it is crucial to have a robust and dependable alignment method. In this paper, we develop a new alignment method which avoids the use of markers and instead traces the computed paths of many identifiable ‘local’ center-of-mass points as the sample is rotated. Compared with traditional correlation schemes, the alignment method presented here is resistant to the cumulative error observed with correlation techniques, has very rigorous mathematical justification, and is very robust since many points and paths are used, all of which improves the quality of the reconstruction and confidence in the scientific results.

  5. Monte Carlo sampling in diffusive dynamical systems

    NASA Astrophysics Data System (ADS)

    Tapias, Diego; Sanders, David P.; Altmann, Eduardo G.

    2018-05-01

    We introduce a Monte Carlo algorithm to efficiently compute transport properties of chaotic dynamical systems. Our method exploits the importance sampling technique that favors trajectories in the tail of the distribution of displacements, where deviations from a diffusive process are most prominent. We search for initial conditions using a proposal that correlates states in the Markov chain constructed via a Metropolis-Hastings algorithm. We show that our method outperforms the direct sampling method and also Metropolis-Hastings methods with alternative proposals. We test our general method through numerical simulations in 1D (box-map) and 2D (Lorentz gas) systems.
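The proposal "correlates states in the Markov chain" in the sense of local Metropolis-Hastings moves centred on the current state. A generic sketch of such a chain on a toy Gaussian target (the target, step size, and seed are illustrative; the paper's actual target weights trajectories by their displacement tail):

```python
import math
import random

def metropolis_hastings(log_target, x0, step, n, rng):
    """Metropolis-Hastings chain with a symmetric Gaussian proposal;
    each proposal is centred on the current state, so successive
    states are correlated (local moves)."""
    x, lp = x0, log_target(x0)
    chain = []
    for _ in range(n):
        y = x + rng.gauss(0.0, step)
        lq = log_target(y)
        # Accept with probability min(1, target(y)/target(x)).
        if math.log(rng.random()) < lq - lp:
            x, lp = y, lq
        chain.append(x)
    return chain

# Toy target: a standard normal, whose second moment is exactly 1.
rng = random.Random(0)
chain = metropolis_hastings(lambda u: -0.5 * u * u, 0.0, 1.0, 50000, rng)
m2 = sum(v * v for v in chain) / len(chain)
print(m2)  # ≈ 1
```

Replacing the toy log-target with a weight that favours large displacements is what turns this generic chain into the importance sampler the abstract describes.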

  6. Statistical analysis of latent generalized correlation matrix estimation in transelliptical distribution.

    PubMed

    Han, Fang; Liu, Han

    2017-02-01

    The correlation matrix plays a key role in many multivariate methods (e.g., graphical model estimation and factor analysis). The current state of the art in estimating large correlation matrices focuses on the use of Pearson's sample correlation matrix. Although Pearson's sample correlation matrix enjoys various good properties under Gaussian models, it is not an effective estimator when facing heavy-tailed distributions with possible outliers. As a robust alternative, Han and Liu (2013b) advocated the use of a transformed version of the Kendall's tau sample correlation matrix in estimating the high dimensional latent generalized correlation matrix under the transelliptical distribution family (or elliptical copula). The transelliptical family assumes that, after unspecified marginal monotone transformations, the data follow an elliptical distribution. In this paper, we study the theoretical properties of the Kendall's tau sample correlation matrix and its transformed version proposed in Han and Liu (2013b) for estimating the population Kendall's tau correlation matrix and the latent Pearson's correlation matrix under both spectral and restricted spectral norms. With regard to the spectral norm, we highlight the role of "effective rank" in quantifying the rate of convergence. With regard to the restricted spectral norm, we for the first time present a "sign subgaussian condition" which is sufficient to guarantee that the rank-based correlation matrix estimator attains the optimal rate of convergence. In both cases, we do not need any moment condition.
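Entrywise, the transformed estimator referred to here is sin((π/2)·τ̂), which is consistent for the latent Pearson correlation under the transelliptical family. A minimal sketch, with toy data chosen to show the rank-based estimator's invariance to monotone marginal transforms (the data are illustrative):

```python
import math

def kendall_tau(x, y):
    """Kendall's tau-a: (concordant - discordant) / total pairs."""
    n = len(x)
    s = 0
    for i in range(n):
        for j in range(i + 1, n):
            a = (x[i] - x[j]) * (y[i] - y[j])
            s += (a > 0) - (a < 0)  # +1 concordant, -1 discordant
    return 2.0 * s / (n * (n - 1))

def latent_correlation(x, y):
    """sin(pi/2 * tau): consistent for the latent Pearson correlation
    under the transelliptical (elliptical copula) family."""
    return math.sin(math.pi / 2.0 * kendall_tau(x, y))

# Monotone marginal transforms leave the rank-based estimate unchanged,
# which is exactly why it tolerates unspecified marginal distortions.
x = [0.1, 0.5, 0.9, 1.3, 2.0, 2.2]
y = [1.0, 0.8, 1.5, 2.0, 1.9, 3.0]
r1 = latent_correlation(x, y)
r2 = latent_correlation([math.exp(v) for v in x], [v ** 3 for v in y])
print(r1, r2)  # identical values
```

The paper's object of study is the matrix of such entries and its spectral behaviour in high dimensions.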

  7. Clustering redshift distributions for the Dark Energy Survey

    NASA Astrophysics Data System (ADS)

    Helsby, Jennifer

    Accurate determination of photometric redshifts and their errors is critical for large scale structure and weak lensing studies for constraining cosmology from deep, wide imaging surveys. Current photometric redshift methods suffer from bias and scatter due to incomplete training sets. Exploiting the clustering between a sample of galaxies for which we have spectroscopic redshifts and a sample of galaxies for which the redshifts are unknown can allow us to reconstruct the true redshift distribution of the unknown sample. Here we use this method in both simulations and early data from the Dark Energy Survey (DES) to determine the true redshift distributions of galaxies in photometric redshift bins. We find that cross-correlating with the spectroscopic samples currently used for training provides a useful test of photometric redshifts and provides reliable estimates of the true redshift distribution in a photometric redshift bin. We discuss the use of the cross-correlation method in validating template- or learning-based approaches to redshift estimation and its future use in Stage IV surveys.

  8. Graphical correlation of gaging-station records

    USGS Publications Warehouse

    Searcy, James K.

    1960-01-01

    A gaging-station record is a sample of the rate of flow of a stream at a given site. This sample can be used to estimate the magnitude and distribution of future flows if the record is long enough to be representative of the long-term flow of the stream. The reliability of a short-term record for estimating future flow characteristics can be improved through correlation with a long-term record. Correlation can be either numerical or graphical, but graphical correlation of gaging-station records has several advantages. The graphical correlation method is described in a step-by-step procedure with an illustrative problem of simple correlation, illustrative problems of three examples of multiple correlation--removing seasonal effect--and two examples of correlation of one record with two other records. Except in the problem on removal of seasonal effect, the same group of stations is used in the illustrative problems. The purpose of the problems is to illustrate the method--not to show the improvement that can result from multiple correlation as compared with simple correlation. Hydrologic factors determine whether a usable relation exists between gaging-station records. Statistics is only a tool for evaluating and using an existing relation, and the investigator must be guided by a knowledge of hydrology.

  9. Blinded versus unblinded estimation of a correlation coefficient to inform interim design adaptations.

    PubMed

    Kunz, Cornelia U; Stallard, Nigel; Parsons, Nicholas; Todd, Susan; Friede, Tim

    2017-03-01

    Regulatory authorities require that the sample size of a confirmatory trial is calculated prior to the start of the trial. However, the sample size quite often depends on parameters that might not be known in advance of the study. Misspecification of these parameters can lead to under- or overestimation of the sample size. Both situations are unfavourable as the first one decreases the power and the latter one leads to a waste of resources. Hence, designs have been suggested that allow a re-assessment of the sample size in an ongoing trial. These methods usually focus on estimating the variance. However, for some methods the performance depends not only on the variance but also on the correlation between measurements. We develop and compare different methods for blinded estimation of the correlation coefficient that are less likely to introduce operational bias when the blinding is maintained. Their performance with respect to bias and standard error is compared to the unblinded estimator. We simulated two different settings: one assuming that all group means are the same and one assuming that different groups have different means. Simulation results show that the naïve (one-sample) estimator is only slightly biased and has a standard error comparable to that of the unblinded estimator. However, if the group means differ, other estimators have better performance depending on the sample size per group and the number of groups. © 2016 The Authors. Biometrical Journal Published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
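The "naïve (one-sample)" estimator mentioned in the results simply pools the data and ignores group labels. A toy sketch (the unblinded comparator below is a plain average of per-group correlations, a simplification of the estimators studied in the paper; the data are invented):

```python
def pearson(pairs):
    """Plain Pearson correlation of (x, y) pairs."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    sxy = sum((x - mx) * (y - my) for x, y in pairs)
    sxx = sum((x - mx) ** 2 for x, _ in pairs)
    syy = sum((y - my) ** 2 for _, y in pairs)
    return sxy / (sxx * syy) ** 0.5

# A blinded analyst sees only the pooled data, not the group labels.
group_a = [(1.0, 1.1), (2.0, 2.3), (3.0, 2.9), (4.0, 4.2)]
group_b = [(1.2, 1.0), (2.1, 2.2), (3.3, 3.5), (4.1, 3.8)]
blinded = pearson(group_a + group_b)                     # naive one-sample
unblinded = (pearson(group_a) + pearson(group_b)) / 2.0  # needs the labels
print(blinded, unblinded)
```

As the simulations report, the pooled estimate is close to the unblinded one when group means coincide; differing group means inflate it, which motivates the alternative blinded estimators.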

  10. Blinded versus unblinded estimation of a correlation coefficient to inform interim design adaptations

    PubMed Central

    Stallard, Nigel; Parsons, Nicholas; Todd, Susan; Friede, Tim

    2016-01-01

    Regulatory authorities require that the sample size of a confirmatory trial is calculated prior to the start of the trial. However, the sample size quite often depends on parameters that might not be known in advance of the study. Misspecification of these parameters can lead to under‐ or overestimation of the sample size. Both situations are unfavourable as the first one decreases the power and the latter one leads to a waste of resources. Hence, designs have been suggested that allow a re‐assessment of the sample size in an ongoing trial. These methods usually focus on estimating the variance. However, for some methods the performance depends not only on the variance but also on the correlation between measurements. We develop and compare different methods for blinded estimation of the correlation coefficient that are less likely to introduce operational bias when the blinding is maintained. Their performance with respect to bias and standard error is compared to the unblinded estimator. We simulated two different settings: one assuming that all group means are the same and one assuming that different groups have different means. Simulation results show that the naïve (one‐sample) estimator is only slightly biased and has a standard error comparable to that of the unblinded estimator. However, if the group means differ, other estimators have better performance depending on the sample size per group and the number of groups. PMID:27886393

  11. Confidence intervals for correlations when data are not normal.

    PubMed

    Bishara, Anthony J; Hittner, James B

    2017-02-01

    With nonnormal data, the typical confidence interval of the correlation (Fisher z') may be inaccurate. The literature has been unclear as to which of several alternative methods should be used instead, and how extreme a violation of normality is needed to justify an alternative. Through Monte Carlo simulation, 11 confidence interval methods were compared, including Fisher z', two Spearman rank-order methods, the Box-Cox transformation, rank-based inverse normal (RIN) transformation, and various bootstrap methods. Nonnormality often distorted the Fisher z' confidence interval; for example, leading to a 95% confidence interval that had actual coverage as low as 68%. Increasing the sample size sometimes worsened this problem. Inaccurate Fisher z' intervals could be predicted by a sample kurtosis of at least 2, an absolute sample skewness of at least 1, or significant violations of normality hypothesis tests. Only the Spearman rank-order and RIN transformation methods were universally robust to nonnormality. Among the bootstrap methods, an observed imposed bootstrap came closest to accurate coverage, though it often resulted in an overly long interval. The results suggest that sample nonnormality can justify avoidance of the Fisher z' interval in favor of a more robust alternative. R code for the relevant methods is provided in supplementary materials.
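    The Fisher z' interval and a rank-based (Spearman) variant mentioned above can be sketched as follows; the data are simulated and the interval formula is the standard one, not code from the paper's supplementary materials:

```python
import numpy as np
from scipy import stats

def fisher_ci(r, n, conf=0.95):
    # Classical Fisher z' interval: transform the correlation to
    # z = arctanh(r), add a normal error band, back-transform with tanh.
    z = np.arctanh(r)
    se = 1.0 / np.sqrt(n - 3)
    zcrit = stats.norm.ppf(0.5 + conf / 2)
    return np.tanh(z - zcrit * se), np.tanh(z + zcrit * se)

# Skewed, nonnormal data of the kind the abstract warns about.
rng = np.random.default_rng(1)
x = rng.lognormal(size=50)
y = x + rng.lognormal(size=50)

r_pearson = stats.pearsonr(x, y)[0]
r_spearman = stats.spearmanr(x, y)[0]   # rank-based alternative

lo, hi = fisher_ci(r_pearson, len(x))
lo_s, hi_s = fisher_ci(r_spearman, len(x))
```

    Applying the z' interval to the Spearman coefficient is only one of several published variants (some adjust the standard error), so this is a sketch of the idea rather than the paper's exact procedure.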

  12. HydroCrowd: a citizen science snapshot to assess the spatial control of nitrogen solutes in surface waters

    PubMed Central

    Breuer, Lutz; Hiery, Noreen; Kraft, Philipp; Bach, Martin; Aubert, Alice H.; Frede, Hans-Georg

    2015-01-01

    We organized a crowdsourcing experiment in the form of a snapshot sampling campaign to assess the spatial distribution of nitrogen solutes, namely, nitrate, ammonium and dissolved organic nitrogen (DON), in German surface waters. In particular, we investigated (i) whether crowdsourcing is a reasonable sampling method in hydrology and (ii) what the effects of population density, soil humus content and arable land were on actual nitrogen solute concentrations and surface water quality. The statistical analyses revealed a significant correlation between nitrate and arable land (0.46), as well as soil humus content (0.37) but a weak correlation with population density (0.12). DON correlations were weak but significant with humus content (0.14) and arable land (0.13). The mean contribution of DON to total dissolved nitrogen was 22%. Samples were classified as water quality class II or above, following the European Water Framework Directive for nitrate and ammonium (53% and 82%, respectively). Crowdsourcing turned out to be a useful method to assess the spatial distribution of stream solutes, as considerable amounts of samples were collected with comparatively little effort. PMID:26561200

  13. Language Sampling for Preschoolers With Severe Speech Impairments

    PubMed Central

    Ragsdale, Jamie; Bustos, Aimee

    2016-01-01

    Purpose The purposes of this investigation were to determine if measures such as mean length of utterance (MLU) and percentage of comprehensible words can be derived reliably from language samples of children with severe speech impairments and if such measures correlate with tools that measure constructs assumed to be related. Method Language samples of 15 preschoolers with severe speech impairments (but receptive language within normal limits) were transcribed independently by 2 transcribers. Nonparametric statistics were used to determine which measures, if any, could be transcribed reliably and to determine if correlations existed between language sample measures and standardized measures of speech, language, and cognition. Results Reliable measures were extracted from the majority of the language samples, including MLU in words, mean number of syllables per utterance, and percentage of comprehensible words. Language sample comprehensibility measures were correlated with a single-word comprehensibility task. Also, language sample MLUs and mean length of the participants' 3 longest sentences from the MacArthur–Bates Communicative Development Inventory (Fenson et al., 2006) were correlated. Conclusion Language sampling, given certain modifications, may be used for some 3- to 5-year-old children with normal receptive language who have severe speech impairments to provide reliable expressive language and comprehensibility information. PMID:27552110
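    As a toy illustration of one reported measure: MLU in words is simply the total word count divided by the utterance count. The utterances below are invented:

```python
# Hypothetical transcribed utterances from a language sample.
utterances = [
    "want more juice",
    "doggie go",
    "I see a big truck",
    "no",
]

# MLU in words = total words / number of utterances = (3+2+5+1)/4.
total_words = sum(len(u.split()) for u in utterances)
mlu_words = total_words / len(utterances)
```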

  14. Publication Bias in Psychology: A Diagnosis Based on the Correlation between Effect Size and Sample Size

    PubMed Central

    Kühberger, Anton; Fritz, Astrid; Scherndl, Thomas

    2014-01-01

    Background The p value obtained from a significance test provides no information about the magnitude or importance of the underlying phenomenon. Therefore, additional reporting of effect size is often recommended. Effect sizes are theoretically independent from sample size. Yet this may not hold true empirically: non-independence could indicate publication bias. Methods We investigate whether effect size is independent from sample size in psychological research. We randomly sampled 1,000 psychological articles from all areas of psychological research. We extracted p values, effect sizes, and sample sizes of all empirical papers, and calculated the correlation between effect size and sample size, and investigated the distribution of p values. Results We found a negative correlation of r = −.45 [95% CI: −.53; −.35] between effect size and sample size. In addition, we found an inordinately high number of p values just passing the boundary of significance. Additional data showed that neither implicit nor explicit power analysis could account for this pattern of findings. Conclusion The negative correlation between effect size and sample size, and the biased distribution of p values, indicate pervasive publication bias in the entire field of psychology. PMID:25192357
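    The diagnostic above can be reproduced in miniature: simulate many studies sharing one true effect, "publish" only the significant ones, and the published effect sizes correlate negatively with sample size. All parameters below (true d = 0.3, the sample-size range, the number of studies) are arbitrary choices for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

true_d = 0.3                      # one common underlying effect size
published_d, published_n = [], []
for _ in range(2000):
    n = int(rng.integers(10, 200))        # per-group sample size
    ctrl = rng.normal(0.0, 1.0, n)
    treat = rng.normal(true_d, 1.0, n)
    res = stats.ttest_ind(treat, ctrl)
    if res.pvalue < 0.05:                 # the publication filter
        pooled_sd = np.sqrt((ctrl.var(ddof=1) + treat.var(ddof=1)) / 2)
        published_d.append((treat.mean() - ctrl.mean()) / pooled_sd)
        published_n.append(n)

# Small published studies carry inflated effects, so r comes out negative.
r = np.corrcoef(published_d, published_n)[0, 1]
```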

  15. Application of the digital image correlation method in the study of cohesive coarse soil deformations

    NASA Astrophysics Data System (ADS)

    Kogut, Janusz P.; Tekieli, Marcin

    2018-04-01

    Non-contact video measurement methods are used to extend the capabilities of standard measurement systems based on strain gauges or accelerometers. In most cases, they are able to provide more accurate information about the material or structure being tested than traditional sensors, while maintaining high resolution and measurement stability. With the use of optical methods, it is possible to generate a full field of displacement on the surface of the test sample. Displacement is the basic (primary) quantity determined using optical methods; derivative quantities, such as the sample deformation (strain), can then be computed from it. This paper presents the application of a non-contact optical method to investigate the deformation of coarse soil material. For this type of soil, it is particularly difficult to obtain basic strength parameters. The use of a non-contact optical method, followed by a digital image correlation (DIC) study of the sample images obtained during the tests, effectively completes the description of the behaviour of this type of material.
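    A minimal sketch of the matching step underlying DIC: track a subset of a speckle pattern by maximising the zero-normalised cross-correlation over integer displacements. The images are synthetic, and the brute-force search is a simplification of real DIC, which refines the match to subpixel accuracy:

```python
import numpy as np

def ncc(template, window):
    # Zero-normalised cross-correlation: the similarity score DIC
    # maximises when tracking a speckle subset between images.
    t = template - template.mean()
    w = window - window.mean()
    return (t * w).sum() / np.sqrt((t ** 2).sum() * (w ** 2).sum())

rng = np.random.default_rng(8)
reference = rng.normal(size=(40, 40))      # synthetic speckle image
subset = reference[10:20, 10:20]

# "Deformed" image: rigid shift of the whole pattern by (2, 3) pixels.
deformed = np.roll(reference, shift=(2, 3), axis=(0, 1))

# Brute-force integer-displacement search for the best match.
candidates = [(dy, dx) for dy in range(-5, 6) for dx in range(-5, 6)]
best = max(candidates,
           key=lambda s: ncc(subset,
                             deformed[10 + s[0]:20 + s[0],
                                      10 + s[1]:20 + s[1]]))
```

    Repeating this for a grid of subsets yields the full displacement field from which strains are differentiated.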

  16. Enhancing the sensitivity of fluorescence correlation spectroscopy by using time-correlated single photon counting.

    PubMed

    Lamb, D C; Müller, B K; Bräuchle, C

    2005-10-01

    Fluorescence correlation spectroscopy (FCS) and fluorescence cross-correlation spectroscopy (FCCS) are methods that extract information about a sample from the influence of thermodynamic equilibrium fluctuations on the fluorescence intensity. This method allows dynamic information to be obtained from steady-state equilibrium measurements, and its popularity has dramatically increased in the last 10 years due to the development of high-sensitivity detectors and its combination with confocal microscopy. Using time-correlated single-photon counting (TCSPC) detection and pulsed excitation, information over the duration of the excited state can be extracted and incorporated in the analysis. In this short review, we discuss new methodologies that have recently emerged which incorporate fluorescence lifetime information or TCSPC data into the FCS and FCCS analysis. Time-gated FCS selects which photons are incorporated in the analysis based on their arrival time after excitation. This allows for accurate FCS measurements in the presence of fluorescent background, determination of sample homogeneity, and the ability to distinguish between static and dynamic heterogeneities. A similar method, time-resolved FCS, can be used to resolve the individual correlation functions from multiple fluorophores through their different fluorescence lifetimes. Pulsed interleaved excitation (PIE) encodes the excitation source into the TCSPC data. PIE can be used to perform dual-channel FCCS with a single detector and allows elimination of spectral cross-talk with dual-channel detection. For samples that undergo fluorescence resonance energy transfer (FRET), quantitative FCCS measurements can be performed in spite of the FRET, and the static FRET efficiency can be determined.
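    The central quantity in FCS, the normalised intensity autocorrelation, can be sketched directly. The trace below is synthetic, and the direct-summation estimator is a simplification of the multi-tau correlators used in practice:

```python
import numpy as np

def fcs_autocorrelation(intensity, max_lag):
    # Normalised fluctuation autocorrelation
    # G(tau) = <dI(t) dI(t+tau)> / <I>^2,
    # the curve FCS fits to diffusion models.
    i = np.asarray(intensity, dtype=float)
    mean = i.mean()
    d = i - mean
    g = np.empty(max_lag)
    for tau in range(1, max_lag + 1):
        g[tau - 1] = np.mean(d[:-tau] * d[tau:]) / mean ** 2
    return g

# Synthetic trace: intensity levels correlated over ~20 samples, plus
# uncorrelated detector-like jitter.
rng = np.random.default_rng(3)
slow = np.repeat(rng.poisson(100, 500), 20)
trace = slow + rng.normal(0.0, 1.0, slow.size)
g = fcs_autocorrelation(trace, 60)
```

    Time-gated and time-resolved variants change only which photons (or which weighted combinations) enter this estimator, not the estimator itself.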

  17. WGCNA: an R package for weighted correlation network analysis.

    PubMed

    Langfelder, Peter; Horvath, Steve

    2008-12-29

    Correlation networks are increasingly being used in bioinformatics applications. For example, weighted gene co-expression network analysis is a systems biology method for describing the correlation patterns among genes across microarray samples. Weighted correlation network analysis (WGCNA) can be used for finding clusters (modules) of highly correlated genes, for summarizing such clusters using the module eigengene or an intramodular hub gene, for relating modules to one another and to external sample traits (using eigengene network methodology), and for calculating module membership measures. Correlation networks facilitate network based gene screening methods that can be used to identify candidate biomarkers or therapeutic targets. These methods have been successfully applied in various biological contexts, e.g. cancer, mouse genetics, yeast genetics, and analysis of brain imaging data. While parts of the correlation network methodology have been described in separate publications, there is a need to provide a user-friendly, comprehensive, and consistent software implementation and an accompanying tutorial. The WGCNA R software package is a comprehensive collection of R functions for performing various aspects of weighted correlation network analysis. The package includes functions for network construction, module detection, gene selection, calculations of topological properties, data simulation, visualization, and interfacing with external software. Along with the R package we also present R software tutorials. While the methods development was motivated by gene expression data, the underlying data mining approach can be applied to a variety of different settings. The WGCNA package provides R functions for weighted correlation network analysis, e.g. co-expression network analysis of gene expression data. 
The R package along with its source code and additional material are freely available at http://www.genetics.ucla.edu/labs/horvath/CoexpressionNetwork/Rpackages/WGCNA.
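    WGCNA itself is an R package, but its central constructions translate directly. The sketch below builds a soft-thresholded adjacency (|cor|^β, with an arbitrary β = 6) and a module eigengene from a toy expression matrix; it is an illustration of the ideas, not a port of the package:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy expression matrix: 30 samples x 12 genes, two co-expressed modules
# built around two independent driver profiles.
base1, base2 = rng.normal(size=(2, 30))
expr = np.column_stack(
    [base1 + 0.3 * rng.normal(size=30) for _ in range(6)]
    + [base2 + 0.3 * rng.normal(size=30) for _ in range(6)]
)

# Weighted adjacency as in WGCNA: soft-threshold the absolute correlation.
beta = 6
corr = np.corrcoef(expr, rowvar=False)
adjacency = np.abs(corr) ** beta

# Module eigengene: first principal component of one module's expression.
module = expr[:, :6]
centered = module - module.mean(axis=0)
u, s, vt = np.linalg.svd(centered, full_matrices=False)
eigengene = u[:, 0] * s[0]
```

    The eigengene can then be correlated with external sample traits, which is the "eigengene network" step described in the abstract.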

  18. WGCNA: an R package for weighted correlation network analysis

    PubMed Central

    Langfelder, Peter; Horvath, Steve

    2008-01-01

    Background Correlation networks are increasingly being used in bioinformatics applications. For example, weighted gene co-expression network analysis is a systems biology method for describing the correlation patterns among genes across microarray samples. Weighted correlation network analysis (WGCNA) can be used for finding clusters (modules) of highly correlated genes, for summarizing such clusters using the module eigengene or an intramodular hub gene, for relating modules to one another and to external sample traits (using eigengene network methodology), and for calculating module membership measures. Correlation networks facilitate network based gene screening methods that can be used to identify candidate biomarkers or therapeutic targets. These methods have been successfully applied in various biological contexts, e.g. cancer, mouse genetics, yeast genetics, and analysis of brain imaging data. While parts of the correlation network methodology have been described in separate publications, there is a need to provide a user-friendly, comprehensive, and consistent software implementation and an accompanying tutorial. Results The WGCNA R software package is a comprehensive collection of R functions for performing various aspects of weighted correlation network analysis. The package includes functions for network construction, module detection, gene selection, calculations of topological properties, data simulation, visualization, and interfacing with external software. Along with the R package we also present R software tutorials. While the methods development was motivated by gene expression data, the underlying data mining approach can be applied to a variety of different settings. Conclusion The WGCNA package provides R functions for weighted correlation network analysis, e.g. co-expression network analysis of gene expression data. The R package along with its source code and additional material are freely available at . PMID:19114008

  19. Comparison of macronutrient contents in human milk measured using mid-infrared human milk analyser in a field study vs. chemical reference methods.

    PubMed

    Zhu, Mei; Yang, Zhenyu; Ren, Yiping; Duan, Yifan; Gao, Huiyu; Liu, Biao; Ye, Wenhui; Wang, Jie; Yin, Shian

    2017-01-01

    Macronutrient contents in human milk are the common basis for estimating these nutrient requirements for both infants and lactating women. A mid-infrared human milk analyser (HMA, Miris, Sweden) was recently developed for determining macronutrient levels. The purpose of the study is to compare the accuracy and precision of the HMA method, applied to fresh milk samples in the field, with chemical reference methods applied to frozen samples in the lab. Full breast milk was collected using electric pumps, and fresh milk was analyzed in the field studies using HMA. All human milk samples were thawed and analyzed with chemical reference methods in the lab. The protein, fat and total solid levels were significantly correlated between the two methods, with correlation coefficients of 0.88, 0.93 and 0.78, respectively (p < 0.001). The mean protein content was significantly lower and the mean fat level significantly greater when measured using the HMA method (1.0 g/100 mL vs 1.2 g/100 mL and 3.7 g/100 mL vs 3.2 g/100 mL, respectively, p < 0.001). Thus, linear recalibration could be used to improve mean estimation for both protein and fat. There was no significant correlation for lactose between the two methods (p > 0.05). There was no statistically significant difference in the mean total solid concentration (12.2 g/100 mL vs 12.3 g/100 mL, p > 0.05). Overall, HMA might be used to analyze macronutrients in fresh human milk with acceptable accuracy and precision after recalibrating fat and protein levels of field samples. © 2016 John Wiley & Sons Ltd.
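    The linear recalibration suggested above amounts to regressing the reference values on the analyser readings and correcting future readings with the fitted line; the paired fat values below are invented for illustration:

```python
import numpy as np

# Hypothetical paired fat results (g/100 mL): chemical reference vs HMA,
# with the HMA reading systematically high, as reported for fat.
reference = np.array([2.8, 3.1, 3.5, 4.0, 4.4, 5.0])
hma = np.array([3.2, 3.6, 4.1, 4.5, 5.0, 5.7])

# Linear recalibration: regress reference on HMA readings.
slope, intercept = np.polyfit(hma, reference, 1)

def recalibrate(hma_value):
    # Correct a new HMA reading onto the reference-method scale.
    return intercept + slope * hma_value

corrected = recalibrate(hma)
```

    By construction of least squares, the mean of the corrected values matches the mean of the reference values, which is exactly the "improve mean estimation" goal stated above.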

  20. Quantitative validation of a nonlinear histology-MRI coregistration method using Generalized Q-sampling Imaging in complex human cortical white matter

    PubMed Central

    Gangolli, Mihika; Holleran, Laurena; Kim, Joong Hee; Stein, Thor D.; Alvarez, Victor; McKee, Ann C.; Brody, David L.

    2017-01-01

    Advanced diffusion MRI methods have recently been proposed for detection of pathologies such as traumatic axonal injury and chronic traumatic encephalopathy which commonly affect complex cortical brain regions. However, radiological-pathological correlations in human brain tissue that detail the relationship between the multi-component diffusion signal and underlying pathology are lacking. We present a nonlinear voxel based two dimensional coregistration method that is useful for matching diffusion signals to quantitative metrics of high resolution histological images. When validated in ex vivo human cortical tissue at a 250 × 250 × 500 micron spatial resolution, the method proved robust in correlations between generalized q-sampling imaging and histologically based white matter fiber orientations, with r = 0.94 for the primary fiber direction and r = 0.88 for secondary fiber direction in each voxel. Importantly, however, the correlation was substantially worse with reduced spatial resolution or with fiber orientations derived using a diffusion tensor model. Furthermore, we have detailed a quantitative histological metric of white matter fiber integrity termed power coherence capable of distinguishing between architecturally complex but intact white matter from disrupted white matter regions. These methods may allow for more sensitive and specific radiological-pathological correlations of neurodegenerative diseases affecting complex gray and white matter. PMID:28365421

  1. Can the Roche hemolysis index be used for automated determination of cell-free hemoglobin? A comparison to photometric assays.

    PubMed

    Petrova, Darinka Todorova; Cocisiu, Gabriela Ariadna; Eberle, Christoph; Rhode, Karl-Heinz; Brandhorst, Gunnar; Walson, Philip D; Oellerich, Michael

    2013-09-01

    The aim of this study was to develop a novel method for automated quantification of cell-free hemoglobin (fHb) based on the hemolysis index (HI; Roche Diagnostics). The novel fHb method based on the HI was correlated with fHb measured using the triple-wavelength methods of both Harboe [fHb, g/L = (0.915 * HI + 2.634)/100] and Fairbanks et al. [fHb, g/L = (0.917 * HI + 2.131)/100]. fHb concentrations were estimated from the HI using the Roche Modular automated platform in self-made and commercially available quality controls, as well as samples from a proficiency testing scheme (INSTAND). The fHb results from the automated Roche HI method were then compared to results obtained using the traditional spectrophotometric assays for one hundred plasma samples with varying degrees of hemolysis, lipemia and/or bilirubinemia. The novel method using automated HI quantification on the Roche Modular clinical chemistry platform correlated well with results using the classical methods in the 100 patient samples (Harboe: r = 0.9284; Fairbanks et al.: r = 0.9689), and recovery was good for self-made controls. However, commercially available quality controls showed poor recovery due to an unidentified matrix problem. The novel method produced reliable determination of fHb in samples without interferences. However, poor recovery using commercially available fHb quality control samples currently greatly limits its usefulness. © 2013.
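    The two HI-to-fHb conversions quoted above can be written out directly (fHb in g/L, HI as reported by the Roche platform):

```python
def fhb_harboe(hi):
    # Harboe-based conversion from the study: fHb (g/L) from the HI.
    return (0.915 * hi + 2.634) / 100

def fhb_fairbanks(hi):
    # Fairbanks-based conversion from the study: fHb (g/L) from the HI.
    return (0.917 * hi + 2.131) / 100

# Example: both conversions at a hemolysis index of 50.
harboe_50 = fhb_harboe(50)
fairbanks_50 = fhb_fairbanks(50)
```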

  2. Phase-resolved acoustic radiation force optical coherence elastography

    NASA Astrophysics Data System (ADS)

    Qi, Wenjuan; Chen, Ruimin; Chou, Lidek; Liu, Gangjun; Zhang, Jun; Zhou, Qifa; Chen, Zhongping

    2012-11-01

    Many diseases involve changes in the biomechanical properties of tissue, and there is a close correlation between tissue elasticity and pathology. We report on the development of a phase-resolved acoustic radiation force optical coherence elastography method (ARF-OCE) to evaluate the elastic properties of tissue. This method utilizes chirped acoustic radiation force to produce excitation along the sample's axial direction, and it uses phase-resolved optical coherence tomography (OCT) to measure the vibration of the sample. Under 500-Hz square wave modulated ARF signal excitation, phase change maps of tissue mimicking phantoms are generated by the ARF-OCE method, and the resulting Young's modulus ratio is correlated with a standard compression test. The results verify that this technique could efficiently measure sample elastic properties accurately and quantitatively. Furthermore, a three-dimensional ARF-OCE image of the human atherosclerotic coronary artery is obtained. The result indicates that our dynamic phase-resolved ARF-OCE method can delineate tissues with different mechanical properties.

  3. [Determination of hydroxyproline in liver tissue by hydrophilic interaction chromatography-quadrupole/electrostatic field orbitrap high resolution mass spectrometry].

    PubMed

    Liu, Wei; Qi, Shenglan; Xu, Ying; Xiao, Zhun; Fu, Yadong; Chen, Jiamei; Yang, Tao; Liu, Ping

    2017-12-08

    A method for the determination of hydroxyproline (Hyp) in liver tissue of mice by hydrophilic interaction chromatography-quadrupole/electrostatic field orbitrap high resolution mass spectrometry (HILIC-HRMS) was developed. The liver tissue samples of normal mice and of mice with liver fibrosis induced by carbon tetrachloride were hydrolyzed with concentrated hydrochloric acid. After filtration and dilution, the diluent was separated on a Hypersil GOLD HILIC column (100 mm×2.1 mm, 3 μm). Water-acetonitrile (28:72, v/v) was used as the mobile phase with isocratic elution. Finally, the target analytes were detected in positive mode by HRMS equipped with an electrospray ionization source. The linear range of hydroxyproline was from 0.78 to 100.00 μg/L with a correlation coefficient (R²) of 0.9983. The limit of quantification was 0.78 μg/L. For spiked samples, the recoveries were in the range of 97.4%-100.9% with relative standard deviations (RSDs) between 1.4% and 2.0%. In addition, the measurement results of this method were compared with those of the chloramine T method. The linear correlation between the two methods was very good, with a Pearson correlation coefficient of 0.927, and this method had a simpler operation procedure and higher accuracy than the chloramine T method. This method can be used for the quick determination of hydroxyproline in liver tissue samples.
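    The spiked-sample validation quantities reported above, recovery and RSD, are computed as below; the concentrations are invented for illustration:

```python
import numpy as np

# Hypothetical spiked-sample check: amount added, the background level
# measured before spiking, and replicate results after spiking (ug/L).
spiked_amount = 10.0
background = 2.0
measured = np.array([11.8, 11.9, 12.1])

# Recovery (%) = (found - background) / added * 100, per replicate.
recoveries = (measured - background) / spiked_amount * 100
mean_recovery = recoveries.mean()

# RSD (%) = sample standard deviation / mean * 100 of the replicates.
rsd = measured.std(ddof=1) / measured.mean() * 100
```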

  4. Support vector regression to predict porosity and permeability: Effect of sample size

    NASA Astrophysics Data System (ADS)

    Al-Anazi, A. F.; Gates, I. D.

    2012-02-01

    Porosity and permeability are key petrophysical parameters obtained from laboratory core analysis. Cores, obtained from drilled wells, are often few in number for most oil and gas fields. Porosity and permeability correlations based on conventional techniques such as linear regression or neural networks trained with core and geophysical logs suffer poor generalization to wells with only geophysical logs. The generalization problem of correlation models often becomes pronounced when the training sample size is small. This is attributed to the underlying assumption that conventional techniques employing the empirical risk minimization (ERM) inductive principle converge asymptotically to the true risk values as the number of samples increases. In small sample size estimation problems, the available training samples must span the complexity of the parameter space so that the model is able both to match the available training samples reasonably well and to generalize to new data. This is achieved using the structural risk minimization (SRM) inductive principle by matching the capability of the model to the available training data. One method that uses SRM is the support vector regression (SVR) network. In this research, the capability of SVR to predict porosity and permeability in a heterogeneous sandstone reservoir under the effect of small sample size is evaluated. Particularly, the impact of Vapnik's ɛ-insensitivity loss function and least-modulus loss function on generalization performance was empirically investigated. The results are compared to the multilayer perceptron (MLP) neural network, a widely used regression method, which operates under the ERM principle. The mean square error and correlation coefficients were used to measure the quality of predictions. The results demonstrate that SVR yields consistently better predictions of the porosity and permeability with small sample size than the MLP method.
Also, the performance of SVR depends on both kernel function type and loss functions used.
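    A sketch of SVR with Vapnik's ε-insensitive loss on a deliberately small training set, assuming scikit-learn is available; the synthetic porosity relation and all hyperparameters below are arbitrary illustrations, not the paper's setup:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(5)

# Hypothetical small "cored well" data set: porosity (fraction) as a
# roughly linear function of two normalised log responses, 15 samples.
X_train = rng.normal(size=(15, 2))
y_train = (0.15 + 0.05 * X_train[:, 0] - 0.03 * X_train[:, 1]
           + 0.005 * rng.normal(size=15))
X_test = rng.normal(size=(100, 2))
y_test = 0.15 + 0.05 * X_test[:, 0] - 0.03 * X_test[:, 1]

# SVR with the epsilon-insensitive loss: residuals inside the tube of
# half-width epsilon are ignored, which limits model capacity (SRM).
svr = SVR(kernel="linear", C=10.0, epsilon=0.005).fit(X_train, y_train)
mse = np.mean((svr.predict(X_test) - y_test) ** 2)
```

    In practice the kernel, C, and ε would be tuned by cross-validation, which the abstract's comparison of loss functions implicitly depends on.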

  5. Is automated kinetic measurement superior to end-point for advanced oxidation protein product?

    PubMed

    Oguz, Osman; Inal, Berrin Bercik; Emre, Turker; Ozcan, Oguzhan; Altunoglu, Esma; Oguz, Gokce; Topkaya, Cigdem; Guvenen, Guvenc

    2014-01-01

    Advanced oxidation protein product (AOPP) was first described as an oxidative protein marker in chronic uremic patients and measured with a semi-automatic end-point method. Subsequently, the kinetic method was introduced for AOPP assay. We aimed to compare these two methods by adapting them to a chemistry analyzer and to investigate the correlation between AOPP and fibrinogen, the key molecule responsible for human plasma AOPP reactivity, microalbumin, and HbA1c in patients with type II diabetes mellitus (DM II). The effects of EDTA and citrate-anticoagulated tubes on these two methods were incorporated into the study. This study included 93 DM II patients (36 women, 57 men) with HbA1c levels ≥ 7%, who were admitted to the diabetes and nephrology clinics. The samples were collected in EDTA and in citrate-anticoagulated tubes. Both methods were adapted to a chemistry analyzer and the samples were studied in parallel. In both types of samples, we found a moderate correlation between the kinetic and the end-point methods (r = 0.611 for citrate-anticoagulated, r = 0.636 for EDTA-anticoagulated, p = 0.0001 for both). We found a moderate correlation between fibrinogen-AOPP and microalbumin-AOPP levels only in the kinetic method (r = 0.644 and 0.520 for citrate-anticoagulated; r = 0.581 and 0.490 for EDTA-anticoagulated, p = 0.0001). We conclude that adaptation of the end-point method to automation is more difficult and it has a higher between-run CV%, while application of the kinetic method is easier and it may be used in oxidative stress studies.

  6. Analyzing the Relationship between Positive Psychological Capital and Organizational Commitment of the Teachers

    ERIC Educational Resources Information Center

    Yalcin, Sinan

    2016-01-01

    In this study, the aim was to determine the relationship between teachers' positive psychological capital levels and organisational commitment. The study was conducted as a correlational survey, one of the quantitative methods. The sample group consists of 244 teachers selected by using the random sampling method among 1270 teachers working in…

  7. On the use of two-time correlation functions for X-ray photon correlation spectroscopy data analysis.

    PubMed

    Bikondoa, Oier

    2017-04-01

    Multi-time correlation functions are especially well suited to study non-equilibrium processes. In particular, two-time correlation functions are widely used in X-ray photon correlation experiments on systems out of equilibrium. One-time correlations are often extracted from two-time correlation functions at different sample ages. However, this way of analysing two-time correlation functions is not unique. Here, two methods to analyse two-time correlation functions are scrutinized, and three illustrative examples are used to discuss the implications for the evaluation of the correlation times and functional shape of the correlations.
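    A minimal sketch of a two-time correlation matrix and the one-time cut at a given sample age; the pixel-averaged estimator and the random-walk test signal are illustrative simplifications of real XPCS analysis:

```python
import numpy as np

def two_time_correlation(frames):
    # C(t1, t2) = <I(t1) I(t2)> / (<I(t1)> <I(t2)>), averaged over
    # pixels, for a stack of detector frames. Equilibrium dynamics give
    # a band of constant width along the t1 = t2 diagonal; ageing shows
    # up as a band that broadens with sample age.
    f = frames.reshape(frames.shape[0], -1).astype(float)
    num = (f @ f.T) / f.shape[1]
    means = f.mean(axis=1)
    return num / np.outer(means, means)

# Synthetic non-equilibrium series: each pixel follows a random walk, so
# frames close in time remain more similar than distant ones.
rng = np.random.default_rng(6)
frames = np.cumsum(rng.normal(1.0, 0.1, size=(50, 256)), axis=0)

C = two_time_correlation(frames)

# One-time correlation at sample age t1 = 10: a cut starting on the
# diagonal. As the abstract notes, this extraction is not unique; cuts
# parallel versus perpendicular to the diagonal weight ages differently.
age = 10
one_time = C[age, age:]
```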

  8. Comparison between Thermal Desorption Tubes and Stainless Steel Canisters Used for Measuring Volatile Organic Compounds in Petrochemical Factories

    PubMed Central

    Chang, Cheng-Ping; Lin, Tser-Cheng; Lin, Yu-Wen; Hua, Yi-Chun; Chu, Wei-Ming; Lin, Tzu-Yu; Lin, Yi-Wen; Wu, Jyun-De

    2016-01-01

    Objective: The purpose of this study was to compare thermal desorption tubes and stainless steel canisters for measuring volatile organic compounds (VOCs) emitted from petrochemical factories. Methods: Twelve petrochemical factories in the Mailiao Industrial Complex were recruited for conducting the measurements of VOCs. Thermal desorption tubes and 6-l specially prepared stainless steel canisters were used to simultaneously perform active sampling of environmental air samples. The sampling time of the environmental air samples was set at 6 h, close to a full work shift of the workers. A total of 94 pairwise air samples were collected using the thermal desorption tubes and stainless steel canisters in these 12 factories in the petrochemical industrial complex. To maximize the number of comparative data points, all the measurements from all the factories at different sampling times were lumped together to perform a linear regression analysis for each selected VOC. The Pearson product–moment correlation coefficient was used to examine the correlation between the pairwise measurements of these two sampling methods. A paired t-test was also performed to examine whether the difference in the concentrations of each selected VOC measured by the two methods was statistically significant. Results: The correlation coefficients of seven compounds, including acetone, n-hexane, benzene, toluene, 1,2-dichloroethane, 1,3-butadiene, and styrene, were >0.80, indicating that the two sampling methods had high consistency for these VOCs. The paired t-tests for the measurements of n-hexane, benzene, m/p-xylene, o-xylene, 1,2-dichloroethane, and 1,3-butadiene showed statistically significant differences (P-value < 0.05). This indicated that the two sampling methods had various degrees of systematic errors.
    For these six chemicals, the systematic errors probably resulted from differences in the detection limits of the two sampling methods. Conclusions: The comparison between the concentrations of each of the 10 selected VOCs measured by the two sampling methods indicated that the thermal desorption tubes provided high-accuracy and high-precision measurements for acetone, benzene, and 1,3-butadiene. The accuracy and precision of using the thermal desorption tubes for measuring the VOCs can be further improved by new developments in sorbent materials, multi-sorbent designs, and thermal desorption instrumentation. More applications of thermal desorption tubes for measuring occupational and environmental hazardous agents can be anticipated. PMID:26585828
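    The two comparisons used above, Pearson correlation for consistency and a paired t-test for systematic difference, can be sketched on invented paired measurements:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Invented paired benzene results (ppb) on the same 30 air samples:
# the "canister" is given a 10% systematic bias plus measurement noise.
tube = rng.lognormal(mean=1.0, sigma=0.5, size=30)
canister = 1.1 * tube + rng.normal(0.0, 0.1, size=30)

r, _ = stats.pearsonr(tube, canister)     # consistency between methods
t_res = stats.ttest_rel(tube, canister)   # paired test for systematic bias
```

    As in the study, the two tests answer different questions: the methods here are highly consistent (large r) yet systematically different (small paired-test p-value).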

  9. Pre-analytical effects of blood sampling and handling in quantitative immunoassays for rheumatoid arthritis.

    PubMed

    Zhao, Xiaoyan; Qureshi, Ferhan; Eastman, P Scott; Manning, William C; Alexander, Claire; Robinson, William H; Hesterberg, Lyndal K

    2012-04-30

    Variability in pre-analytical blood sampling and handling can significantly impact results obtained in quantitative immunoassays. Understanding the impact of these variables is critical for accurate quantification and validation of biomarker measurements. In particular, in the design and execution of large clinical trials, even small differences in sample processing and handling can have dramatic effects on analytical reliability, results interpretation, trial management and outcome. The effects of two common blood sampling methods (serum vs. plasma) and two widely used serum handling methods (on the clot with ambient temperature shipping, "traditional", vs. centrifuged with cold chain shipping, "protocol") on protein and autoantibody concentrations were examined. Matched serum and plasma samples were collected from 32 rheumatoid arthritis (RA) patients representing a wide range of disease activity status. Additionally, a set of matched serum samples with two sample handling methods was collected. One tube was processed per the manufacturer's instructions and shipped overnight on cold packs (protocol). The matched tube, without prior centrifugation, was simultaneously shipped overnight at ambient temperature (traditional). Upon delivery, the traditional tube was centrifuged. All samples were subsequently aliquoted and frozen prior to analysis of protein and autoantibody biomarkers. Median correlation between paired serum and plasma across all autoantibody assays was 0.99 (0.98-1.00) with a median % difference of -3.3 (-7.5 to 6.0). In contrast, observed protein biomarker concentrations were significantly affected by sample type, with a median correlation of 0.99 (0.33-1.00) and a median % difference of -10 (-55 to 23). When the two serum collection/handling methods were compared, the median correlation between paired samples for autoantibodies was 0.99 (0.91-1.00) with a median difference of 4%.
In contrast, significant increases were observed in protein biomarker concentrations among certain biomarkers in samples processed with the 'traditional' method. Autoantibody quantification appears robust to both sample type (plasma vs. serum) and pre-analytical sample collection/handling methods (protocol vs. traditional). In contrast, for non-antibody protein biomarker concentrations, sample type had a significant impact; plasma samples generally exhibit decreased protein biomarker concentrations relative to serum. Similarly, sample handling significantly impacted the variability of protein biomarker concentrations. When biomarker concentrations are combined algorithmically into a single test score such as a multi-biomarker disease activity test for rheumatoid arthritis (MBDA), changes in protein biomarker concentrations may result in a bias of the score. These results illustrate the importance of characterizing pre-analytical methodology, sample type, sample processing and handling procedures for clinical testing in order to ensure test accuracy. Copyright © 2012 Elsevier B.V. All rights reserved.

  10. Validation of an automated fluorescein method for determining bromide in water

    USGS Publications Warehouse

    Fishman, M. J.; Schroder, L.J.; Friedman, L.C.

    1985-01-01

    Surface, atmospheric precipitation and deionized water samples were spiked with µg l-1 concentrations of bromide, and the solutions stored in polyethylene and polytetrafluoroethylene bottles. Bromide was determined periodically for 30 days. Automated fluorescein and ion chromatography methods were used to determine bromide in these prepared samples. Analysis of the data by the paired t-test indicates that the two methods are not significantly different at a probability of 95% for samples containing from 0.015 to 0.5 mg l-1 of bromide. The correlation coefficient for the same sets of paired data is 0.9987. Recovery data, except for the surface water samples to which 0.005 mg l-1 of bromide was added, range from 89 to 112%. There appears to be no loss of bromide from solution in either type of container.

  11. The role of sample preparation in interpretation of trace element concentration variability in moss bioindication studies

    USGS Publications Warehouse

    Migaszewski, Z.M.; Lamothe, P.J.; Crock, J.G.; Galuszka, A.; Dolegowska, S.

    2011-01-01

    Trace element concentrations in plant bioindicators are often determined to assess the quality of the environment. Instrumental methods used for trace element determination require digestion of samples. There are different methods of sample preparation for trace element analysis, and the selection of the best method should be fitted for the purpose of a study. Our hypothesis is that the method of sample preparation is important for interpretation of the results. Here we compare the results of 36 element determinations performed by ICP-MS on ashed and on acid-digested (HNO3, H2O2) samples of two moss species (Hylocomium splendens and Pleurozium schreberi) collected in Alaska and in south-central Poland. We found that dry ashing of the moss samples prior to analysis resulted in considerably lower detection limits of all the elements examined. We also show that this sample preparation technique facilitated the determination of interregional and interspecies differences in the chemistry of trace elements. Compared to the Polish mosses, the Alaskan mosses displayed more positive correlations of the major rock-forming elements with ash content, reflecting those elements' geogenic origin. Of the two moss species, P. schreberi from both Alaska and Poland was also highlighted by a larger number of positive element pair correlations. The cluster analysis suggests that the more uniform element distribution pattern of the Polish mosses primarily reflects regional air pollution sources. Our study has shown that the method of sample preparation is an important factor in statistical interpretation of the results of trace element determinations. © 2010 Springer-Verlag.

  12. Approximation of Confidence Limits on Sample Semivariograms From Single Realizations of Spatially Correlated Random Fields

    NASA Astrophysics Data System (ADS)

    Shafer, J. M.; Varljen, M. D.

    1990-08-01

    A fundamental requirement for geostatistical analyses of spatially correlated environmental data is the estimation of the sample semivariogram to characterize spatial correlation. Selecting an underlying theoretical semivariogram based on the sample semivariogram is an extremely important and difficult task that is subject to a great deal of uncertainty. Current standard practice does not involve consideration of the confidence associated with semivariogram estimates, largely because classical statistical theory does not provide the capability to construct confidence limits from single realizations of correlated data, and multiple realizations of environmental fields are not found in nature. The jackknife method is a nonparametric statistical technique for parameter estimation that may be used to estimate the semivariogram. When used in connection with standard confidence procedures, it allows for the calculation of closely approximate confidence limits on the semivariogram from single realizations of spatially correlated data. The accuracy and validity of this technique were verified using a Monte Carlo simulation approach which enabled confidence limits about the semivariogram estimate to be calculated from many synthetically generated realizations of a random field with a known correlation structure. The synthetically derived confidence limits were then compared to jackknife estimates from single realizations with favorable results. Finally, the methodology for applying the jackknife method to a real-world problem and an example of the utility of semivariogram confidence limits were demonstrated by constructing confidence limits on seasonal sample variograms of nitrate-nitrogen concentrations in shallow groundwater in an approximately 12-mi² (~30 km²) region in northern Illinois. In this application, the confidence limits on sample semivariograms from different time periods were used to evaluate the significance of temporal change in spatial correlation.
This capability is quite important as it can indicate when a spatially optimized monitoring network would need to be reevaluated and thus lead to more robust monitoring strategies.
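
    A minimal one-dimensional illustration of the idea (not the authors' exact procedure): estimate the sample semivariogram at one lag from a single correlated realization, then form approximate confidence limits from delete-one jackknife pseudo-values. The AR(1) series stands in for a spatially correlated field.

```python
# Jackknife confidence limits on a sample semivariogram estimate at one lag,
# from a single synthetic realization of a correlated series. Sketch only.
import numpy as np

rng = np.random.default_rng(1)

# One realization: an AR(1) series standing in for a correlated random field.
n = 200
z = np.empty(n)
z[0] = rng.normal()
for i in range(1, n):
    z[i] = 0.7 * z[i - 1] + rng.normal()

def semivariogram(z, lag):
    """Classical estimator: gamma(h) = mean of 0.5 * (z[i+h] - z[i])^2."""
    d = z[lag:] - z[:-lag]
    return 0.5 * np.mean(d ** 2)

lag = 5
gamma_full = semivariogram(z, lag)

# Delete-one jackknife over observations (deleting a point shifts its
# neighbours together, a simplification relative to spatial-grid jackknifing).
idx = np.arange(n)
pseudo = np.array([n * gamma_full - (n - 1) * semivariogram(z[idx != i], lag)
                   for i in range(n)])

jack_mean = pseudo.mean()
jack_se = pseudo.std(ddof=1) / np.sqrt(n)
lo, hi = jack_mean - 1.96 * jack_se, jack_mean + 1.96 * jack_se
print(f"gamma({lag}) = {gamma_full:.3f}, approx. 95% CI = ({lo:.3f}, {hi:.3f})")
```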

  13. Level of Discipline among University Academic Staff as a Correlate of University Development in Nigeria

    ERIC Educational Resources Information Center

    Uhoman, Anyi Mary

    2017-01-01

    This study entitled "Level of Discipline Among University Academic Staff as a Correlate of University Development in Nigeria" adopted the correlation design with a population of 2,301 academic staff purposively selected from four Universities in the North-Central Geo-Political zone of Nigeria. The Stratified Random Sampling Method was…

  14. Optimal sample sizes for the design of reliability studies: power consideration.

    PubMed

    Shieh, Gwowen

    2014-09-01

    Intraclass correlation coefficients are used extensively to measure the reliability or degree of resemblance among group members in multilevel research. This study concerns the problem of the necessary sample size to ensure adequate statistical power for hypothesis tests concerning the intraclass correlation coefficient in the one-way random-effects model. In view of the incomplete and problematic numerical results in the literature, the approximate sample size formula constructed from Fisher's transformation is reevaluated and compared with an exact approach across a wide range of model configurations. These comprehensive examinations showed that the Fisher transformation method is appropriate only under limited circumstances, and therefore it is not recommended as a general method in practice. For advance design planning of reliability studies, the exact sample size procedures are fully described and illustrated for various allocation and cost schemes. Corresponding computer programs are also developed to implement the suggested algorithms.

  15. Three dimensional reliability analyses of currently used methods for assessment of sagittal jaw discrepancy

    PubMed Central

    Almaqrami, Bushra-Sufyan; Alhammadi, Maged-Sultan

    2018-01-01

    Background The objective of this study was to analyse three-dimensionally the reliability and correlation of angular and linear measurements in the assessment of anteroposterior skeletal discrepancy. Material and Methods In this retrospective cross-sectional study, a sample of 213 subjects was three-dimensionally analysed from cone-beam computed tomography scans. The sample was divided according to the three-dimensional measurement of the anteroposterior relation (ANB angle) into three groups (skeletal Class I, Class II and Class III). The anterior-posterior cephalometric indicators were measured on volumetric images using Anatomage software (InVivo5.2). These measurements included three angular and seven linear measurements. Cross tabulations were performed to correlate the ANB angle with each method. The Intra-class Correlation Coefficient (ICC) test was applied to the difference between the two reliability measurements. A P value of < 0.05 was considered significant. Results There was a statistically significant (P<0.05) agreement between all methods used, with variability in the assessment of different anteroposterior relations. The highest correlation was between ANB and DSOJ (0.913), strong correlation with AB/FH, AB/SN, MM bisector, AB/PP, and Wits appraisal (0.896, 0.890, 0.878, 0.867, and 0.858, respectively), moderate with AD/SN and Beta angle (0.787 and 0.760), and weak correlation with the corrected ANB angle (0.550). Conclusions Conjunctive usage of the ANB angle with DSOJ, AB/FH, AB/SN, MM bisector, AB/PP and Wits appraisal in 3D cephalometric analysis provides a more reliable and valid indicator of the skeletal anteroposterior relationship.
Clinical relevance: Most orthodontic literature depends on a single method (ANB), with its drawbacks, for the assessment of skeletal discrepancy, which is a cardinal factor for proper treatment planning. This study assessed three-dimensionally the degree of correlation between all available methods, to make clinical judgement more accurate by basing it on more than one method of assessment. Key words: Anteroposterior relationships, ANB angle, Three-dimension, CBCT. PMID:29750096

  16. Correlation between Wavelength Dispersive X-ray Fluorescence (WDXRF) analysis of hardened concrete for chlorides vs. Atomic Absorption (AA) analysis in accordance with AASHTO T- 260; sampling and testing for chloride ion in concrete and concrete raw mater

    DOT National Transportation Integrated Search

    2014-04-01

    A correlation between Wavelength Dispersive X-ray Fluorescence(WDXRF) analysis of Hardened : Concrete for Chlorides and Atomic Absorption (AA) analysis (current method AASHTO T-260, procedure B) has been : found and a new method of analysis has been ...

  17. Comparing conventional Descriptive Analysis and Napping®-UFP against physiochemical measurements: a case study using apples.

    PubMed

    Pickup, William; Bremer, Phil; Peng, Mei

    2018-03-01

    The extensive time and cost associated with conventional sensory profiling methods has spurred sensory researchers to develop rapid method alternatives, such as Napping® with Ultra-Flash Profiling (UFP). Napping®-UFP generates sensory maps by requiring untrained panellists to separate samples based on perceived sensory similarities. Evaluations of this method have been restricted to manufactured/formulated food models, and predominantly structured on comparisons against the conventional descriptive method. The present study aims to extend the validation of Napping®-UFP (N = 72) to natural biological products, and to evaluate this method against Descriptive Analysis (DA; N = 8) with physiochemical measurements as an additional evaluative criterion. The results revealed that sample configurations generated by DA and Napping®-UFP were not significantly correlated (RV = 0.425, P = 0.077); however, they were both correlated with the product map generated from the instrumental measures (P < 0.05). The findings also noted that sample characterisations from DA and Napping®-UFP were driven by different sensory attributes, indicating potential structural differences between these two methods in configuring samples. Overall, these findings lent support for the extended use of Napping®-UFP for evaluations of natural biological products. Although DA was shown to be a better method for establishing sensory-instrumental relationships, Napping®-UFP exhibited strengths in generating informative sample configurations based on holistic perception of products. © 2017 Society of Chemical Industry.
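
    The RV coefficient used above to compare sample configurations (e.g. the DA map vs. the Napping®-UFP map) can be sketched as follows; the two configurations here are invented for illustration. RV is invariant to rotation and uniform scaling of a configuration, which is why it suits comparing sensory maps whose axes are arbitrary.

```python
# Sketch of the RV coefficient between two multivariate sample configurations.
import numpy as np

def rv_coefficient(X, Y):
    """RV coefficient between two configurations (rows = the same samples)."""
    X = X - X.mean(axis=0)          # column-centre both configurations
    Y = Y - Y.mean(axis=0)
    W1, W2 = X @ X.T, Y @ Y.T       # sample-by-sample cross-product matrices
    return np.trace(W1 @ W2) / np.sqrt(np.trace(W1 @ W1) * np.trace(W2 @ W2))

rng = np.random.default_rng(2)
da_map = rng.normal(size=(10, 2))                   # 10 samples, 2-D map
napping_map = da_map @ np.array([[0.0, -1.0],
                                 [1.0, 0.0]])       # same map, rotated 90°

# Rotation leaves RV unchanged, so these two maps agree perfectly (RV ≈ 1).
print(rv_coefficient(da_map, napping_map))
```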

  18. Determination of rivaroxaban in patient's plasma samples by anti-Xa chromogenic test associated to High Performance Liquid Chromatography tandem Mass Spectrometry (HPLC-MS/MS).

    PubMed

    Derogis, Priscilla Bento Matos; Sanches, Livia Rentas; de Aranda, Valdir Fernandes; Colombini, Marjorie Paris; Mangueira, Cristóvão Luis Pitangueira; Katz, Marcelo; Faulhaber, Adriana Caschera Leme; Mendes, Claudio Ernesto Albers; Ferreira, Carlos Eduardo Dos Santos; França, Carolina Nunes; Guerra, João Carlos de Campos

    2017-01-01

    Rivaroxaban is an oral direct factor Xa inhibitor, therapeutically indicated in the treatment of thromboembolic diseases. As with other new oral anticoagulants, routine monitoring of rivaroxaban is not necessary, but it is important in some clinical circumstances. In our study, a high-performance liquid chromatography-tandem mass spectrometry (HPLC-MS/MS) method was validated to measure the rivaroxaban plasma concentration. Our method used a simple sample preparation, protein precipitation, and a fast chromatographic run. A precise and accurate method was developed, with a linear range from 2 to 500 ng/mL and a lower limit of quantification of 4 pg on column. The new method was compared to a reference method (anti-factor Xa activity) and both presented a good correlation (r = 0.98, p < 0.001). In addition, we validated hemolytic, icteric or lipemic plasma samples for rivaroxaban measurement by HPLC-MS/MS without interferences. The chromogenic and HPLC-MS/MS methods were highly correlated and should be used as clinical tools for drug monitoring. The method was applied successfully in a group of 49 real-life patients, which allowed an accurate determination of rivaroxaban at peak and trough levels.

  19. Uniform Sampling Table Method and its Applications II--Evaluating the Uniform Sampling by Experiment.

    PubMed

    Chen, Yibin; Chen, Jiaxi; Chen, Xuan; Wang, Min; Wang, Wei

    2015-01-01

    A new method of uniform sampling is evaluated in this paper. Items and indexes were adopted to evaluate the rationality of the uniform sampling. The evaluation items included convenience of operation, uniformity of sampling site distribution, and accuracy and precision of measured results. The evaluation indexes included operational complexity, occupation rate of sampling sites in a row and column, relative accuracy of pill weight, and relative deviation of pill weight. They were obtained from three kinds of drugs with different shapes and sizes by four kinds of sampling methods. Gray correlation analysis was adopted to make a comprehensive evaluation by comparison with the standard method. The experimental results showed that the convenience score of the uniform sampling method was 1 (100%), the odds ratio of the occupation rate in a row and column was infinity, the relative accuracy was 99.50-99.89%, the reproducibility RSD was 0.45-0.89%, and the weighted incidence degree exceeded that of the standard method. Hence, the uniform sampling method was easy to operate, and the selected samples were distributed uniformly. The experimental results demonstrated that the uniform sampling method has good accuracy and reproducibility, and it can be put into use in drug analysis.

  20. Statistical analysis of latent generalized correlation matrix estimation in transelliptical distribution

    PubMed Central

    Han, Fang; Liu, Han

    2016-01-01

    Correlation matrix plays a key role in many multivariate methods (e.g., graphical model estimation and factor analysis). The current state-of-the-art in estimating large correlation matrices focuses on the use of Pearson’s sample correlation matrix. Although Pearson’s sample correlation matrix enjoys various good properties under Gaussian models, it is not an effective estimator when facing heavy-tail distributions with possible outliers. As a robust alternative, Han and Liu (2013b) advocated the use of a transformed version of the Kendall’s tau sample correlation matrix in estimating high dimensional latent generalized correlation matrix under the transelliptical distribution family (or elliptical copula). The transelliptical family assumes that after unspecified marginal monotone transformations, the data follow an elliptical distribution. In this paper, we study the theoretical properties of the Kendall’s tau sample correlation matrix and its transformed version proposed in Han and Liu (2013b) for estimating the population Kendall’s tau correlation matrix and the latent Pearson’s correlation matrix under both spectral and restricted spectral norms. With regard to the spectral norm, we highlight the role of “effective rank” in quantifying the rate of convergence. With regard to the restricted spectral norm, we for the first time present a “sign subgaussian condition” which is sufficient to guarantee that the rank-based correlation matrix estimator attains the optimal rate of convergence. In both cases, we do not need any moment condition. PMID:28337068
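
    The rank-based estimator discussed above can be sketched for a single pair of variables: compute Kendall's tau and map it through the sine transform, R = sin(π/2·τ), to estimate the latent Pearson correlation. Because tau depends only on ranks, an arbitrary monotone marginal transformation (here an exponential, producing a heavy-tailed margin) leaves the estimate unchanged. The data below are synthetic.

```python
# Transformed Kendall's tau estimate of a latent Pearson correlation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
true_rho = 0.6
cov = np.array([[1.0, true_rho], [true_rho, 1.0]])
X = rng.multivariate_normal(np.zeros(2), cov, size=2000)

# A monotone marginal transformation leaves Kendall's tau unchanged:
X[:, 0] = np.exp(X[:, 0])   # heavy-tailed marginal

tau, _ = stats.kendalltau(X[:, 0], X[:, 1])
rho_hat = np.sin(np.pi / 2 * tau)   # sine-transformed Kendall estimate

print(f"tau = {tau:.3f}, sin-transformed estimate = {rho_hat:.3f}")
# rho_hat should land near the latent Pearson correlation of 0.6, while the
# naive Pearson correlation on the transformed margin would be badly biased.
```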

  1. A New Method for Calculating Counts in Cells

    NASA Astrophysics Data System (ADS)

    Szapudi, István

    1998-04-01

    In the near future, a new generation of CCD-based galaxy surveys will enable high-precision determination of the N-point correlation functions. The resulting information will help to resolve the ambiguities associated with two-point correlation functions, thus constraining theories of structure formation, biasing, and Gaussianity of initial conditions independently of the value of Ω. As one of the most successful methods of extracting the amplitude of higher order correlations is based on measuring the distribution of counts in cells, this work presents an advanced way of measuring it with unprecedented accuracy. Szapudi & Colombi identified the main sources of theoretical errors in extracting counts in cells from galaxy catalogs. One of these sources, termed measurement error, stems from the fact that conventional methods use a finite number of sampling cells to estimate counts in cells. This effect can be circumvented by using an infinite number of cells. This paper presents an algorithm which in practice achieves this goal; that is, it is equivalent to throwing an infinite number of sampling cells in finite time. The errors associated with sampling cells are completely eliminated by this procedure, which will be essential for the accurate analysis of future surveys.
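
    The conventional finite-sampling baseline that the paper improves on can be sketched as follows: throw a finite number of random square cells on a point catalogue and histogram the counts N per cell. (The paper's contribution is an algorithm equivalent to the limit of infinitely many sampling cells; this sketch shows only the finite version, with a synthetic unclustered catalogue.)

```python
# Finite-sampling counts-in-cells estimate on a synthetic point catalogue.
import numpy as np

rng = np.random.default_rng(4)

galaxies = rng.uniform(0.0, 1.0, size=(5000, 2))  # unclustered (Poisson) catalogue
cell_size = 0.05
n_cells = 10000

# Lower-left corners of randomly thrown cells, kept inside the unit box.
corners = rng.uniform(0.0, 1.0 - cell_size, size=(n_cells, 2))

counts = np.empty(n_cells, dtype=int)
for i, (x0, y0) in enumerate(corners):
    inside = ((galaxies[:, 0] >= x0) & (galaxies[:, 0] < x0 + cell_size) &
              (galaxies[:, 1] >= y0) & (galaxies[:, 1] < y0 + cell_size))
    counts[i] = inside.sum()

# The histogram of `counts` estimates P(N). For this Poisson catalogue the
# mean count should be close to density * cell area = 5000 * 0.0025 = 12.5,
# but with only n_cells sampling cells the estimate carries measurement error.
print(counts.mean())
```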

  2. Foundational Principles for Large-Scale Inference: Illustrations Through Correlation Mining.

    PubMed

    Hero, Alfred O; Rajaratnam, Bala

    2016-01-01

    When can reliable inference be drawn in the "Big Data" context? This paper presents a framework for answering this fundamental question in the context of correlation mining, with implications for general large-scale inference. In large-scale data applications like genomics, connectomics, and eco-informatics, the dataset is often variable-rich but sample-starved: a regime where the number n of acquired samples (statistical replicates) is far fewer than the number p of observed variables (genes, neurons, voxels, or chemical constituents). Much of recent work has focused on understanding the computational complexity of proposed methods for "Big Data". Sample complexity, however, has received relatively less attention, especially in the setting where the sample size n is fixed and the dimension p grows without bound. To address this gap, we develop a unified statistical framework that explicitly quantifies the sample complexity of various inferential tasks. Sampling regimes can be divided into several categories: 1) the classical asymptotic regime where the variable dimension is fixed and the sample size goes to infinity; 2) the mixed asymptotic regime where both variable dimension and sample size go to infinity at comparable rates; 3) the purely high dimensional asymptotic regime where the variable dimension goes to infinity and the sample size is fixed. Each regime has its niche, but only the latter regime applies to exascale data dimension. We illustrate this high dimensional framework for the problem of correlation mining, where it is the matrix of pairwise and partial correlations among the variables that is of interest. Correlation mining arises in numerous applications and subsumes the regression context as a special case. We demonstrate various regimes of correlation mining based on the unifying perspective of high dimensional learning rates and sample complexity for different structured covariance models and different inference tasks.

  3. Comparative Evaluation of Four Real-Time PCR Methods for the Quantitative Detection of Epstein-Barr Virus from Whole Blood Specimens.

    PubMed

    Buelow, Daelynn; Sun, Yilun; Tang, Li; Gu, Zhengming; Pounds, Stanley; Hayden, Randall

    2016-07-01

    Monitoring of Epstein-Barr virus (EBV) load in immunocompromised patients has become integral to their care. An increasing number of reagents are available for quantitative detection of EBV; however, there are few published comparative data. Four real-time PCR systems (one using laboratory-developed reagents and three using analyte-specific reagents) were compared with one another for detection of EBV from whole blood. Whole blood specimens seeded with EBV were used to determine quantitative linearity, analytical measurement range, lower limit of detection, and CV for each assay. Retrospective testing of 198 clinical samples was performed in parallel with all methods; results were compared to determine relative quantitative and qualitative performance. All assays showed similar performance. No significant difference was found in limit of detection (3.12-3.49 log10 copies/mL; P = 0.37). A strong qualitative correlation was seen with all assays that used clinical samples (positive detection rates of 89.5%-95.8%). Quantitative correlation of clinical samples across assays was also seen in pairwise regression analysis, with R² ranging from 0.83 to 0.95. Normalizing clinical sample results to IU/mL did not alter the quantitative correlation between assays. Quantitative EBV detection by real-time PCR can be performed over a wide linear dynamic range, using three different commercially available reagents and laboratory-developed methods. EBV was detected with comparable sensitivity and quantitative correlation for all assays. Copyright © 2016 American Society for Investigative Pathology and the Association for Molecular Pathology. Published by Elsevier Inc. All rights reserved.
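
    The pairwise quantitative comparison reported above (regressing one assay's log10 copies/mL against another's and reporting R²) can be sketched as follows; the paired viral loads are invented for illustration.

```python
# Pairwise regression comparison of two hypothetical quantitative assays.
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical paired EBV loads (log10 copies/mL) from 198 clinical samples,
# measured by two assays that agree up to a small calibration bias + scatter.
assay_a = rng.uniform(3.0, 6.0, size=198)
assay_b = 0.95 * assay_a + 0.2 + rng.normal(0.0, 0.2, size=198)

slope, intercept = np.polyfit(assay_a, assay_b, 1)
r2 = np.corrcoef(assay_a, assay_b)[0, 1] ** 2

print(f"slope = {slope:.2f}, intercept = {intercept:.2f}, R^2 = {r2:.2f}")
# An R^2 in the 0.83-0.95 range, as in the study, indicates strong quantitative
# correlation between assays even when their calibrations differ slightly.
```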

  4. Authentication of virgin olive oil by a novel curve resolution approach combined with visible spectroscopy.

    PubMed

    Ferreiro-González, Marta; Barbero, Gerardo F; Álvarez, José A; Ruiz, Antonio; Palma, Miguel; Ayuso, Jesús

    2017-04-01

    Adulteration of olive oil is not only a major economic fraud but can also have major health implications for consumers. In this study, a combination of visible spectroscopy with a novel multivariate curve resolution method (CR), principal component analysis (PCA) and linear discriminant analysis (LDA) is proposed for the authentication of virgin olive oil (VOO) samples. VOOs are well-known products with the typical properties of a two-component system due to the two main groups of compounds that contribute to the visible spectra (chlorophylls and carotenoids). Application of the proposed CR method to VOO samples provided the two pure-component spectra for the aforementioned families of compounds. A correlation study of the real spectra and the resolved component spectra was carried out for different types of oil samples (n=118). LDA using the correlation coefficients as variables to discriminate samples allowed the authentication of 95% of virgin olive oil samples. Copyright © 2016 Elsevier Ltd. All rights reserved.

  5. Application of partial least squares near-infrared spectral classification in diabetic identification

    NASA Astrophysics Data System (ADS)

    Yan, Wen-juan; Yang, Ming; He, Guo-quan; Qin, Lin; Li, Gang

    2014-11-01

    In order to identify diabetic patients using tongue near-infrared (NIR) spectra, a spectral classification model of the NIR reflectivity of the tongue tip is proposed, based on the partial least squares (PLS) method. Thirty-nine sample data of tongue-tip NIR spectra were harvested from healthy people and from diabetic patients, respectively. After pretreatment of the reflectivity, the spectral data were set as the independent variable matrix and the classification information as the dependent variable matrix. The samples were divided into two groups, 53 samples as the calibration set and 25 as the prediction set, and PLS was used to build the classification model. The model constructed from the 53 samples has a correlation of 0.9614 and a root mean square error of cross-validation (RMSECV) of 0.1387. The predictions for the 25 samples have a correlation of 0.9146 and an RMSECV of 0.2122. The experimental results show that the PLS method can achieve good classification of healthy people and diabetic patients.
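
    A minimal numpy sketch (not the authors' code) of PLS-based classification in the same spirit: fit a few PLS components on mean-centred "spectra" with class labels coded 0/1, then threshold the fitted response at 0.5. The synthetic spectra, class structure, and component count below are all illustrative assumptions.

```python
# PLS1 (NIPALS) regression used as a two-class classifier on synthetic spectra.
import numpy as np

def pls1_fit(X, y, n_comp):
    """NIPALS PLS1 on centred X, y; returns the regression coefficient vector."""
    Xd, yd = X.copy(), y.astype(float).copy()
    W, P, Q = [], [], []
    for _ in range(n_comp):
        w = Xd.T @ yd
        w /= np.linalg.norm(w)          # weight vector
        t = Xd @ w                      # score vector
        tt = t @ t
        p = Xd.T @ t / tt               # X loading
        q = (yd @ t) / tt               # y loading
        Xd -= np.outer(t, p)            # deflate
        yd -= q * t
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    return W @ np.linalg.solve(P.T @ W, Q)

rng = np.random.default_rng(6)
n, p = 78, 50                        # 39 healthy + 39 diabetic "spectra"
y = np.repeat([0.0, 1.0], 39)
X = rng.normal(size=(n, p))
X[y == 1, :10] += 0.8                # class difference in a few "wavelengths"

Xc, yc = X - X.mean(axis=0), y - y.mean()
beta = pls1_fit(Xc, yc, n_comp=3)
pred = (Xc @ beta + y.mean() > 0.5).astype(float)
accuracy = (pred == y).mean()
print(f"training accuracy = {accuracy:.2f}")
```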

  6. Adaptive enhanced sampling with a path-variable for the simulation of protein folding and aggregation

    NASA Astrophysics Data System (ADS)

    Peter, Emanuel K.

    2017-12-01

    In this article, we present a novel adaptive enhanced sampling molecular dynamics (MD) method for the accelerated simulation of protein folding and aggregation. We introduce a path-variable L based on the un-biased momenta p and displacements dq for the definition of the bias s applied to the system and derive 3 algorithms: general adaptive bias MD, adaptive path-sampling, and a hybrid method which combines the first 2 methodologies. Through the analysis of the correlations between the bias and the un-biased gradient in the system, we find that the hybrid methodology leads to an improved force correlation and acceleration in the sampling of the phase space. We apply our method on SPC/E water, where we find a conservation of the average water structure. We then use our method to sample dialanine and the folding of TrpCage, where we find a good agreement with simulation data reported in the literature. Finally, we apply our methodologies on the initial stages of aggregation of a hexamer of Alzheimer's amyloid β fragment 25-35 (Aβ 25-35) and find that transitions within the hexameric aggregate are dominated by entropic barriers, while we speculate that especially the conformation entropy plays a major role in the formation of the fibril as a rate limiting factor.

  7. Adaptive enhanced sampling with a path-variable for the simulation of protein folding and aggregation.

    PubMed

    Peter, Emanuel K

    2017-12-07

    In this article, we present a novel adaptive enhanced sampling molecular dynamics (MD) method for the accelerated simulation of protein folding and aggregation. We introduce a path-variable L based on the un-biased momenta p and displacements dq for the definition of the bias s applied to the system and derive 3 algorithms: general adaptive bias MD, adaptive path-sampling, and a hybrid method which combines the first 2 methodologies. Through the analysis of the correlations between the bias and the un-biased gradient in the system, we find that the hybrid methodology leads to an improved force correlation and acceleration in the sampling of the phase space. We apply our method on SPC/E water, where we find a conservation of the average water structure. We then use our method to sample dialanine and the folding of TrpCage, where we find a good agreement with simulation data reported in the literature. Finally, we apply our methodologies on the initial stages of aggregation of a hexamer of Alzheimer's amyloid β fragment 25-35 (Aβ 25-35) and find that transitions within the hexameric aggregate are dominated by entropic barriers, while we speculate that especially the conformation entropy plays a major role in the formation of the fibril as a rate limiting factor.

  8. Validation of life-charts documented with the personal life-chart app - a self-monitoring tool for bipolar disorder.

    PubMed

    Schärer, Lars O; Krienke, Ute J; Graf, Sandra-Mareike; Meltzer, Katharina; Langosch, Jens M

    2015-03-14

    Long-term monitoring in bipolar affective disorders constitutes an important therapeutic and preventive method. The present study examines the validity of the Personal Life-Chart App (PLC App), in both German and in English. This App is based on the National Institute of Mental Health's Life-Chart Method, the de facto standard for long-term monitoring in the treatment of bipolar disorders. Methods have largely been replicated from 2 previous Life-Chart studies. The participants documented Life-Charts with the PLC App on a daily basis. Clinicians assessed manic and depressive symptoms in clinical interviews using the Inventory of Depressive Symptomatology, clinician-rated (IDS-C) and the Young Mania Rating Scale (YMRS) on a monthly basis on average. Spearman correlations of the total scores of IDS-C and YMRS were calculated with both the Life-Chart functional impairment rating and mood rating documented with the PLC App. 44 subjects used the PLC App in German and 10 subjects used the PLC App in English. 118 clinical interviews from the German sub-sample and 97 from the English sub-sample were analysed separately. The results in both sub-samples are similar to previous Life-Chart validation studies. Again statistically significant high correlations were found between the Life-Chart function rating assigned through the PLC App and well-established observer-rated methods. Again correlations were weaker for the Life-Chart mood rating than for the Life-Chart function impairment. No relevant correlation was found between the Life-chart mood rating and YMRS in the German sub-sample. This study gives further evidence for the validity of the Life-Chart method as a valid tool for the recognition of both manic and depressive episodes. Documenting Life-Charts with the PLC App (English and German) does not seem to impair the validity of patient ratings.

  9. Airborne Bacteria in an Urban Environment

    PubMed Central

    Mancinelli, Rocco L.; Shulls, Wells A.

    1978-01-01

    Samples were taken at random intervals over a 2-year period from urban air and tested for viable bacteria. The number of bacteria in each sample was determined, and each organism isolated was identified by its morphological and biochemical characteristics. The number of bacteria found ranged from 0.013 to 1.88 organisms per liter of air sampled. Representatives of 19 different genera were found in 21 samples. The most frequently isolated organisms and their percentages of occurrence were Micrococcus (41%), Staphylococcus (11%), and Aerococcus (8%). The bacteria isolated were correlated with various weather and air pollution parameters using the Pearson product-moment correlation coefficient method. Statistically significant correlations were found between the number of viable bacteria isolated and the concentrations of nitric oxide (−0.45), nitrogen dioxide (+0.43), and suspended particulate pollutants (+0.56). Calculated individually, the total numbers of Micrococcus, Aerococcus, and Staphylococcus, the number of rods, and the number of cocci isolated showed negative correlations with nitric oxide and positive correlations with nitrogen dioxide and particulates. Statistically significant positive correlations were found between the total number of rods isolated and the concentration of nitrogen dioxide (+0.54) and the percent relative humidity (+0.43). The other parameters tested, sulfur dioxide, hydrocarbons, and temperature, showed no significant correlations. PMID:677875
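    The Pearson product-moment coefficient used in the study above is straightforward to compute directly. A minimal sketch follows; the counts and pollutant values below are hypothetical, not the published measurements:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient between two samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical illustration: viable counts vs. a pollutant concentration.
bacteria = [0.10, 0.35, 0.62, 0.88, 1.20]   # organisms per liter of air
no2      = [12.0, 18.0, 25.0, 33.0, 41.0]   # pollutant concentration (arbitrary units)
print(round(pearson_r(bacteria, no2), 3))
```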

  10. Stochastic multi-reference perturbation theory with application to the linearized coupled cluster method

    NASA Astrophysics Data System (ADS)

    Jeanmairet, Guillaume; Sharma, Sandeep; Alavi, Ali

    2017-01-01

    In this article, we report a stochastic evaluation of the recently proposed multireference linearized coupled cluster theory [S. Sharma and A. Alavi, J. Chem. Phys. 143, 102815 (2015)]. In this method, both the zeroth-order and first-order wavefunctions are sampled stochastically by propagating simultaneously two populations of signed walkers. The sampling of the zeroth-order wavefunction follows a set of stochastic processes identical to the one used in the full configuration interaction quantum Monte Carlo (FCIQMC) method. To sample the first-order wavefunction, the usual FCIQMC algorithm is augmented with a source term that spawns walkers in the sampled first-order wavefunction from the zeroth-order wavefunction. The second-order energy is also computed stochastically but requires no additional overhead beyond the added cost of sampling the first-order wavefunction. This fully stochastic method opens up the possibility of simultaneously treating large active spaces to account for static correlation and recovering the dynamical correlation using perturbation theory. The method is used to study a few benchmark systems including the carbon dimer and aromatic molecules. We have computed the singlet-triplet gaps of benzene and m-xylylene. For m-xylylene, which has proved difficult for standard complete active space self-consistent field theory with perturbative corrections, we find the singlet-triplet gap to be in good agreement with the experimental values.

  11. Simultaneous regularization method for the determination of radius distributions from experimental multiangle correlation functions

    NASA Astrophysics Data System (ADS)

    Buttgereit, R.; Roths, T.; Honerkamp, J.; Aberle, L. B.

    2001-10-01

    Dynamic light scattering experiments have become a powerful tool for investigating the dynamical properties of complex fluids. In many applications in both soft matter research and industry, so-called "real world" systems are of great interest. Here, the dilution of the investigated system often cannot be changed without introducing measurement artifacts, so one often has to deal with highly concentrated and turbid media. The investigation of such systems requires techniques that suppress the influence of multiple scattering, e.g., cross correlation techniques. However, measurements on turbid as well as highly diluted media yield data with a low signal-to-noise ratio, which complicates data analysis and leads to unreliable results. In this article a multiangle regularization method is discussed, which copes with the difficulties arising from such samples and greatly enhances the quality of the estimated solution. To demonstrate the efficiency of this multiangle regularization method, we applied it to cross correlation functions measured on highly turbid samples.

  12. Dark Energy Survey Year 1 results: cross-correlation redshifts - methods and systematics characterization

    NASA Astrophysics Data System (ADS)

    Gatti, M.; Vielzeuf, P.; Davis, C.; Cawthon, R.; Rau, M. M.; DeRose, J.; De Vicente, J.; Alarcon, A.; Rozo, E.; Gaztanaga, E.; Hoyle, B.; Miquel, R.; Bernstein, G. M.; Bonnett, C.; Carnero Rosell, A.; Castander, F. J.; Chang, C.; da Costa, L. N.; Gruen, D.; Gschwend, J.; Hartley, W. G.; Lin, H.; MacCrann, N.; Maia, M. A. G.; Ogando, R. L. C.; Roodman, A.; Sevilla-Noarbe, I.; Troxel, M. A.; Wechsler, R. H.; Asorey, J.; Davis, T. M.; Glazebrook, K.; Hinton, S. R.; Lewis, G.; Lidman, C.; Macaulay, E.; Möller, A.; O'Neill, C. R.; Sommer, N. E.; Uddin, S. A.; Yuan, F.; Zhang, B.; Abbott, T. M. C.; Allam, S.; Annis, J.; Bechtol, K.; Brooks, D.; Burke, D. L.; Carollo, D.; Carrasco Kind, M.; Carretero, J.; Cunha, C. E.; D'Andrea, C. B.; DePoy, D. L.; Desai, S.; Eifler, T. F.; Evrard, A. E.; Flaugher, B.; Fosalba, P.; Frieman, J.; García-Bellido, J.; Gerdes, D. W.; Goldstein, D. A.; Gruendl, R. A.; Gutierrez, G.; Honscheid, K.; Hoormann, J. K.; Jain, B.; James, D. J.; Jarvis, M.; Jeltema, T.; Johnson, M. W. G.; Johnson, M. D.; Krause, E.; Kuehn, K.; Kuhlmann, S.; Kuropatkin, N.; Li, T. S.; Lima, M.; Marshall, J. L.; Melchior, P.; Menanteau, F.; Nichol, R. C.; Nord, B.; Plazas, A. A.; Reil, K.; Rykoff, E. S.; Sako, M.; Sanchez, E.; Scarpine, V.; Schubnell, M.; Sheldon, E.; Smith, M.; Smith, R. C.; Soares-Santos, M.; Sobreira, F.; Suchyta, E.; Swanson, M. E. C.; Tarle, G.; Thomas, D.; Tucker, B. E.; Tucker, D. L.; Vikram, V.; Walker, A. R.; Weller, J.; Wester, W.; Wolf, R. C.

    2018-06-01

    We use numerical simulations to characterize the performance of a clustering-based method to calibrate photometric redshift biases. In particular, we cross-correlate the weak lensing source galaxies from the Dark Energy Survey Year 1 sample with redMaGiC galaxies (luminous red galaxies with secure photometric redshifts) to estimate the redshift distribution of the former sample. The recovered redshift distributions are used to calibrate the photometric redshift bias of standard photo-z methods applied to the same source galaxy sample. We apply the method to two photo-z codes run in our simulated data: Bayesian Photometric Redshift and Directional Neighbourhood Fitting. We characterize the systematic uncertainties of our calibration procedure, and find that these systematic uncertainties dominate our error budget. The dominant systematics are due to our assumption of unevolving bias and clustering across each redshift bin, and to differences between the shapes of the redshift distributions derived by clustering versus photo-zs. The systematic uncertainty in the mean redshift bias of the source galaxy sample is Δz ≲ 0.02, though the precise value depends on the redshift bin under consideration. We discuss possible ways to mitigate the impact of our dominant systematics in future analyses.

  13. Method for determining the octane rating of gasoline samples by observing corresponding acoustic resonances therein

    DOEpatents

    Sinha, Dipen N.; Anthony, Brian W.

    1997-01-01

    A method for determining the octane rating of gasoline samples by observing corresponding acoustic resonances therein. A direct correlation between the octane rating of gasoline and the frequency of corresponding acoustic resonances therein has been experimentally observed. Therefore, the octane rating of a gasoline sample can be directly determined through speed of sound measurements instead of by the cumbersome process of quantifying the knocking quality of the gasoline. Various receptacle geometries and construction materials may be employed. Moreover, it is anticipated that the measurements can be performed on flowing samples in pipes, thereby rendering the present method useful in refineries and distilleries.

  14. A comparison of confidence interval methods for the intraclass correlation coefficient in community-based cluster randomization trials with a binary outcome.

    PubMed

    Braschel, Melissa C; Svec, Ivana; Darlington, Gerarda A; Donner, Allan

    2016-04-01

    Many investigators rely on previously published point estimates of the intraclass correlation coefficient rather than on their associated confidence intervals to determine the required size of a newly planned cluster randomized trial. Although confidence interval methods for the intraclass correlation coefficient that can be applied to community-based trials have been developed for a continuous outcome variable, fewer methods exist for a binary outcome variable. The aim of this study is to evaluate confidence interval methods for the intraclass correlation coefficient applied to binary outcomes in community intervention trials enrolling a small number of large clusters. Existing methods for confidence interval construction are examined and compared to a new ad hoc approach based on dividing clusters into a large number of smaller sub-clusters and subsequently applying existing methods to the resulting data. Monte Carlo simulation is used to assess the width and coverage of confidence intervals for the intraclass correlation coefficient based on Smith's large sample approximation of the standard error of the one-way analysis of variance estimator, an inverted modified Wald test for the Fleiss-Cuzick estimator, and intervals constructed using a bootstrap-t applied to a variance-stabilizing transformation of the intraclass correlation coefficient estimate. In addition, a new approach is applied in which clusters are randomly divided into a large number of smaller sub-clusters with the same methods applied to these data (with the exception of the bootstrap-t interval, which assumes large cluster sizes). These methods are also applied to a cluster randomized trial on adolescent tobacco use for illustration. When applied to a binary outcome variable in a small number of large clusters, existing confidence interval methods for the intraclass correlation coefficient provide poor coverage. 
However, confidence intervals constructed using the new approach combined with Smith's method provide nominal or close to nominal coverage when the intraclass correlation coefficient is small (<0.05), as is the case in most community intervention trials. This study concludes that when a binary outcome variable is measured in a small number of large clusters, confidence intervals for the intraclass correlation coefficient may be constructed by dividing existing clusters into sub-clusters (e.g. groups of 5) and using Smith's method. The resulting confidence intervals provide nominal or close to nominal coverage across a wide range of parameters when the intraclass correlation coefficient is small (<0.05). Application of this method should provide investigators with a better understanding of the uncertainty associated with a point estimator of the intraclass correlation coefficient used for determining the sample size needed for a newly designed community-based trial. © The Author(s) 2015.
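    The sub-clustering idea described above can be sketched as follows, using the one-way ANOVA estimator of the intraclass correlation coefficient. The helper names and the sub-cluster size of 5 are illustrative; this is not the authors' code:

```python
import random

def icc_anova(clusters):
    """One-way ANOVA estimator of the intraclass correlation coefficient.

    `clusters` is a list of lists of (binary) outcomes, one list per cluster.
    """
    k = len(clusters)
    n_i = [len(c) for c in clusters]
    N = sum(n_i)
    grand = sum(sum(c) for c in clusters) / N
    msb = sum(n * (sum(c) / n - grand) ** 2 for c, n in zip(clusters, n_i)) / (k - 1)
    msw = sum(sum((x - sum(c) / len(c)) ** 2 for x in c) for c in clusters) / (N - k)
    n0 = (N - sum(n ** 2 for n in n_i) / N) / (k - 1)  # adjusted average cluster size
    return (msb - msw) / (msb + (n0 - 1) * msw)

def split_into_subclusters(cluster, size=5):
    """Randomly divide one large cluster into sub-clusters of ~`size` members."""
    shuffled = cluster[:]
    random.shuffle(shuffled)
    return [shuffled[i:i + size] for i in range(0, len(shuffled), size)]
```

After splitting every large cluster, the same ICC estimator (or Smith's standard error) is applied to the resulting sub-cluster data.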

  15. Resampling-based Methods in Single and Multiple Testing for Equality of Covariance/Correlation Matrices

    PubMed Central

    Yang, Yang; DeGruttola, Victor

    2016-01-01

    Traditional resampling-based tests for homogeneity in covariance matrices across multiple groups resample residuals, that is, data centered by group means. These residuals do not share the same second moments when the null hypothesis is false, which makes them difficult to use in the setting of multiple testing. An alternative approach is to resample standardized residuals, data centered by group sample means and standardized by group sample covariance matrices. This approach, however, has been observed to inflate type I error when sample size is small or data are generated from heavy-tailed distributions. We propose to improve this approach by using robust estimation for the first and second moments. We discuss two statistics: the Bartlett statistic and a statistic based on eigen-decomposition of sample covariance matrices. Both statistics can be expressed in terms of standardized errors under the null hypothesis. These methods are extended to test homogeneity in correlation matrices. Using simulation studies, we demonstrate that the robust resampling approach provides comparable or superior performance, relative to traditional approaches, for single testing and reasonable performance for multiple testing. The proposed methods are applied to data collected in an HIV vaccine trial to investigate possible determinants, including vaccine status, vaccine-induced immune response level and viral genotype, of unusual correlation pattern between HIV viral load and CD4 count in newly infected patients. PMID:22740584

  16. Resampling-based methods in single and multiple testing for equality of covariance/correlation matrices.

    PubMed

    Yang, Yang; DeGruttola, Victor

    2012-06-22

    Traditional resampling-based tests for homogeneity in covariance matrices across multiple groups resample residuals, that is, data centered by group means. These residuals do not share the same second moments when the null hypothesis is false, which makes them difficult to use in the setting of multiple testing. An alternative approach is to resample standardized residuals, data centered by group sample means and standardized by group sample covariance matrices. This approach, however, has been observed to inflate type I error when sample size is small or data are generated from heavy-tailed distributions. We propose to improve this approach by using robust estimation for the first and second moments. We discuss two statistics: the Bartlett statistic and a statistic based on eigen-decomposition of sample covariance matrices. Both statistics can be expressed in terms of standardized errors under the null hypothesis. These methods are extended to test homogeneity in correlation matrices. Using simulation studies, we demonstrate that the robust resampling approach provides comparable or superior performance, relative to traditional approaches, for single testing and reasonable performance for multiple testing. The proposed methods are applied to data collected in an HIV vaccine trial to investigate possible determinants, including vaccine status, vaccine-induced immune response level and viral genotype, of unusual correlation pattern between HIV viral load and CD4 count in newly infected patients.
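    The resampling of standardized residuals described in this abstract can be sketched as follows, assuming NumPy and using a Bartlett-type (Box's M) statistic. This is an illustrative reimplementation, not the authors' software:

```python
import numpy as np

def bartlett_stat(groups):
    """Box's M (Bartlett-type) statistic for homogeneity of covariance matrices."""
    k = len(groups)
    n_i = np.array([len(g) for g in groups])
    N = n_i.sum()
    S_i = [np.cov(g, rowvar=False) for g in groups]
    S_pooled = sum((n - 1) * S for n, S in zip(n_i, S_i)) / (N - k)
    return (N - k) * np.log(np.linalg.det(S_pooled)) - sum(
        (n - 1) * np.log(np.linalg.det(S)) for n, S in zip(n_i, S_i))

def standardized_residuals(groups):
    """Center by group mean and whiten by the group covariance (Cholesky)."""
    out = []
    for g in groups:
        centered = g - g.mean(axis=0)
        L = np.linalg.cholesky(np.cov(g, rowvar=False))
        out.append(centered @ np.linalg.inv(L).T)
    return out

def resampling_pvalue(groups, n_boot=500, seed=0):
    """Permutation p-value: resample pooled standardized residuals into groups."""
    rng = np.random.default_rng(seed)
    obs = bartlett_stat(groups)
    pooled = np.vstack(standardized_residuals(groups))
    sizes = [len(g) for g in groups]
    count = 0
    for _ in range(n_boot):
        idx = rng.permutation(len(pooled))
        start, resampled = 0, []
        for n in sizes:
            resampled.append(pooled[idx[start:start + n]])
            start += n
        if bartlett_stat(resampled) >= obs:
            count += 1
    return (count + 1) / (n_boot + 1)
```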

  17. Evaluation of AUC(0-4) predictive methods for cyclosporine in kidney transplant patients.

    PubMed

    Aoyama, Takahiko; Matsumoto, Yoshiaki; Shimizu, Makiko; Fukuoka, Masamichi; Kimura, Toshimi; Kokubun, Hideya; Yoshida, Kazunari; Yago, Kazuo

    2005-05-01

    Cyclosporine (CyA) is the most commonly used immunosuppressive agent in patients who undergo kidney transplantation. Dosage adjustment of CyA is usually based on trough levels. Recently, trough-level monitoring has been giving way to monitoring of the area under the concentration-time curve during the first 4 h after CyA administration (AUC(0-4)). The aim of this study was to compare the predictive values obtained using three different methods of AUC(0-4) monitoring. AUC(0-4) was calculated from 0 to 4 h in early and stable renal transplant patients using the trapezoidal rule. The predicted AUC(0-4) was calculated using three different methods: the multiple regression equation reported by Uchida et al.; Bayesian estimation with modified population pharmacokinetic parameters reported by Yoshida et al.; and modified population pharmacokinetic parameters reported by Cremers et al. The predicted AUC(0-4) was assessed on the basis of predictive bias, precision, and correlation coefficient. The predicted AUC(0-4) values obtained using the three methods with three blood samples showed small differences in predictive bias, precision, and correlation coefficient. When AUC(0-4) was predicted from a single blood sample in stable renal transplant patients, the performance of the regression equation reported by Uchida depended on sampling time. On the other hand, Bayesian estimation with the modified pharmacokinetic parameters reported by Yoshida, using a single blood sample and not dependent on sampling time, showed a small difference in the correlation coefficient. Prediction of AUC(0-4) using a regression equation required accurate sampling times; in this study, prediction using Bayesian estimation did not. Thus Bayesian estimation appears clinically useful for the dosage adjustment of CyA.
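    The trapezoidal-rule computation of AUC(0-4) mentioned above is simple to reproduce; the blood concentrations below are hypothetical:

```python
def auc_trapezoid(times, concs):
    """Area under the concentration-time curve by the trapezoidal rule."""
    return sum((t2 - t1) * (c1 + c2) / 2
               for (t1, c1), (t2, c2) in zip(zip(times, concs),
                                             zip(times[1:], concs[1:])))

# Hypothetical whole-blood CyA concentrations (ng/mL) at 0, 1, 2, 3, and 4 h.
times = [0, 1, 2, 3, 4]
concs = [150, 900, 1100, 700, 450]
print(auc_trapezoid(times, concs))  # AUC(0-4) in ng*h/mL
```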

  18. A rapid analytical method for predicting the oxygen demand of wastewater.

    PubMed

    Fogelman, Shoshana; Zhao, Huijun; Blumenstein, Michael

    2006-11-01

    In this study, an investigation was undertaken to determine whether the predictive accuracy of an indirect, multiwavelength spectroscopic technique for rapidly determining oxygen demand (OD) values is affected by the use of unfiltered and turbid samples, as well as by the use of absorbance values measured below 200 nm. A rapid OD technique was developed that uses UV-Vis spectroscopy and artificial neural networks (ANNs) to indirectly determine chemical oxygen demand (COD) levels. The most accurate results were obtained when a spectral range of 190-350 nm was provided as data input to the ANN and when unfiltered samples below a turbidity of 150 NTU were used; for these data, correlations above 0.90 with the standard COD method were obtained. This indicates that samples can be measured directly without the additional need for preprocessing by filtering. Samples with turbidity values higher than 150 NTU were found to produce poor correlations with the standard COD method, which makes them unsuitable for accurate, real-time, on-line monitoring of OD levels.

  19. Predictive Validity of the Body Adiposity Index in Overweight and Obese Adults Using Dual-Energy X-ray Absorptiometry

    PubMed Central

    Ramírez-Vélez, Robinson; Correa-Bautista, Jorge Enrique; González-Ruíz, Katherine; Vivas, Andrés; García-Hermoso, Antonio; Triana-Reina, Hector Reynaldo

    2016-01-01

    The body adiposity index (BAI) is a recent anthropometric measure proven to be valid in predicting body fat percentage (BF%) in some populations. However, the results have been inconsistent across populations. This study was designed to verify the validity of BAI in predicting BF% in a sample of overweight/obese adults, using dual-energy X-ray absorptiometry (DEXA) as the reference method. A cross-sectional study was conducted in 48 participants (54% women, mean age 41.0 ± 7.3 years old). DEXA was used as the “gold standard” to determine BF%. Pearson’s correlation coefficient was used to evaluate the association between BAI and BF%, as assessed by DEXA. A paired sample t-test was used to test differences in mean BF% obtained with BAI and DEXA methods. To evaluate the concordance between BF% as measured by DEXA and as estimated by BAI, we used Lin’s concordance correlation coefficient and Bland–Altman agreement analysis. The correlation between BF% obtained by DEXA and that estimated by BAI was r = 0.844, p < 0.001. Paired t-test showed a significant mean difference in BF% between methods (BAI = 33.3 ± 6.2 vs. DEXA 39.0 ± 6.1; p < 0.001). The bias of the BAI was −6.0 ± 3.0 BF% (95% CI = −12.0 to 1.0), indicating that the BAI method significantly underestimated the BF% compared to the reference method. Lin’s concordance correlation coefficient was considered stronger (ρc = 0.923, 95% CI = 0.862 to 0.957). In obese adults, BAI presented low agreement with BF% measured by DEXA; therefore, BAI is not recommended for BF% prediction in this overweight/obese sample studied. PMID:27916871
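    The quantities compared in the study above can be sketched as follows, assuming the usual BAI definition (hip circumference in cm divided by height in m raised to the power 1.5, minus 18) and Bland-Altman limits of agreement; the function names are illustrative:

```python
import math

def bai(hip_cm, height_m):
    """Body adiposity index: hip circumference (cm) / height (m)^1.5 - 18."""
    return hip_cm / height_m ** 1.5 - 18

def bland_altman_bias(method_a, method_b):
    """Mean difference (bias) and 95% limits of agreement between two methods."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

A negative bias of BAI relative to DEXA, as reported above, means the first argument systematically underestimates the second.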

  20. Air sampling procedures to evaluate microbial contamination: a comparison between active and passive methods in operating theatres.

    PubMed

    Napoli, Christian; Marcotrigiano, Vincenzo; Montagna, Maria Teresa

    2012-08-02

    Since air can play a central role as a reservoir for microorganisms, in controlled environments such as operating theatres regular microbial monitoring is useful to measure air quality and identify critical situations. The aim of this study is to assess microbial contamination levels in operating theatres using both an active and a passive sampling method, and then to assess whether there is a correlation between the results of the two sampling methods. The study was performed in 32 turbulent-air-flow operating theatres of a University Hospital in Southern Italy. Active sampling was carried out using the Surface Air System and passive sampling with settle plates, in accordance with ISO 14698. The Total Viable Count (TVC) was evaluated at rest (in the morning, before the beginning of surgical activity) and in operation (during surgery). The mean at-rest TVC was 12.4 CFU/m3 and 722.5 CFU/m2/h for active and passive sampling respectively. The mean in-operation TVC was 93.8 CFU/m3 (SD = 52.69; range = 22-256) and 10496.5 CFU/m2/h (SD = 7460.5; range = 1415.5-25479.7) for active and passive sampling respectively. Statistical analysis confirmed that the two methods correlate comparably with the quality of air. It is possible to conclude that both methods can be used for general monitoring of air contamination, such as in routine surveillance programs. However, the choice between one or the other must be made according to the specific information sought.
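    The two sampling indices compared above are reported in different units: CFU/m3 for active (impactor) sampling and CFU/m2/h for passive settle plates. A minimal sketch of the unit conversions, assuming a 9-cm settle plate; the function names are our own:

```python
import math

def settle_plate_cfu_m2_h(cfu_count, plate_diameter_cm=9.0, exposure_h=1.0):
    """Convert a settle-plate colony count to CFU/m^2/h (passive index)."""
    area_m2 = math.pi * (plate_diameter_cm / 200.0) ** 2  # cm diameter -> m radius
    return cfu_count / (area_m2 * exposure_h)

def active_cfu_m3(cfu_count, sampled_litres):
    """Convert an impactor colony count to CFU/m^3 (active sampling)."""
    return cfu_count * 1000.0 / sampled_litres
```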

  1. Using cross correlations to calibrate lensing source redshift distributions: Improving cosmological constraints from upcoming weak lensing surveys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De Putter, Roland; Doré, Olivier; Das, Sudeep

    2014-01-10

    Cross correlations between the galaxy number density in a lensing source sample and that in an overlapping spectroscopic sample can in principle be used to calibrate the lensing source redshift distribution. In this paper, we study in detail to what extent this cross-correlation method can mitigate the loss of cosmological information in upcoming weak lensing surveys (combined with a cosmic microwave background prior) due to lack of knowledge of the source distribution. We consider a scenario where photometric redshifts are available and find that, unless the photometric redshift distribution p(z_ph|z) is calibrated very accurately a priori (bias and scatter known to ∼0.002 for, e.g., EUCLID), the additional constraint on p(z_ph|z) from the cross-correlation technique to a large extent restores the cosmological information originally lost due to the uncertainty in dn/dz(z). Considering only the gain in photo-z accuracy and not the additional cosmological information, enhancements of the dark energy figure of merit of up to a factor of four (40) can be achieved for a SuMIRe-like (EUCLID-like) combination of lensing and redshift surveys, where SuMIRe stands for Subaru Measurement of Images and Redshifts. However, the success of the method is strongly sensitive to our knowledge of the galaxy bias evolution in the source sample, and we find that a percent-level bias prior is needed to optimize the gains from the cross-correlation method (i.e., to approach the cosmology constraints attainable if the bias were known exactly).

  2. Biochemical and nutritional components of selected honey samples.

    PubMed

    Chua, Lee Suan; Adnan, Nur Ardawati

    2014-01-01

    The purpose of this study was to investigate the relationship between biochemical (enzyme) and nutritional components in selected honey samples from Malaysia. This relationship is important for estimating the quality of honey based on the concentrations of these nutritious components. Few such studies exist for honey samples from tropical countries with heavy rainfall throughout the year. Six honey samples commonly consumed by local people were collected for the study. Both the biochemical and nutritional components were analysed using standard methods from the Association of Official Analytical Chemists (AOAC). Individual monosaccharides, disaccharides and 17 amino acids in honey were determined using a liquid chromatographic method. The results showed that the peroxide activity was positively correlated with moisture content (r = 0.8264), but negatively correlated with carbohydrate content (r = 0.7755) in honey. The chromatographic sugar and free amino acid profiles showed that the honey samples could be clustered based on the type and maturity of the honey. Proline accounted for 64.9% of the total variance in principal component analysis (PCA). The correlation between honey components and honey quality was established for the selected honey samples based on their biochemical and nutritional concentrations. PCA results revealed that the ratio of sucrose to maltose could be used to measure honey maturity, whereas proline was the marker compound used to distinguish honey as either floral or honeydew.
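    The share of total variance attributed to a component, as reported for proline above, comes from a standard PCA eigen-decomposition of the sample covariance matrix. A generic sketch (not the authors' analysis pipeline):

```python
import numpy as np

def pca_explained_variance(X):
    """Fraction of total variance carried by each principal component."""
    Xc = X - X.mean(axis=0)                  # center each variable
    cov = np.cov(Xc, rowvar=False)           # sample covariance matrix
    eigvals = np.linalg.eigvalsh(cov)[::-1]  # eigenvalues, descending order
    return eigvals / eigvals.sum()
```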

  3. Quantification of integrated HIV DNA by repetitive-sampling Alu-HIV PCR on the basis of Poisson statistics.

    PubMed

    De Spiegelaere, Ward; Malatinkova, Eva; Lynch, Lindsay; Van Nieuwerburgh, Filip; Messiaen, Peter; O'Doherty, Una; Vandekerckhove, Linos

    2014-06-01

    Quantification of integrated proviral HIV DNA by repetitive-sampling Alu-HIV PCR is a candidate virological tool to monitor the HIV reservoir in patients. However, the experimental procedures and data analysis of the assay are complex and hinder its widespread use. Here, we provide an improved and simplified data analysis method by adopting binomial and Poisson statistics. A modified analysis method on the basis of Poisson statistics was used to analyze the binomial data of positive and negative reactions from a 42-replicate Alu-HIV PCR by use of dilutions of an integration standard and on samples of 57 HIV-infected patients. Results were compared with the quantitative output of the previously described Alu-HIV PCR method. Poisson-based quantification of the Alu-HIV PCR was linearly correlated with the standard dilution series, indicating that absolute quantification with the Poisson method is a valid alternative for data analysis of repetitive-sampling Alu-HIV PCR data. Quantitative outputs of patient samples assessed by the Poisson method correlated with the previously described Alu-HIV PCR analysis, indicating that this method is a valid alternative for quantifying integrated HIV DNA. Poisson-based analysis of the Alu-HIV PCR data enables absolute quantification without the need of a standard dilution curve. Implementation of the CI estimation permits improved qualitative analysis of the data and provides a statistical basis for the required minimal number of technical replicates. © 2014 The American Association for Clinical Chemistry.
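    The Poisson-based quantification above rests on the relation P(negative reaction) = exp(-λ), so the mean number of integrated copies per reaction follows directly from the fraction of negative replicates. A minimal sketch (the function name is ours):

```python
import math

def poisson_copies_per_reaction(n_negative, n_total):
    """Mean target copies per reaction from the fraction of negative replicates.

    Under Poisson statistics, P(negative) = exp(-lambda), so
    lambda = -ln(n_negative / n_total).
    """
    if n_negative == 0 or n_negative == n_total:
        raise ValueError("all-positive or all-negative replicates are uninformative")
    return -math.log(n_negative / n_total)

# E.g. 14 negative wells out of 42 replicates:
lam = poisson_copies_per_reaction(14, 42)
```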

  4. Statistical Analysis of Large Scale Structure by the Discrete Wavelet Transform

    NASA Astrophysics Data System (ADS)

    Pando, Jesus

    1997-10-01

    The discrete wavelet transform (DWT) is developed as a general statistical tool for the study of large scale structures (LSS) in astrophysics. The DWT is used in all aspects of structure identification, including cluster analysis, spectrum and two-point correlation studies, scale-scale correlation analysis, and measuring deviations from Gaussian behavior. The techniques developed are demonstrated on 'academic' signals, on simulated models of the Lyman-α (Ly-α) forests, and on observational data of the Ly-α forests. This technique can detect clustering in the Ly-α clouds where traditional techniques such as the two-point correlation function have failed. The position and strength of these clusters in both real and simulated data are determined, and it is shown that clusters exist on scales of at least 20 h^{-1} Mpc at significance levels of 2-4 σ. Furthermore, it is found that the strength distribution of the clusters can be used to distinguish between real data and simulated samples even where other traditional methods have failed to detect differences. Second, a method for measuring the power spectrum of a density field using the DWT is developed. All common features determined by the usual Fourier power spectrum can be calculated by the DWT. These features, such as the index of a power law or typical scales, can be detected even when the samples are geometrically complex, the samples are incomplete, or the mean density on larger scales is not known (the infrared uncertainty). Using this method the spectra of Ly-α forests in both simulated and real samples are calculated. Third, a method for measuring hierarchical clustering is introduced. Because hierarchical evolution is characterized by a set of rules for how larger dark matter halos are formed by the merging of smaller halos, scale-scale correlations of the density field should be one of the most sensitive quantities in determining the merging history. 
We show that these correlations can be completely determined by the correlations between discrete wavelet coefficients on adjacent scales and at nearly the same spatial position, C^{2·2}_{j,j+1}. Scale-scale correlations on two samples of the QSO Ly-α forest absorption spectra are computed. Lastly, higher order statistics are developed to detect deviations from Gaussian behavior. These higher order statistics are necessary to fully characterize the Ly-α forests because the usual second-order statistics, such as the two-point correlation function or power spectrum, give inconclusive results. It is shown how this technique takes advantage of the locality of the DWT to circumvent the central limit theorem. A non-Gaussian spectrum is defined, and this spectrum reveals not only the magnitude but also the scales of non-Gaussianity. When applied to simulated and observational samples of the Ly-α clouds, it is found that different popular models of structure formation have different spectra while two independent observational data sets have the same spectra. Moreover, the non-Gaussian spectra of real data sets are significantly different from the spectra of various possible random samples. (Abstract shortened by UMI.)
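    A toy sketch of the ingredients described above: a Haar DWT computed level by level, and a normalized correlation of squared coefficients on adjacent scales, as a simplified stand-in for the C^{2·2}_{j,j+1} statistic (this is illustrative, not the thesis code; the signal length is assumed to be a power of two):

```python
import numpy as np

def haar_dwt_levels(signal):
    """Haar wavelet detail coefficients of a 1-D signal, finest scale first."""
    s = np.asarray(signal, dtype=float)  # assumes len(signal) is a power of two
    levels = []
    while len(s) > 1:
        levels.append((s[0::2] - s[1::2]) / np.sqrt(2.0))  # detail coefficients
        s = (s[0::2] + s[1::2]) / np.sqrt(2.0)             # smoothed signal
    return levels

def scale_scale_correlation(detail_fine, detail_coarse):
    """Normalized correlation of squared wavelet amplitudes on adjacent scales.

    Each coarse-scale mode covers two fine-scale positions, hence the repeat.
    Values near 1 indicate no scale-scale correlation.
    """
    fine_sq = np.asarray(detail_fine) ** 2
    coarse_sq = np.repeat(np.asarray(detail_coarse) ** 2, 2)
    return (len(fine_sq) * np.sum(fine_sq * coarse_sq)
            / (np.sum(fine_sq) * np.sum(coarse_sq)))
```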

  5. Comparison between Thermal Desorption Tubes and Stainless Steel Canisters Used for Measuring Volatile Organic Compounds in Petrochemical Factories.

    PubMed

    Chang, Cheng-Ping; Lin, Tser-Cheng; Lin, Yu-Wen; Hua, Yi-Chun; Chu, Wei-Ming; Lin, Tzu-Yu; Lin, Yi-Wen; Wu, Jyun-De

    2016-04-01

    The purpose of this study was to compare thermal desorption tubes and stainless steel canisters for measuring volatile organic compounds (VOCs) emitted from petrochemical factories. Twelve petrochemical factories in the Mailiao Industrial Complex were recruited for conducting the measurements of VOCs. Thermal desorption tubes and 6-l specially prepared stainless steel canisters were used to simultaneously perform active sampling of environmental air samples. The sampling time of the environmental air samples was set at 6 h, close to a full work shift of the workers. A total of 94 pairwise air samples were collected using the thermal desorption tubes and stainless steel canisters in these 12 factories in the petrochemical industrial complex. To maximize the number of comparative data points, all the measurements from all the factories at different sampling times were pooled to perform a linear regression analysis for each selected VOC. The Pearson product-moment correlation coefficient was used to examine the correlation between the pairwise measurements of these two sampling methods. A paired t-test was also performed to examine whether the difference in the concentrations of each selected VOC measured by the two methods was statistically significant. The correlation coefficients of seven compounds, including acetone, n-hexane, benzene, toluene, 1,2-dichloroethane, 1,3-butadiene, and styrene, were >0.80, indicating that the two sampling methods had high consistency for these VOCs' measurements. The paired t-tests for the measurements of n-hexane, benzene, m/p-xylene, o-xylene, 1,2-dichloroethane, and 1,3-butadiene showed statistically significant differences (P-value < 0.05), indicating that the two sampling methods had varying degrees of systematic error. For these six chemicals, the systematic errors probably resulted from differences in the detection limits of the two sampling methods for these VOCs. 
The comparison between the concentrations of each of the 10 selected VOCs measured by the two sampling methods indicated that the thermal desorption tubes provided high accuracy and precision measurements for acetone, benzene, and 1,3-butadiene. The accuracy and precision of using the thermal desorption tubes for measuring the VOCs can be improved due to new developments in sorbent materials, multi-sorbent designs, and thermal desorption instrumentation. More applications of thermal desorption tubes for measuring occupational and environmental hazardous agents can be anticipated. © The Author 2015. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.
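
The statistical comparison used in this study, a Pearson correlation for consistency plus a paired t-test for systematic difference, can be sketched as follows. The concentration values are hypothetical, and the paired t statistic is computed directly from its definition rather than with a stats library.

```python
import numpy as np

def compare_methods(a, b):
    """Pearson correlation plus a paired t statistic for two co-located
    measurement series (e.g. tube vs. canister concentrations)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    r = np.corrcoef(a, b)[0, 1]          # consistency between the two methods
    d = a - b                            # pairwise differences
    t = d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))  # paired t, df = n - 1
    return r, t

# hypothetical pairwise concentrations from the two samplers
tubes = [12.0, 8.5, 15.2, 30.1, 22.4, 9.8]
cans  = [11.1, 8.0, 14.0, 28.5, 20.9, 9.1]
r, t = compare_methods(tubes, cans)
```

A high r with a large |t| is exactly the pattern reported here for several VOCs: the methods track each other well but carry a systematic offset.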

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marshall, William BJ J; Rearden, Bradley T

    The validation of neutron transport methods used in nuclear criticality safety analyses is required by consensus American National Standards Institute/American Nuclear Society (ANSI/ANS) standards. In the last decade, there has been an increased interest in correlations among critical experiments used in validation that have shared physical attributes and which impact the independence of each measurement. The statistical methods included in many of the frequently cited guidance documents on performing validation calculations incorporate the assumption that all individual measurements are independent, so little guidance is available to practitioners on the topic. Typical guidance includes recommendations to select experiments from multiple facilities and experiment series in an attempt to minimize the impact of correlations or common-cause errors in experiments. Recent efforts have been made both to determine the magnitude of such correlations between experiments and to develop and apply methods for adjusting the bias and bias uncertainty to account for the correlations. This paper describes recent work performed at Oak Ridge National Laboratory using the Sampler sequence from the SCALE code system to develop experimental correlations using a Monte Carlo sampling technique. Sampler will be available for the first time with the release of SCALE 6.2, and a brief introduction to the methods used to calculate experiment correlations within this new sequence is presented in this paper. Techniques to utilize these correlations in the establishment of upper subcritical limits are the subject of a companion paper and will not be discussed here. Example experimental uncertainties and correlation coefficients are presented for a variety of low-enriched uranium water-moderated lattice experiments selected for use in a benchmark exercise by the Working Party on Nuclear Criticality Safety Subgroup on Uncertainty Analysis in Criticality Safety Analyses. 
The results include studies on the effect of fuel rod pitch on the correlations, and some observations are also made regarding difficulties in determining experimental correlations using the Monte Carlo sampling technique.

  7. Air sampling procedures to evaluate microbial contamination: a comparison between active and passive methods in operating theatres

    PubMed Central

    2012-01-01

    Background Since air can play a central role as a reservoir for microorganisms, in controlled environments such as operating theatres regular microbial monitoring is useful to measure air quality and identify critical situations. The aim of this study is to assess microbial contamination levels in operating theatres using both an active and a passive sampling method, and then to assess whether there is a correlation between the results of the two sampling methods. Methods The study was performed in 32 turbulent air flow operating theatres of a University Hospital in Southern Italy. Active sampling was carried out using the Surface Air System and passive sampling with settle plates, in accordance with ISO 14698. The Total Viable Count (TVC) was evaluated at rest (in the morning, before the beginning of surgical activity) and in operation (during surgery). Results The mean TVC at rest was 12.4 CFU/m3 and 722.5 CFU/m2/h for active and passive sampling respectively. The mean in-operation TVC was 93.8 CFU/m3 (SD = 52.69; range = 22-256) and 10496.5 CFU/m2/h (SD = 7460.5; range = 1415.5-25479.7) for active and passive sampling respectively. Statistical analysis confirmed that the two methods correlate in a comparable way with the quality of air. Conclusion It is possible to conclude that both methods can be used for general monitoring of air contamination, such as in routine surveillance programs. However, the choice of one method or the other must be made according to the specific information sought. PMID:22853006

  8. Random sampling and validation of covariance matrices of resonance parameters

    NASA Astrophysics Data System (ADS)

    Plevnik, Lucijan; Zerovnik, Gašper

    2017-09-01

    Analytically exact methods for random sampling of arbitrarily correlated parameters are presented. Emphasis is given, on the one hand, to possible inconsistencies in the covariance data, concentrating on positive semi-definiteness and on consistent sampling of correlated, inherently positive parameters, and, on the other hand, to optimizing the implementation of the methods themselves. The methods have been applied in the program ENDSAM, written in Fortran, which takes a file of a chosen isotope in ENDF-6 format from a nuclear data library and produces an arbitrary number of new ENDF-6 files in which the original values are replaced by random samples of the resonance parameters (in accordance with the corresponding covariance matrices). The source code for the program ENDSAM is available from the OECD/NEA Data Bank. The program works in the following steps: it reads resonance parameters and their covariance data from the nuclear data library, checks whether the covariance data are consistent, and produces random samples of the resonance parameters. The code has been validated with both realistic and artificial data to show that the produced samples are statistically consistent. Additionally, the code was used to validate covariance data in existing nuclear data libraries. A list of inconsistencies observed in the covariance data of resonance parameters in ENDF/B-VII.1, JEFF-3.2, and JENDL-4.0 is presented. For now, the work has been limited to resonance parameters; however, the methods presented are general and can in principle be extended to the sampling and validation of any nuclear data.
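
The core of such a sampling scheme, checking the covariance matrix for positive semi-definiteness and drawing correlated samples from a factorization, can be sketched as below. This is a generic numpy illustration, not ENDSAM's implementation (which additionally handles inherently positive parameters and the ENDF-6 file format); the tolerance for clipping numerical-noise eigenvalues is an assumption.

```python
import numpy as np

def sample_correlated(mean, cov, n, rng):
    """Draw n correlated normal samples with the given mean vector and covariance.
    The covariance is symmetrized, checked for positive semi-definiteness, and
    tiny negative eigenvalues (numerical noise) are clipped to zero."""
    cov = 0.5 * (cov + cov.T)
    w, v = np.linalg.eigh(cov)
    if w.min() < -1e-10 * abs(w).max():
        raise ValueError("covariance matrix is not positive semi-definite")
    L = v * np.sqrt(np.clip(w, 0.0, None))   # factor with cov = L @ L.T
    z = rng.standard_normal((n, len(mean)))
    return np.asarray(mean) + z @ L.T

rng = np.random.default_rng(1)
cov = np.array([[1.0, 0.8], [0.8, 1.0]])
samples = sample_correlated([10.0, 5.0], cov, 50_000, rng)
emp = np.cov(samples.T)   # empirical covariance should reproduce cov
```

Validating that `emp` reproduces the input covariance is the same kind of statistical-consistency check the abstract describes for the produced random files.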

  9. New definitions for cotton fiber maturity ratio

    USDA-ARS?s Scientific Manuscript database

    Cotton fiber maturity affects fiber physical, mechanical, and chemical properties, as well as the processability and qualities of yarn and fabrics. New definitions of cotton fiber maturity ratio are introduced. The influences of sampling, sample preparation, measurement method, and correlations am...

  10. Changes to Serum Sample Tube and Processing Methodology Does Not Cause Inter-Individual Variation in Automated Whole Serum N-Glycan Profiling in Health and Disease

    PubMed Central

    Shubhakar, Archana; Kalla, Rahul; Nimmo, Elaine R.; Fernandes, Daryl L.; Satsangi, Jack; Spencer, Daniel I. R.

    2015-01-01

    Introduction Serum N-glycans have been identified as putative biomarkers for numerous diseases. The impact of different serum sample tubes and processing methods on N-glycan analysis has received relatively little attention. This study aimed to determine the effect of different sample tubes and processing methods on the whole serum N-glycan profile in both health and disease. A secondary objective was to describe a robot automated N-glycan release, labeling and cleanup process for use in a biomarker discovery system. Methods 25 patients with active and quiescent inflammatory bowel disease and controls had three different serum sample tubes taken at the same draw. Two different processing methods were used for three types of tube (with and without gel-separation medium). Samples were randomised and processed in a blinded fashion. Whole serum N-glycan release, 2-aminobenzamide labeling and cleanup was automated using a Hamilton Microlab STARlet Liquid Handling robot. Samples were analysed using a hydrophilic interaction liquid chromatography/ethylene bridged hybrid (BEH) column on an ultra-high performance liquid chromatography instrument. Data were analysed quantitatively by pairwise correlation and hierarchical clustering using the area under each chromatogram peak. Qualitatively, a blinded assessor attempted to match chromatograms to each individual. Results There was small intra-individual variation in serum N-glycan profiles from samples collected using different sample processing methods. Intra-individual correlation coefficients were between 0.99 and 1. Unsupervised hierarchical clustering and principal coordinate analyses accurately matched samples from the same individual. Qualitative analysis demonstrated good chromatogram overlay and a blinded assessor was able to accurately match individuals based on chromatogram profile, regardless of disease status. 
Conclusions The three different serum sample tubes processed using the described methods cause minimal inter-individual variation in serum whole N-glycan profile when processed using an automated workstream. This has important implications for N-glycan biomarker discovery studies using different serum processing standard operating procedures. PMID:25831126

  11. Prevalence and Predictors of Sexual Assault among a College Sample

    ERIC Educational Resources Information Center

    Conley, A. H.; Overstreet, C. M.; Hawn, S. E.; Kendler, K. S.; Dick, D. M.; Amstadter, A. B.

    2017-01-01

    Objective: This study examined the prevalence and correlates of precollege, college-onset, and repeat sexual assault (SA) within a representative student sample. Participants: A representative sample of 7,603 students. Methods: Incoming first-year students completed a survey about their exposure to broad SA prior to college, prior trauma,…

  12. Correlation between Serum Levels of 3,3',5'-Triiodothyronine and Thyroid Hormones Measured by Liquid Chromatography-Tandem Mass Spectrometry and Immunoassay.

    PubMed

    Sakai, Hiroyuki; Nagao, Hidenori; Sakurai, Mamoru; Okumura, Takako; Nagai, Yoshiyuki; Shikuma, Junpei; Ito, Rokuro; Imazu, Tetsuya; Miwa, Takashi; Odawara, Masato

    2015-01-01

    For measuring serum 3,3',5'-triiodothyronine (rT3) levels, radioimmunoassay (RIA) has traditionally been used owing to the lack of other reliable methods; however, it has recently become difficult to perform. Meanwhile, liquid chromatography-tandem mass spectrometry (LC-MS/MS) has recently been attracting attention as a novel alternative method in clinical chemistry. To the best of our knowledge, there are no studies to date comparing results of the quantification of human serum rT3 between LC-MS/MS and RIA. We therefore examined the feasibility of LC-MS/MS as a novel alternative method for measuring serum rT3, thyroxine (T4), and 3,5,3'-triiodothyronine (T3) levels. Assay validation was performed by LC-MS/MS using quality control samples of rT3, T4, and T3 at 4 different concentrations, prepared from reference compounds. Serum samples of 50 outpatients in our department were quantified both by LC-MS/MS and conventional immunoassay for rT3, T4, and T3. Correlation coefficients between the 2 measurement methods were statistically analyzed for each analyte. Matrix effects were not observed with our method. Intra-day and inter-day precisions were less than 10.8% and 9.6% for each analyte at each quality control level, respectively. Intra-day and inter-day accuracies were between 96.2% and 110%, and between 98.3% and 108.6%, respectively. The lower limit of quantification was 0.05 ng/mL. Strong correlations were observed between the 2 measurement methods (correlation coefficient, T4: 0.976, p < 0.001; T3: 0.912, p < 0.001; rT3: 0.928, p < 0.001). Our LC-MS/MS system requires no manual cleanup operation, and the process after application of a sample is fully automated; furthermore, it was found to be highly sensitive, and superior in both precision and accuracy. The correlation between the 2 methods over a wide range of concentrations was strong. LC-MS/MS is therefore expected to become a useful tool for clinical diagnosis and research.

  13. Comparison of high-pressure liquid chromatography (HPLC) and Griess reagent-spectroscopic methods for the measurement of nitrate in serum from healthy individuals in the Nordic countries.

    PubMed

    Larsen, Tine Lise; Nilsen, Valentina; Andersen, Dag Olav; Francis, George; Rustad, Pål; Mansoor, Mohammad Azam

    2008-12-01

    Bioavailability of NO can be estimated by measuring the concentration of nitrate (NO(3)) in serum. However, the methods used for the measurement of NO(3) in plasma or serum show a great degree of variation. Therefore, we compared two analytical methods for the measurement of NO(3) in serum. The concentration of NO(3) in 600 serum samples collected from healthy individuals was determined by HPLC and by the Griess reagent-spectroscopic method. The concentration of NO(3) in the samples was 29.4+/-16.1 micromol/L and 26.2+/-14.0 micromol/L (mean+/-SD) as measured by the HPLC and Griess reagent-spectroscopic methods, respectively (p<0.0001). We detected a significant correlation between the two methods (R=0.81, p<0.0001). This significant correlation may suggest that either method can be used for the measurement of NO(3) in serum; however, the Griess reagent-spectroscopic method measures lower concentrations of NO(3) than the HPLC method.

  14. The Infinitesimal Jackknife with Exploratory Factor Analysis

    ERIC Educational Resources Information Center

    Zhang, Guangjian; Preacher, Kristopher J.; Jennrich, Robert I.

    2012-01-01

    The infinitesimal jackknife, a nonparametric method for estimating standard errors, has been used to obtain standard error estimates in covariance structure analysis. In this article, we adapt it for obtaining standard errors for rotated factor loadings and factor correlations in exploratory factor analysis with sample correlation matrices. Both…

  15. Comparison of fine particle measurements from a direct-reading instrument and a gravimetric sampling method.

    PubMed

    Kim, Jee Young; Magari, Shannon R; Herrick, Robert F; Smith, Thomas J; Christiani, David C

    2004-11-01

    Particulate air pollution, specifically the fine particle fraction (PM2.5), has been associated with increased cardiopulmonary morbidity and mortality in general population studies. Occupational exposure to fine particulate matter can exceed ambient levels by a large factor. Due to increased interest in the health effects of particulate matter, many particle sampling methods have been developed. In this study, two such measurement methods were used simultaneously and compared. PM2.5 was sampled using a filter-based gravimetric sampling method and a direct-reading instrument, the TSI Inc. model 8520 DUSTTRAK aerosol monitor. Both sampling methods were used to determine the PM2.5 exposure in a group of boilermakers exposed to welding fumes and residual fuel oil ash. The geometric mean PM2.5 concentration was 0.30 mg/m3 (GSD 3.25) and 0.31 mg/m3 (GSD 2.90) from the DUSTTRAK and gravimetric method, respectively. The Spearman rank correlation coefficient for the gravimetric and DUSTTRAK PM2.5 concentrations was 0.68. Linear regression models indicated that log(e) DUSTTRAK PM2.5 concentrations significantly predicted log(e) gravimetric PM2.5 concentrations (p < 0.01). The association between log(e) DUSTTRAK and log(e) gravimetric PM2.5 concentrations was found to be modified by surrogate measures for seasonal variation and type of aerosol. PM2.5 measurements from the DUSTTRAK are well correlated with and highly predictive of measurements from the gravimetric sampling method for the aerosols in these work environments. However, results from this study suggest that aerosol particle characteristics may affect the relationship between the gravimetric and DUSTTRAK PM2.5 measurements. Recalibration of the DUSTTRAK for the specific aerosol, as recommended by the manufacturer, may be necessary to produce valid measures of airborne particulate matter.

  16. Denoising Algorithm for CFA Image Sensors Considering Inter-Channel Correlation.

    PubMed

    Lee, Min Seok; Park, Sang Wook; Kang, Moon Gi

    2017-05-28

    In this paper, a spatio-spectral-temporal filter considering an inter-channel correlation is proposed for the denoising of a color filter array (CFA) sequence acquired by CCD/CMOS image sensors. Owing to the alternating under-sampled grid of the CFA pattern, the inter-channel correlation must be considered in the direct denoising process. The proposed filter is applied in the spatial, spectral, and temporal domain, considering the spatio-spectral-temporal correlation. First, nonlocal means (NLM) spatial filtering with patch-based difference (PBD) refinement is performed by considering both the intra-channel correlation and inter-channel correlation to overcome the spatial resolution degradation occurring with the alternating under-sampled pattern. Second, a motion-compensated temporal filter that employs inter-channel correlated motion estimation and compensation is proposed to remove the noise in the temporal domain. Then, a motion adaptive detection value controls the ratio of the spatial filter and the temporal filter. The denoised CFA sequence can thus be obtained without motion artifacts. Experimental results for both simulated and real CFA sequences are presented with visual and numerical comparisons to several state-of-the-art denoising methods combined with a demosaicing method. Experimental results confirmed that the proposed frameworks outperformed the other techniques in terms of the objective criteria and subjective visual perception in CFA sequences.

  17. Cluster designs to assess the prevalence of acute malnutrition by lot quality assurance sampling: a validation study by computer simulation.

    PubMed

    Olives, Casey; Pagano, Marcello; Deitchler, Megan; Hedt, Bethany L; Egge, Kari; Valadez, Joseph J

    2009-04-01

    Traditional lot quality assurance sampling (LQAS) methods require simple random sampling to guarantee valid results. However, cluster sampling has been proposed to reduce the number of random starting points. This study uses simulations to examine the classification error of two such designs, a 67x3 (67 clusters of three observations) and a 33x6 (33 clusters of six observations) sampling scheme to assess the prevalence of global acute malnutrition (GAM). Further, we explore the use of a 67x3 sequential sampling scheme for LQAS classification of GAM prevalence. Results indicate that, for independent clusters with moderate intracluster correlation for the GAM outcome, the three sampling designs maintain approximate validity for LQAS analysis. Sequential sampling can substantially reduce the average sample size that is required for data collection. The presence of intercluster correlation can impact dramatically the classification error that is associated with LQAS analysis.
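
A hedged sketch of the kind of computer simulation described: within-cluster correlation is induced with a beta-binomial model, and a simple decision rule is applied to the summed positives. The decision value `d` and the ICC are illustrative assumptions, not the study's actual parameters.

```python
import numpy as np

def lqas_exceed_rate(p_true, clusters, per_cluster, d, icc, n_sims, rng):
    """Fraction of simulated cluster surveys whose total positive count exceeds
    the decision value d, under a beta-binomial model with intracluster
    correlation icc (cluster prevalences drawn from a beta distribution)."""
    s = 1.0 / icc - 1.0                                   # beta concentration from the ICC
    p_c = rng.beta(p_true * s, (1.0 - p_true) * s,
                   size=(n_sims, clusters))               # per-cluster prevalence
    positives = rng.binomial(per_cluster, p_c).sum(axis=1)
    return (positives > d).mean()

rng = np.random.default_rng(7)
# hypothetical 67x3 design with decision value d = 7
hi = lqas_exceed_rate(0.10, 67, 3, d=7, icc=0.05, n_sims=20_000, rng=rng)
lo = lqas_exceed_rate(0.02, 67, 3, d=7, icc=0.05, n_sims=20_000, rng=rng)
```

Comparing `hi` and `lo` across candidate designs is how the classification errors (misclassifying a high-prevalence area as low, and vice versa) are estimated; raising the ICC widens the spread of `positives` and inflates both error rates.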

  18. Application of selected methods of remote sensing for detecting carbonaceous water pollution

    NASA Technical Reports Server (NTRS)

    Davis, E. M.; Fosbury, W. J.

    1973-01-01

    A reach of the Houston Ship Channel was investigated during three separate overflights correlated with ground truth sampling on the Channel. Samples were analyzed for such conventional parameters as biochemical oxygen demand, chemical oxygen demand, total organic carbon, total inorganic carbon, turbidity, chlorophyll, pH, temperature, dissolved oxygen, and light penetration. Infrared analyses conducted on each sample included reflectance ATR analysis, carbon tetrachloride extraction of organics and subsequent scanning, and KBr evaporate analysis of CCl4 extract concentrate. Imagery which was correlated with field and laboratory data developed from ground truth sampling included that obtained from aerial KA62 hardware, RC-8 metric camera systems, and the RS-14 infrared scanner. The images were subjected to analysis by three film density gradient interpretation units. Data were then analyzed for correlations between imagery interpretation as derived from the three instruments and laboratory infrared signatures and other pertinent field and laboratory analyses.

  19. Rapid detection of Escherichia coli and enterococci in recreational water using an immunomagnetic separation/adenosine triphosphate technique

    USGS Publications Warehouse

    Bushon, R.N.; Brady, A.M.; Likirdopulos, C.A.; Cireddu, J.V.

    2009-01-01

    Aims: The aim of this study was to examine a rapid method for detecting Escherichia coli and enterococci in recreational water. Methods and Results: Water samples were assayed for E. coli and enterococci by traditional and immunomagnetic separation/adenosine triphosphate (IMS/ATP) methods. Three sample treatments were evaluated for the IMS/ATP method: double filtration, single filtration, and direct analysis. Pearson's correlation analysis showed strong, significant, linear relations between IMS/ATP and traditional methods for all sample treatments; strongest linear correlations were with the direct analysis (r = 0.62 and 0.77 for E. coli and enterococci, respectively). Additionally, simple linear regression was used to estimate bacteria concentrations as a function of IMS/ATP results. The correct classification of water-quality criteria was 67% for E. coli and 80% for enterococci. Conclusions: The IMS/ATP method is a viable alternative to traditional methods for faecal-indicator bacteria. Significance and Impact of the Study: The IMS/ATP method addresses critical public health needs for the rapid detection of faecal-indicator contamination and has potential for satisfying US legislative mandates requiring methods to detect bathing water contamination in 2 h or less. Moreover, IMS/ATP equipment is considerably less costly and more portable than that for molecular methods, making the method suitable for field applications. ?? 2009 The Authors.

  20. Windowed cross-correlation and peak picking for the analysis of variability in the association between behavioral time series.

    PubMed

    Boker, Steven M; Xu, Minquan; Rotondo, Jennifer L; King, Kadijah

    2002-09-01

    Cross-correlation and most other longitudinal analyses assume that the association between 2 variables is stationary. Thus, a sample of occasions of measurement is expected to be representative of the association between variables regardless of the time of onset or number of occasions in the sample. The authors propose a method to analyze the association between 2 variables when the assumption of stationarity may not be warranted. The method results in estimates of both the strength of peak association and the time lag when the peak association occurred for a range of starting values of elapsed time from the beginning of an experiment.
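
The windowed cross-correlation with peak picking can be sketched as follows. The window length, step, and lag range are illustrative assumptions, and only non-negative lags are scanned for brevity (the method itself considers both directions of lead and lag).

```python
import numpy as np

def windowed_xcorr_peaks(x, y, win, step, max_lag):
    """For each successive window start, compute the correlation of x and y at
    a range of lags and record the peak correlation and the lag where it
    occurs, without assuming the x-y association is stationary."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    peaks = []
    for start in range(0, len(x) - win - max_lag + 1, step):
        xs = x[start:start + win]
        best = max(
            (np.corrcoef(xs, y[start + lag:start + lag + win])[0, 1], lag)
            for lag in range(max_lag + 1)
        )
        peaks.append(best)  # (peak correlation, lag at which it occurred)
    return peaks

# toy behavioral series in which y trails x by 5 samples
rng = np.random.default_rng(3)
x = rng.standard_normal(300)
y = np.roll(x, 5) + 0.1 * rng.standard_normal(300)
result = windowed_xcorr_peaks(x, y, win=50, step=25, max_lag=10)
```

Plotting peak strength and peak lag against window start time is what reveals non-stationarity: a drifting peak lag indicates that the lead-lag relationship between the two behavioral series changes over the course of the experiment.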

  1. Ring trial among National Reference Laboratories for parasites to detect Trichinella spiralis larvae in pork samples according to the EU directive 2075/2005.

    PubMed

    Marucci, Gianluca; Pezzotti, Patrizio; Pozio, Edoardo

    2009-02-23

    To control Trichinella spp. infection in the European Union, all slaughtered pigs should be tested by one of the approved digestion methods described in EU directive 2075/2005. The aim of the present work was to evaluate, by a ring trial, the sensitivity of the digestion method used at the National Reference Laboratories for Parasites (NRLP). These laboratories are responsible for the quality of the detection method in their own country. Of the 27 EU countries, only three (Hungary, Luxembourg and Malta) did not participate in the ring trial. Each participating laboratory received 10 samples of 100 g of minced pork containing 3-5 larvae (3 samples), 10-20 larvae (3 samples), 30-50 larvae (3 samples), and one negative control. Each positive sample contained living Trichinella spiralis larvae without the collagen capsule, obtained by partial artificial digestion of muscle tissue from infected mice. No false positive samples were found in any laboratory, whereas nine laboratories (37.5%) failed to detect some positive samples, with the percentage of false negatives ranging from 11 to 100%. The variation between expected and reported larval counts observed among the participating laboratories was statistically significant. There was a direct correlation between the consistency of the results and the use of a validated/accredited digestion method. Conversely, there was no correlation between the consistency of the results and the number of digestions performed yearly by the NRLP. These results support the importance of validating the test.

  2. Space shuttle nonmetallic materials age life prediction

    NASA Technical Reports Server (NTRS)

    Mendenhall, G. D.; Hassell, J. A.; Nathan, R. A.

    1975-01-01

    The chemiluminescence from samples of polybutadiene, Viton, Teflon, Silicone, PL 731 Adhesive, and SP 296 Boron-Epoxy composite was measured at temperatures from 25 to 150 C. Excellent correlations were obtained between chemiluminescence and temperature. These correlations serve to validate accelerated aging tests (at elevated temperatures) designed to predict service life at lower temperatures. In most cases, smooth or linear correlations were obtained between chemiluminescence and physical properties of purified polymer gums, including the tensile strength, viscosity, and loss tangent. The latter is a complex function of certain polymer properties. Data were obtained with far greater ease by the chemiluminescence technique than by the conventional methods of study. The chemiluminescence from the Teflon (Halon) samples was discovered to arise from trace amounts of impurities, which were undetectable by conventional, destructive analysis of the sample.

  3. Whole Brain Size and General Mental Ability: A Review

    PubMed Central

    Rushton, J. Philippe; Ankney, C. Davison

    2009-01-01

    We review the literature on the relation between whole brain size and general mental ability (GMA) both within and between species. Among humans, in 28 samples using brain imaging techniques, the mean brain size/GMA correlation is 0.40 (N = 1,389; p < 10−10); in 59 samples using external head size measures it is 0.20 (N = 63,405; p < 10−10). In 6 samples using the method of correlated vectors to distill g, the general factor of mental ability, the mean r is 0.63. We also describe the brain size/GMA correlations with age, socioeconomic position, sex, and ancestral population groups, which also provide information about brain–behavior relationships. Finally, we examine brain size and mental ability from an evolutionary and behavior genetic perspective. PMID:19283594

  4. Development and evaluation of an enzyme-linked immunosorbent assay (ELISA) method for the measurement of 2,4-dichlorophenoxyacetic acid in human urine.

    PubMed

    Chuang, Jane C; Emon, Jeanette M Van; Durnford, Joyce; Thomas, Kent

    2005-09-15

    An enzyme-linked immunosorbent assay (ELISA) method was developed to quantitatively measure 2,4-dichlorophenoxyacetic acid (2,4-D) in human urine. Samples were diluted (1:5) with phosphate-buffered saline containing 0.05% Tween and 0.02% sodium azide, with analysis by a 96-microwell plate immunoassay format. No clean-up was required, as the dilution step minimized sample interferences. Fifty urine samples were received without identifiers from a subset of pesticide applicators and their spouses in an EPA pesticide exposure study (PES) and analyzed by the ELISA method and a conventional gas chromatography/mass spectrometry (GC/MS) procedure. For the GC/MS analysis, urine samples were extracted with acidic dichloromethane (DCM), methylated with diazomethane, and fractionated on a Florisil solid phase extraction (SPE) column prior to GC/MS detection. The percent relative standard deviation (%R.S.D.) of the 96-microwell plate triplicate assays ranged from 1.2 to 22% for the urine samples. Day-to-day variation of the assay results was within +/-20%. Quantitative recoveries (>70%) of 2,4-D were obtained for the spiked urine samples by the ELISA method. Quantitative recoveries (>80%) of 2,4-D were also obtained for these samples by the GC/MS procedure. The overall method precision for these samples was within +/-20% for both the ELISA and GC/MS methods. The estimated quantification limit for 2,4-D in urine was 30 ng/mL by ELISA and 0.2 ng/mL by GC/MS. The higher quantification limit of the ELISA method is partly due to the required 1:5 dilution to remove the urine sample matrix effect. The GC/MS method can accommodate a 10:1 concentration factor (10 mL of urine converted into 1 mL of organic solvent for analysis) but requires extraction, methylation, and clean-up on a solid phase column. The immunoassay and GC/MS data were highly correlated, with a correlation coefficient of 0.94 and a slope of 1.00. 
Favorable results between the two methods were achieved despite the vast differences in sample preparation. Results indicated that the ELISA method could be used as a high throughput, quantitative monitoring tool for human urine samples to identify individuals with exposure to 2,4-D above the typical background levels.

  5. Harmonic-phase path-integral approximation of thermal quantum correlation functions

    NASA Astrophysics Data System (ADS)

    Robertson, Christopher; Habershon, Scott

    2018-03-01

    We present an approximation to the thermal symmetric form of the quantum time-correlation function in the standard position path-integral representation. By transforming to a sum-and-difference position representation and then Taylor-expanding the potential energy surface of the system to second order, the resulting expression provides a harmonic weighting function that approximately recovers the contribution of the phase to the time-correlation function. This method is readily implemented in a Monte Carlo sampling scheme and provides exact results for harmonic potentials (for both linear and non-linear operators) and near-quantitative results for anharmonic systems for low temperatures and times that are likely to be relevant to condensed phase experiments. This article focuses on one-dimensional examples to provide insights into convergence and sampling properties, and we also discuss how this approximation method may be extended to many-dimensional systems.
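The key step described above can be sketched in equations (an illustrative reconstruction; the paper's exact notation is not given in the abstract). In sum-and-difference coordinates $\bar{x} = (x + x')/2$, $\Delta = x - x'$, a second-order Taylor expansion of the potential about $\bar{x}$ gives, for the phase difference,

```latex
V\!\left(\bar{x} + \tfrac{\Delta}{2}\right) - V\!\left(\bar{x} - \tfrac{\Delta}{2}\right)
\;\approx\; V'(\bar{x})\,\Delta ,
```

so the oscillatory phase becomes linear in the difference coordinates and, combined with the Gaussian kinetic-energy factors of the path integral, can be absorbed into a harmonic (Gaussian) weighting function suitable for Monte Carlo sampling. Even-order terms cancel in the difference, so the neglected corrections are $O(\Delta^3)$, which is consistent with the method being exact for harmonic potentials.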

  6. A comparison of maximum likelihood and other estimators of eigenvalues from several correlated Monte Carlo samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beer, M.

    1980-12-01

The maximum likelihood method for the multivariate normal distribution is applied to the case of several individual eigenvalues. Correlated Monte Carlo estimates of the eigenvalue are assumed to follow this prescription, and aspects of the assumption are examined. Monte Carlo cell calculations using the SAM-CE and VIM codes for the TRX-1 and TRX-2 benchmark reactors, and SAM-CE full core results, are analyzed with this method. Variance reductions of a few percent to a factor of 2 are obtained from maximum likelihood estimation as compared with the simple average and the minimum variance individual eigenvalue. The numerical results verify that the use of sample variances and correlation coefficients in place of the corresponding population statistics still leads to nearly minimum variance estimation for a sufficient number of histories and aggregates.
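The minimum-variance (maximum likelihood under multivariate normality) combination of correlated estimates can be sketched as follows; the eigenvalues, standard deviations, and common correlation below are invented for illustration, not the paper's SAM-CE/VIM results:

```python
import numpy as np

# Hypothetical correlated eigenvalue estimates from three Monte Carlo runs
x = np.array([1.002, 0.998, 1.004])     # individual eigenvalue estimates
sig = np.array([0.004, 0.003, 0.005])   # their standard deviations
rho = 0.6                               # assumed common correlation

# Covariance matrix of the estimates
C = rho * np.outer(sig, sig)
np.fill_diagonal(C, sig ** 2)

# Maximum-likelihood (minimum-variance) weights for a common mean:
#   w = C^{-1} 1 / (1' C^{-1} 1)
ones = np.ones_like(x)
Cinv1 = np.linalg.solve(C, ones)
w = Cinv1 / (ones @ Cinv1)

mu = w @ x                   # combined eigenvalue estimate
var = 1.0 / (ones @ Cinv1)   # its variance
```

With strong correlations the weights can turn negative, but the combined variance never exceeds that of the best individual estimate, which is the source of the variance reductions reported above.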

  7. A geostatistical state-space model of animal densities for stream networks.

    PubMed

    Hocking, Daniel J; Thorson, James T; O'Neil, Kyle; Letcher, Benjamin H

    2018-06-21

Population dynamics are often correlated in space and time due to correlations in environmental drivers as well as synchrony induced by individual dispersal. Many statistical analyses of populations ignore potential autocorrelations and assume that survey methods (distance and time between samples) eliminate these correlations, allowing samples to be treated independently. If these assumptions are incorrect, results and therefore inference may be biased and uncertainty under-estimated. We developed a novel statistical method to account for spatio-temporal correlations within dendritic stream networks, while accounting for imperfect detection in the surveys. Through simulations, we found this model decreased predictive error relative to standard statistical methods when data were spatially correlated based on stream distance and performed similarly when data were not correlated. We found that increasing the number of years surveyed substantially improved the model accuracy when estimating spatial and temporal correlation coefficients, especially from 10 to 15 years. Increasing the number of survey sites within the network improved the performance of the non-spatial model but only marginally improved the density estimates in the spatio-temporal model. We applied this model to Brook Trout data from the West Susquehanna Watershed in Pennsylvania collected over 34 years from 1981-2014. We found the model including temporal and spatio-temporal autocorrelation best described young-of-the-year (YOY) and adult density patterns. YOY densities were positively related to forest cover and negatively related to spring temperatures with low temporal autocorrelation and moderately high spatio-temporal correlation. Adult densities were less strongly affected by climatic conditions and less temporally variable than YOY but with similar spatio-temporal correlation and higher temporal autocorrelation. This article is protected by copyright. All rights reserved.

  8. Temporal Correlations and Neural Spike Train Entropy

    NASA Astrophysics Data System (ADS)

    Schultz, Simon R.; Panzeri, Stefano

    2001-06-01

Sampling considerations limit the experimental conditions under which information theoretic analyses of neurophysiological data yield reliable results. We develop a procedure for computing the full temporal entropy and information of ensembles of neural spike trains, which performs reliably for limited samples of data. This approach also yields insight into the role of correlations between spikes in temporal coding mechanisms. The method, when applied to recordings from complex cells of the monkey primary visual cortex, results in lower rms error information estimates in comparison to a ``brute force'' approach.
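The "brute force" baseline the abstract compares against is the plug-in (maximum likelihood) entropy estimate over observed spike words. A toy sketch with assumed independent firing probabilities (not real recordings):

```python
import math
from collections import Counter

import numpy as np

# Toy ensemble of binary spike words: 500 trials, 4 time bins with
# assumed independent firing probabilities per bin.
rng = np.random.default_rng(2)
p = np.array([0.5, 0.3, 0.2, 0.1])
trains = (rng.random((500, 4)) < p).astype(int)

# Plug-in estimate of the full temporal entropy (bits):
# tabulate word frequencies, then apply H = -sum p log2 p.
words = Counter(map(tuple, trains))
n = trains.shape[0]
H = -sum((c / n) * math.log2(c / n) for c in words.values())
```

The plug-in estimate is biased downward for limited samples (here the true entropy is about 3.07 bits); that finite-sampling bias is exactly the problem the abstract's procedure is designed to control.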

  9. Down-Regulation of Olfactory Receptors in Response to Traumatic Brain Injury Promotes Risk for Alzheimers Disease

    DTIC Science & Technology

    2015-12-01

group assignment of samples in unsupervised hierarchical clustering by the Unweighted Pair-Group Method using Arithmetic averages (UPGMA) based on ... log2-transformed MAS5.0 signal values; probe set clustering was performed by the UPGMA method using cosine correlation as the similarity metric. For ... differentially-regulated genes identified were subjected to unsupervised hierarchical clustering analysis using the UPGMA algorithm with cosine correlation as

  10. Population-based validation of a German version of the Brief Resilience Scale

    PubMed Central

    Wenzel, Mario; Stieglitz, Rolf-Dieter; Kunzler, Angela; Bagusat, Christiana; Helmreich, Isabella; Gerlicher, Anna; Kampa, Miriam; Kubiak, Thomas; Kalisch, Raffael; Lieb, Klaus; Tüscher, Oliver

    2018-01-01

    Smith and colleagues developed the Brief Resilience Scale (BRS) to assess the individual ability to recover from stress despite significant adversity. This study aimed to validate the German version of the BRS. We used data from a population-based (sample 1: n = 1,481) and a representative (sample 2: n = 1,128) sample of participants from the German general population (age ≥ 18) to assess reliability and validity. Confirmatory factor analyses (CFA) were conducted to compare one- and two-factorial models from previous studies with a method-factor model which especially accounts for the wording of the items. Reliability was analyzed. Convergent validity was measured by correlating BRS scores with mental health measures, coping, social support, and optimism. Reliability was good (α = .85, ω = .85 for both samples). The method-factor model showed excellent model fit (sample 1: χ2/df = 7.544; RMSEA = .07; CFI = .99; SRMR = .02; sample 2: χ2/df = 1.166; RMSEA = .01; CFI = 1.00; SRMR = .01) which was significantly better than the one-factor model (Δχ2(4) = 172.71, p < .001) or the two-factor model (Δχ2(3) = 31.16, p < .001). The BRS was positively correlated with well-being, social support, optimism, and the coping strategies active coping, positive reframing, acceptance, and humor. It was negatively correlated with somatic symptoms, anxiety and insomnia, social dysfunction, depression, and the coping strategies religion, denial, venting, substance use, and self-blame. To conclude, our results provide evidence for the reliability and validity of the German adaptation of the BRS as well as the unidimensional structure of the scale once method effects are accounted for. PMID:29438435
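The nested-model comparisons reported above are chi-square difference tests. A quick check of the Δχ2(4) = 172.71 figure, using the closed-form chi-square tail for even degrees of freedom (no external libraries assumed):

```python
import math

def chi2_sf(x, df):
    """Right-tail probability of the chi-square distribution for even df,
    via the closed form exp(-x/2) * sum_{j < df/2} (x/2)^j / j!."""
    if df % 2 != 0:
        raise ValueError("closed form valid for even df only")
    h = x / 2.0
    return math.exp(-h) * sum(h ** j / math.factorial(j) for j in range(df // 2))

# Method-factor model vs one-factor model: delta-chi2(4) = 172.71
p = chi2_sf(172.71, 4)
```

The result is far below .001, matching the reported significance. For odd df (e.g. the Δχ2(3) comparison) one would instead call a library routine such as `scipy.stats.chi2.sf`.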

  11. Prevalence and Correlates of Self-Injury among University Students

    ERIC Educational Resources Information Center

    Gollust, Sarah Elizabeth; Eisenberg, Daniel; Golberstein, Ezra

    2008-01-01

    Objective: The authors' purpose in this research was to establish estimates of the prevalence and correlates of nonsuicidal self-injury among university students. Participants: The authors recruited participants (N = 2, 843) from a random sample of 5, 021 undergraduate and graduate students attending a large midwestern public university. Methods:…

  12. Correlates of AUDIT Risk Status for Male and Female College Students

    ERIC Educational Resources Information Center

    DeMartini, Kelly S.; Carey, Kate B.

    2009-01-01

    Objective: The current study identified gender-specific correlates of hazardous drinker status as defined by the AUDIT. Participants: A total of 462 college student volunteers completed the study in 2006. The sample was predominantly Caucasian (75%) and female (55%). Methods: Participants completed a survey assessing demographics, alcohol use…

  13. Determination of rivaroxaban in patient’s plasma samples by anti-Xa chromogenic test associated to High Performance Liquid Chromatography tandem Mass Spectrometry (HPLC-MS/MS)

    PubMed Central

    Derogis, Priscilla Bento Matos; Sanches, Livia Rentas; de Aranda, Valdir Fernandes; Colombini, Marjorie Paris; Mangueira, Cristóvão Luis Pitangueira; Katz, Marcelo; Faulhaber, Adriana Caschera Leme; Mendes, Claudio Ernesto Albers; Ferreira, Carlos Eduardo dos Santos; França, Carolina Nunes; Guerra, João Carlos de Campos

    2017-01-01

Rivaroxaban is an oral direct factor Xa inhibitor, therapeutically indicated in the treatment of thromboembolic diseases. As with other new oral anticoagulants, routine monitoring of rivaroxaban is not necessary, but it is important in some clinical circumstances. In our study a high-performance liquid chromatography-tandem mass spectrometry (HPLC-MS/MS) method was validated to measure rivaroxaban plasma concentration. Our method used simple sample preparation (protein precipitation) and a fast chromatographic run. A precise and accurate method was developed, with a linear range from 2 to 500 ng/mL and a lower limit of quantification of 4 pg on column. The new method was compared to a reference method (anti-factor Xa activity) and both presented a good correlation (r = 0.98, p < 0.001). In addition, we validated hemolytic, icteric or lipemic plasma samples for rivaroxaban measurement by HPLC-MS/MS without interferences. The chromogenic and HPLC-MS/MS methods were highly correlated and should be used as clinical tools for drug monitoring. The method was applied successfully in a group of 49 real-life patients, which allowed an accurate determination of rivaroxaban at peak and trough levels. PMID:28170419

  14. Use of refractometry for determination of psittacine plasma protein concentration.

    PubMed

    Cray, Carolyn; Rodriguez, Marilyn; Arheart, Kristopher L

    2008-12-01

Previous studies have demonstrated both poor and good correlation of total protein concentrations in various avian species using refractometry and biuret methodologies. The purpose of the current study was to compare these 2 techniques of total protein determination using plasma samples from several psittacine species and to determine the effect of cholesterol and other solutes on refractometry results. Total protein concentration in heparinized plasma samples without visible lipemia was analyzed by refractometry and an automated biuret method on a dry reagent analyzer (Ortho 250). Cholesterol, glucose, and uric acid concentrations were measured using the same analyzer. Results were compared using Deming regression analysis, Bland-Altman bias plots, and Spearman's rank correlation. Correlation coefficients (r) for total protein results by refractometry and biuret methods were 0.49 in African grey parrots (n=28), 0.77 in Amazon parrots (20), 0.57 in cockatiels (20), 0.73 in cockatoos (36), 0.86 in conures (20), and 0.93 in macaws (38) (P ≤ .01). Cholesterol concentration, but not glucose or uric acid concentration, was significantly correlated with total protein concentration obtained by refractometry in Amazon parrots, conures, and macaws (n=25 each, P < .05), and trended towards significance in African grey parrots and cockatoos (P = .06). Refractometry can be used to accurately measure total protein concentration in nonlipemic plasma samples from some psittacine species. Method- and species-specific reference intervals should be used in the interpretation of total protein values.
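Deming regression, used in the method comparison above, has a closed-form slope when the ratio of the two methods' error variances is specified. A sketch with hypothetical refractometer and biuret readings (g/dL, invented; the variance ratio lam = 1 is an assumption giving an orthogonal fit):

```python
import numpy as np

def deming(x, y, lam=1.0):
    """Deming regression slope and intercept; lam is the assumed ratio of
    the y-error variance to the x-error variance (lam=1: orthogonal fit)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxx = np.var(x, ddof=1)
    syy = np.var(y, ddof=1)
    sxy = np.cov(x, y, ddof=1)[0, 1]
    slope = (syy - lam * sxx
             + np.sqrt((syy - lam * sxx) ** 2 + 4 * lam * sxy ** 2)) / (2 * sxy)
    intercept = y.mean() - slope * x.mean()
    return slope, intercept

# Hypothetical paired total-protein readings (g/dL), illustrative only
ref = [3.1, 3.6, 4.2, 4.8, 5.5, 6.0]   # refractometry
biu = [3.0, 3.5, 4.3, 4.9, 5.4, 6.1]   # biuret
m, b = deming(ref, biu)
```

Unlike ordinary least squares, Deming regression treats both methods as error-prone, which is why it is the conventional choice for method-comparison studies.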

  15. Relationship between dental calcification and skeletal maturation in a Peruvian sample

    PubMed Central

    Lecca-Morales, Rocío M.; Carruitero, Marcos J.

    2017-01-01

    ABSTRACT Objective: the objective of the study was to determine the relationship between dental calcification stages and skeletal maturation in a Peruvian sample. Methods: panoramic, cephalometric and carpal radiographs of 78 patients (34 girls and 44 boys) between 7 and 17 years old (9.90 ± 2.5 years) were evaluated. Stages of tooth calcification of the mandibular canine, first premolar, second premolar, and second molar and the skeletal maturation with a hand-wrist and a cervical vertebrae method were assessed. The relationships between the stages were assessed using Spearman’s correlation coefficient. Additionally, the associations of mandibular and pubertal growth peak stages with tooth calcification were evaluated by Fisher’s exact test. Results: all teeth showed positive and statistically significant correlations, the highest correlation was between the mandibular second molar calcification stages with hand-wrist maturation stages (r = 0.758, p < 0.001) and with vertebrae cervical maturation stages (r = 0.605, p < 0.001). The pubertal growth spurt was found in the G stage of calcification of the second mandibular molar, and the mandibular growth peak was found in the F stage of calcification of the second molar. Conclusion: there was a positive relationship between dental calcification stages and skeletal maturation stages by hand-wrist and cervical vertebrae methods in the sample studied. Dental calcification stages of the second mandibular molar showed the highest positive correlation with the hand-wrist and cervical vertebrae stages. PMID:28746492

  16. Differential porosimetry and permeametry for random porous media.

    PubMed

    Hilfer, R; Lemmer, A

    2015-07-01

    Accurate determination of geometrical and physical properties of natural porous materials is notoriously difficult. Continuum multiscale modeling has provided carefully calibrated realistic microstructure models of reservoir rocks with floating point accuracy. Previous measurements using synthetic microcomputed tomography (μ-CT) were based on extrapolation of resolution-dependent properties for discrete digitized approximations of the continuum microstructure. This paper reports continuum measurements of volume and specific surface with full floating point precision. It also corrects an incomplete description of rotations in earlier publications. More importantly, the methods of differential permeametry and differential porosimetry are introduced as precision tools. The continuum microstructure chosen to exemplify the methods is a homogeneous, carefully calibrated and characterized model for Fontainebleau sandstone. The sample has been publicly available since 2010 on the worldwide web as a benchmark for methodical studies of correlated random media. High-precision porosimetry gives the volume and internal surface area of the sample with floating point accuracy. Continuum results with floating point precision are compared to discrete approximations. Differential porosities and differential surface area densities allow geometrical fluctuations to be discriminated from discretization effects and numerical noise. Differential porosimetry and Fourier analysis reveal subtle periodic correlations. The findings uncover small oscillatory correlations with a period of roughly 850μm, thus implying that the sample is not strictly stationary. The correlations are attributed to the deposition algorithm that was used to ensure the grain overlap constraint. Differential permeabilities are introduced and studied. Differential porosities and permeabilities provide scale-dependent information on geometry fluctuations, thereby allowing quantitative error estimates.
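The Fourier detection of a periodic correlation (the roughly 850 μm oscillation reported above) can be illustrated on a synthetic porosity profile; the spacing, profile length, oscillation amplitude, and noise level below are assumptions for illustration, not the paper's values:

```python
import numpy as np

# Synthetic porosity profile: mean porosity with a weak 850-um oscillation
# plus noise (an illustrative stand-in for the differential porosities).
dx = 10.0                        # sample spacing in micrometers (assumed)
x = np.arange(0, 15000.0, dx)    # 15 mm profile
rng = np.random.default_rng(0)
phi = (0.15
       + 0.002 * np.sin(2 * np.pi * x / 850.0)
       + 0.0005 * rng.standard_normal(x.size))

# Fourier analysis: locate the dominant period of the fluctuations
spec = np.abs(np.fft.rfft(phi - phi.mean()))
freqs = np.fft.rfftfreq(x.size, d=dx)
period = 1.0 / freqs[np.argmax(spec)]
print(f"dominant period ~ {period:.0f} um")
```

Subtracting the mean before the transform suppresses the zero-frequency bin, so the spectral peak lands on the oscillation rather than the average porosity.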

  17. Numerical methods for comparing fresh and weathered oils by their FTIR spectra.

    PubMed

    Li, Jianfeng; Hibbert, D Brynn; Fuller, Stephen

    2007-08-01

Four comparison statistics ('similarity indices') for the identification of the source of a petroleum oil spill based on the ASTM standard test method D3414 were investigated, namely: (1) the first difference correlation coefficient squared; (2) the correlation coefficient squared; (3) the first difference Euclidean cosine squared; and (4) the Euclidean cosine squared. For numerical comparison, an FTIR spectrum is divided into three regions, described as fingerprint (900-700 cm⁻¹), generic (1350-900 cm⁻¹) and supplementary (1770-1685 cm⁻¹), which are the same as the three major regions recommended by the ASTM standard. For fresh oil samples, each similarity index was able to distinguish between replicate independent spectra of the same sample and between different samples. In general, the two first difference-based indices worked better than their parent indices. To provide samples that reveal relationships between weathered and fresh oils, a simple artificial weathering procedure was carried out. Euclidean cosine and correlation coefficients both worked well to maintain identification of a match in the fingerprint region, and the two first difference indices were better in the generic region. Receiver operating characteristic curves (true positive rate versus false positive rate) for decisions on matching using the fingerprint region showed two samples could be matched when the difference in weathering time was up to 7 days. Beyond this time the true positive rate falls and samples cannot be reliably matched. However, artificial weathering of a fresh source sample can aid the matching of a weathered sample to its real source from a pool of very similar candidates.
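The four similarity indices can be written compactly. The spectra below are synthetic (a Gaussian band, once with a baseline offset) to show why the first-difference variants help; they are not oil spectra:

```python
import numpy as np

def cos2(a, b):
    """Squared Euclidean cosine between two spectra."""
    return (a @ b) ** 2 / ((a @ a) * (b @ b))

def r2(a, b):
    """Squared Pearson correlation coefficient."""
    return np.corrcoef(a, b)[0, 1] ** 2

def fd_cos2(a, b):
    """First-difference Euclidean cosine squared."""
    return cos2(np.diff(a), np.diff(b))

def fd_r2(a, b):
    """First-difference correlation coefficient squared."""
    return r2(np.diff(a), np.diff(b))

# Synthetic fingerprint-region spectra: same band, one baseline-shifted
x = np.linspace(900, 700, 201)              # wavenumber axis, cm^-1
band = np.exp(-((x - 800.0) / 15.0) ** 2)
a = band
b = band + 0.5                              # constant baseline offset
```

Here the plain cosine index is degraded by the baseline offset, while differencing removes the constant term entirely, so `fd_cos2(a, b)` returns 1; the correlation-based index is offset-invariant by construction.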

  18. Sampling techniques for thrips (Thysanoptera: Thripidae) in preflowering tomato.

    PubMed

    Joost, P Houston; Riley, David G

    2004-08-01

Sampling techniques for thrips (Thysanoptera: Thripidae) were compared in preflowering tomato plants at the Coastal Plain Experiment Station in Tifton, GA, in 2000 and 2003, to determine the most effective method of determining abundance of thrips on tomato foliage early in the growing season. Three relative sampling techniques, including a standard insect aspirator, a 946-ml beat cup, and an insect vacuum device, were compared with an absolute method for accuracy and among themselves for precision and efficiency of sampling thrips. Thrips counts of all relative sampling methods were highly correlated (R > 0.92) with the absolute method. The aspirator method was the most accurate compared with the absolute sample according to regression analysis in 2000. In 2003, all sampling methods were considered accurate according to Dunnett's test, but thrips numbers were lower and sample variation was greater than in 2000. In 2000, the beat cup method had the lowest relative variation (RV), or best precision, at 1 and 8 d after transplant (DAT). Only the beat cup method had RV values <25 for all sampling dates. In 2003, the beat cup method had the lowest RV value at 15 and 21 DAT. The beat cup method also was the most efficient method for all sample dates in both years. Frankliniella fusca (Pergande) was the most abundant thrips species on the foliage of preflowering tomato in both years of study at this location. Overall, the best thrips sampling technique tested was the beat cup method in terms of precision and sampling efficiency.
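Relative variation (RV), the precision measure used above, is conventionally 100 times the standard error divided by the mean, with RV < 25 a common benchmark for acceptable sampling precision. A minimal sketch with invented thrips counts (not the study's data):

```python
import math

def relative_variation(counts):
    """RV = 100 * (standard error of the mean / mean)."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / (n - 1)  # sample variance
    return 100.0 * math.sqrt(var / n) / mean

# Hypothetical thrips counts from 10 beat-cup samples
counts = [4, 6, 5, 3, 7, 5, 4, 6, 5, 5]
print(round(relative_variation(counts), 1))  # prints 7.3
```

Because RV scales the standard error by the mean, it lets techniques with very different average catch sizes be compared on a common precision footing.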

  19. Field Air Sampling and Simultaneous Chemical and Sensory Analysis of Livestock Odorants with Sorbent Tube GC-MS/Olfactometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang Shicheng; Department of Environmental Science and Engineering, Fudan University, Shanghai 200433; Cai Lingshuang

    2009-05-23

Characterization and quantification of livestock odorants is one of the most challenging analytical tasks because odor-causing gases are very reactive, polar, and often present at very low concentrations in a complex matrix of less important or irrelevant gases. The objective of this research was to develop a novel analytical method for characterization of livestock odorants, including their odor character, odor intensity, and hedonic tone, and to apply this method to quantitative analysis of the key odorants responsible for livestock odor. Sorbent tubes packed with Tenax TA were used for field sampling. An automated one-step thermal desorption module coupled with a multidimensional gas chromatography-mass spectrometry/olfactometry system was used for simultaneous chemical and odor analysis. Fifteen odorous VOCs and semi-VOCs identified from different livestock species operations were quantified. Method detection limits range from 40 pg for skatole to 3590 pg for acetic acid. In addition, the odor character, odor intensity, and hedonic tone associated with each of the target odorants were also analyzed simultaneously. We found that the mass of each VOC in the sample correlates well with the log stimulus intensity. All of the correlation coefficients (R²) were greater than 0.74, and the top 10 correlation coefficients were greater than 0.90.

  20. Method for determining the octane rating of gasoline samples by observing corresponding acoustic resonances therein

    DOEpatents

    Sinha, D.N.; Anthony, B.W.

    1997-02-25

    A method is described for determining the octane rating of gasoline samples by observing corresponding acoustic resonances therein. A direct correlation between the octane rating of gasoline and the frequency of corresponding acoustic resonances therein has been experimentally observed. Therefore, the octane rating of a gasoline sample can be directly determined through speed of sound measurements instead of by the cumbersome process of quantifying the knocking quality of the gasoline. Various receptacle geometries and construction materials may be employed. Moreover, it is anticipated that the measurements can be performed on flowing samples in pipes, thereby rendering the present method useful in refineries and distilleries. 3 figs.

  1. The correlation of methylation levels measured using Illumina 450K and EPIC BeadChips in blood samples

    PubMed Central

    Logue, Mark W; Smith, Alicia K; Wolf, Erika J; Maniates, Hannah; Stone, Annjanette; Schichman, Steven A; McGlinchey, Regina E; Milberg, William; Miller, Mark W

    2017-01-01

    Aim: We examined concordance of methylation levels across the Illumina Infinium HumanMethylation450 BeadChip and the Infinium MethylationEPIC BeadChip. Methods: We computed the correlation for 145 whole blood DNA samples at each of the 422,524 CpG sites measured by both chips. Results: The correlation at some sites was high (up to r = 0.95), but many sites had low correlation (55% had r < 0.20). The low correspondence between 450K and EPIC measured methylation values at many loci was largely due to the low variability in methylation values for the majority of the CpG sites in blood. Conclusion: Filtering out probes based on the observed correlation or low variability may increase reproducibility of BeadChip-based epidemiological studies. PMID:28809127

  2. Rapid classification of hairtail fish and pork freshness using an electronic nose based on the PCA method.

    PubMed

    Tian, Xiu-Ying; Cai, Qiang; Zhang, Yong-Ming

    2012-01-01

We report a method for building a simple and reproducible electronic nose based on commercially available metal oxide sensors (MOS) to monitor the freshness of hairtail fish and pork stored at 15, 10, and 5 °C. After assembly in the laboratory, the proposed product was tested by a manufacturer. Sample delivery was based on the dynamic headspace method, and two features were extracted from the transient response of each sensor using an unsupervised principal component analysis (PCA) method. The compensation method and pattern recognition based on PCA are discussed in the current paper. PCA compensation can be used for all storage temperatures; however, pattern recognition differs according to storage conditions. Total volatile basic nitrogen (TVBN) and aerobic bacterial counts of the samples were measured simultaneously as the standard indicators of hairtail fish and pork freshness. The PCA models based on TVBN and aerobic bacterial counts were used to classify hairtail fish samples as "fresh" (TVBN ≤ 25 g and microbial counts ≤ 10⁶ cfu/g) or "spoiled" (TVBN ≥ 25 g and microbial counts ≥ 10⁶ cfu/g) and pork samples as "fresh" (TVBN ≤ 15 g and microbial counts ≤ 10⁶ cfu/g) or "spoiled" (TVBN ≥ 15 g and microbial counts ≥ 10⁶ cfu/g). Good correlation coefficients between the responses of the electronic nose and the TVBN and aerobic bacterial counts of the samples were obtained: for hairtail fish, 0.97 and 0.91, and for pork, 0.81 and 0.88, respectively. Through laboratory simulation and field application, we were able to determine that the electronic nose could help ensure the shelf life of hairtail fish and pork, especially when an instrument is needed to take measurements rapidly. The results also showed that the electronic nose could analyze the process and level of spoilage for hairtail fish and pork.
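PCA feature extraction from sensor-array responses can be sketched with a plain SVD; the two "fresh"/"spoiled" groups below are simulated with assumed sensor means and noise, not the study's measurements:

```python
import numpy as np

# Toy sensor-array data: rows = samples, cols = MOS sensor features.
# The two groups differ mainly along one direction (assumed values).
rng = np.random.default_rng(1)
fresh = rng.normal([1.0, 2.0, 1.5, 0.5], 0.05, size=(10, 4))
spoiled = rng.normal([2.0, 3.5, 2.5, 1.5], 0.05, size=(10, 4))
X = np.vstack([fresh, spoiled])

# PCA: center the data, then project onto the leading right singular vectors
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T              # first two principal-component scores
explained = S ** 2 / np.sum(S ** 2)  # fraction of variance per component
```

With the group difference dominating the variance, the first principal component separates the two classes, which is the mechanism behind PCA-based freshness classification.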

  3. Comparison of urine analysis using manual and sedimentation methods.

    PubMed

    Kurup, R; Leich, M

    2012-06-01

Microscopic examination of urine sediment is an essential part of the evaluation of renal and urinary tract diseases. Traditionally, urine sediments are assessed by microscopic examination of centrifuged urine. However, the current method used by the Georgetown Public Hospital Corporation Medical Laboratory involves uncentrifuged urine. To encourage a high level of care, the results provided to the physician must be accurate and reliable for proper diagnosis. The aim of this study was to determine whether the centrifuged method is more clinically significant than the uncentrifuged method. In this study, a comparison between the results obtained from the centrifuged and uncentrifuged methods was performed. A total of 167 urine samples were randomly collected and analysed during the period April-May 2010 at the Medical Laboratory, Georgetown Public Hospital Corporation. The urine samples were first analysed microscopically by the uncentrifuged method, and then by the centrifuged method. The results obtained from both methods were recorded in a log book. These results were then entered into a database created in Microsoft Excel, and analysed for differences and similarities using this application. Analysis was further done in SPSS software to compare the results using Pearson's correlation. When compared using Pearson's correlation coefficient analysis, both methods showed a good correlation for urinary sediments, with the exception of white blood cells. The centrifuged method had a slightly higher identification rate for all of the parameters. There is substantial agreement between the centrifuged and uncentrifuged methods; however, the uncentrifuged method provides a rapid turnaround time.

  4. Excited-State Effective Masses in Lattice QCD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    George Fleming, Saul Cohen, Huey-Wen Lin

    2009-10-01

    We apply black-box methods, i.e. where the performance of the method does not depend upon initial guesses, to extract excited-state energies from Euclidean-time hadron correlation functions. In particular, we extend the widely used effective-mass method to incorporate multiple correlation functions and produce effective mass estimates for multiple excited states. In general, these excited-state effective masses will be determined by finding the roots of some polynomial. We demonstrate the method using sample lattice data to determine excited-state energies of the nucleon and compare the results to other energy-level finding techniques.
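The widely used single-state effective-mass method that the authors extend can be sketched on a synthetic two-exponential correlator; the energies and amplitudes below are illustrative, and the paper's multi-correlator, polynomial-root extension for excited states is not shown:

```python
import numpy as np

# Synthetic Euclidean correlator: two decaying exponentials standing in
# for the ground state and one excited state (assumed parameters).
t = np.arange(16)
E0, E1 = 0.5, 1.1
C = 1.0 * np.exp(-E0 * t) + 0.3 * np.exp(-E1 * t)

# Standard effective mass: m_eff(t) = ln[C(t) / C(t+1)].
# The excited-state contamination decays, so m_eff plateaus at E0.
m_eff = np.log(C[:-1] / C[1:])
print(m_eff[-1])   # close to the ground-state energy 0.5
```

The approach of the abstract generalizes this plateau analysis: with several correlation functions, excited-state energies emerge as roots of a polynomial rather than from a single logarithmic ratio.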

  5. Oral sampling methods are associated with differences in immune marker concentrations.

    PubMed

    Fakhry, Carole; Qeadan, Fares; Gilman, Robert H; Yori, Pablo; Kosek, Margaret; Patterson, Nicole; Eisele, David W; Gourin, Christine G; Chitguppi, Chandala; Marks, Morgan; Gravitt, Patti

    2018-06-01

    To determine whether the concentration and distribution of immune markers in paired oral samples were similar. Clinical research. Cross-sectional study. Paired saliva and oral secretions (OS) samples were collected. The concentration of immune markers was estimated using Luminex multiplex assay (Thermo Fisher Scientific, Waltham, MA). For each sample, the concentration of respective immune markers was normalized to total protein present and log-transformed. Median concentrations of immune markers were compared between both types of samples. Intermarker correlation in each sampling method and across sampling methods was evaluated. There were 90 study participants. Concentrations of immune markers in saliva samples were significantly different from concentrations in OS samples. Oral secretions samples showed higher concentrations of immunoregulatory markers, whereas the saliva samples contained proinflammatory markers in higher concentration. The immune marker profile in saliva samples is distinct from the immune marker profile in paired OS samples. 2b. Laryngoscope, 128:E214-E221, 2018. © 2017 The American Laryngological, Rhinological and Otological Society, Inc.

  6. Identifying presence of correlated errors in GRACE monthly harmonic coefficients using machine learning algorithms

    NASA Astrophysics Data System (ADS)

    Piretzidis, Dimitrios; Sra, Gurveer; Karantaidis, George; Sideris, Michael G.

    2017-04-01

    A new method for identifying correlated errors in Gravity Recovery and Climate Experiment (GRACE) monthly harmonic coefficients has been developed and tested. Correlated errors are present in the differences between monthly GRACE solutions, and can be suppressed using a de-correlation filter. In principle, the de-correlation filter should be implemented only on coefficient series with correlated errors to avoid losing useful geophysical information. In previous studies, two main methods of implementing the de-correlation filter have been utilized. In the first one, the de-correlation filter is implemented starting from a specific minimum order until the maximum order of the monthly solution examined. In the second one, the de-correlation filter is implemented only on specific coefficient series, the selection of which is based on statistical testing. The method proposed in the present study exploits the capabilities of supervised machine learning algorithms such as neural networks and support vector machines (SVMs). The pattern of correlated errors can be described by several numerical and geometric features of the harmonic coefficient series. The features of extreme cases of both correlated and uncorrelated coefficients are extracted and used for the training of the machine learning algorithms. The trained machine learning algorithms are later used to identify correlated errors and provide the probability of a coefficient series to be correlated. Regarding SVMs algorithms, an extensive study is performed with various kernel functions in order to find the optimal training model for prediction. The selection of the optimal training model is based on the classification accuracy of the trained SVM algorithm on the same samples used for training. 
Results show excellent performance of all algorithms, with a classification accuracy of 97%-100% on a pre-selected set of training samples, both in the validation stage of the training procedure and in the subsequent use of the trained algorithms to classify independent coefficients. This accuracy is also confirmed by external validation of the trained algorithms using the hydrology model GLDAS NOAH. The proposed method meets the requirement of identifying and de-correlating only coefficients with correlated errors. Also, there is no need to apply statistical testing or other techniques that require prior de-correlation of the harmonic coefficients.

  7. Correlation between X-ray flux and rotational acceleration in Vela X-1

    NASA Technical Reports Server (NTRS)

    Deeter, J. E.; Boynton, P. E.; Shibazaki, N.; Hayakawa, S.; Nagase, F.

    1989-01-01

    The results of a search for correlations between X-ray flux and angular acceleration for the accreting binary pulsar Vela X-1 are presented. Results are based on data obtained with the Hakucho satellite during the interval 1982 to 1984. In undertaking this correlation analysis, it was necessary to modify the usual statistical method to deal with conditions imposed by generally unavoidable satellite observing constraints, most notably a mismatch in sampling between the two variables. The results are suggestive of a correlation between flux and the absolute value of the angular acceleration, at a significance level of 96 percent. The implications of the methods and results for future observations and analysis are discussed.

  8. Exposure to microbial components and allergens in population studies: a comparison of two house dust collection methods applied by participants and fieldworkers.

    PubMed

    Schram-Bijkerk, D; Doekes, G; Boeve, M; Douwes, J; Riedler, J; Ublagger, E; von Mutius, E; Benz, M; Pershagen, G; Wickman, M; Alfvén, T; Braun-Fahrländer, C; Waser, M; Brunekreef, B

    2006-12-01

    Dust collection by study participants instead of fieldworkers would be a practical and cost-effective alternative in large-scale population studies estimating exposure to indoor allergens and microbial agents. We aimed to compare dust weights and biological agent levels in house dust samples taken by study participants with nylon socks with those in samples taken by fieldworkers using the sampling nozzle of the Allergology Laboratory Copenhagen (ALK). In homes of 216 children, parents and fieldworkers collected house dust within the same year. Dust samples were analyzed for levels of allergens, endotoxin, (1-->3)-beta-D-glucans and fungal extracellular polysaccharides (EPS). Socks appeared to yield less dust from mattresses at relatively low dust amounts and more dust at high dust amounts than ALK samples. Correlations between the methods ranged from 0.47 to 0.64 for microbial agents and from 0.64 to 0.87 for mite and pet allergens. Cat allergen levels were two-fold lower and endotoxin levels three-fold higher in socks than in ALK samples. Levels of allergens and microbial agents in sock samples taken by study participants are moderately to highly correlated with levels in ALK samples taken by fieldworkers. Absolute levels may differ, probably because of differences in the method rather than in the person who performed the sampling. Practical implications: Dust collection by participants is a reliable and practical option for allergen and microbial agent exposure assessment. Absolute levels of biological agents are not (always) comparable between studies using different dust collection methods, even when expressed per gram of dust, because of potential differences in the particle-size constitution of the collected dust.

  9. Microdialysis Monitoring of CSF Parameters in Severe Traumatic Brain Injury Patients: A Novel Approach

    PubMed Central

    Thelin, Eric P.; Nelson, David W.; Ghatan, Per Hamid; Bellander, Bo-Michael

    2014-01-01

    Background: Neuro-intensive care following traumatic brain injury (TBI) is focused on preventing secondary insults that may lead to irreversible brain damage. Microdialysis (MD) is used to detect deranged cerebral metabolism. The clinical usefulness of the MD is dependent on the regional localization of the MD catheter. The aim of this study was to analyze a new method of continuous cerebrospinal fluid (CSF) monitoring using the MD technique. The method was validated using conventional laboratory analysis of CSF samples. MD-CSF and regional MD-Brain samples were correlated to patient outcome. Materials and Methods: A total of 14 patients suffering from severe TBI were analyzed. They were monitored using (1) a MD catheter (CMA64-iView, n = 7448 MD samples) located in a CSF-pump connected to the ventricular drain and (2) an intraparenchymal MD catheter (CMA70, n = 8358 MD samples). CSF-lactate and CSF-glucose levels were monitored and were compared to MD-CSF samples. MD-CSF and MD-Brain parameters were correlated to favorable (Glasgow Outcome Score extended, GOSe 6–8) and unfavorable (GOSe 1–5) outcome. Results: Levels of glucose and lactate acquired with the CSF-MD technique could be correlated to conventional levels. The median MD recovery using the CMA64 catheter in CSF was 0.98 and 0.97 for glucose and lactate, respectively. Median MD-CSF (CMA 64) lactate (p = 0.0057) and pyruvate (p = 0.0011) levels were significantly lower in the favorable outcome group compared to the unfavorable group. No significant difference in outcome was found using the lactate:pyruvate ratio (LPR), or any of the regional MD-Brain monitoring in our analyzed cohort. Conclusion: This new technique of global MD-CSF monitoring correlates with conventional CSF levels of glucose and lactate, and the MD recovery is higher than previously described. 
Increase in lactate and pyruvate, without any effect on the LPR, correlates to unfavorable outcome, perhaps related to the presence of erythrocytes in the CSF. PMID:25228896

  10. Cross correlation measurement of low frequency conductivity noise

    NASA Astrophysics Data System (ADS)

    Jain, Aditya Kumar; Nigudkar, Himanshu; Chakraborti, Himadri; Udupa, Aditi; Gupta, Kantimay Das

    2018-04-01

    In order to study low frequency (1/f) noise, an experimental technique based on the cross correlation of two channels is presented. In this method the device under test (DUT) is connected to two independently powered preamplifiers in parallel. The amplified signals from the two preamplifiers are fed to two channels of a digitizer. Subsequent data processing largely eliminates the uncorrelated noise of the two channels. This method is tested on various commercial carbon/metal film resistors by measuring the equilibrium thermal noise (4kBTR). The method is then modified to study the non-equilibrium low frequency noise of heterostructure samples using a five-probe configuration. Five contact probes allow two parts of the sample to become two arms of a balanced bridge. This configuration helps in suppressing the effects of power supply fluctuations, bath temperature fluctuations and contact resistances.
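    The noise-rejection idea above, that averaging the product of the two channel outputs keeps the correlated DUT noise while the independent preamplifier noises average away, can be illustrated numerically. All variances below are arbitrary illustrative values, and the sketch works at zero lag in the time domain rather than with full spectra.

```python
import numpy as np

rng = np.random.default_rng(1)

# The DUT noise s(t) is seen by both channels; each preamplifier adds its
# own independent noise.  (Variances are illustrative assumptions.)
N = 200_000
s = rng.normal(0.0, 1.0, N)     # common DUT noise, variance 1
n1 = rng.normal(0.0, 2.0, N)    # channel-1 amplifier noise, variance 4
n2 = rng.normal(0.0, 2.0, N)    # channel-2 amplifier noise, variance 4
x, y = s + n1, s + n2           # the two digitized channel records

single_channel_power = np.mean(x * x)  # ~ var(s) + var(n1) = 5
cross_power = np.mean(x * y)           # ~ var(s) = 1: amplifier noise cancels
```

Averaging over N samples suppresses the uncorrelated cross terms by roughly 1/sqrt(N); the same principle applies per frequency bin when averaged cross power spectra are used instead of the zero-lag product.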

  11. A MIMO radar quadrature and multi-channel amplitude-phase error combined correction method based on cross-correlation

    NASA Astrophysics Data System (ADS)

    Yun, Lingtong; Zhao, Hongzhong; Du, Mengyuan

    2018-04-01

    Quadrature error and multi-channel amplitude-phase error have to be compensated in I/Q quadrature sampling and in signals passing through multiple channels. A new method that requires neither a filter nor a standard calibration signal is presented in this paper, and it can jointly estimate the quadrature and multi-channel amplitude-phase errors. The method uses the cross-correlation and the amplitude ratio between the signals to estimate the two amplitude-phase errors simply and effectively. The advantages of this method are verified by computer simulation. Finally, its superiority is also confirmed with measured data from outfield experiments.
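    A minimal sketch of the cross-correlation idea, under assumed values: one channel is taken as reference and the other differs by an unknown complex gain (amplitude and phase error). The complex gain follows from the cross-correlation of the two records normalized by the reference power; the paper's combined quadrature/multi-channel estimator is more involved than this.

```python
import numpy as np

rng = np.random.default_rng(2)

# Channel y is the reference channel x scaled by an unknown complex gain
# A*exp(j*phi) plus noise; the gain values and SNR below are assumptions.
N = 100_000
x = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)
A_true, phi_true = 1.3, 0.25          # amplitude and phase error (rad)
noise = 0.05 * (rng.normal(size=N) + 1j * rng.normal(size=N))
y = A_true * np.exp(1j * phi_true) * x + noise

# Cross-correlation estimate of the complex gain: <y x*> / <|x|^2>
g = np.mean(y * np.conj(x)) / np.mean(np.abs(x) ** 2)
A_est, phi_est = np.abs(g), np.angle(g)
```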

  12. Association between tumour infiltrating lymphocytes, histotype and clinical outcome in epithelial ovarian cancer.

    PubMed

    James, Fiona R; Jiminez-Linan, Mercedes; Alsop, Jennifer; Mack, Marie; Song, Honglin; Brenton, James D; Pharoah, Paul D P; Ali, H Raza

    2017-09-20

    There is evidence that some ovarian tumours evoke an immune response, which can be assessed by tumour infiltrating lymphocytes (TILs). To facilitate adoption of TILs as a clinical biomarker, a standardised method for their H&E visual evaluation has been validated in breast cancer. We sought to investigate the prognostic significance of TILs in a study of 953 invasive epithelial ovarian cancer tumour samples, both primary and metastatic, from 707 patients from the prospective population-based SEARCH study. TILs were analysed using a standardised method based on H&E staining producing a percentage score for stromal and intratumoral compartments. We used Cox regression to estimate hazard ratios of the association between TILs and survival. Stromal and intra-tumoral TIL levels were correlated in the primary tumours (n = 679, Spearman's rank correlation = 0.60, P < 0.001), with a similar correlation in secondary tumours (n = 224, Spearman's rank correlation = 0.62, P < 0.001). There was a weak correlation between stromal TIL levels in primary and secondary tumour samples (Spearman's rank correlation = 0.29, P < 0.001) and between intra-tumoral TIL levels in primary and secondary tumour samples (Spearman's rank correlation = 0.19, P = 0.0094). The extent of stromal TILs differed between histotypes (Pearson chi-squared (12 d.f.) = 54.1, P < 0.0001), with higher levels of stromal infiltration in the high-grade serous and endometrioid cases. A significant association was observed between higher intratumoral TIL levels and a favourable prognosis (HR 0.74, 95% CI 0.55-1.00, p = 0.047). This study is the largest collection of epithelial ovarian tumour samples evaluated for TILs. We have shown that stromal and intratumoral TIL levels are correlated and that their levels correlate with clinical variables such as tumour histological subtype.
We have also shown that increased levels of both intratumoral and stromal TILs are associated with a better prognosis; however, this is only statistically significant for intratumoral TILs. This study suggests that a clinically useful immune prognostic indicator in epithelial ovarian cancer could be developed using this technique.
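    Spearman's rank correlation, the statistic used throughout the study, is simply the Pearson correlation of the ranks. A minimal tie-free implementation follows; the paired TIL scores are invented illustrative values, not study data.

```python
import numpy as np

def spearman(a, b):
    # Spearman's rank correlation: Pearson correlation of the ranks.
    # Simple argsort-based ranking; assumes no tied values.
    ra = np.empty(len(a)); ra[np.argsort(a)] = np.arange(len(a))
    rb = np.empty(len(b)); rb[np.argsort(b)] = np.arange(len(b))
    return float(np.corrcoef(ra, rb)[0, 1])

# Illustrative (made-up) paired TIL percentage scores for six tumours.
stromal = np.array([5.0, 12.0, 3.0, 40.0, 22.0, 8.0])
intratumoral = np.array([2.0, 9.0, 1.0, 25.0, 30.0, 4.0])
rho = spearman(stromal, intratumoral)
```

Because only ranks enter the statistic, any monotone transformation of either variable leaves rho unchanged, which is why it suits skewed percentage scores.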

  13. Common methods for fecal sample storage in field studies yield consistent signatures of individual identity in microbiome sequencing data.

    PubMed

    Blekhman, Ran; Tang, Karen; Archie, Elizabeth A; Barreiro, Luis B; Johnson, Zachary P; Wilson, Mark E; Kohn, Jordan; Yuan, Michael L; Gesquiere, Laurence; Grieneisen, Laura E; Tung, Jenny

    2016-08-16

    Field studies of wild vertebrates are frequently associated with extensive collections of banked fecal samples-unique resources for understanding ecological, behavioral, and phylogenetic effects on the gut microbiome. However, we do not understand whether sample storage methods confound the ability to investigate interindividual variation in gut microbiome profiles. Here, we extend previous work on storage methods for gut microbiome samples by comparing immediate freezing, the gold standard of preservation, to three methods commonly used in vertebrate field studies: lyophilization, storage in ethanol, and storage in RNAlater. We found that the signature of individual identity consistently outweighed storage effects: alpha diversity and beta diversity measures were significantly correlated across methods, and while samples often clustered by donor, they never clustered by storage method. Provided that all analyzed samples are stored the same way, banked fecal samples therefore appear highly suitable for investigating variation in gut microbiota. Our results open the door to a much-expanded perspective on variation in the gut microbiome across species and ecological contexts.

  14. Assessment of Mudrock Brittleness with Micro-scratch Testing

    NASA Astrophysics Data System (ADS)

    Hernandez-Uribe, Luis Alberto; Aman, Michael; Espinoza, D. Nicolas

    2017-11-01

    Mechanical properties are essential for understanding natural and induced deformational behavior of geological formations. Brittleness characterizes energy dissipation rate and strain localization at failure. Brittleness has been investigated in hydrocarbon-bearing mudrocks in order to quantify the impact of hydraulic fracturing on the creation of complex fracture networks and surface area for reservoir drainage. Typical well logging correlations associate brittleness with carbonate content or dynamic elastic properties. However, an index of rock brittleness should involve actual rock failure and have a consistent method to quantify it. Here, we present a systematic method to quantify mudrock brittleness based on micro-mechanical measurements from the scratch test. Brittleness is formulated as the ratio of energy associated with brittle failure to the total energy required to perform a scratch. Soda lime glass and polycarbonate are used for comparison to identify failure in brittle and ductile mode and validate the developed method. Scratch testing results on mudrocks indicate that it is possible to use the recorded transverse force to estimate brittleness. Results show that tested samples rank as follows in increasing degree of brittleness: Woodford, Eagle Ford, Marcellus, Mancos, and Vaca Muerta. Eagle Ford samples show mixed ductile/brittle failure characteristics. There appears to be no definite correlation between micro-scratch brittleness and quartz or total carbonate content. Dolomite content shows a stronger correlation with brittleness than any other major mineral group. The scratch brittleness index correlates positively with increasing Young's modulus and decreasing Poisson's ratio, but shows deviations in rocks with distinct porosity and with stress-sensitive brittle/ductile behavior (Eagle Ford). The results of our study demonstrate that the micro-scratch test method can be used to investigate mudrock brittleness. 
The method is particularly useful for reservoir characterization methods that take advantage of drill cuttings or whenever large samples for triaxial testing or fracture mechanics testing cannot be recovered.
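    The brittleness formulation above (energy associated with brittle failure divided by the total energy required to perform a scratch) can be sketched numerically. The split of the transverse-force record into a smooth ductile baseline and jagged brittle excursions is our illustrative assumption, not the authors' algorithm, and the force trace is synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical scratch-test record: transverse force F (N) versus scratch
# distance x (m).  A slowly rising baseline mimics ductile cutting; abrupt
# positive excursions mimic brittle chipping events.
x = np.linspace(0.0, 5e-3, 500)              # 5 mm scratch
baseline = 2.0 + 100.0 * x
F = baseline + np.abs(rng.normal(0.0, 0.4, x.size))

def trapezoid(f, x):
    # Energy = integral of force over scratch distance (trapezoid rule).
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

def moving_average(v, w=25):
    return np.convolve(v, np.ones(w) / w, mode="same")

E_total = trapezoid(F, x)                                # total scratch energy (J)
E_brittle = trapezoid(np.abs(F - moving_average(F)), x)  # jagged component only
brittleness_index = E_brittle / E_total                  # lies in [0, 1]
```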

  15. Principal component analysis (PCA) for investigation of relationship between population dynamics of microbial pathogenesis, chemical and sensory characteristics in beef slices containing Tarragon essential oil.

    PubMed

    Alizadeh Behbahani, Behrooz; Tabatabaei Yazdi, Farideh; Shahidi, Fakhri; Mortazavi, Seyed Ali; Mohebbi, Mohebbat

    2017-04-01

    Principal component analysis (PCA) was employed to examine the effect of the exerted treatments on the beef shelf life, as well as to discover the correlations between the studied responses. Considering the variability of the dimensions of the responses, correlation coefficients were applied to form the matrix and extract the eigenvalues. Antimicrobial effect was evaluated on 10 pathogenic microorganisms through the hole-plate diffusion method, disk diffusion method, pour plate method, minimum inhibitory concentration and minimum bactericidal/fungicidal concentration. Antioxidant potential and total phenolic content were examined through the 2,2-diphenyl-1-picrylhydrazyl (DPPH) and Folin-Ciocalteu methods, respectively. The components were identified through gas chromatography and gas chromatography/mass spectrometry. Barhang seed mucilage (BSM) based edible coatings containing 0, 0.5, 1 and 1.5% (w/w) Tarragon (T) essential oil were applied on beef slices to control the growth of pathogenic microorganisms. Microbiological (total viable count, psychrotrophic count, Escherichia coli, Staphylococcus aureus and fungi), chemical (thiobarbituric acid, peroxide value and pH) and sensory (odor, color and overall acceptability) measurements were made periodically during storage. The PCA showed that the properties of the uncoated meat samples on the 9th, 12th, 15th and 18th days of storage change continuously, independent of the treatments exerted on the other samples. This reveals the effect of the exerted treatments on the samples. Copyright © 2017 Elsevier Ltd. All rights reserved.
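    The PCA step above (forming a correlation matrix because the responses have different units and dimensions, then extracting eigenvalues) can be sketched as follows; the data are random placeholders, not the study's measurements, and the column names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Illustrative data: rows = stored samples, columns = measured responses
# (e.g. TBA, peroxide value, pH, sensory score -- placeholder names).
# Different units motivate PCA on the correlation matrix, i.e. on
# standardized variables.
n, p = 30, 4
X = rng.normal(size=(n, p))
X[:, 1] += 0.8 * X[:, 0]                 # induce one correlated pair

Z = (X - X.mean(0)) / X.std(0, ddof=1)   # standardize each response
R = np.corrcoef(Z, rowvar=False)         # correlation matrix
evals, evecs = np.linalg.eigh(R)         # eigen-decomposition
order = np.argsort(evals)[::-1]
evals, evecs = evals[order], evecs[:, order]

scores = Z @ evecs                       # principal-component scores
explained = evals / evals.sum()          # variance explained per PC
```

Since the trace of a p-variable correlation matrix is p, the eigenvalues sum to the number of responses, and `explained` gives each component's share.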

  16. Gamma-Ray Attenuation to Evaluate Soil Porosity: An Analysis of Methods

    PubMed Central

    Pires, Luiz F.; Pereira, André B.

    2014-01-01

    Soil porosity (ϕ) is of great importance for environmental studies, since water infiltrates and is redistributed within the soil pore space. Many physical and biochemical processes related to environmental quality occur in the soil porous system. Representative determinations of ϕ are necessary due to the importance of this physical property in several fields of natural sciences. In the current work, two methods to evaluate ϕ were analyzed by means of the gamma-ray attenuation technique. The first method uses the soil attenuation approach with dry and saturated soil samples, whereas the second uses the same approach but takes dry soil samples to assess soil bulk density and soil particle density, from which ϕ is determined. The results point to a good correlation between the two methods. Moreover, when ϕ is obtained through the soil water content at saturation and a 4 mm collimator is used for the gamma-ray beam, the first method also shows good correlation with the traditional one. PMID:24616640
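    The second method's final step, computing ϕ from bulk and particle density, uses the standard relation ϕ = 1 − ρ_b/ρ_p. A one-line sketch with typical (assumed) values:

```python
# Porosity from gamma-ray-derived bulk and particle densities, following
# the standard relation phi = 1 - rho_bulk / rho_particle.  The densities
# below are typical illustrative values, not data from the paper.
def porosity(bulk_density, particle_density):
    return 1.0 - bulk_density / particle_density

phi = porosity(1.50, 2.65)   # g/cm^3; quartz-dominated soil assumption
```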

  17. Quantification of ethanol in plasma by electrochemical detection with an unmodified screen printed carbon electrode

    NASA Astrophysics Data System (ADS)

    Tian, Gang; Zhang, Xiao-Qing; Zhu, Ming-Song; Zhang, Zhong; Shi, Zheng-Hu; Ding, Min

    2016-03-01

    Simple, rapid and accurate detection of the ethanol concentration in blood is crucial in the diagnosis and management of potential acute ethanol intoxication patients. A novel electrochemical detection method was developed for the quantification of ethanol in human plasma with a disposable unmodified screen-printed carbon electrode (SPCE), without any sample preparation procedure. Ethanol was detected indirectly via the reaction product of ethanol dehydrogenase (ADH) and the cofactor nicotinamide adenine dinucleotide (NAD+). Method validation indicated good quantitation precision, with intra-day and inter-day relative standard deviations of ≤9.4% and 8.0%, respectively. The assay is linear for plasma ethanol concentrations ranging from 0.10 to 3.20 mg/mL, and the detection limit is 40.0 μg/mL (S/N > 3). The method shows satisfactory correlation with the reference method of headspace gas chromatography in twenty human plasma samples (correlation coefficient 0.9311). The proposed method could be applied to diagnose acute ethanol toxicity or ethanol-related death.
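    Linearity and detection-limit figures like those quoted above typically come from a least-squares calibration, with the LOD taken as roughly 3 × (blank standard deviation) / slope. The standard concentrations, responses and blank noise below are invented for illustration and are chosen to land near the reported 40 μg/mL figure.

```python
import numpy as np

# Hypothetical calibration standards and assay responses (arbitrary units).
conc = np.array([0.10, 0.40, 0.80, 1.60, 3.20])      # mg/mL standards
signal = np.array([0.52, 2.05, 4.11, 8.02, 16.1])    # assay response (a.u.)

slope, intercept = np.polyfit(conc, signal, 1)       # least-squares line
r = np.corrcoef(conc, signal)[0, 1]                  # calibration linearity

s_blank = 0.07                                       # sd of blank response (assumed)
lod = 3.0 * s_blank / slope                          # detection limit, mg/mL
```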

  18. Quantification of ethanol in plasma by electrochemical detection with an unmodified screen printed carbon electrode

    PubMed Central

    Tian, Gang; Zhang, Xiao-Qing; Zhu, Ming-Song; Zhang, Zhong; Shi, Zheng-Hu; Ding, Min

    2016-01-01

    Simple, rapid and accurate detection of the ethanol concentration in blood is crucial in the diagnosis and management of potential acute ethanol intoxication patients. A novel electrochemical detection method was developed for the quantification of ethanol in human plasma with a disposable unmodified screen-printed carbon electrode (SPCE), without any sample preparation procedure. Ethanol was detected indirectly via the reaction product of ethanol dehydrogenase (ADH) and the cofactor nicotinamide adenine dinucleotide (NAD+). Method validation indicated good quantitation precision, with intra-day and inter-day relative standard deviations of ≤9.4% and 8.0%, respectively. The assay is linear for plasma ethanol concentrations ranging from 0.10 to 3.20 mg/mL, and the detection limit is 40.0 μg/mL (S/N > 3). The method shows satisfactory correlation with the reference method of headspace gas chromatography in twenty human plasma samples (correlation coefficient 0.9311). The proposed method could be applied to diagnose acute ethanol toxicity or ethanol-related death. PMID:27006081

  19. Changes to serum sample tube and processing methodology does not cause Intra-Individual [corrected] variation in automated whole serum N-glycan profiling in health and disease.

    PubMed

    Ventham, Nicholas T; Gardner, Richard A; Kennedy, Nicholas A; Shubhakar, Archana; Kalla, Rahul; Nimmo, Elaine R; Fernandes, Daryl L; Satsangi, Jack; Spencer, Daniel I R

    2015-01-01

    Serum N-glycans have been identified as putative biomarkers for numerous diseases. The impact of different serum sample tubes and processing methods on N-glycan analysis has received relatively little attention. This study aimed to determine the effect of different sample tubes and processing methods on the whole serum N-glycan profile in both health and disease. A secondary objective was to describe a robot automated N-glycan release, labeling and cleanup process for use in a biomarker discovery system. 25 patients with active and quiescent inflammatory bowel disease and controls had three different serum sample tubes taken at the same draw. Two different processing methods were used for three types of tube (with and without gel-separation medium). Samples were randomised and processed in a blinded fashion. Whole serum N-glycan release, 2-aminobenzamide labeling and cleanup was automated using a Hamilton Microlab STARlet Liquid Handling robot. Samples were analysed using a hydrophilic interaction liquid chromatography/ethylene bridged hybrid(BEH) column on an ultra-high performance liquid chromatography instrument. Data were analysed quantitatively by pairwise correlation and hierarchical clustering using the area under each chromatogram peak. Qualitatively, a blinded assessor attempted to match chromatograms to each individual. There was small intra-individual variation in serum N-glycan profiles from samples collected using different sample processing methods. Intra-individual correlation coefficients were between 0.99 and 1. Unsupervised hierarchical clustering and principal coordinate analyses accurately matched samples from the same individual. Qualitative analysis demonstrated good chromatogram overlay and a blinded assessor was able to accurately match individuals based on chromatogram profile, regardless of disease status. 
The three different serum sample tubes, processed using the described methods, cause minimal intra-individual variation in the whole serum N-glycan profile when handled by the automated workstream. This has important implications for N-glycan biomarker discovery studies using different serum processing standard operating procedures.

  20. Methods for presentation and display of multivariate data

    NASA Technical Reports Server (NTRS)

    Myers, R. H.

    1981-01-01

    Methods for the presentation and display of multivariate data are discussed with emphasis placed on the multivariate analysis of variance problems and the Hotelling T(2) solution in the two-sample case. The methods utilize the concepts of stepwise discrimination analysis and the computation of partial correlation coefficients.
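    The two-sample Hotelling T² statistic mentioned above generalizes the two-sample t-test to several correlated variables. A minimal sketch on synthetic data with an assumed mean shift:

```python
import numpy as np

rng = np.random.default_rng(5)

# Two synthetic groups in p = 3 correlated-variable space; the 0.5 mean
# shift in every coordinate is an illustrative assumption.
n1, n2, p = 40, 50, 3
X1 = rng.normal(0.0, 1.0, (n1, p))
X2 = rng.normal(0.5, 1.0, (n2, p))

d = X1.mean(0) - X2.mean(0)                       # mean-difference vector
S_pooled = ((n1 - 1) * np.cov(X1, rowvar=False)
            + (n2 - 1) * np.cov(X2, rowvar=False)) / (n1 + n2 - 2)
T2 = (n1 * n2) / (n1 + n2) * d @ np.linalg.solve(S_pooled, d)

# Classical conversion to an F statistic with (p, n1+n2-p-1) d.f.
F = (n1 + n2 - p - 1) / (p * (n1 + n2 - 2)) * T2
```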

  1. An enzyme-linked immunosorbent assay for the determination of dioxins in contaminated sediment and soil samples

    PubMed Central

    Van Emon, Jeanette M.; Chuang, Jane C.; Lordo, Robert A.; Schrock, Mary E.; Nichkova, Mikaela; Gee, Shirley J.; Hammock, Bruce D.

    2010-01-01

    A 96-microwell enzyme-linked immunosorbent assay (ELISA) method was evaluated to determine PCDDs/PCDFs in sediment and soil samples from an EPA Superfund site. Samples were prepared and analyzed by both the ELISA and a gas chromatography/high resolution mass spectrometry (GC/HRMS) method. Comparable method precision, accuracy, and detection level (8 ng kg−1) were achieved by the ELISA method with respect to GC/HRMS. However, the extraction and cleanup method developed for the ELISA requires refinement for the soil type that yielded a waxy residue after sample processing. Four types of statistical analyses (Pearson correlation coefficient, paired t-test, nonparametric tests, and McNemar’s test of association) were performed to determine whether the two methods produced statistically different results. The log-transformed ELISA-derived 2,3,7,8-tetrachlorodibenzo-p-dioxin values and log-transformed GC/HRMS-derived TEQ values were significantly correlated (r = 0.79) at the 0.05 level. The median difference in values between ELISA and GC/HRMS was not significant at the 0.05 level. Low false negative and false positive rates (<10%) were observed for the ELISA when compared to the GC/HRMS at 1000 ng TEQ kg−1. The findings suggest that immunochemical technology could be a complementary monitoring tool for determining concentrations at the 1000 ng TEQ kg−1 action level for contaminated sediment and soil. The ELISA could also be used in an analytical triage approach to screen and rank samples prior to instrumental analysis. PMID:18313102

  2. 25OHD analogues and vacuum blood collection tubes dramatically affect the accuracy of automated immunoassays

    PubMed Central

    Yu, Songlin; Cheng, Xinqi; Fang, Huiling; Zhang, Ruiping; Han, Jianhua; Qin, Xuzhen; Cheng, Qian; Su, Wei; Hou, Li’an; Xia, Liangyu; Qiu, Ling

    2015-01-01

    Variations in vitamin D quantification methods are large, and influences of vitamin D analogues and blood collection methods have not been systematically examined. We evaluated the effects of vitamin D analogues 25OHD2 and 3-epi 25OHD3 and blood collection methods on vitamin D measurement, using five immunoassay systems and liquid chromatography-tandem mass spectrometry (LC-MS/MS). Serum samples (332) were selected from routine vitamin D assay requests, including samples with or without 25OHD2 or 3-epi 25OHD3, and analysed using various immunoassay systems. In samples with no 25OHD2 or 3-epi 25OHD3, all immunoassays correlated well with LC-MS/MS. However, the Siemens system produced a large positive mean bias of 12.5 ng/mL and a poor Kappa value when using tubes with clot activator and gel separator. When 25OHD2 or 3-epi 25OHD3 was present, correlations and clinical agreement decreased for all immunoassays. Serum 25OHD in VACUETTE tubes with gel and clot activator, as measured by the Siemens system, produced significantly higher values than did samples collected in VACUETTE tubes with no additives. Bias decreased and clinical agreement improved significantly when using tubes with no additives. In conclusion, most automated immunoassays showed acceptable correlation and agreement with LC-MS/MS; however, 25OHD analogues and blood collection tubes dramatically affected accuracy. PMID:26420221

  3. 25OHD analogues and vacuum blood collection tubes dramatically affect the accuracy of automated immunoassays.

    PubMed

    Yu, Songlin; Cheng, Xinqi; Fang, Huiling; Zhang, Ruiping; Han, Jianhua; Qin, Xuzhen; Cheng, Qian; Su, Wei; Hou, Li'an; Xia, Liangyu; Qiu, Ling

    2015-09-30

    Variations in vitamin D quantification methods are large, and influences of vitamin D analogues and blood collection methods have not been systematically examined. We evaluated the effects of vitamin D analogues 25OHD2 and 3-epi 25OHD3 and blood collection methods on vitamin D measurement, using five immunoassay systems and liquid chromatography-tandem mass spectrometry (LC-MS/MS). Serum samples (332) were selected from routine vitamin D assay requests, including samples with or without 25OHD2 or 3-epi 25OHD3, and analysed using various immunoassay systems. In samples with no 25OHD2 or 3-epi 25OHD3, all immunoassays correlated well with LC-MS/MS. However, the Siemens system produced a large positive mean bias of 12.5 ng/mL and a poor Kappa value when using tubes with clot activator and gel separator. When 25OHD2 or 3-epi 25OHD3 was present, correlations and clinical agreement decreased for all immunoassays. Serum 25OHD in VACUETTE tubes with gel and clot activator, as measured by the Siemens system, produced significantly higher values than did samples collected in VACUETTE tubes with no additives. Bias decreased and clinical agreement improved significantly when using tubes with no additives. In conclusion, most automated immunoassays showed acceptable correlation and agreement with LC-MS/MS; however, 25OHD analogues and blood collection tubes dramatically affected accuracy.

  4. Characterization of fungi in office dust: Comparing results of microbial secondary metabolites, fungal internal transcribed spacer region sequencing, viable culture and other microbial indices.

    PubMed

    Park, J-H; Sulyok, M; Lemons, A R; Green, B J; Cox-Ganser, J M

    2018-05-04

    Recent developments in molecular and chemical methods have enabled the analysis of fungal DNA and secondary metabolites, often produced during fungal growth, in environmental samples. We compared 3 fungal analytical methods by analysing floor dust samples collected from an office building for fungi using viable culture, internal transcribed spacer (ITS) sequencing and secondary metabolites using liquid chromatography-tandem mass spectrometry. Of the 32 metabolites identified, 29 had a potential link to fungi with levels ranging from 0.04 (minimum for alternariol monomethylether) to 5700 ng/g (maximum for neoechinulin A). The number of fungal metabolites quantified per sample ranged from 8 to 16 (average = 13/sample). We identified 216 fungal operational taxonomic units (OTUs) with the number per sample ranging from 6 to 29 (average = 18/sample). We identified 37 fungal species using culture, and the number per sample ranged from 2 to 13 (average = 8/sample). Agreement in identification between ITS sequencing and culturing was weak (kappa = -0.12 to 0.27). The number of cultured fungal species poorly correlated with OTUs, which did not correlate with the number of metabolites. These suggest that using multiple measurement methods may provide an improved understanding of fungal exposures in indoor environments and that secondary metabolites may be considered as an additional source of exposure. © 2018 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  5. Correlation between human maternal-fetal placental transfer and molecular weight of PCB and dioxin congeners/isomers.

    PubMed

    Mori, Chisato; Nakamura, Noriko; Todaka, Emiko; Fujisaki, Takeyoshi; Matsuno, Yoshiharu; Nakaoka, Hiroko; Hanazato, Masamichi

    2014-11-01

    Establishing methods for the assessment of fetal exposure to chemicals is important for the prevention or prediction of the child's future disease risk. In the present study, we aimed to determine the influence of molecular weight on the likelihood of chemical transfer from mother to fetus via the placenta. The correlation between molecular weight and placental transfer rates of congeners/isomers of polychlorinated biphenyls (PCBs) and dioxins was examined. Twenty-nine sample sets of maternal blood, umbilical cord, and umbilical cord blood were used to measure PCB concentration, and 41 sample sets were used to analyze dioxins. Placental transfer rates were calculated using the concentrations of PCBs, dioxins, and their congeners/isomers within these sample sets. Transfer rate correlated negatively with molecular weight for PCB congeners, normalized using wet and lipid weights. The transfer rates of PCB or dioxin congeners differed from those of total PCBs or dioxins. The transfer rate for dioxin congeners did not always correlate significantly with molecular weight, perhaps because of the small sample size or other factors. Further improvement of the analytical methods for dioxin congeners is required. The findings of the present study suggested that PCBs, dioxins, or their congeners with lower molecular weights are more likely to be transferred from mother to fetus via the placenta. Consideration of chemical molecular weight and transfer rate could therefore contribute to the assessment of fetal exposure. Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. Foundational Principles for Large-Scale Inference: Illustrations Through Correlation Mining

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hero, Alfred O.; Rajaratnam, Bala

    When can reliable inference be drawn in the ‘‘Big Data’’ context? This article presents a framework for answering this fundamental question in the context of correlation mining, with implications for general large-scale inference. In large-scale data applications like genomics, connectomics, and eco-informatics, the data set is often variable rich but sample starved: a regime where the number n of acquired samples (statistical replicates) is far fewer than the number p of observed variables (genes, neurons, voxels, or chemical constituents). Much of recent work has focused on understanding the computational complexity of proposed methods for ‘‘Big Data.’’ Sample complexity, however, has received relatively less attention, especially in the setting when the sample size n is fixed, and the dimension p grows without bound. To address this gap, we develop a unified statistical framework that explicitly quantifies the sample complexity of various inferential tasks. Sampling regimes can be divided into several categories: 1) the classical asymptotic regime where the variable dimension is fixed and the sample size goes to infinity; 2) the mixed asymptotic regime where both variable dimension and sample size go to infinity at comparable rates; and 3) the purely high-dimensional asymptotic regime where the variable dimension goes to infinity and the sample size is fixed. Each regime has its niche but only the latter regime applies to exa-scale data dimension. We illustrate this high-dimensional framework for the problem of correlation mining, where it is the matrix of pairwise and partial correlations among the variables that are of interest. Correlation mining arises in numerous applications and subsumes the regression context as a special case. We demonstrate various regimes of correlation mining based on the unifying perspective of high-dimensional learning rates and sample complexity for different structured covariance models and different inference tasks.
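The sample-starved regime discussed above (n fixed, p growing) can be illustrated with a small simulation, not taken from the article: even when all p variables are truly independent, the largest of the p(p-1)/2 sample correlations tends to grow with p, which is why sample complexity, and not just computational complexity, limits reliable correlation discovery.

```python
import random

def max_abs_sample_correlation(n, p, seed=0):
    """Largest |sample correlation| among p independent Gaussian variables,
    each observed n times."""
    rng = random.Random(seed)
    data = [[rng.gauss(0, 1) for _ in range(n)] for _ in range(p)]

    def corr(xs, ys):
        mx = sum(xs) / n
        my = sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        vx = sum((x - mx) ** 2 for x in xs)
        vy = sum((y - my) ** 2 for y in ys)
        return cov / (vx * vy) ** 0.5

    return max(
        abs(corr(data[i], data[j]))
        for i in range(p) for j in range(i + 1, p)
    )

# With n fixed at 10, the maximal spurious correlation rises as p grows.
low_p = max_abs_sample_correlation(n=10, p=5)
high_p = max_abs_sample_correlation(n=10, p=50)
```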

  7. Foundational Principles for Large-Scale Inference: Illustrations Through Correlation Mining

    PubMed Central

    Hero, Alfred O.; Rajaratnam, Bala

    2015-01-01

    When can reliable inference be drawn in the “Big Data” context? This paper presents a framework for answering this fundamental question in the context of correlation mining, with implications for general large scale inference. In large scale data applications like genomics, connectomics, and eco-informatics the dataset is often variable-rich but sample-starved: a regime where the number n of acquired samples (statistical replicates) is far fewer than the number p of observed variables (genes, neurons, voxels, or chemical constituents). Much of recent work has focused on understanding the computational complexity of proposed methods for “Big Data”. Sample complexity however has received relatively less attention, especially in the setting when the sample size n is fixed, and the dimension p grows without bound. To address this gap, we develop a unified statistical framework that explicitly quantifies the sample complexity of various inferential tasks. Sampling regimes can be divided into several categories: 1) the classical asymptotic regime where the variable dimension is fixed and the sample size goes to infinity; 2) the mixed asymptotic regime where both variable dimension and sample size go to infinity at comparable rates; 3) the purely high dimensional asymptotic regime where the variable dimension goes to infinity and the sample size is fixed. Each regime has its niche but only the latter regime applies to exa-scale data dimension. We illustrate this high dimensional framework for the problem of correlation mining, where it is the matrix of pairwise and partial correlations among the variables that are of interest. Correlation mining arises in numerous applications and subsumes the regression context as a special case. We demonstrate various regimes of correlation mining based on the unifying perspective of high dimensional learning rates and sample complexity for different structured covariance models and different inference tasks. PMID:27087700

  8. Foundational Principles for Large-Scale Inference: Illustrations Through Correlation Mining

    DOE PAGES

    Hero, Alfred O.; Rajaratnam, Bala

    2015-12-09

    When can reliable inference be drawn in the ‘‘Big Data’’ context? This article presents a framework for answering this fundamental question in the context of correlation mining, with implications for general large-scale inference. In large-scale data applications like genomics, connectomics, and eco-informatics, the data set is often variable rich but sample starved: a regime where the number n of acquired samples (statistical replicates) is far fewer than the number p of observed variables (genes, neurons, voxels, or chemical constituents). Much of recent work has focused on understanding the computational complexity of proposed methods for ‘‘Big Data.’’ Sample complexity, however, has received relatively less attention, especially in the setting when the sample size n is fixed, and the dimension p grows without bound. To address this gap, we develop a unified statistical framework that explicitly quantifies the sample complexity of various inferential tasks. Sampling regimes can be divided into several categories: 1) the classical asymptotic regime where the variable dimension is fixed and the sample size goes to infinity; 2) the mixed asymptotic regime where both variable dimension and sample size go to infinity at comparable rates; and 3) the purely high-dimensional asymptotic regime where the variable dimension goes to infinity and the sample size is fixed. Each regime has its niche but only the latter regime applies to exa-scale data dimension. We illustrate this high-dimensional framework for the problem of correlation mining, where it is the matrix of pairwise and partial correlations among the variables that are of interest. Correlation mining arises in numerous applications and subsumes the regression context as a special case. We demonstrate various regimes of correlation mining based on the unifying perspective of high-dimensional learning rates and sample complexity for different structured covariance models and different inference tasks.

  9. Sampling methods for microbiological analysis of red meat and poultry carcasses.

    PubMed

    Capita, Rosa; Prieto, Miguel; Alonso-Calleja, Carlos

    2004-06-01

    Microbiological analysis of carcasses at slaughterhouses is required in the European Union for evaluating the hygienic performance of carcass production processes as required for effective hazard analysis critical control point implementation. The European Union microbial performance standards refer exclusively to the excision method, even though swabbing using the wet/dry technique is also permitted when correlation between both destructive and nondestructive methods can be established. For practical and economic reasons, the swab technique is the most extensively used carcass surface-sampling method. The main characteristics, advantages, and limitations of the common excision and swabbing methods are described here.

  10. Procedures in the 13C octanoic acid breath test for measurement of gastric emptying: analysis using Bland-Altman methods.

    PubMed

    Clegg, Miriam E; Shafat, Amir

    2010-08-01

    The ¹³C octanoic acid breath test (OBT) was first developed as an alternative to scintigraphy for measuring gastric emptying (GE). There has been much debate about the test duration and how often measurements need to be taken. This study aims to address these issues. For 78 GE tests using the ¹³C OBT, the GE lag phase (Tlag) was calculated while sampling more frequently than the recommended every 15 min. Comparisons between Tlag values were completed using Bland-Altman plots. Similarly, 4- and 6-h test durations were assessed to establish whether they yield the same GE half time (Thalf). From one volunteer, samples were taken every 1 min for the first 30 min and then every 15 min until 6 h. GE times were then calculated using different combinations of sampling times. Evidence of a visible Tlag was also explored in these data. Findings indicated that taking samples every 5 min for the first 30 min instead of every 15 min did not change the GE Tlag based on Bland-Altman plots. The correlation between these two methods was also high (r² = 0.9957). The findings showed that the difference between the two sampling durations, 4 and 6 h, was large and the correlation between the methods was low (r² = 0.8335). Samples taken at a rate of one breath per min indicated the lack of a visible Tlag. Sampling for the ¹³C OBT should be completed every 15 min for 6 h.
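Bland-Altman analysis, as used above, plots the pairwise differences between two methods against their means and summarizes agreement by the mean difference (bias) and the limits of agreement (bias ± 1.96 SD of the differences). A minimal sketch with hypothetical paired gastric-emptying times, not the study's data:

```python
import statistics

def bland_altman(method_a, method_b):
    """Return (bias, lower limit, upper limit) for two paired measurement series."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = statistics.fmean(diffs)
    sd = statistics.stdev(diffs)  # sample SD of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical Thalf estimates (minutes) from two sampling schedules
every_5_min = [112.0, 98.5, 130.2, 105.7, 121.9, 99.8]
every_15_min = [110.5, 99.0, 131.0, 104.9, 123.1, 100.2]

bias, lo, hi = bland_altman(every_5_min, every_15_min)
```

A small bias with narrow limits of agreement, as here, is the Bland-Altman signature of two schedules measuring the same quantity.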

  11. Purge-and-trap ion chromatography for the determination of trace ammonium ion in high-salinity water samples.

    PubMed

    Wang, Po-Yen; Wu, Jing-Yi; Chen, Hung-Jhen; Lin, Tzung-Yi; Wu, Chien-Hou

    2008-04-25

    It has always been assumed that the purge-and-trap (P&T) method is used only for the analysis of volatile organic compounds (VOCs) in aqueous samples. In this paper, a novel P&T preconcentrator has been developed for the determination of trace amounts of ammonium ion in high-salinity water samples by ion chromatography (IC). Method performance is evaluated as a function of the concentration of the assistant purging material, purging time, and flow rate. Under the optimum P&T conditions, with purified nitrogen gas at a flow rate of 40 mL/min for 15.0 min at 40 degrees C, the overall collection efficiency is independent of the ammonium concentration over the range 1.2-5.9 microM. The enrichment factor (EF) of ammonium correlates with the ratio of the sample volume to the acceptor solution volume in the trap vessel, providing a potentially unlimited increase of the ammonium signal. Our results indicate that environmental samples with low levels of ammonium in matrices with high concentrations of sodium can be easily analyzed, with a detection limit down to the 75 nM (1.35 ppb) level, corresponding to picomoles of ammonia in the injected sample. A calibration graph was constructed with ammonium standards ranging from 0.05 to 6.0 microM, and the linearity of the present method was good, with squared correlation coefficients better than 0.997. Thus, we have demonstrated that the P&T-IC method allows the routine determination of ammonium ion in seawater samples without cation interferences.

  12. Recent advances on terrain database correlation testing

    NASA Astrophysics Data System (ADS)

    Sakude, Milton T.; Schiavone, Guy A.; Morelos-Borja, Hector; Martin, Glenn; Cortes, Art

    1998-08-01

    Terrain database correlation is a major requirement for interoperability in distributed simulation. There are numerous situations in which terrain database correlation problems can occur that, in turn, lead to lack of interoperability in distributed training simulations. Examples are the use of different run-time terrain databases derived from inconsistent source data, the use of different resolutions, and the use of different data models between databases for both terrain and culture data. IST has been developing a suite of software tools, named ZCAP, to address terrain database interoperability issues. In this paper we discuss recent enhancements made to this suite, including improved algorithms for sampling and calculating line-of-sight, an improved method for measuring terrain roughness, and the application of a sparse matrix method to the terrain remediation solution developed at the Visual Systems Lab of the Institute for Simulation and Training. We review the application of some of these new algorithms to the terrain correlation measurement processes. The application of these new algorithms improves our support for very large terrain databases, and provides the capability for performing test replications to estimate the sampling error of the tests. With this set of tools, a user can quantitatively assess the degree of correlation between large terrain databases.

  13. [Correlation Between Functional Groups and Radical Scavenging Activities of Acidic Polysaccharides from Dendrobium].

    PubMed

    Liao, Ying; Yuan, Wen-yu; Zheng, Wen-ke; Luo, Ao-xue; Fan, Yi-jun

    2015-11-01

    The aim was to compare the radical scavenging activity of five different acidic polysaccharides and to identify its correlation with their functional groups. Alkali extraction and stepwise ethanol precipitation were used to extract and concentrate the five Dendrobium polysaccharides; the sulfuric acid and uronic acid contents of each acidic polysaccharide were determined, along with the scavenging activity toward ABTS+ radicals and hydroxyl radicals. Functional group structures were examined by FTIR spectrometry. The five Dendrobium polysaccharides differed in their ability to scavenge ABTS+ and hydroxyl free radicals. Moreover, the antioxidant activity of the five acidic polysaccharides showed a clear correlation with uronic acid and sulfuric acid content: the antioxidant activity of each sample was positively correlated with uronic acid content and negatively correlated with sulfuric acid content. Sulfuric acid can inhibit the antioxidant activity of acidic polysaccharides, whereas uronic acid can enhance free radical scavenging activity. Structural analysis of the five acidic polysaccharides showed that all samples have similar structures; however, Dendrobium denneanum, Dendrobium devonianum, and Dendrobium officinale, which had the β configuration, showed higher antioxidant activity than Dendrobium nobile and Dendrobium fimbriatum, which had the α configuration.

  14. Determination of Dornic Acidity as a Method to Select Donor Milk in a Milk Bank

    PubMed Central

    Garcia-Lara, Nadia Raquel; Escuder-Vieco, Diana; Chaves-Sánchez, Fernando; De la Cruz-Bertolo, Javier; Pallas-Alonso, Carmen Rosa

    2013-01-01

    Background: Dornic acidity may be an indirect measurement of a milk's bacterial content and quality. There are no uniform criteria among different human milk banks on milk acceptance. The main aim of this study is to report the correlation between Dornic acidity and bacterial growth in donor milk, in order to validate the Dornic acidity value as an adequate method to select milk prior to its pasteurization. Materials and Methods: From 105 pools, 4-mL samples of human milk were collected. Dornic acidity was measured, and cultures on blood agar and MacConkey agar were performed. Based on Dornic acidity degrees, we classified milk into three quality categories: top quality (acidity <4°D), intermediate (acidity between 4°D and 7°D), and milk unsuitable to be consumed (acidity ≥8°D). Spearman's correlation coefficient was used for statistical analysis. Results: Seventy percent of the samples had Dornic acidity under 4°D, and 88% had a value under 8°D. A weak positive correlation was observed between bacterial growth in milk and Dornic acidity. The overall discrimination performance of Dornic acidity was higher for predicting growth of Gram-negative organisms. In milk with Dornic acidity of ≥4°D, the measurement has a sensitivity of 100% for detecting all samples with Gram-negative bacterial growth of over 10⁵ colony-forming units/mL. Conclusions: The correlation between Dornic acidity and bacterial growth in donor milk is weak but positive. The measurement of Dornic acidity could be considered a simple and economical method to select milk to pasteurize in a human milk bank based on quality and safety criteria. PMID:23373435
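The three quality categories above map directly to a small classification rule; a sketch, assuming acidity is reported in whole Dornic degrees as is conventional:

```python
def classify_dornic(acidity_degrees: float) -> str:
    """Classify donor milk by Dornic acidity: <4 top quality, 4-7 intermediate,
    >=8 unsuitable (degrees are conventionally reported as whole numbers)."""
    if acidity_degrees < 4:
        return "top quality"
    if acidity_degrees <= 7:
        return "intermediate"
    return "unsuitable"

categories = [classify_dornic(d) for d in (2, 4, 7, 8, 12)]
```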

  15. Evaluation of conventional and alternative monitoring methods for a recreational marine beach with nonpoint source of fecal contamination.

    PubMed

    Shibata, Tomoyuki; Solo-Gabriele, Helena M; Sinigalliano, Christopher D; Gidley, Maribeth L; Plano, Lisa R W; Fleisher, Jay M; Wang, John D; Elmir, Samir M; He, Guoqing; Wright, Mary E; Abdelzaher, Amir M; Ortega, Cristina; Wanless, David; Garza, Anna C; Kish, Jonathan; Scott, Troy; Hollenbeck, Julie; Backer, Lorraine C; Fleming, Lora E

    2010-11-01

    The objectives of this work were to compare enterococci (ENT) measurements based on the membrane filter, ENT(MF) with alternatives that can provide faster results including alternative enterococci methods (e.g., chromogenic substrate (CS), and quantitative polymerase chain reaction (qPCR)), and results from regression models based upon environmental parameters that can be measured in real-time. ENT(MF) were also compared to source tracking markers (Staphylococcus aureus, Bacteroidales human and dog markers, and Catellicoccus gull marker) in an effort to interpret the variability of the signal. Results showed that concentrations of enterococci based upon MF (<2 to 3320 CFU/100 mL) were significantly different from the CS and qPCR methods (p < 0.01). The correlations between MF and CS (r = 0.58, p < 0.01) were stronger than between MF and qPCR (r ≤ 0.36, p < 0.01). Enterococci levels by MF, CS, and qPCR methods were positively correlated with turbidity and tidal height. Enterococci by MF and CS were also inversely correlated with solar radiation but enterococci by qPCR was not. The regression model based on environmental variables provided fair qualitative predictions of enterococci by MF in real-time, for daily geometric mean levels, but not for individual samples. Overall, ENT(MF) was not significantly correlated with source tracking markers with the exception of samples collected during one storm event. The inability of the regression model to predict ENT(MF) levels for individual samples is likely due to the different sources of ENT impacting the beach at any given time, making it particularly difficult to predict short-term variability of ENT(MF) from environmental parameters.

  16. The cluster-cluster correlation function. [of galaxies

    NASA Technical Reports Server (NTRS)

    Postman, M.; Geller, M. J.; Huchra, J. P.

    1986-01-01

    The clustering properties of the Abell and Zwicky cluster catalogs are studied using the two-point angular and spatial correlation functions. The catalogs are divided into eight subsamples to determine the dependence of the correlation function on distance, richness, and the method of cluster identification. It is found that the Corona Borealis supercluster contributes significant power to the spatial correlation function of the Abell cluster sample with distance class of four or less. The distance-limited catalog of 152 Abell clusters, which is not greatly affected by a single system, has a spatial correlation function consistent with the power law ξ(r) = 300 r^(-1.8). In both the distance class four or less and distance-limited samples, the signal in the spatial correlation function follows a power law detectable out to 60/h Mpc. The amplitude of ξ(r) for clusters of richness class two is about three times that for richness class one clusters. The two-point spatial correlation function is sensitive to the use of estimated redshifts.
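The quoted power law, ξ(r) = 300 r^(-1.8), can be evaluated directly; the sketch below also derives the implied correlation length r₀ (the separation where ξ = 1) and applies the threefold richness-class-two amplitude mentioned above. The example separation is illustrative.

```python
def xi(r_mpc_over_h: float, amplitude: float = 300.0, slope: float = -1.8) -> float:
    """Two-point spatial correlation function xi(r) = amplitude * r**slope."""
    return amplitude * r_mpc_over_h ** slope

# Correlation length r0 is where xi(r0) = 1, i.e. r0 = amplitude ** (-1 / slope)
r0 = 300.0 ** (1.0 / 1.8)

# Richness scaling quoted above: richness-class-2 amplitude about 3x class 1
xi_class1 = xi(20.0)
xi_class2 = xi(20.0, amplitude=3 * 300.0)
```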

  17. Evaluation Methodology for Surface Engineering Techniques to Improve Powertrain Efficiency in Military Vehicles

    DTIC Science & Technology

    2012-06-01

    Conducting metrology, surface analysis, and metallography/fractography interrogations of samples to correlate microstructure with friction...are examined using a variety of methods such as metallography, chemical analysis, fractography, and hardness measurements. These methods assist in

  18. Trace element contamination in feather and tissue samples from Anna’s hummingbirds

    USGS Publications Warehouse

    Mikoni, Nicole A.; Poppenga, Robert H.; Ackerman, Joshua T.; Foley, Janet E.; Hazlehurst, Jenny; Purdin, Güthrum; Aston, Linda; Hargrave, Sabine; Jelks, Karen; Tell, Lisa A.

    2017-01-01

    Trace element contamination (17 elements: Be, V, Cr, Mn, Fe, Co, Ni, Cu, Zn, As, Se, Mo, Cd, Ba, Hg, Tl, and Pb) of live (feather samples only) and deceased (feather and tissue samples) Anna's hummingbirds (Calypte anna) was evaluated. Samples were analyzed using inductively coupled plasma-mass spectrometry (ICP-MS; 17 elements) and atomic absorption (AA) spectrophotometry (Hg only). The mean plus one standard deviation (SD) was considered the benchmark, and concentrations above the mean + 1 SD were considered elevated above normal. Contour feathers were sampled from live birds of varying age, sex, and California location. To reduce thermal impacts, minimal feathers were taken from live birds; therefore, a novel method was developed for preparing low-mass feather samples for ICP-MS analysis. The study found that this novel preparation method enabled small-mass feather samples to be analyzed for trace elements using ICP-MS. For feather samples from live birds, all trace elements, with the exception of beryllium, had concentrations above the mean + 1 SD. Important risk factors for elevated trace element concentrations in feathers of live birds were age for iron, zinc, and arsenic, and location for iron, manganese, zinc, and selenium. For samples from deceased birds, ICP-MS results from body and tail feathers were correlated for Fe, Zn, and Pb, and feather concentrations were correlated with renal (Fe, Zn, Pb) or hepatic (Hg) tissue concentrations. Results for samples from deceased birds analyzed by AA spectrophotometry further supported the ICP-MS findings: a strong correlation was found between mercury concentrations in feather and tissue (pectoral muscle) samples. These results suggest that sampling feathers from live free-ranging hummingbirds may be a useful, non-lethal method for evaluating trace element exposure, providing a sampling alternative where small body size limits traditional sampling of blood and tissues.
    The results from this study provide a benchmark for the distribution of trace element concentrations in feather and tissue samples from hummingbirds and suggest a reference point for concentrations exceeding normal. Lastly, pollinating avian species are minimally represented in the literature as bioindicators of environmental trace element contamination. Given that trace elements can move through food chains by a variety of routes, our study indicates that hummingbirds are possible bioindicators of such contamination.
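The mean + 1 SD benchmark used above reduces to a one-line threshold; a generic sketch (the element and concentration values are hypothetical, not the study's data):

```python
import statistics

def elevated_samples(concentrations):
    """Return (values above mean + 1 sample SD, the threshold itself)."""
    threshold = statistics.fmean(concentrations) + statistics.stdev(concentrations)
    return [c for c in concentrations if c > threshold], threshold

# Hypothetical feather zinc concentrations (ug/g dry weight)
zinc = [105.0, 98.0, 110.0, 102.0, 250.0, 95.0, 108.0]
flagged, cutoff = elevated_samples(zinc)
```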

  19. Body Adiposity Index Performance in Estimating Body Fat Percentage in Colombian College Students: Findings from the FUPRECOL-Adults Study.

    PubMed

    Ramírez-Vélez, Robinson; Correa-Bautista, Jorge Enrique; González-Ruíz, Katherine; Vivas, Andrés; Triana-Reina, Héctor Reynaldo; Martínez-Torres, Javier; Prieto-Benavides, Daniel Humberto; Carrillo, Hugo Alejandro; Ramos-Sepúlveda, Jeison Alexander; Villa-González, Emilio; García-Hermoso, Antonio

    2017-01-17

    Recently, a body adiposity index (BAI = hip circumference (cm)/height (m)^1.5 − 18) was developed and validated in adult populations. The aim of this study was to evaluate the performance of BAI in estimating percentage body fat (BF%) in a sample of Colombian collegiate young adults. The participants comprised 903 volunteers (52% females, mean age = 21.4 ± 3.3 years). We used linear regression, Bland-Altman agreement analysis, Lin's concordance correlation coefficient (ρc), and the coefficient of determination (R²) between BAI and BF% as measured by bioelectrical impedance analysis (BIA). The correlation between the two methods of estimating BF% was R² = 0.384, p < 0.001. A paired-sample t-test showed a difference between the methods (BIA BF% = 16.2 ± 3.1, BAI BF% = 30.0 ± 5.4%; p < 0.001). The bias was 6.0 ± 6.2 BF% (95% confidence interval (CI) = −6.0 to 18.2), indicating that the BAI method overestimated BF% relative to the reference method. Lin's concordance correlation coefficient was poor (ρc = 0.014, 95% CI = −0.124 to 0.135; p = 0.414). In Colombian college students there was poor agreement between BAI- and BIA-based estimates of BF%, so BAI is not accurate in people with low or high body fat percentage levels.
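The BAI formula, hip circumference in cm divided by height in m raised to the power 1.5, minus 18, can be computed directly from two tape measurements; the example values below are illustrative, not study data:

```python
def body_adiposity_index(hip_circumference_cm: float, height_m: float) -> float:
    """BAI = hip circumference (cm) / height (m) ** 1.5 - 18,
    a tape-measure estimate of body fat percentage."""
    return hip_circumference_cm / height_m ** 1.5 - 18.0

# Illustrative measurements: 95 cm hip circumference, 1.70 m height
bai = body_adiposity_index(95.0, 1.70)
```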

  20. DISCO: Distance and Spectrum Correlation Optimization Alignment for Two Dimensional Gas Chromatography Time-of-Flight Mass Spectrometry-based Metabolomics

    PubMed Central

    Wang, Bing; Fang, Aiqin; Heim, John; Bogdanov, Bogdan; Pugh, Scott; Libardoni, Mark; Zhang, Xiang

    2010-01-01

    A novel peak alignment algorithm using a distance and spectrum correlation optimization (DISCO) method has been developed for two-dimensional gas chromatography time-of-flight mass spectrometry (GC×GC/TOF-MS) based metabolomics. This algorithm uses the output of the instrument control software, ChromaTOF, as its input data. It detects and merges multiple peak entries of the same metabolite into one peak entry in each input peak list. After a z-score transformation of metabolite retention times, DISCO selects landmark peaks from all samples based on both two-dimensional retention times and mass spectrum similarity of fragment ions measured by Pearson’s correlation coefficient. A local linear fitting method is employed in the original two-dimensional retention time space to correct retention time shifts. A progressive retention time map searching method is used to align metabolite peaks in all samples together based on optimization of the Euclidean distance and mass spectrum similarity. The effectiveness of the DISCO algorithm is demonstrated using data sets acquired under different experiment conditions and a spiked-in experiment. PMID:20476746
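Two ingredients of the DISCO alignment described above, the z-score transformation of retention times and a match score combining Euclidean retention-time distance with Pearson spectrum similarity, can be sketched as follows. This is a simplified illustration with hypothetical peaks, not the published implementation:

```python
import statistics

def zscore(values):
    """Standardize retention times to zero mean and unit variance."""
    mu, sd = statistics.fmean(values), statistics.stdev(values)
    return [(v - mu) / sd for v in values]

def pearson(xs, ys):
    """Pearson correlation between two fragment-ion intensity vectors."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def match_score(rt_a, rt_b, spec_a, spec_b, weight=0.5):
    """Lower is better: Euclidean distance in (z-scored) 2D retention time,
    plus a penalty of (1 - spectrum correlation)."""
    dist = ((rt_a[0] - rt_b[0]) ** 2 + (rt_a[1] - rt_b[1]) ** 2) ** 0.5
    return weight * dist + (1 - weight) * (1 - pearson(spec_a, spec_b))

# z-scoring hypothetical first-dimension retention times (s)
rt1_z = zscore([612.0, 618.0, 905.0, 1230.0])

# Hypothetical peaks: (z-scored (RT1, RT2), fragment-ion intensities)
peak = ((0.10, -0.25), [100, 40, 5, 80, 12])
same_metabolite = ((0.12, -0.22), [98, 42, 6, 79, 11])
different_metabolite = ((0.11, -0.24), [5, 90, 70, 2, 55])

s_same = match_score(peak[0], same_metabolite[0], peak[1], same_metabolite[1])
s_diff = match_score(peak[0], different_metabolite[0], peak[1], different_metabolite[1])
```

Combining both terms lets the aligner reject a peak that is close in retention time but spectrally unlike the candidate, as the `different_metabolite` example shows.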

  1. Functional Analyses of NSF1 in Wine Yeast Using Interconnected Correlation Clustering and Molecular Analyses

    PubMed Central

    Bessonov, Kyrylo; Walkey, Christopher J.; Shelp, Barry J.; van Vuuren, Hennie J. J.; Chiu, David; van der Merwe, George

    2013-01-01

    Analyzing time-course expression data captured in microarray datasets is a complex undertaking as the vast and complex data space is represented by a relatively low number of samples as compared to thousands of available genes. Here, we developed the Interdependent Correlation Clustering (ICC) method to analyze relationships that exist among genes conditioned on the expression of a specific target gene in microarray data. Based on Correlation Clustering, the ICC method analyzes a large set of correlation values related to gene expression profiles extracted from given microarray datasets. ICC can be applied to any microarray dataset and any target gene. We applied this method to microarray data generated from wine fermentations and selected NSF1, which encodes a C2H2 zinc finger-type transcription factor, as the target gene. The validity of the method was verified by accurate identifications of the previously known functional roles of NSF1. In addition, we identified and verified potential new functions for this gene; specifically, NSF1 is a negative regulator for the expression of sulfur metabolism genes, the nuclear localization of Nsf1 protein (Nsf1p) is controlled in a sulfur-dependent manner, and the transcription of NSF1 is regulated by Met4p, an important transcriptional activator of sulfur metabolism genes. The inter-disciplinary approach adopted here highlighted the accuracy and relevancy of the ICC method in mining for novel gene functions using complex microarray datasets with a limited number of samples. PMID:24130853

  2. Lowering sample size in comparative analyses can indicate a correlation where there is none: example from Rensch's rule in primates.

    PubMed

    Lindenfors, P; Tullberg, B S

    2006-07-01

    The fact that characters may co-vary in organism groups because of shared ancestry and not always because of functional correlations was the initial rationale for developing phylogenetic comparative methods. Here we point out a case where similarity due to shared ancestry can produce an undesired effect when conducting an independent contrasts analysis. Under special circumstances, using a low sample size will produce results indicating an evolutionary correlation between characters where an analysis of the same pattern utilizing a larger sample size will show that this correlation does not exist. This is the opposite effect of increased sample size to that expected; normally an increased sample size increases the chance of finding a correlation. The situation where the problem occurs is when co-variation between the two continuous characters analysed is clumped in clades; e.g. when some phylogenetically conservative factors affect both characters simultaneously. In such a case, the correlation between the two characters becomes contingent on the number of clades sharing this conservative factor that are included in the analysis, in relation to the number of species contained within these clades. Removing species scattered evenly over the phylogeny will in this case remove the exact variation that diffuses the evolutionary correlation between the two characters - the variation contained within the clades sharing the conservative factor. We exemplify this problem by discussing a parallel in nature where the described problem may be of importance. This concerns the question of the presence or absence of Rensch's rule in primates.

  3. Rainfall Observed Over Bangladesh 2000-2008: A Comparison of Spatial Interpolation Methods

    NASA Astrophysics Data System (ADS)

    Pervez, M.; Henebry, G. M.

    2010-12-01

    In preparation for a hydrometeorological study of freshwater resources in the greater Ganges-Brahmaputra region, we compared the results of four methods of spatial interpolation applied to point measurements of daily rainfall over Bangladesh during 2000-2008. Two univariate methods (inverse distance weighting, and splines in regularized and tension forms) and two multivariate geostatistical methods (ordinary kriging and kriging with external drift) were used to interpolate daily observations from a network of 221 rain gauges across Bangladesh, spanning an area of 143,000 sq km. Elevation and topographic index were used as the covariates in the geostatistical methods. The validity of the interpolated maps was analyzed through cross-validation. The quality of the methods was assessed through Pearson and Spearman correlations and root mean square error measurements of accuracy in cross-validation. Preliminary results indicated that the univariate methods performed better than the geostatistical methods at daily scales, likely due to the relatively densely sampled point measurements and a weak correlation between rainfall and the covariates at daily scales in this region. Inverse distance weighting produced better results than the splines. For days with extreme or high rainfall, spatially and quantitatively, the correlation between observed and interpolated estimates appeared to be high (r² ≈ 0.6, RMSE ≈ 10 mm), although for low-rainfall days the correlations were poor (r² ≈ 0.1, RMSE ≈ 3 mm). The performance of these methods was influenced by the density of the sample point measurements, the quantity and spatial extent of the observed rainfall, and an appropriate search radius defining the neighboring points. Results indicated that interpolated rainfall estimates at daily scales may introduce uncertainties into the subsequent hydrometeorological analysis.
Interpolations at 5-day, 10-day, 15-day, and monthly time scales are currently under investigation.
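Of the interpolators compared, inverse distance weighting is the simplest: the estimate at an unsampled location is a weighted average of gauge values, with weights proportional to inverse distance raised to a power. A minimal sketch (the power parameter and gauge values are illustrative):

```python
def idw(target, gauges, power=2.0):
    """Inverse-distance-weighted estimate at `target` from (x, y, value) gauges."""
    weights, weighted_values = [], []
    for x, y, value in gauges:
        d = ((target[0] - x) ** 2 + (target[1] - y) ** 2) ** 0.5
        if d == 0:
            return value  # exact hit on a gauge: honor the observation
        w = 1.0 / d ** power
        weights.append(w)
        weighted_values.append(w * value)
    return sum(weighted_values) / sum(weights)

# Hypothetical daily rainfall (mm) at three gauges (x, y in km)
gauges = [(0.0, 0.0, 12.0), (10.0, 0.0, 4.0), (0.0, 10.0, 8.0)]
estimate = idw((2.0, 2.0), gauges)
```

The estimate is always bounded by the minimum and maximum gauge values, one reason IDW behaves predictably when gauges are dense, as in the network described above.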

  4. High-throughput analysis of lipid hydroperoxides in edible oils and fats using the fluorescent reagent diphenyl-1-pyrenylphosphine.

    PubMed

    Santas, Jonathan; Guzmán, Yeimmy J; Guardiola, Francesc; Rafecas, Magdalena; Bou, Ricard

    2014-11-01

    A fluorometric method for the determination of hydroperoxides (HP) in edible oils and fats using the reagent diphenyl-1-pyrenylphosphine (DPPP) was developed and validated. Two solvent media containing 100% butanol or a mixture of chloroform/methanol (2:1, v/v) can be used to solubilise lipid samples. Regardless of the solvent used to solubilise the sample, the DPPP method was precise, accurate, sensitive and easy to perform. The HP content of 43 oil and fat samples was determined and the results were compared with those obtained by means of the AOCS Official Method for the determination of peroxide value (PV) and the ferrous oxidation-xylenol orange (FOX) method. The proposed method not only correlates well with the PV and FOX methods, but also presents some advantages such as requiring low sample and solvent amounts and being suitable for high-throughput sample analysis. Copyright © 2014 Elsevier Ltd. All rights reserved.

  5. On the Power of Multivariate Latent Growth Curve Models to Detect Correlated Change

    ERIC Educational Resources Information Center

    Hertzog, Christopher; Lindenberger, Ulman; Ghisletta, Paolo; Oertzen, Timo von

    2006-01-01

    We evaluated the statistical power of single-indicator latent growth curve models (LGCMs) to detect correlated change between two variables (covariance of slopes) as a function of sample size, number of longitudinal measurement occasions, and reliability (measurement error variance). Power approximations following the method of Satorra and Saris…

  6. The Effect of Sample Size on Parametric and Nonparametric Factor Analytical Methods

    ERIC Educational Resources Information Center

    Kalkan, Ömür Kaya; Kelecioglu, Hülya

    2016-01-01

    Linear factor analysis models used to examine constructs underlying the responses are not very suitable for dichotomous or polytomous response formats. The associated problems cannot be eliminated by polychoric or tetrachoric correlations in place of the Pearson correlation. Therefore, we considered parameters obtained from the NOHARM and FACTOR…

  7. Mental Health and Clinical Correlates in Lesbian, Gay, Bisexual, and Queer Young Adults

    ERIC Educational Resources Information Center

    Grant, Jon E.; Odlaug, Brian L.; Derbyshire, Katherine; Schreiber, Liana R. N.; Lust, Katherine; Christenson, Gary

    2014-01-01

    Objective: This study examined the prevalence of mental health disorders and their clinical correlates in a university sample of lesbian, gay, bisexual, and queer (LGBQ) students. Participants: College students at a large public university. Methods: An anonymous, voluntary survey was distributed via random e-mail generation to university students…

  8. What to Do about Zero Frequency Cells when Estimating Polychoric Correlations

    ERIC Educational Resources Information Center

    Savalei, Victoria

    2011-01-01

    Categorical structural equation modeling (SEM) methods that fit the model to estimated polychoric correlations have become popular in the social sciences. When population thresholds are high in absolute value, contingency tables in small samples are likely to contain zero frequency cells. Such cells make the estimation of the polychoric…

  9. Enhanced sampling simulations of DNA step parameters.

    PubMed

    Karolak, Aleksandra; van der Vaart, Arjan

    2014-12-15

    A novel approach for the selection of step parameters as reaction coordinates in enhanced sampling simulations of DNA is presented. The method uses three atoms per base and does not require coordinate overlays or idealized base pairs. This allowed for a highly efficient implementation of the calculation of all step parameters and their Cartesian derivatives in molecular dynamics simulations. Good correlation between the calculated and actual twist, roll, tilt, shift, and slide parameters is obtained, while the correlation with rise is modest. The method is illustrated by its application to the methylated and unmethylated 5'-CATGTGACGTCACATG-3' double stranded DNA sequence. One-dimensional umbrella simulations indicate that the flexibility of the central CG step is only marginally affected by methylation. © 2014 Wiley Periodicals, Inc.

  10. THE RADIO/GAMMA-RAY CONNECTION IN ACTIVE GALACTIC NUCLEI IN THE ERA OF THE FERMI LARGE AREA TELESCOPE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ackermann, M.; Ajello, M.; Allafort, A.

    We present a detailed statistical analysis of the correlation between radio and gamma-ray emission of the active galactic nuclei (AGNs) detected by Fermi during its first year of operation, with the largest data sets ever used for this purpose. We use both archival interferometric 8.4 GHz data (from the Very Large Array and ATCA, for the full sample of 599 sources) and concurrent single-dish 15 GHz measurements from the Owens Valley Radio Observatory (OVRO, for a subsample of 199 objects). Our unprecedentedly large sample permits us to assess with high accuracy the statistical significance of the correlation, using a surrogate data method designed to simultaneously account for common-distance bias and the effect of a limited dynamical range in the observed quantities. We find that the statistical significance of a positive correlation between the centimeter radio and the broadband (E > 100 MeV) gamma-ray energy flux is very high for the whole AGN sample, with a probability of <10^-7 for the correlation appearing by chance. Using the OVRO data, we find that concurrent data improve the significance of the correlation from 1.6 × 10^-6 to 9.0 × 10^-8. Our large sample size allows us to study the dependence of correlation strength and significance on specific source types and gamma-ray energy band. We find that the correlation is very significant (chance probability < 10^-7) for both flat spectrum radio quasars and BL Lac objects separately; a dependence of the correlation strength on the considered gamma-ray energy band is also present, but additional data will be necessary to constrain its significance.

  11. The radio/gamma-ray connection in active galactic nuclei in the era of the Fermi Large Area Telescope

    DOE PAGES

    Ackermann, M.; Ajello, M.; Allafort, A.; ...

    2011-10-12

    We present a detailed statistical analysis of the correlation between radio and gamma-ray emission of the active galactic nuclei (AGNs) detected by Fermi during its first year of operation, with the largest data sets ever used for this purpose. We use both archival interferometric 8.4 GHz data (from the Very Large Array and ATCA, for the full sample of 599 sources) and concurrent single-dish 15 GHz measurements from the Owens Valley Radio Observatory (OVRO, for a subsample of 199 objects). Our unprecedentedly large sample permits us to assess with high accuracy the statistical significance of the correlation, using a surrogate data method designed to simultaneously account for common-distance bias and the effect of a limited dynamical range in the observed quantities. We find that the statistical significance of a positive correlation between the centimeter radio and the broadband (E > 100 MeV) gamma-ray energy flux is very high for the whole AGN sample, with a probability of <10^-7 for the correlation appearing by chance. Using the OVRO data, we find that concurrent data improve the significance of the correlation from 1.6 × 10^-6 to 9.0 × 10^-8. Our large sample size allows us to study the dependence of correlation strength and significance on specific source types and gamma-ray energy band. As a result, we find that the correlation is very significant (chance probability < 10^-7) for both flat spectrum radio quasars and BL Lac objects separately; a dependence of the correlation strength on the considered gamma-ray energy band is also present, but additional data will be necessary to constrain its significance.

  12. The Radio/Gamma-Ray Connection in Active Galactic Nuclei in the Era of the Fermi Large Area Telescope

    NASA Technical Reports Server (NTRS)

    Ackermann, M.; Ajello, M.; Allafort, A.; Angelakis, E.; Axelsson, M.; Baldini, L.; Ballet, J.; Barbiellini, G.; Bastieri, D.; Bellazzini, R.; hide

    2011-01-01

    We present a detailed statistical analysis of the correlation between radio and gamma-ray emission of the active galactic nuclei (AGNs) detected by Fermi during its first year of operation, with the largest data sets ever used for this purpose. We use both archival interferometric 8.4 GHz data (from the Very Large Array and ATCA, for the full sample of 599 sources) and concurrent single-dish 15 GHz measurements from the Owens Valley Radio Observatory (OVRO, for a subsample of 199 objects). Our unprecedentedly large sample permits us to assess with high accuracy the statistical significance of the correlation, using a surrogate data method designed to simultaneously account for common-distance bias and the effect of a limited dynamical range in the observed quantities. We find that the statistical significance of a positive correlation between the centimeter radio and the broadband (E > 100 MeV) gamma-ray energy flux is very high for the whole AGN sample, with a probability of <10^-7 for the correlation appearing by chance. Using the OVRO data, we find that concurrent data improve the significance of the correlation from 1.6 × 10^-6 to 9.0 × 10^-8. Our large sample size allows us to study the dependence of correlation strength and significance on specific source types and gamma-ray energy band. We find that the correlation is very significant (chance probability < 10^-7) for both flat spectrum radio quasars and BL Lac objects separately; a dependence of the correlation strength on the considered gamma-ray energy band is also present, but additional data will be necessary to constrain its significance.
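    The chance probabilities quoted in these records come from a surrogate data method tailored to common-distance bias and dynamic-range effects; a plain permutation test conveys the core idea of estimating how often shuffled flux pairings match the observed correlation (the flux values below are synthetic, not Fermi data):

```python
import math
import random

def pearson(x, y):
    """Sample Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

def permutation_pvalue(x, y, n_perm=2000, seed=1):
    """Chance probability that a shuffled pairing reaches the observed r."""
    rng = random.Random(seed)
    r_obs = pearson(x, y)
    hits = 0
    ys = list(y)
    for _ in range(n_perm):
        rng.shuffle(ys)
        if pearson(x, ys) >= r_obs:
            hits += 1
    return (hits + 1) / (n_perm + 1)   # add-one to avoid a zero p-value

# Strongly correlated toy fluxes: p should be small
rng = random.Random(0)
radio = [rng.random() for _ in range(60)]
gamma = [r + 0.2 * rng.random() for r in radio]
p = permutation_pvalue(radio, gamma)
```

    Unlike the paper's surrogate method, a naive permutation ignores the common-distance bias (distance correlates both fluxes with each other), which is exactly why the authors needed the more careful construction.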

  13. Synthesizing Information From Language Samples and Standardized Tests in School-Age Bilingual Assessment

    PubMed Central

    Pham, Giang

    2017-01-01

    Purpose Although language samples and standardized tests are regularly used in assessment, few studies provide clinical guidance on how to synthesize information from these testing tools. This study extends previous work on the relations between tests and language samples to a new population—school-age bilingual speakers with primary language impairment—and considers the clinical implications for bilingual assessment. Method Fifty-one bilingual children with primary language impairment completed narrative language samples and standardized language tests in English and Spanish. Children were separated into younger (ages 5;6 [years;months]–8;11) and older (ages 9;0–11;2) groups. Analysis included correlations with age and partial correlations between language sample measures and test scores in each language. Results Within the younger group, positive correlations with large effect sizes indicated convergence between test scores and microstructural language sample measures in both Spanish and English. There were minimal correlations in the older group for either language. Age related to English but not Spanish measures. Conclusions Tests and language samples complement each other in assessment. Wordless picture-book narratives may be more appropriate for ages 5–8 than for older children. We discuss clinical implications, including a case example of a bilingual child with primary language impairment, to illustrate how to synthesize information from these tools in assessment. PMID:28055056

  14. Correlation to FVIII:C in Two Thrombin Generation Tests: TGA-CAT and INNOVANCE ETP.

    PubMed

    Ljungkvist, Marcus; Berndtsson, Maria; Holmström, Margareta; Mikovic, Danijela; Elezovic, Ivo; Antovic, Jovan P; Zetterberg, Eva; Berntorp, Erik

    2017-01-01

    Several thrombin-generation tests are available, but few have been directly compared. Our primary aim was to investigate the correlation of two thrombin generation tests, thrombin generation assay-calibrated automated thrombogram (TGA-CAT) and INNOVANCE ETP, to factor VIII levels (FVIII:C) in a group of patients with hemophilia A. The secondary aim was to investigate inter-laboratory variation for the TGA-CAT method. Blood samples were taken from 45 patients with mild, moderate and severe hemophilia A. The TGA-CAT method was performed at both centers while the INNOVANCE ETP was only performed at the Stockholm center. Correlation between parameters was evaluated using Spearman's rank correlation test. For determination of the TGA-CAT inter-laboratory variability, Bland-Altman plots were used. The correlation of the INNOVANCE ETP and TGA-CAT methods with FVIII:C in persons with hemophilia (PWH) was r=0.701 and r=0.734, respectively. The correlation between the two methods was r=0.546. When dividing the study material into disease severity groups (mild, moderate and severe) based on FVIII levels, both methods fail to discriminate between them. The variability of the TGA-CAT results performed at the two centers was reduced after normalization; before normalization, 29% of values showed less than ±10% difference, while after normalization the number increased to 41%. Both methods correlate in an equal manner to FVIII:C in PWH but show a poor correlation with each other. The level of agreement for the TGA-CAT method was poor, though slightly improved after normalization of data. Further improvement of the standardization of these methods is warranted.
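    The Bland-Altman comparison used for the inter-laboratory analysis reduces to a bias (mean difference) and 95% limits of agreement; a minimal sketch with made-up paired values from two hypothetical centers:

```python
import statistics

def bland_altman(a, b):
    """Bland-Altman bias and 95% limits of agreement for paired measurements."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired results from two centers (same samples, same assay)
center1 = [102, 98, 110, 95, 101, 99, 104, 97]
center2 = [100, 97, 108, 96, 100, 97, 105, 95]
bias, (lo, hi) = bland_altman(center1, center2)
```

    Plotting the pairwise differences against the pairwise means, with horizontal lines at `bias`, `lo`, and `hi`, gives the familiar Bland-Altman plot; normalization, as in the record above, narrows these limits when it removes systematic between-center offsets.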

  15. Search for correlations between the arrival directions of IceCube neutrino events and ultrahigh-energy cosmic rays detected by the Pierre Auger Observatory and the Telescope Array

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    IceCube Collaboration; Pierre Auger Collaboration; Telescope Array Collaboration

    2016-01-01

    This paper presents the results of different searches for correlations between very high-energy neutrino candidates detected by IceCube and the highest-energy cosmic rays measured by the Pierre Auger Observatory and the Telescope Array. We first consider samples of cascade neutrino events and of high-energy neutrino-induced muon tracks, which provided evidence for a neutrino flux of astrophysical origin, and study their cross-correlation with the ultrahigh-energy cosmic ray (UHECR) samples as a function of angular separation. We also study their possible directional correlations using a likelihood method stacking the neutrino arrival directions and adopting different assumptions on the size of the UHECR magnetic deflections. Finally, we perform another likelihood analysis stacking the UHECR directions and using a sample of through-going muon tracks optimized for neutrino point-source searches with sub-degree angular resolution. No indications of correlations at discovery level are obtained for any of the searches performed. The smallest of the p-values comes from the search for correlation between UHECRs and IceCube high-energy cascades, a result that should continue to be monitored.

  16. Search for correlations between the arrival directions of IceCube neutrino events and ultrahigh-energy cosmic rays detected by the Pierre Auger Observatory and the Telescope Array

    DOE PAGES

    Aartsen, M. G.

    2016-01-20

    This study presents the results of different searches for correlations between very high-energy neutrino candidates detected by IceCube and the highest-energy cosmic rays measured by the Pierre Auger Observatory and the Telescope Array. We first consider samples of cascade neutrino events and of high-energy neutrino-induced muon tracks, which provided evidence for a neutrino flux of astrophysical origin, and study their cross-correlation with the ultrahigh-energy cosmic ray (UHECR) samples as a function of angular separation. We also study their possible directional correlations using a likelihood method stacking the neutrino arrival directions and adopting different assumptions on the size of the UHECR magnetic deflections. Finally, we perform another likelihood analysis stacking the UHECR directions and using a sample of through-going muon tracks optimized for neutrino point-source searches with sub-degree angular resolution. No indications of correlations at discovery level are obtained for any of the searches performed. The smallest of the p-values comes from the search for correlation between UHECRs and IceCube high-energy cascades, a result that should continue to be monitored.

  17. Multidimensional Normalization to Minimize Plate Effects of Suspension Bead Array Data.

    PubMed

    Hong, Mun-Gwan; Lee, Woojoo; Nilsson, Peter; Pawitan, Yudi; Schwenk, Jochen M

    2016-10-07

    Enhanced by the growing number of biobanks, biomarker studies can now be performed with reasonable statistical power by using large sets of samples. Antibody-based proteomics by means of suspension bead arrays offers one attractive approach to analyze serum, plasma, or CSF samples for such studies in microtiter plates. To expand measurements beyond single batches, with either 96 or 384 samples per plate, suitable normalization methods are required to minimize the variation between plates. Here we propose two normalization approaches utilizing MA coordinates. The multidimensional MA (multi-MA) and MA-loess both consider all samples of a microtiter plate per suspension bead array assay and thus do not require any external reference samples. We demonstrate the performance of the two MA normalization methods with data obtained from the analysis of 384 samples including both serum and plasma. Samples were randomized across 96-well sample plates, processed, and analyzed in assay plates, respectively. Using principal component analysis (PCA), we could show that plate-wise clusters found in the first two components were eliminated by multi-MA normalization as compared with other normalization methods. Furthermore, we studied the correlation profiles between random pairs of antibodies and found that both MA normalization methods substantially reduced the inflated correlation introduced by plate effects. Normalization approaches using multi-MA and MA-loess minimized batch effects arising from the analysis of several assay plates with antibody suspension bead arrays. In a simulated biomarker study, multi-MA restored associations lost due to plate effects. Our normalization approaches, which are available as R package MDimNormn, could also be useful in studies using other types of high-throughput assay data.
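    The multi-MA idea of working in log-ratio (M) versus average (A) coordinates can be illustrated with a deliberately simplified variant: each sample is mapped onto M values against a reference profile and its median M offset is removed. The function name, the median stand-in for the MA-loess fit, and the plate values are all illustrative assumptions, not the published MDimNormn implementation:

```python
import math
import statistics

def ma_median_normalize(plate, reference):
    """Remove each sample's median log2-ratio (M) offset against a reference
    profile. A simplified stand-in for the MA-loess fit described above."""
    normalized = []
    for sample in plate:
        m = [math.log2(s / r) for s, r in zip(sample, reference)]
        offset = statistics.median(m)           # plate/sample-level shift in M
        normalized.append([s / 2 ** offset for s in sample])
    return normalized

# Two "plates" measuring the same analytes; plate2 carries a 2x plate effect
plate1 = [[100.0, 200.0, 400.0], [110.0, 190.0, 420.0]]
plate2 = [[2 * v for v in row] for row in plate1]
ref = [statistics.mean(col) for col in zip(*plate1)]
norm2 = ma_median_normalize(plate2, ref)
```

    After normalization the median M of each sample against the reference is zero, so the artificial two-fold plate effect is removed; the published method fits a smooth curve in (M, A) rather than a single offset, which also corrects intensity-dependent distortions.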

  18. Comparison of Collection Methods for Fecal Samples for Discovery Metabolomics in Epidemiologic Studies.

    PubMed

    Loftfield, Erikka; Vogtmann, Emily; Sampson, Joshua N; Moore, Steven C; Nelson, Heidi; Knight, Rob; Chia, Nicholas; Sinha, Rashmi

    2016-11-01

    The gut metabolome may be associated with the incidence and progression of numerous diseases. The composition of the gut metabolome can be captured by measuring metabolite levels in the feces. However, there are few data describing the effect of fecal sample collection methods on metabolomic measures. We collected fecal samples from 18 volunteers using four methods: no solution, 95% ethanol, fecal occult blood test (FOBT) cards, and fecal immunochemical test (FIT). One set of samples was frozen after collection (day 0), and for 95% ethanol, FOBT, and FIT, a second set was frozen after 96 hours at room temperature. We evaluated (i) technical reproducibility within sample replicates, (ii) stability after 96 hours at room temperature for 95% ethanol, FOBT, and FIT, and (iii) concordance of metabolite measures with the putative "gold standard," day 0 samples without solution. Intraclass correlation coefficients (ICC) estimating technical reproducibility were high for replicate samples for each collection method. ICCs estimating stability at room temperature were high for 95% ethanol and FOBT (median ICC > 0.87) but not FIT (median ICC = 0.52). Similarly, Spearman correlation coefficients (r_s) estimating metabolite concordance with the "gold standard" were higher for 95% ethanol (median r_s = 0.82) and FOBT (median r_s = 0.70) than for FIT (median r_s = 0.40). Metabolomic measurements appear reproducible and stable in fecal samples collected with 95% ethanol or FOBT. Concordance with the "gold standard" is highest with 95% ethanol and acceptable with FOBT. Future epidemiologic studies should collect feces using 95% ethanol or FOBT if interested in studying fecal metabolomics. Cancer Epidemiol Biomarkers Prev; 25(11); 1483-90. ©2016 AACR. ©2016 American Association for Cancer Research.
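    The intraclass correlation coefficients used above to gauge technical reproducibility can be computed in the one-way random-effects form, ICC(1,1) = (MSB - MSW) / (MSB + (k-1)·MSW); a minimal sketch on made-up replicate pairs (the data are not from the study):

```python
import statistics

def icc_oneway(replicates):
    """One-way random-effects ICC(1,1) for technical replicates.

    replicates: one list of k replicate measurements per sample (equal k).
    """
    n = len(replicates)
    k = len(replicates[0])
    grand = statistics.mean(v for row in replicates for v in row)
    means = [statistics.mean(row) for row in replicates]
    msb = k * sum((m - grand) ** 2 for m in means) / (n - 1)       # between samples
    msw = sum((v - m) ** 2
              for row, m in zip(replicates, means)
              for v in row) / (n * (k - 1))                        # within samples
    return (msb - msw) / (msb + (k - 1) * msw)

# Replicate pairs that track each other closely -> ICC near 1
data = [[10.0, 10.1], [20.0, 19.8], [15.0, 15.2], [30.0, 30.1], [25.0, 24.9]]
icc = icc_oneway(data)
```

    An ICC near 1 means almost all variance lies between samples rather than between replicates, the situation the record reports for each collection method.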

  19. Wetlands Research Program. Evaluation of Methods for Sampling Vegetation and Delineating Wetlands Transition Zones in Southern Louisiana, January 1979-May 1981.

    DTIC Science & Technology

    1983-10-01

    sites should be collected and correlated to vegetational response through ordination or factor analysis techniques. 3. If site-specific methods are... specified number of samples is collected. ...only, although the data collected were sufficient for determination of such parameters as density, cover, frequency, basal area, and importance value

  20. Basic or extended urine sampling to analyse urine production?

    PubMed

    Denys, Marie-Astrid; Kapila, Vansh; Weiss, Jeffrey; Goessaert, An-Sofie; Everaert, Karel

    2017-09-01

    Frequency volume charts are valuable tools to objectify urine production in patients with nocturia, enuresis or nocturnal incontinence. Analyses of daytime and nighttime urine (=basic collection) or analyses of urine samples collected every 3 h (=extended collection) extend this evaluation by describing circadian patterns of water and solute diuresis (=renal function profiles). The aims were to assess intra-individual correlation and agreement between renal function profiles provided by basic and extended urine collections, and by two extended urine collections, and to create a short form of the extended collection. This prospective observational study was conducted at Ghent University Hospital, Belgium. Participation was open to anyone visiting the hospital. Participants collected one basic and two extended 24-h urine collections. Urinary levels of osmolality, sodium and creatinine were determined. There was a moderate to strong correlation between the results of the basic and extended urinalyses. Comparing the two extended urinalyses showed a moderate correlation between the eight individual samples and a weak to strong correlation between the mean daytime and nighttime values of renal function. Different samples could be considered most representative of mean daytime values, while all samples collected between 03:00 and 05:00 showed the highest agreement with mean nighttime values of renal function. Since there is good correlation and agreement between basic and extended urine collections for studying the mechanisms underlying urine production, the choice of urine sampling method depends on the purpose. A nighttime-only urine sample collected between 03:00 and 05:00 may be the most practical approach. © 2017 Wiley Periodicals, Inc.

  1. Correlation of skeletal maturation stages determined by cervical vertebrae and hand-wrist evaluations.

    PubMed

    Flores-Mir, Carlos; Burgess, Corr A; Champney, Mitchell; Jensen, Robert J; Pitcher, Micheal R; Major, Paul W

    2006-01-01

    The aim of this study was to assess the correlation between the Fishman maturation prediction method (FMP) and the cervical vertebral maturation (CVM) method for skeletal maturation stage determination. Hand-wrist and lateral cephalograms from 79 subjects (52 females and 27 males) were used. Hand-wrist radiographs were analyzed using the FMP to determine skeletal maturation level (advanced, average, or delayed) and stage (relative position of the individual in the pubertal growth curve). Cervical vertebrae (C2, C3, and C4) outlines obtained from lateral cephalograms were analyzed using the CVM to determine skeletal maturation stage. Intraexaminer reliability (intraclass correlation coefficient [ICC]) for both methods was calculated from 10 triplicate hand-wrist and lateral cephalograms from the same patients. An ICC of 0.985 for FMP and an ICC of 0.889 for CVM were obtained. A Spearman correlation value of 0.72 (P < .001) was found between the skeletal maturation stages of both methods. When the sample was subgrouped according to skeletal maturation level, the following correlation values were found: 0.73 for early maturing adolescents, 0.70 for average maturing adolescents, and 0.87 for late maturing adolescents. All these correlation values were statistically different from zero (P < .024). Correlation values between the two skeletal maturation methods were moderately high. This may be high enough to use either method interchangeably for research purposes, but not for the assessment of individual patients. Skeletal maturation level influences the correlation values and, therefore, should be considered whenever possible.
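    The Spearman correlation reported between the FMP and CVM stagings is a Pearson correlation computed on average ranks; a self-contained sketch with hypothetical stage assignments (not the study's data):

```python
def rankdata(values):
    """Average ranks; tied values share the mean of their rank positions."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # mean of rank positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rho = Pearson correlation of the rank vectors."""
    rx, ry = rankdata(x), rankdata(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Hypothetical stage assignments from two staging methods
stages_fmp = [1, 2, 2, 3, 4, 5, 6, 7]
stages_cvm = [1, 1, 2, 3, 3, 4, 5, 6]
rho = spearman(stages_fmp, stages_cvm)
```

    Rank-based correlation is the appropriate choice here because maturation stages are ordinal categories, not interval measurements.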

  2. Detection of Cryptosporidium and Cyclospora Oocysts from Environmental Water for Drinking and Recreational Activities in Sarawak, Malaysia.

    PubMed

    Bilung, Lesley Maurice; Tahar, Ahmad Syatir; Yunos, Nur Emyliana; Apun, Kasing; Lim, Yvonne Ai-Lian; Nillian, Elexson; Hashim, Hashimatul Fatma

    2017-01-01

    Cryptosporidiosis and cyclosporiasis are caused by waterborne coccidian protozoan parasites of the genera Cryptosporidium and Cyclospora, respectively. This study was conducted to detect Cryptosporidium and Cyclospora oocysts from environmental water abstracted by drinking water treatment plants and recreational activities in Sarawak, Malaysia. Water samples (12 each) were collected from Sungai Sarawak Kanan in Bau and Sungai Sarawak Kiri in Batu Kitang, respectively. In addition, 6 water samples each were collected from Ranchan Recreational Park and UNIMAS Lake at Universiti Malaysia Sarawak, Kota Samarahan, respectively. Water physicochemical parameters were also recorded. All samples were concentrated by the iron sulfate flocculation method followed by the sucrose floatation technique. Cryptosporidium and Cyclospora were detected by modified Ziehl-Neelsen technique. Correlation of the parasites distribution with water physicochemical parameters was analysed using bivariate Pearson correlation. Based on the 24 total samples of environmental water abstracted by drinking water treatment plants, all the samples (24/24; 100%) were positive with Cryptosporidium, and only 2 samples (2/24; 8.33%) were positive with Cyclospora. Based on the 12 total samples of water for recreational activities, 4 samples (4/12; 33%) were positive with Cryptosporidium, while 2 samples (2/12; 17%) were positive with Cyclospora. Cryptosporidium oocysts were negatively correlated with dissolved oxygen (DO).

  3. Detection of Cryptosporidium and Cyclospora Oocysts from Environmental Water for Drinking and Recreational Activities in Sarawak, Malaysia

    PubMed Central

    Tahar, Ahmad Syatir; Yunos, Nur Emyliana; Apun, Kasing; Nillian, Elexson; Hashim, Hashimatul Fatma

    2017-01-01

    Cryptosporidiosis and cyclosporiasis are caused by waterborne coccidian protozoan parasites of the genera Cryptosporidium and Cyclospora, respectively. This study was conducted to detect Cryptosporidium and Cyclospora oocysts from environmental water abstracted by drinking water treatment plants and recreational activities in Sarawak, Malaysia. Water samples (12 each) were collected from Sungai Sarawak Kanan in Bau and Sungai Sarawak Kiri in Batu Kitang, respectively. In addition, 6 water samples each were collected from Ranchan Recreational Park and UNIMAS Lake at Universiti Malaysia Sarawak, Kota Samarahan, respectively. Water physicochemical parameters were also recorded. All samples were concentrated by the iron sulfate flocculation method followed by the sucrose floatation technique. Cryptosporidium and Cyclospora were detected by modified Ziehl-Neelsen technique. Correlation of the parasites distribution with water physicochemical parameters was analysed using bivariate Pearson correlation. Based on the 24 total samples of environmental water abstracted by drinking water treatment plants, all the samples (24/24; 100%) were positive with Cryptosporidium, and only 2 samples (2/24; 8.33%) were positive with Cyclospora. Based on the 12 total samples of water for recreational activities, 4 samples (4/12; 33%) were positive with Cryptosporidium, while 2 samples (2/12; 17%) were positive with Cyclospora. Cryptosporidium oocysts were negatively correlated with dissolved oxygen (DO). PMID:29234679

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dalal, M.; Mallick, A.; Mahapatra, A.S.

    Highlights: • Cation distribution in tetrahedral and octahedral sites of spinel Ni0.4Zn0.4Co0.2Fe2O4. • Structural analysis of the observed X-ray diffraction pattern using the Rietveld method. • Study of hyperfine behaviour using Mössbauer spectroscopy. • Static and dynamic magnetic measurements. • Correlation of cation distributions obtained from Rietveld analysis with the results of magnetic and Mössbauer effect measurements. - Abstract: Nanoparticles of Ni0.4Zn0.4Co0.2Fe2O4 are prepared by a simple co-precipitation method. The as-dried sample is heat treated at 400, 500, 600, 700 and 800 °C to obtain different sizes of nanoparticles. The crystallographic phase of the samples is confirmed by analyzing the observed X-ray diffraction (XRD) patterns with the Rietveld method. Hyperfine parameters of the samples are derived from room temperature (RT) Mössbauer spectra. Magnetic properties of the samples are investigated by static and dynamic hysteresis loops. Different magneto-crystalline parameters are calculated from the variation of magnetization with temperature (M-T curve) under zero-field-cooled (ZFC) and field-cooled (FC) conditions for the as-dried sample. The cation distribution estimated from Rietveld analysis is correlated with the results of magnetic and Mössbauer effect measurements. The observed high saturation magnetization (72.7 emu/g at RT) of the sample annealed at 800 °C would be interesting for applications in different electromagnetic devices.

  5. Cluster designs to assess the prevalence of acute malnutrition by lot quality assurance sampling: a validation study by computer simulation

    PubMed Central

    Olives, Casey; Pagano, Marcello; Deitchler, Megan; Hedt, Bethany L; Egge, Kari; Valadez, Joseph J

    2009-01-01

    Traditional lot quality assurance sampling (LQAS) methods require simple random sampling to guarantee valid results. However, cluster sampling has been proposed to reduce the number of random starting points. This study uses simulations to examine the classification error of two such designs, a 67×3 (67 clusters of three observations) and a 33×6 (33 clusters of six observations) sampling scheme to assess the prevalence of global acute malnutrition (GAM). Further, we explore the use of a 67×3 sequential sampling scheme for LQAS classification of GAM prevalence. Results indicate that, for independent clusters with moderate intracluster correlation for the GAM outcome, the three sampling designs maintain approximate validity for LQAS analysis. Sequential sampling can substantially reduce the average sample size that is required for data collection. The presence of intercluster correlation can impact dramatically the classification error that is associated with LQAS analysis. PMID:20011037
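    The LQAS classification logic in this simulation study is, at its core, a threshold rule on the number of positive cases in the sample; the decision rule and prevalence values below are hypothetical, and this toy deliberately ignores the intracluster correlation that the paper shows can matter:

```python
import random

def lqas_classify(cases, decision_rule):
    """Flag an area as 'high prevalence' when cases exceed the decision rule."""
    return sum(cases) > decision_rule

def flag_rate(prevalence, n_clusters=67, per_cluster=3,
              decision_rule=25, n_sim=2000, seed=0):
    """Fraction of simulated surveys flagged 'high' at a given true prevalence."""
    rng = random.Random(seed)
    flagged = 0
    for _ in range(n_sim):
        cases = [1 if rng.random() < prevalence else 0
                 for _ in range(n_clusters * per_cluster)]
        flagged += lqas_classify(cases, decision_rule)
    return flagged / n_sim

low = flag_rate(0.05)    # true prevalence well below threshold: rarely flagged
high = flag_rate(0.20)   # well above threshold: almost always flagged
```

    Running both directions of this simulation over a grid of prevalences traces out the operating characteristic curve, from which the classification errors studied in the record are read off.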

  6. Accurate and fiducial-marker-free correction for three-dimensional chromatic shift in biological fluorescence microscopy.

    PubMed

    Matsuda, Atsushi; Schermelleh, Lothar; Hirano, Yasuhiro; Haraguchi, Tokuko; Hiraoka, Yasushi

    2018-05-15

    Correction of chromatic shift is necessary for precise registration of multicolor fluorescence images of biological specimens. New emerging technologies in fluorescence microscopy with increasing spatial resolution and penetration depth have prompted the need for more accurate methods to correct chromatic aberration. However, the amount of chromatic shift of the region of interest in biological samples often deviates from the theoretical prediction because of unknown dispersion in the biological samples. To measure and correct chromatic shift in biological samples, we developed a quadrisection phase correlation approach to computationally calculate translation, rotation, and magnification from reference images. Furthermore, to account for local chromatic shifts, images are split into smaller elements, for which the phase correlation between channels is measured individually and corrected accordingly. We implemented this method in an easy-to-use open-source software package, called Chromagnon, that is able to correct shifts with a 3D accuracy of approximately 15 nm. Applying this software, we quantified the level of uncertainty in chromatic shift correction, depending on the imaging modality used, and for different existing calibration methods, along with the proposed one. Finally, we provide guidelines to choose the optimal chromatic shift registration method for any given situation.
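    The quadrisection phase correlation in Chromagnon recovers translation, rotation, and magnification; the core translation step can be sketched in one dimension with a naive DFT (the signals and shift below are synthetic, and real images use 2-D FFTs):

```python
import cmath

def dft(x):
    """Naive O(n^2) discrete Fourier transform."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(x):
    n = len(x)
    return [sum(x[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

def phase_correlation_shift(a, b):
    """Estimate the circular shift s such that b[t] = a[t - s]."""
    fa, fb = dft(a), dft(b)
    cross = []
    for x, y in zip(fa, fb):
        prod = x.conjugate() * y           # cross-power spectrum
        cross.append(prod / abs(prod) if abs(prod) > 1e-12 else 0j)
    corr = idft(cross)                     # delta-like peak at the shift
    return max(range(len(corr)), key=lambda t: corr[t].real)

# A "channel" shifted circularly by 3 samples; phase correlation recovers it
a = [0.0] * 16
a[5] = 1.0
a[6] = 0.5
b = a[-3:] + a[:-3]
shift = phase_correlation_shift(a, b)
```

    Normalizing the cross-power spectrum to unit magnitude keeps only phase, which is what makes the correlation peak sharp; splitting the image into quadrants and comparing their per-quadrant shifts is one way rotation and magnification can then be inferred, as the record describes.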

  7. [Establishment of simultaneous measurement method of 8 salivary components using urinary test paper and clinical evaluation of oral environment].

    PubMed

    Yuuki, Kenji; Tsukasaki, Hiroaki; Kawawa, Tadaharu; Shiba, Akihiko; Shiba, Kiyoko

    2008-07-01

    Clinical findings were compared with glucose, protein, albumin, bilirubin, creatinine, pH, occult blood, ketone bodies, nitrite, and white blood cells in whole saliva to identify the components that most markedly reflect periodontal condition. The subjects were staff of the Prosthodontics Department, Showa University, and patients who visited for dental treatment (57 subjects in total). Whole saliva was collected in three stages: first, subjects gargled with 1.5 ml of distilled water for 15 seconds and spat the sample into a paper cup; second, a sample was collected by the same method; third, saliva produced after chewing paraffin gum for 60 seconds was spat into a paper cup. After sampling, 8 µl of each saliva sample was dropped onto the reagent pads of a 10-item urinary test strip, and the reflectance was measured with a dedicated reflectometer. For the periodontal tissue evaluation, the degree of alveolar bone resorption, probing depth, tooth mobility, and the presence or absence of root furcation lesions were examined and classified into 4 ranks. The mean values in each periodontal disease rank and the correlation between the ranks and the components were statistically analyzed. Bilirubin and ketone bodies were not measurable. The concentrations of the remaining 8 components increased with periodontal disease rank. High correlations with the periodontal disease ranks were noted for protein, albumin, creatinine, pH, and white blood cells. Simultaneous measurement of 8 salivary components using urinary test strips may be very useful for the diagnosis of periodontal disease of abutment teeth.

  8. Further Investigating Method Effects Associated with Negatively Worded Items on Self-Report Surveys

    ERIC Educational Resources Information Center

    DiStefano, Christine; Motl, Robert W.

    2006-01-01

    This article used multitrait-multimethod methodology and covariance modeling for an investigation of the presence and correlates of method effects associated with negatively worded items on the Rosenberg Self-Esteem (RSE) scale (Rosenberg, 1989) using a sample of 757 adults. Results showed that method effects associated with negative item phrasing…

  9. Using Epidemiologic Methods to Test Hypotheses regarding Causal Influences on Child and Adolescent Mental Disorders

    ERIC Educational Resources Information Center

    Lahey, Benjamin B.; D'Onofrio, Brian M.; Waldman, Irwin D.

    2009-01-01

    Epidemiology uses strong sampling methods and study designs to test refutable hypotheses regarding the causes of important health, mental health, and social outcomes. Epidemiologic methods are increasingly being used to move developmental psychopathology from studies that catalogue correlates of child and adolescent mental health to designs that…

  10. Simultaneous determination of nicotine, cotinine, and nicotine N-oxide in human plasma, semen, and sperm by LC-Orbitrap MS.

    PubMed

    Abu-Awwad, Ahmad; Arafat, Tawfiq; Schmitz, Oliver J

    2016-09-01

    Nicotine (Nic) distribution in human fluids and tissues has a deleterious effect on human health. In addition to its toxic profile, Nic may contribute to the particular impact of smoking on human reproduction. Although nicotine is present in seminal fluid, whether it reaches sperm cells was previously unknown. Herein, we developed and validated a new bioanalytical method for the simultaneous determination of Nic, cotinine (Cot), and nicotine N'-oxide (Nox) in human plasma, semen, and sperm by LC-ESI-Orbitrap-MS. Blood and semen samples were collected from 12 healthy smoking volunteers. Sperm bodies were then separated quantitatively from 1 mL of each semen sample by centrifugation. The method was fully validated for plasma following European and American guidelines for bioanalytical method validation, and partial validation was applied to semen analysis. Plasma, semen, and sperm samples were treated with trichloroacetic acid solution for direct protein precipitation in a single extraction step. The calibration range was linear between 5 and 250 ng/mL for Nic and Nox in plasma and semen, and between 10 and 500 ng/mL for Cot. Nic and Cot were detected in human sperm at concentrations as high as in plasma. In addition, Nox was present in semen and sperm but not in plasma. Graphical abstract: nicotine correlation between plasma and semen (a); cotinine correlation between plasma and semen (b); nicotine correlation between semen and sperm (c); cotinine correlation between semen and sperm (d).

  11. Total focusing method with correlation processing of antenna array signals

    NASA Astrophysics Data System (ADS)

    Kozhemyak, O. A.; Bortalevich, S. I.; Loginov, E. L.; Shinyakov, Y. A.; Sukhorukov, M. P.

    2018-03-01

    The article proposes a method of preliminary correlation processing of a complete set of antenna array signals used in the image reconstruction algorithm. The results of experimental studies of 3D reconstruction of various reflectors, with and without correlation processing, are presented. The software ‘IDealSystem3D’ by IDeal-Technologies was used for the experiments. Copper wires of different diameters placed in a water bath served as reflectors. Correlation processing yields a more accurate reconstruction of the reflector image and increases the signal-to-noise ratio. The experimental results were processed with an original program that allows the parameters of the antenna array and the sampling frequency to be varied.

  12. [Surface electromyography signal classification using gray system theory].

    PubMed

    Xie, Hongbo; Ma, Congbin; Wang, Zhizhong; Huang, Hai

    2004-12-01

    A new method based on gray correlation was introduced to improve the identification rate in artificial limb control. The electromyography (EMG) signal was first transformed into the time-frequency domain by wavelet transform. Singular value decomposition (SVD) was then used to extract a feature vector from the wavelet coefficients for pattern recognition. The decision was made according to the maximum gray correlation coefficient. Compared with neural network recognition, this robust method has an almost equivalent recognition rate but much lower computational cost and requires fewer training samples.
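The maximum-gray-correlation decision rule can be sketched with Deng's gray relational grade. The class templates and the joint Δmin/Δmax normalization over all candidates are illustrative assumptions; the paper's features come from wavelet-SVD rather than raw vectors:

```python
import numpy as np

def gray_relational_classify(feature, templates, rho=0.5):
    """Assign `feature` to the class whose template has the largest
    gray relational grade.  rho is Deng's distinguishing coefficient;
    Delta-min/Delta-max are taken jointly over all candidate templates."""
    labels = list(templates)
    deltas = np.abs(np.array([templates[l] for l in labels], dtype=float)
                    - np.asarray(feature, dtype=float))
    dmin, dmax = deltas.min(), deltas.max()
    if dmax == 0.0:                     # feature identical to every template
        return labels[0]
    coeff = (dmin + rho * dmax) / (deltas + rho * dmax)
    grades = coeff.mean(axis=1)         # gray relational grade per class
    return labels[int(np.argmax(grades))]
```

Taking Δmin/Δmax jointly across templates matters: per-pair normalization would let a uniformly distant template score spuriously high.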

  13. Integrative two-dimensional correlation spectroscopy (i2DCOS) for the intuitive identification of adulterated herbal materials

    NASA Astrophysics Data System (ADS)

    Chen, Jianbo; Wang, Yue; Rong, Lixin; Wang, Jingjuan

    2018-07-01

    IR, Raman, and other separation-free and label-free spectroscopic techniques are promising methods for the rapid and low-cost quality control of complex mixtures such as foods and herbs. However, because overlapped signals from different ingredients make it difficult to extract useful information, chemometric tools are often needed to find spectral features of interest. With designed perturbations, two-dimensional correlation spectroscopy (2DCOS) is a powerful technique to resolve overlapped spectral bands and enhance the apparent spectral resolution. In this research, integrative two-dimensional correlation spectroscopy (i2DCOS) is defined for the first time to overcome some disadvantages of synchronous and asynchronous correlation spectra for identification. The integrative 2D correlation spectra weight the asynchronous cross peaks by the corresponding synchronous cross peaks, combining the signal-to-noise advantage of synchronous correlation spectra with the spectral resolution advantage of asynchronous correlation spectra. The feasibility of integrative 2D correlation spectra for the quality control of complex mixtures is examined through the identification of adulterated Fritillariae Bulbus powders. Compared with model-based pattern recognition and multivariate calibration methods, i2DCOS provides intuitive identification results without requiring a large number of samples. The results show the potential of i2DCOS for the intuitive quality control of herbs and other complex mixtures, especially when the number of samples is small.
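Noda's synchronous and asynchronous correlation spectra underlying this discussion can be computed directly from a perturbation-ordered stack of spectra. The elementwise sync-weighting shown for `i2dcos` is one plausible reading of "weight the asynchronous cross peaks by the corresponding synchronous cross peaks", not the paper's confirmed formula:

```python
import numpy as np

def two_d_cos(spectra):
    """Synchronous and asynchronous 2D correlation spectra (Noda's
    formulation) from an (m, n_vars) stack of m perturbation-ordered
    spectra."""
    Y = spectra - spectra.mean(axis=0)          # dynamic spectra
    m = Y.shape[0]
    sync = Y.T @ Y / (m - 1)
    # Hilbert-Noda transformation matrix: 0 on the diagonal, 1/(pi*(k-j)) off it
    j, k = np.meshgrid(np.arange(m), np.arange(m), indexing='ij')
    with np.errstate(divide='ignore'):
        N = np.where(j == k, 0.0, 1.0 / (np.pi * (k - j)))
    asyn = Y.T @ (N @ Y) / (m - 1)
    return sync, asyn

def i2dcos(sync, asyn):
    """Assumed integrative spectrum: asynchronous peaks weighted
    elementwise by the synchronous ones."""
    return sync * asyn
```

Bands that vary fully in phase give a vanishing asynchronous spectrum, which is the property the tests below check.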

  14. On the Analysis of Case-Control Studies in Cluster-correlated Data Settings.

    PubMed

    Haneuse, Sebastien; Rivera-Rodriguez, Claudia

    2018-01-01

    In resource-limited settings, long-term evaluation of national antiretroviral treatment (ART) programs often relies on aggregated data, the analysis of which may be subject to ecological bias. As researchers and policy makers consider evaluating individual-level outcomes such as treatment adherence or mortality, the well-known case-control design is appealing in that it provides efficiency gains over random sampling. In the context that motivates this article, valid estimation and inference requires acknowledging any clustering, although, to our knowledge, no statistical methods have been published for the analysis of case-control data for which the underlying population exhibits clustering. Furthermore, in the specific context of an ongoing collaboration in Malawi, rather than performing case-control sampling across all clinics, case-control sampling within clinics has been suggested as a more practical strategy. To our knowledge, although similar outcome-dependent sampling schemes have been described in the literature, a case-control design specific to correlated data settings is new. In this article, we describe this design, discuss balanced versus unbalanced sampling techniques, and provide a general approach to analyzing case-control studies in cluster-correlated settings based on inverse probability-weighted generalized estimating equations. Inference is based on a robust sandwich estimator with correlation parameters estimated to ensure appropriate accounting of the outcome-dependent sampling scheme. We conduct comprehensive simulations, based in part on real data on a sample of N = 78,155 program registrants in Malawi between 2005 and 2007, to evaluate small-sample operating characteristics and potential trade-offs associated with standard case-control sampling or when case-control sampling is performed within clusters.

  15. Performance evaluation of the Arkray Adams HA-8160 HbA1c analyser.

    PubMed

    Thevarajah, T Malathi; Nani, Nordin; Chew, Y Y

    2008-12-01

    HbA1c measurement is routinely used to predict the long-term outcome of diabetes, and thus plays a fundamental role in diabetes management. The relationship between HbA1c values and long-term diabetic complications was established by the randomised controlled Diabetes Control and Complications Trial (DCCT), which used high performance liquid chromatography (HPLC) as the reference method for the HbA1c assay. To ensure that HbA1c results from a variety of HbA1c assay methods are comparable to the DCCT values, the American Diabetes Association (ADA) recommended that all laboratories use methods certified by the National Glycohemoglobin Standardization Programme (NGSP) with an interassay coefficient of variation (CV) of < 5% (ideally < 3%). The International Federation of Clinical Chemistry (IFCC) working group on HbA1c standardisation has set a CV < 2.5% as the criterion for its reference laboratories. The aim was to evaluate the performance of the Arkray Adams HA-8160 HbA1c analyser, which uses a cation exchange HPLC method, and its correlation with the HbA1c assay on the Cobas Integra 800, an immunoturbidimetric method. For the imprecision study, patient samples and control material at two levels were analysed on the HA-8160 analyser 20 times in a single run (within-run imprecision) and twice a day on five consecutive days (between-run imprecision). For the recovery study, two samples each with high and low values were selected, mixed in ratios of 1:3, 1:1 and 3:1, and analysed on the HA-8160. Sixty samples were analysed by both the Cobas Integra 800 and the HA-8160 for the method comparison study. Ten uraemic samples and ten thalassaemic samples were assayed on both analysers for the interference study. Within-run CVs were 0.6% and 0.7% for medium and high value samples respectively, and 0.6% and 0.7% for low and high level controls respectively. Between-run CVs were 0.5% and 0.4% for medium and high value samples respectively, and 0.5% and 0.6% for low and high level controls respectively.
The mean recovery was 100.1%. A good correlation between the two methods (Adams = 1.00 Cobas - 0.11, r = 0.98) was observed. The Arkray Adams HA-8160 HbA1c analyser performed within the target CV of < 2.5% and showed good correlation with the Cobas Integra 800.

  16. Setting health research priorities using the CHNRI method: VI. Quantitative properties of human collective opinion

    PubMed Central

    Yoshida, Sachiyo; Rudan, Igor; Cousens, Simon

    2016-01-01

    Introduction Crowdsourcing has become an increasingly important tool for addressing many problems, from government elections in democracies and stock market prices to modern online tools such as TripAdvisor or the Internet Movie Database (IMDB). The CHNRI method (the acronym of the Child Health and Nutrition Research Initiative) for setting health research priorities has crowdsourcing as its major component, which it uses to generate, assess and prioritize among many competing health research ideas. Methods We conducted a series of analyses using data from a group of 91 scorers to explore the quantitative properties of their collective opinion. We were interested in the stability of their collective opinion as the sample size increases from 15 to 90. From a pool of 91 scorers who took part in a previous CHNRI exercise, we used sampling with replacement to generate multiple random samples of different sizes. First, for each sample generated, we identified the top 20 ranked research ideas, among the 205 that were proposed and scored, and calculated the concordance with the ranking generated by the 91 original scorers. Second, we used rank correlation coefficients to compare the ranks assigned to all 205 proposed research ideas when samples of different sizes were used. We also analysed the original pool of 91 scorers to look for evidence of scoring variations based on scorers' characteristics. Results The sample sizes investigated ranged from 15 to 90. The concordance for the top 20 scored research ideas increased with sample size up to about 55 experts. At this point, the median level of concordance stabilized at 15/20 top ranked questions (75%), with the interquartile range also generally stable (14–16). There was little further increase in overlap when the sample size increased from 55 to 90.
When the ranking of all 205 ideas was analysed, the rank correlation coefficient increased with sample size, with a median correlation of 0.95 reached at a sample size of 45 experts (median rank correlation coefficient = 0.95; IQR 0.94–0.96). Conclusions Our analyses suggest that the collective opinion of an expert group on a large number of research ideas, expressed through categorical variables (Yes/No/Not sure/Don't know), stabilises relatively quickly in terms of identifying the ideas with most support. In this exercise, a high degree of reproducibility of the identified research priorities was achieved with as few as 45–55 experts. PMID:27350874
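The resample-with-replacement concordance analysis can be sketched as follows, with synthetic scores standing in for the real CHNRI scoresheets:

```python
import numpy as np

def top_k_concordance(scores, panel_size, k=20, n_boot=500, seed=0):
    """Median overlap between the full panel's top-k ideas and the
    top-k from bootstrap panels of `panel_size` scorers drawn with
    replacement.  `scores` is an (n_scorers, n_ideas) array."""
    rng = np.random.default_rng(seed)
    full_top = set(np.argsort(-scores.mean(axis=0))[:k])
    overlaps = []
    for _ in range(n_boot):
        pick = rng.integers(0, scores.shape[0], size=panel_size)
        boot_top = set(np.argsort(-scores[pick].mean(axis=0))[:k])
        overlaps.append(len(full_top & boot_top))
    return float(np.median(overlaps))
```

With scores that mix a per-idea quality signal with scorer noise, the median overlap grows with panel size and flattens, which is the stabilization pattern the paper reports around 45-55 experts.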

  17. Acoustic emission strand burning technique for motor burning rate prediction

    NASA Technical Reports Server (NTRS)

    Christensen, W. N.

    1978-01-01

    An acoustic emission (AE) method is being used to measure the burning rate of solid propellant strands. The method has a precision of 0.5% and excellent burning rate correlation with both subscale and large rocket motors. The AE procedure burns the sample under water and determines the burning rate from the acoustic output. The acoustic signal provides a continuous readout during testing, which allows complete data analysis rather than the start-stop clock wires used by the conventional method. The AE method also eliminates such problems as sample inhibiting and pressure and temperature rise during testing.

  18. Sample size requirements for the design of reliability studies: precision consideration.

    PubMed

    Shieh, Gwowen

    2014-09-01

    In multilevel modeling, the intraclass correlation coefficient based on the one-way random-effects model is routinely employed to measure the reliability or degree of resemblance among group members. To facilitate the advocated practice of reporting confidence intervals in future reliability studies, this article presents exact sample size procedures for precise interval estimation of the intraclass correlation coefficient under various allocation and cost structures. Although the suggested approaches do not admit explicit sample size formulas and require special algorithms for carrying out iterative computations, they are more accurate than the closed-form formulas constructed from large-sample approximations with respect to the expected width and assurance probability criteria. This investigation notes the deficiency of existing methods and expands the sample size methodology for the design of reliability studies that have not previously been discussed in the literature.
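The quantity at the center of these sample size procedures, the one-way random-effects intraclass correlation, is estimated from the ANOVA mean squares; a minimal sketch:

```python
import numpy as np

def icc_oneway(groups):
    """ICC(1) from the one-way random-effects ANOVA:
    (MSB - MSW) / (MSB + (k - 1) * MSW), for an (n_groups, k) array
    of n groups with k members each."""
    data = np.asarray(groups, dtype=float)
    n, k = data.shape
    grand = data.mean()
    # between-group and within-group mean squares
    msb = k * ((data.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    msw = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

The article's contribution is choosing n and k so that a confidence interval around this point estimate attains a target width; the estimator itself is the standard starting point.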

  19. Enumeration of Escherichia coli in swab samples from pre- and post-chilled pork and lamb carcasses using 3M™ Petrifilm™ Select E. coli and Simplate® Coliforms/E. coli.

    PubMed

    Hauge, Sigrun J; Østensvik, Øyvin; Monshaugen, Marte; Røtterud, Ole-Johan; Nesbakken, Truls; Alvseike, Ole

    2017-08-01

    The aim of the study was to compare two analytical methods, 3M Petrifilm™ Select E. coli and SimPlate® Coliforms/E. coli, for the detection and enumeration of E. coli in swab samples from naturally contaminated pork and lamb carcasses collected before and after chilling. Blast chilling was used for the pork carcasses. Swab samples (n=180) were collected from 60 warm and 60 chilled pork carcasses, and from 30 warm and 30 chilled lamb carcasses, and analysed in parallel. The concordance correlation coefficient between Petrifilm and SimPlate was 0.89 for pork and 0.81 for lamb carcasses. However, the correlation was higher for warm carcasses (0.90) than for chilled carcasses (0.72). For chilled lamb carcasses, the correlation was only 0.50, and SimPlate gave slightly higher results than Petrifilm (P=0.09). Slower chilling gave slightly poorer agreement between the methods than blast chilling; nevertheless, both the Petrifilm and SimPlate methodologies are suitable and recommended for use in small abattoir laboratories. Copyright © 2017 Elsevier Ltd. All rights reserved.
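The agreement statistic used here, the concordance correlation coefficient, is Lin's measure, which penalizes both scatter and a shift in location between the two methods' paired counts:

```python
import numpy as np

def concordance_ccc(x, y):
    """Lin's concordance correlation coefficient between two paired
    measurement series: 2*cov / (var_x + var_y + (mean_x - mean_y)^2)."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    sxy = np.mean((x - x.mean()) * (y - y.mean()))
    return 2.0 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)
```

Unlike Pearson's r, a systematic offset between methods lowers the CCC even when the points fall on a perfectly straight line.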

  20. Age estimation based on aspartic acid racemization in human sclera.

    PubMed

    Klumb, Karolin; Matzenauer, Christian; Reckert, Alexandra; Lehmann, Klaus; Ritz-Timme, Stefanie

    2016-01-01

    Age estimation based on racemization of aspartic acid residues (AAR) in permanent proteins has been established in forensic medicine for years. While dentine is the tissue of choice for this molecular method of age estimation, teeth are not always available which leads to the need to identify other suitable tissues. We examined the suitability of total tissue samples of human sclera for the estimation of age at death. Sixty-five samples of scleral tissue were analyzed. The samples were hydrolyzed and after derivatization, the extent of aspartic acid racemization was determined by gas chromatography. The degree of AAR increased with age. In samples from younger individuals, the correlation of age and D-aspartic acid content was closer than in samples from older individuals. The age-dependent racemization in total tissue samples proves that permanent or at least long-living proteins are present in scleral tissue. The correlation of AAR in human sclera and age at death is close enough to serve as basis for age estimation. However, the precision of age estimation by this method is lower than that of age estimation based on the analysis of dentine which is due to molecular inhomogeneities of total tissue samples of sclera. Nevertheless, the approach may serve as a valuable alternative or addition in exceptional cases.
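The age calibration step can be sketched as a linear fit between age and the racemization variable ln((1 + D/L) / (1 - D/L)) that is conventional in AAR dating, inverted to predict age for a new sample. The linear form is an assumption for this sketch, and the coefficients come from whatever calibration data is supplied, not from the paper:

```python
import numpy as np

def fit_aar_age(ages, dl_ratios):
    """Fit a least-squares line between ln((1 + D/L)/(1 - D/L)) and age,
    and return an inverse predictor mapping a new D/L ratio to an
    estimated age.  Assumes calibration D/L ratios in (0, 1)."""
    dl = np.asarray(dl_ratios, dtype=float)
    z = np.log((1.0 + dl) / (1.0 - dl))
    slope, intercept = np.polyfit(np.asarray(ages, dtype=float), z, 1)
    return lambda d: (np.log((1.0 + d) / (1.0 - d)) - intercept) / slope
```

The paper's observation that scleral samples give wider prediction errors than dentine would show up here as larger residual scatter around the fitted line.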

  1. Demosaicing images from colour cameras for digital image correlation

    NASA Astrophysics Data System (ADS)

    Forsey, A.; Gungor, S.

    2016-11-01

    Digital image correlation is not the intended use for consumer colour cameras, but with care they can be successfully employed in such a role. The main obstacle is the sparsely sampled colour data caused by the use of a colour filter array (CFA) to separate the colour channels. It is shown that the method used to convert consumer camera raw files into a monochrome image suitable for digital image correlation (DIC) can have a significant effect on the DIC output. A number of widely available software packages and two in-house methods are evaluated in terms of their performance when used with DIC. Using an in-plane rotating disc to produce a highly constrained displacement field, it was found that the bicubic spline based in-house demosaicing method outperformed the other methods in terms of accuracy and aliasing suppression.
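One simple way to turn an RGGB Bayer mosaic into a monochrome image, shown here only as a baseline for comparison, is 2×2 superpixel binning: it halves resolution but introduces no interpolation artifacts at all. This is not the bicubic-spline demosaicing method the study found best, just a reference point:

```python
import numpy as np

def bayer_to_mono_binned(raw):
    """Collapse an RGGB Bayer mosaic (even-sized 2D array) to
    monochrome by averaging each 2x2 superpixel (R + G + G + B) / 4."""
    raw = np.asarray(raw, dtype=float)
    return (raw[0::2, 0::2] + raw[0::2, 1::2] +
            raw[1::2, 0::2] + raw[1::2, 1::2]) / 4.0
```

For DIC the trade-off the paper examines is exactly this: interpolation-based demosaicing preserves resolution but can alias, while binning avoids aliasing at the cost of spatial detail.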

  2. Model Identification of Integrated ARMA Processes

    ERIC Educational Resources Information Center

    Stadnytska, Tetiana; Braun, Simone; Werner, Joachim

    2008-01-01

    This article evaluates the Smallest Canonical Correlation Method (SCAN) and the Extended Sample Autocorrelation Function (ESACF), automated methods for the Autoregressive Integrated Moving-Average (ARIMA) model selection commonly available in current versions of SAS for Windows, as identification tools for integrated processes. SCAN and ESACF can…

  3. Assessment of exposure to oak wood dust using gallic acid as a chemical marker.

    PubMed

    Carrieri, Mariella; Scapellato, Maria Luisa; Salamon, Fabiola; Gori, Giampaolo; Trevisan, Andrea; Bartolucci, Giovanni Battista

    2016-01-01

    The American Conference of Governmental Industrial Hygienists (ACGIH) has classified oak dust as a human carcinogen (A1), based on increased sinus and nasal cancer rates among exposed workers. The aims of this study were to investigate the use of gallic acid (GA) as a chemical marker of occupational exposure to oak dusts, to develop a high-performance liquid chromatography-diode array detector method to quantify GA and to apply the method in the analysis of oak dust samples collected in several factories. A high-performance liquid chromatography method was developed to detect GA in oak wood dust. The method was tested in the field, and GA was extracted from inhalable oak wood dust collected using the Institute of Occupational Medicine inhalable dust sampler in the air of five woodworking plants where only oak wood is used. A total of 57 samples with dust concentrations in the range of 0.27-11.14 mg/m³ were collected. Five of these samples exceeded the Italian threshold limit value of 5 mg/m³, and 30 samples exceeded the ACGIH TLV of 1 mg/m³. The GA concentrations were in the range 0.02-4.18 µg/m³. The total oak dust sampled was correlated with the GA content with a correlation coefficient (r) of 0.95. The GA in the tannic extracts of oak wood may be considered a good marker for this type of wood, and its concentration in wood dust sampled in the work environment is useful in assessing the true exposure to carcinogenic oak dust.

  4. A novel multi-target regression framework for time-series prediction of drug efficacy.

    PubMed

    Li, Haiqing; Zhang, Wei; Chen, Ying; Guo, Yumeng; Li, Guo-Zheng; Zhu, Xiaoxin

    2017-01-18

    Learning from small samples is a challenging pharmacokinetic problem to which statistical methods can be applied. Pharmacokinetic data are special in that they combine small sample sizes with high dimensionality, which makes it difficult for conventional methods to predict the efficacy of a traditional Chinese medicine (TCM) prescription. The main purpose of our study is to capture the correlation structure in TCM prescription data. Here, a novel method named the Multi-target Regression Framework is proposed to deal with the problem of efficacy prediction. We exploit the correlation between the values of different time sequences and add the targets of previous time steps as features to predict the value at the current time. Several experiments were conducted to test the validity of our method, and the results of leave-one-out cross-validation clearly demonstrate the competitiveness of our framework. Compared with linear regression, artificial neural networks, and partial least squares, support vector regression combined with our framework achieves the best performance and appears to be the most suitable for this task.
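The "previous target as feature" idea can be sketched with plain least squares standing in for the SVR the paper pairs with the framework. The chaining scheme below (one model per time step, trained on the observed previous target and rolled forward on predictions) is an illustrative reading of the framework:

```python
import numpy as np

def fit_lagged_chain(X, Y):
    """One least-squares model per time step t >= 1, using the static
    features X plus the previous step's target as an extra feature.
    X: (n_samples, n_features); Y: (n_samples, n_steps)."""
    models = []
    for t in range(1, Y.shape[1]):
        Z = np.column_stack([X, Y[:, t - 1], np.ones(len(X))])
        coef, *_ = np.linalg.lstsq(Z, Y[:, t], rcond=None)
        models.append(coef)
    return models

def predict_lagged_chain(models, X, y0):
    """Roll the chain forward from the observed first-step target y0."""
    preds = [np.asarray(y0, dtype=float)]
    for coef in models:
        Z = np.column_stack([X, preds[-1], np.ones(len(X))])
        preds.append(Z @ coef)
    return np.column_stack(preds)
```

On a noise-free linear time series the chain recovers the dynamics exactly; with real pharmacokinetic data, errors compound along the chain, which is why the choice of base regressor (SVR in the paper) matters.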

  5. A novel multi-target regression framework for time-series prediction of drug efficacy

    PubMed Central

    Li, Haiqing; Zhang, Wei; Chen, Ying; Guo, Yumeng; Li, Guo-Zheng; Zhu, Xiaoxin

    2017-01-01

    Learning from small samples is a challenging pharmacokinetic problem to which statistical methods can be applied. Pharmacokinetic data are special in that they combine small sample sizes with high dimensionality, which makes it difficult for conventional methods to predict the efficacy of a traditional Chinese medicine (TCM) prescription. The main purpose of our study is to capture the correlation structure in TCM prescription data. Here, a novel method named the Multi-target Regression Framework is proposed to deal with the problem of efficacy prediction. We exploit the correlation between the values of different time sequences and add the targets of previous time steps as features to predict the value at the current time. Several experiments were conducted to test the validity of our method, and the results of leave-one-out cross-validation clearly demonstrate the competitiveness of our framework. Compared with linear regression, artificial neural networks, and partial least squares, support vector regression combined with our framework achieves the best performance and appears to be the most suitable for this task. PMID:28098186

  6. Image analysis of pubic bone for age estimation in a computed tomography sample.

    PubMed

    López-Alcaraz, Manuel; González, Pedro Manuel Garamendi; Aguilera, Inmaculada Alemán; López, Miguel Botella

    2015-03-01

    Radiology has demonstrated great utility for age estimation, but most studies are based on metrical and morphological methods for building an identification profile. A simple image analysis-based method is presented, aimed at correlating bony tissue ultrastructure with several variables obtained from the grey-level histogram (GLH) of computed tomography (CT) sagittal sections of the pubic symphysis surface and the pubic body, and relating them to age. The CT sample consisted of 169 hospital Digital Imaging and Communications in Medicine (DICOM) archives of known sex and age. The calculated multiple regression models showed a maximum R² of 0.533 for females and 0.726 for males, with high intra- and inter-observer agreement. The suggested method is considered useful not only for building an identification profile during virtopsy, but also for further studies attaching a quantitative correlate to tissue ultrastructure characteristics, without complex and expensive methods beyond image analysis.

  7. Evaluation of Fiber Reinforced Cement Using Digital Image Correlation

    PubMed Central

    Melenka, Garrett W.; Carey, Jason P.

    2015-01-01

    The effect of short fiber reinforcements on the mechanical properties of cement has been examined using a combined splitting tensile and digital image correlation (DIC) measurement method. Three short fiber reinforcement materials were used in this study: fiberglass, nylon, and polypropylene. The method outlined provides a simple experimental setup that can be used to evaluate the ultimate tensile strength of brittle materials and to measure the full-field strain across the surface of the cylindrical splitting tensile specimen. Since DIC is a contact-free measurement technique, the method can also be used to assess sample failure. PMID:26039590

  8. Out-of-plane ultrasonic velocity measurement

    DOEpatents

    Hall, M.S.; Brodeur, P.H.; Jackson, T.G.

    1998-07-14

    A method for improving the accuracy of measuring the velocity and time of flight of ultrasonic signals through moving web-like materials such as paper, paperboard and the like, includes a pair of ultrasonic transducers disposed on opposing sides of a moving web-like material. In order to provide acoustical coupling between the transducers and the web-like material, the transducers are disposed in fluid-filled wheels. Errors due to variances in the wheel thicknesses about their circumference which can affect time of flight measurements and ultimately the mechanical property being tested are compensated by averaging the ultrasonic signals for a predetermined number of revolutions. The invention further includes a method for compensating for errors resulting from the digitization of the ultrasonic signals. More particularly, the invention includes a method for eliminating errors known as trigger jitter inherent with digitizing oscilloscopes used to digitize the signals for manipulation by a digital computer. In particular, rather than cross-correlate ultrasonic signals taken during different sample periods as is known in the art in order to determine the time of flight of the ultrasonic signal through the moving web, a pulse echo box is provided to enable cross-correlation of predetermined transmitted ultrasonic signals with predetermined reflected ultrasonic or echo signals during the sample period. By cross-correlating ultrasonic signals in the same sample period, the error associated with trigger jitter is eliminated. 20 figs.
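The same-sample-period cross-correlation at the heart of the patent's trigger-jitter elimination can be sketched as a time-of-flight estimate from the correlation peak between the transmitted and received signals:

```python
import numpy as np

def time_of_flight(tx, rx, fs):
    """Delay (in seconds) of the received pulse `rx` relative to the
    transmitted pulse `tx`, from the peak of their full cross-correlation,
    for signals sampled at rate `fs` in the same sample period."""
    corr = np.correlate(rx, tx, mode='full')
    lag = int(np.argmax(corr)) - (len(tx) - 1)   # shift of rx vs tx, samples
    return lag / fs
```

Because both signals are digitized in the same acquisition, any trigger jitter offsets them identically and cancels out of the correlation lag, which is the point the patent makes about avoiding cross-correlation across different sample periods.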

  9. Out-of-plane ultrasonic velocity measurement

    DOEpatents

    Hall, Maclin S.; Brodeur, Pierre H.; Jackson, Theodore G.

    1998-01-01

    A method for improving the accuracy of measuring the velocity and time of flight of ultrasonic signals through moving web-like materials such as paper, paperboard and the like, includes a pair of ultrasonic transducers disposed on opposing sides of a moving web-like material. In order to provide acoustical coupling between the transducers and the web-like material, the transducers are disposed in fluid-filled wheels. Errors due to variances in the wheel thicknesses about their circumference which can affect time of flight measurements and ultimately the mechanical property being tested are compensated by averaging the ultrasonic signals for a predetermined number of revolutions. The invention further includes a method for compensating for errors resulting from the digitization of the ultrasonic signals. More particularly, the invention includes a method for eliminating errors known as trigger jitter inherent with digitizing oscilloscopes used to digitize the signals for manipulation by a digital computer. In particular, rather than cross-correlate ultrasonic signals taken during different sample periods as is known in the art in order to determine the time of flight of the ultrasonic signal through the moving web, a pulse echo box is provided to enable cross-correlation of predetermined transmitted ultrasonic signals with predetermined reflected ultrasonic or echo signals during the sample period. By cross-correlating ultrasonic signals in the same sample period, the error associated with trigger jitter is eliminated.

  10. The finite element method for micro-scale modeling of ultrasound propagation in cancellous bone.

    PubMed

    Vafaeian, B; El-Rich, M; El-Bialy, T; Adeeb, S

    2014-08-01

    Quantitative ultrasound for bone assessment is based on correlations between ultrasonic parameters and the mechanical and physical properties of cancellous bone. Elucidating these correlations requires an understanding of the physics of ultrasound propagation in cancellous bone. Micro-scale modeling of ultrasound propagation in cancellous bone using the finite-difference time-domain (FDTD) method has so far been one of the approaches used for this purpose. However, the FDTD method has two disadvantages: staircase sampling of cancellous bone by finite-difference grids generates wave artifacts at the solid-fluid interface inside the bone, and the method cannot explicitly satisfy the required perfect-slip conditions at that interface. To overcome these disadvantages, the finite element method (FEM) is proposed in this study. Three-dimensional finite element models of six water-saturated cancellous bone samples with different bone volumes were created. The speed of sound (SOS) and broadband ultrasound attenuation (BUA) were calculated from finite element simulations of ultrasound propagation in each sample. Comparison of the results with other experimental and simulation studies demonstrated the capability of the FEM for micro-scale modeling of ultrasound in water-saturated cancellous bone. Copyright © 2014 Elsevier B.V. All rights reserved.

  11. Comparison of dietary histories and seven-day food records in a nutritional assessment of older adults.

    PubMed

    Mahalko, J R; Johnson, L K; Gallagher, S K; Milne, D B

    1985-09-01

    Dietary histories and seven-day food records were obtained for 54 apparently healthy older adults. The two dietary methods correlated for most nutrients, but mean differences were significant for several nutrients. Intakes below recommended levels occurred most frequently for energy, calcium, and zinc. Biochemical evidence of thiamin and riboflavin deficiency was unexpectedly frequent. Using food records, dietary iron correlated with serum ferritin. Using dietary histories, dietary protein correlated with serum albumin, and dietary zinc correlated with plasma zinc. Using either dietary method, plasma ascorbate was associated positively with both dietary ascorbate and ascorbate supplements, and negatively with cigarette smoking. Use of thiamin- or folate-containing supplements was associated with improved biochemical status for the respective vitamin. Though neither dietary histories nor food records give precise intake data for individuals, either method may be useful for epidemiologic studies with appropriate sample sizes.

  12. Analysis of coke beverages by total-reflection X-ray fluorescence

    NASA Astrophysics Data System (ADS)

    Fernández-Ruiz, Ramón; von Bohlen, Alex; Friedrich K, E. Josue; Redrejo, M. J.

    2018-07-01

    The influence of the organic content, the sample preparation process, and the morphology of the depositions of two types of Coke beverage, traditional and light, has been investigated by means of total-reflection X-ray fluorescence (TXRF) spectrometry. Strong distortions of the nominal concentration values, up to 128% for P, were detected in the analysis of traditional Coke by different preparation methods. These differences correlate with the X-ray absorption edge energies of the analyzed elements and are more pronounced for the lighter elements. The influence of the organic content (mainly sugar) was evaluated by comparing the TXRF results for traditional and light Coke. Three sample preparation methods were evaluated: direct TXRF analysis of the sample with only the internal standard added, TXRF analysis after open-vessel acid digestion, and TXRF analysis after high-pressure, high-temperature microwave-assisted acid digestion. Strong correlations were detected between the quantitative results, the preparation methods, and the X-ray absorption edge energies of the quantified elements: the concentration differences between preparation methods decay with increasing absorption edge energy, and this behavior was modeled with exponential decay functions, yielding R² correlation coefficients from 0.989 to 0.992. The strong absorption effect observed, and possibly a matrix effect, can be explained by the inherently high organic content of the evaluated samples and by the morphology and average thickness of the TXRF depositions. The main conclusion of this work is that TXRF analysis of light elements in samples with high organic content, such as medical, biological, food, or other organic matrices, should be undertaken with care. Direct analysis is not recommended; a prior microwave-assisted acid digestion, or a similar treatment, is mandatory for correct elemental quantification by TXRF.

  13. Comparison of Relative Bias, Precision, and Efficiency of Sampling Methods for Natural Enemies of Soybean Aphid (Hemiptera: Aphididae).

    PubMed

    Bannerman, J A; Costamagna, A C; McCornack, B P; Ragsdale, D W

    2015-06-01

    Generalist natural enemies play an important role in controlling soybean aphid, Aphis glycines (Hemiptera: Aphididae), in North America. Several sampling methods are used to monitor natural enemy populations in soybean, but there has been little work investigating their relative bias, precision, and efficiency. We compare five sampling methods: quadrats, whole-plant counts, sweep-netting, walking transects, and yellow sticky cards, to determine the most practical methods for sampling the three most prominent species: Harmonia axyridis (Pallas), Coccinella septempunctata L. (Coleoptera: Coccinellidae), and Orius insidiosus (Say) (Hemiptera: Anthocoridae). We show an important time-by-sampling-method interaction, indicated by diverging community similarities within and between sampling methods as the growing season progressed. Similarly, correlations between sampling methods for the three most abundant species over multiple time periods indicated differences in relative bias between sampling methods and suggest that bias is not consistent throughout the growing season, particularly for sticky cards and whole-plant samples. Furthermore, we show that sticky cards produce strongly biased capture rates relative to the other four sampling methods. Precision and efficiency differed between sampling methods: sticky cards produced the most precise (but highly biased) results for adult natural enemies, while walking transects and whole-plant counts were the most efficient methods for detecting coccinellids and O. insidiosus, respectively. Based on bias, precision, and efficiency considerations, the most practical sampling methods for monitoring in soybean are walking transects for coccinellid detection and whole-plant counts for detection of small predators like O. insidiosus. Sweep-netting and quadrat samples are also useful for some applications when efficiency is not paramount. © The Authors 2015. Published by Oxford University Press on behalf of the Entomological Society of America.

  14. Use of Complementary Medicine in Older Americans: Results from the Health and Retirement Study

    ERIC Educational Resources Information Center

    Ness, Jose; Cirillo, Dominic J.; Weir, David R.; Nisly, Nicole L.; Wallace, Robert B.

    2005-01-01

    Purpose: The correlates of complementary and alternative medicine (CAM) utilization among elders have not been fully investigated. This study was designed to identify such correlates in a large sample of older adults, thus generating new data relevant to consumer education, medical training, and health practice and policy. Design and Methods: A…

  15. Comparability of river suspended-sediment sampling and laboratory analysis methods

    USGS Publications Warehouse

    Groten, Joel T.; Johnson, Gregory D.

    2018-03-06

    Accurate measurements of suspended sediment, a leading water-quality impairment in many Minnesota rivers, are important for managing and protecting water resources; however, water-quality standards for suspended sediment in Minnesota are based on grab field sampling and total suspended solids (TSS) laboratory analysis methods that have underrepresented concentrations of suspended sediment in rivers compared to U.S. Geological Survey equal-width-increment or equal-discharge-increment (EWDI) field sampling and suspended sediment concentration (SSC) laboratory analysis methods. Because of this underrepresentation, the U.S. Geological Survey, in collaboration with the Minnesota Pollution Control Agency, collected concurrent grab and EWDI samples at eight sites to compare results obtained using different combinations of field sampling and laboratory analysis methods. Study results determined that grab field sampling and TSS laboratory analysis results were biased substantially low compared to EWDI sampling and SSC laboratory analysis results, respectively. Differences in both field sampling and laboratory analysis methods caused grab and TSS results to be biased substantially low, with the laboratory analysis methods contributing slightly more to the difference than the field sampling methods. Sand-sized particles had a strong effect on the comparability of the methods, indicating that grab field sampling and TSS laboratory analysis fail to capture most of the sand being transported by the stream; the difference is smaller between grab samples analyzed for TSS and the fines fraction of SSC. Even though differences are present, the strong correlations between SSC and TSS concentrations provide the opportunity to develop site-specific relations that address transport processes not captured by grab field sampling and TSS laboratory analysis methods.
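The site-specific relations mentioned above can be sketched as an ordinary least-squares fit between paired TSS and SSC concentrations; the numbers below are purely illustrative, not the study's measurements:

```python
import numpy as np

# Hypothetical paired concentrations (mg/L) from concurrent grab/TSS and
# EWDI/SSC sampling at one site; the values are illustrative only.
tss = np.array([40.0, 55.0, 80.0, 120.0, 150.0, 210.0])
ssc = np.array([62.0, 81.0, 118.0, 170.0, 220.0, 305.0])

# Site-specific relation SSC = a * TSS + b by ordinary least squares.
a, b = np.polyfit(tss, ssc, 1)

# A strong correlation is what makes such a relation usable in practice.
r = np.corrcoef(tss, ssc)[0, 1]

predicted = a * 100.0 + b  # estimated SSC for a grab sample with TSS = 100 mg/L
```

A slope above 1 reflects the low bias of TSS relative to SSC that the relation is meant to correct.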

  16. [Determination of ethylene glycol in biological fluids--propylene glycol interferences].

    PubMed

    Gomółka, Ewa; Cudzich-Czop, Sylwia; Sulka, Adrianna

    2013-01-01

    Many laboratories in Poland do not use gas chromatography (GC) for the determination of ethylene glycol (EG) and methanol in the blood of poisoned patients; instead, they use non-specific spectrophotometric methods. One interfering substance is propylene glycol (PG), a compound present in many medical and cosmetic products: drops, air fresheners, disinfectants, electronic cigarettes, and others. In the Laboratory of Analytical Toxicology and Drug Monitoring in Krakow, EG is determined by a GC method that can resolve EG and PG in biological samples. In 2011-2012, several serum samples from diagnosed patients contained PG at concentrations ranging from several mg/dL to more than 100 mg/dL. The aim of the study was to estimate PG interference in serum EG determination by spectrophotometry. Serum samples containing PG and EG were analyzed by both GC and spectrophotometry. Spectrophotometric results for serum samples spiked with PG but containing no EG were falsely positive, and the false readings correlated with the PG concentration in the samples. The calculated cross-reactivity of PG in the method was 42%. Positive EG results obtained by spectrophotometry must be confirmed by the reference GC method, and spectrophotometry should not be used for the diagnosis and monitoring of patients poisoned with EG.
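The cross-reactivity figure corresponds to the ratio of the apparent EG reading to the known PG level in EG-free samples; a minimal sketch with illustrative numbers chosen to reproduce the reported 42%:

```python
# Cross-reactivity of an interferent: apparent analyte reading divided by the
# known interferent concentration in analyte-free samples. The numbers are
# illustrative, chosen to reproduce the 42% figure reported above.
pg_spiked_mg_dl = [20.0, 50.0, 100.0]     # PG added to EG-free serum
apparent_eg_mg_dl = [8.4, 21.0, 42.0]     # "false positive" EG readings

ratios = [eg / pg for eg, pg in zip(apparent_eg_mg_dl, pg_spiked_mg_dl)]
cross_reactivity_pct = 100.0 * sum(ratios) / len(ratios)
```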

  17. Relationship between dental calcification and skeletal maturation in a Peruvian sample.

    PubMed

    Lecca-Morales, Rocío M; Carruitero, Marcos J

    2017-01-01

    The objective of the study was to determine the relationship between dental calcification stages and skeletal maturation in a Peruvian sample. Panoramic, cephalometric, and carpal radiographs of 78 patients (34 girls and 44 boys) between 7 and 17 years old (9.90 ± 2.5 years) were evaluated. Stages of tooth calcification of the mandibular canine, first premolar, second premolar, and second molar were assessed, along with skeletal maturation by both a hand-wrist and a cervical vertebrae method. The relationships between the stages were assessed using Spearman's correlation coefficient. Additionally, the associations of mandibular and pubertal growth peak stages with tooth calcification were evaluated by Fisher's exact test. All teeth showed positive and statistically significant correlations; the highest correlations were between the mandibular second molar calcification stages and the hand-wrist maturation stages (r = 0.758, p < 0.001) and the cervical vertebrae maturation stages (r = 0.605, p < 0.001). The pubertal growth spurt occurred at calcification stage G of the second mandibular molar, and the mandibular growth peak at stage F. There was a positive relationship between dental calcification stages and skeletal maturation stages by both hand-wrist and cervical vertebrae methods in the sample studied, with the second mandibular molar showing the highest positive correlation with both.
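Spearman's coefficient is well suited to ordinal stage codes like these; a minimal sketch with hypothetical stage assignments (not the study's data):

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical ordinal codes for ten patients: calcification stage of one
# tooth (A-H coded 1-8) vs hand-wrist maturation stage (coded 1-9).
calcification = np.array([2, 3, 3, 4, 5, 5, 6, 7, 7, 8])
hand_wrist = np.array([1, 2, 3, 3, 5, 6, 6, 8, 7, 9])

# Spearman correlation ranks both variables first, so it measures the
# monotonic association appropriate for ordered stages with ties.
rho, p_value = spearmanr(calcification, hand_wrist)
```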

  18. A study of correlations between crude oil spot and futures markets: A rolling sample test

    NASA Astrophysics Data System (ADS)

    Liu, Li; Wan, Jieqiu

    2011-10-01

    In this article, we investigate the asymmetries of exceedance correlations and cross-correlations between West Texas Intermediate (WTI) spot and futures markets. First, employing the test statistic proposed by Hong et al. [Asymmetries in stock returns: statistical tests and economic evaluation, Review of Financial Studies 20 (2007) 1547-1581], we find that the exceedance correlations were symmetric overall. However, results from rolling windows show that occasional events could induce significant asymmetries in the exceedance correlations. Second, employing the test statistic proposed by Podobnik et al. [Quantifying cross-correlations using local and global detrending approaches, European Physical Journal B 71 (2009) 243-250], we find that the cross-correlations were significant even at large lag orders. Using the detrended cross-correlation analysis proposed by Podobnik and Stanley [Detrended cross-correlation analysis: a new method for analyzing two nonstationary time series, Physical Review Letters 100 (2008) 084102], we find that the cross-correlations were weakly persistent and were stronger between the spot price and futures contracts with longer maturities. Our results from the rolling sample test also show apparent effects of exogenous events. We conclude with a discussion of this evidence.
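A rolling-window correlation of the kind underlying the rolling sample test can be sketched as follows, using simulated returns with a common factor as a stand-in for WTI spot and futures data:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Simulated daily returns sharing a common factor (illustrative only).
n = 500
common = rng.normal(0, 1, n)
spot = pd.Series(common + 0.5 * rng.normal(0, 1, n))
futures = pd.Series(common + 0.5 * rng.normal(0, 1, n))

# A 60-day rolling correlation gives local estimates that can reveal
# episodes of strengthening or weakening dependence, unlike the single
# full-sample figure.
rolling_corr = spot.rolling(60).corr(futures)
full_sample_corr = spot.corr(futures)
```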

  19. A convergence algorithm for correlation of breech face images based on the congruent matching cells (CMC) method.

    PubMed

    Chen, Zhe; Song, John; Chu, Wei; Soons, Johannes A; Zhao, Xuezeng

    2017-11-01

    The Congruent Matching Cells (CMC) method was invented at the National Institute of Standards and Technology (NIST) for accurate firearm evidence identification and error rate estimation. The CMC method is based on the principle of discretization: the toolmark image of the reference sample is divided into correlation cells, and each cell is registered to the cell-sized area of the compared image that has maximum surface topography similarity. For each resulting cell pair, one parameter quantifies the similarity of the cell surface topography and three parameters quantify the pattern congruency of the registration position and orientation. An identification (declared match) requires a significant number of CMCs, that is, cell pairs that meet both the similarity and the pattern congruency requirements. The use of cell correlations reduces the effects of "invalid regions" in the compared image pairs and increases the correlation accuracy. The identification accuracy of the CMC method can be further improved by considering a feature named "convergence": the tendency of the x-y registration positions of the correlated cell pairs to converge at the correct registration angle when comparing same-source samples at different relative orientations. In this paper, the difference in the convergence feature between known matching (KM) and known non-matching (KNM) image pairs is characterized, and an improved algorithm based on it is developed for breech face image correlations using the CMC method. Its advantage is demonstrated by comparison with three existing CMC algorithms on four datasets covering three different brands of consecutively manufactured pistol slides, with significant differences in the distribution overlap of cell-pair topography similarity for KM and KNM image pairs. For the same CMC threshold values, the convergence algorithm demonstrates noticeably improved results, reducing the number of false-positive or false-negative CMCs in a comparison. Published by Elsevier B.V.
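A toy sketch of the pattern-congruency count at the heart of the CMC idea (not NIST's implementation): cell pairs whose registration parameters agree with a consensus within assumed thresholds are counted as CMCs, which suppresses cells that landed on invalid regions:

```python
import numpy as np

# Each row: (x, y, theta) registration of one cell pair; a cell counts as a
# CMC when its registration agrees with the consensus within set thresholds.
# All values and thresholds below are invented for illustration.
registrations = np.array([
    [10.1, 5.2, 30.0],
    [10.3, 5.0, 29.5],
    [9.8,  5.1, 30.5],
    [2.0, 14.0, 75.0],   # outlier: cell landed on an "invalid region"
    [10.0, 4.9, 30.2],
])

consensus = np.median(registrations, axis=0)   # robust consensus registration
tol = np.array([1.0, 1.0, 3.0])                # x, y, theta thresholds (assumed)

congruent = np.all(np.abs(registrations - consensus) <= tol, axis=1)
cmc_count = int(congruent.sum())
```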

  20. Accurate EPR radiosensitivity calibration using small sample masses

    NASA Astrophysics Data System (ADS)

    Hayes, R. B.; Haskell, E. H.; Barrus, J. K.; Kenner, G. H.; Romanyukha, A. A.

    2000-03-01

    We demonstrate a procedure in retrospective EPR dosimetry that allows virtually nondestructive sample evaluation with respect to sample irradiation. For this procedure to work, corrections must be made for cavity response characteristics when using variable-mass samples. Likewise, methods are employed to correct for empty-tube signals, sample anisotropy, and frequency drift, while considering the effects of dose distribution optimization. The method's utility is demonstrated by comparing sample portions evaluated with both the described methodology and standard full-sample additive dose techniques. The samples used in this study are tooth enamel from teeth removed during routine dental care. We show that, with all the recommended corrections, very small masses can be accurately measured and correlated with measurements of other samples. Some issues relating to dose distribution optimization are also addressed.

  1. Carbon isotope analyses of n-alkanes released from rapid pyrolysis of oil asphaltenes in a closed system.

    PubMed

    Chen, Shasha; Jia, Wanglu; Peng, Ping'an

    2016-08-15

    Carbon isotope analysis of n-alkanes produced by the pyrolysis of oil asphaltenes is a useful tool for characterizing and correlating oil sources. Low-temperature (320-350°C) pyrolysis lasting 2-3 days is usually employed in such studies, so a rapid pyrolysis method is needed to shorten the pretreatment step of the isotope analysis. One asphaltene sample was pyrolyzed in sealed ampoules for different durations (60-120 s) at 610°C. The δ13C values of the pyrolysates were determined by gas chromatography/combustion/isotope ratio mass spectrometry (GC/C/IRMS). The molecular characteristics and isotopic signatures of the pyrolysates were investigated for the different pyrolysis durations and compared with results obtained using the normal pyrolysis method to determine the optimum time interval. Several asphaltene samples derived from various sources were then analyzed using this method. The asphaltene pyrolysates of each sample were similar to those obtained by flash pyrolysis of similar samples; however, the molecular characteristics of pyrolysates obtained over durations longer than 90 s showed intensified secondary reactions. The carbon isotopic signatures of individual compounds obtained at pyrolysis durations of less than 90 s were consistent with those obtained from typical low-temperature pyrolysis. Asphaltene samples from various sources released n-alkanes with distinct carbon isotopic signatures. This easy-to-use pyrolysis method, combined with a subsequent purification procedure, can be used to rapidly obtain clean n-alkanes from oil asphaltenes, and the distinct carbon isotopic signatures of the released n-alkanes demonstrate the potential application of the method in 'oil-oil' and 'oil-source' correlations. Copyright © 2016 John Wiley & Sons, Ltd.

  2. [Electroencephalogram Feature Selection Based on Correlation Coefficient Analysis].

    PubMed

    Zhou, Jinzhi; Tang, Xiaofang

    2015-08-01

    In order to improve the accuracy of classification with small amounts of motor imagery training data in the development of brain-computer interface (BCI) systems, we propose an analysis method that automatically selects characteristic parameters based on correlation coefficient analysis. Using the five subject datasets of dataset IVa from the 2005 BCI Competition, we applied short-time Fourier transform (STFT) and correlation coefficient calculation to reduce the dimensionality of the raw electroencephalogram data, then performed feature extraction based on common spatial patterns (CSP) and classification by linear discriminant analysis (LDA). Simulation results showed that the average classification accuracy was higher with the correlation coefficient feature selection method than without it. Compared with a support vector machine (SVM) feature optimization algorithm, correlation coefficient analysis selects better parameters and improves classification accuracy.
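Correlation-based feature selection of the kind described can be sketched as ranking features by their absolute Pearson correlation with the class label; the data below are synthetic stand-ins for STFT-derived EEG features:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: 60 trials x 6 features; features 0 and 3 carry class information,
# the rest are noise (synthetic stand-in for STFT-derived EEG features).
labels = np.repeat([0, 1], 30)
X = rng.normal(0.0, 1.0, (60, 6))
X[:, 0] = X[:, 0] + 2.0 * labels
X[:, 3] = X[:, 3] - 2.0 * labels

# Rank features by |Pearson correlation| with the class label; keep the top 2.
scores = np.array([abs(np.corrcoef(X[:, j], labels)[0, 1]) for j in range(6)])
selected = np.argsort(scores)[::-1][:2]
```

The surviving features would then feed a downstream extractor and classifier (CSP and LDA in the study).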

  3. Comparative Performance of Reagents and Platforms for Quantitation of Cytomegalovirus DNA by Digital PCR

    PubMed Central

    Gu, Z.; Sam, S. S.; Sun, Y.; Tang, L.; Pounds, S.; Caliendo, A. M.

    2016-01-01

    A potential benefit of digital PCR is a reduction in result variability across assays and platforms. Three sets of PCR reagents were tested on two digital PCR systems (Bio-Rad and RainDance), using three different sets of PCR reagents for quantitation of cytomegalovirus (CMV). Both commercial quantitative viral standards and 16 patient samples (n = 16) were tested. Quantitative accuracy (compared to nominal values) and variability were determined based on viral standard testing results. Quantitative correlation and variability were assessed with pairwise comparisons across all reagent-platform combinations for clinical plasma sample results. The three reagent sets, when used to assay quantitative standards on the Bio-Rad system, all showed a high degree of accuracy, low variability, and close agreement with one another. When used on the RainDance system, one of the three reagent sets appeared to have a much better correlation to nominal values than did the other two. Quantitative results for patient samples showed good correlation in most pairwise comparisons, with some showing poorer correlations when testing samples with low viral loads. Digital PCR is a robust method for measuring CMV viral load. Some degree of result variation may be seen, depending on platform and reagents used; this variation appears to be greater in samples with low viral load values. PMID:27535685
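Pairwise agreement between viral-load methods is conventionally assessed on log10-transformed values; a minimal sketch with hypothetical copies/mL results for two platform-reagent combinations:

```python
import math

# Hypothetical viral loads (copies/mL) for the same five plasma samples
# measured by two platform-reagent combinations; values are illustrative.
combo_a = [3.2e3, 1.5e4, 8.0e2, 6.3e5, 2.1e3]
combo_b = [2.8e3, 1.7e4, 9.5e2, 5.0e5, 2.6e3]

log_a = [math.log10(v) for v in combo_a]
log_b = [math.log10(v) for v in combo_b]

# Mean absolute difference in log10 units; staying within about 0.5 log10
# is a commonly used acceptance criterion for viral-load agreement.
mad = sum(abs(a - b) for a, b in zip(log_a, log_b)) / len(log_a)
```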

  4. [Fast determination of induction period of motor gasoline using Fourier transform attenuated total reflection infrared spectroscopy].

    PubMed

    Liu, Ya-Fei; Yuan, Hong-Fu; Song, Chun-Feng; Xie, Jin-Chun; Li, Xiao-Yu; Yan, De-Lin

    2014-11-01

    A new method is proposed for the fast determination of the induction period of gasoline using Fourier transform attenuated total reflection infrared spectroscopy (ATR-FTIR). A dedicated analysis system with spectral measurement, data processing, display, and storage functions was designed and integrated from a Fourier transform infrared spectrometer module and chemometric software. The sample presentation accessory, which offers a constant optical path and convenient sample injection and cleaning, consists of a nine-reflection attenuated total reflectance (ATR) crystal of zinc selenide (ZnSe) coated with a diamond film and a stainless steel lid with a sealing device. The influence of the number of spectral scans and of repeated sample loadings on the spectral signal-to-noise ratio was studied; the optima were 15 scans and 4 sample loadings. Sixty-four different gasoline samples were collected from the Beijing-Tianjin area, and their induction period values were determined as reference data by standard method GB/T 8018-87. The infrared spectra of these samples were collected under the operating conditions above using the dedicated fast analysis system. Spectra were pretreated with mean centering and a first derivative to reduce the influence of spectral noise and baseline shift. A PLS calibration model for the induction period was established by correlating the known induction period values of the samples with their spectra. The correlation coefficient (R²), standard error of calibration (SEC), and standard error of prediction (SEP) of the model are 0.897, 68.3 minutes, and 91.9 minutes, respectively. The relative deviation of the model for induction period prediction is less than 5%, which meets the repeatability tolerance of the GB standard method. The new method is simple and fast, taking no more than 3 minutes per sample, and is therefore feasible for the fast determination of the gasoline induction period and valuable for fuel quality evaluation.

  5. Optimizing disk registration algorithms for nanobeam electron diffraction strain mapping

    DOE PAGES

    Pekin, Thomas C.; Gammer, Christoph; Ciston, Jim; ...

    2017-01-28

    Scanning nanobeam electron diffraction strain mapping is a technique in which the positions of diffracted disks, sampled at the nanoscale over a crystalline sample, are used to reconstruct a strain map over a large area. It is important that the disk positions are measured accurately, as their positions relative to a reference are used directly to calculate strain. In this study, we compare several correlation methods using both simulated and experimental data in order to probe their susceptibility to measurement error caused by non-uniform diffracted disk illumination structure. We found that prefiltering the diffraction patterns with a Sobel filter before performing cross-correlation, or performing a square-root magnitude weighted phase correlation, returned the best results when inner disk structure was present. We tested these methods on simulated datasets and on experimental data from unstrained silicon as well as a twin grain boundary in 304 stainless steel.
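The Sobel-prefiltered cross-correlation strategy can be sketched on a toy disk image: filter both images to emphasize the disk edge, cross-correlate via FFT, and read the shift off the correlation peak:

```python
import numpy as np
from scipy import ndimage

def disk(shape, center, radius):
    # Binary image of a filled disk, standing in for a diffracted disk.
    yy, xx = np.indices(shape)
    return ((yy - center[0]) ** 2 + (xx - center[1]) ** 2 < radius ** 2).astype(float)

ref = disk((64, 64), (32, 32), 10)       # reference disk position
img = disk((64, 64), (35, 30), 10)       # same disk shifted by (+3, -2)

def sobel_mag(a):
    # Gradient magnitude: emphasizes the disk edge over interior structure.
    return np.hypot(ndimage.sobel(a, axis=0), ndimage.sobel(a, axis=1))

# FFT-based cross-correlation of the filtered images; the peak location
# gives the shift of img relative to ref (wrapped into [-32, 32)).
xc = np.fft.ifft2(np.fft.fft2(sobel_mag(ref)).conj() * np.fft.fft2(sobel_mag(img))).real
dy, dx = np.unravel_index(np.argmax(xc), xc.shape)
shift = ((dy + 32) % 64 - 32, (dx + 32) % 64 - 32)
```

Edge-emphasizing prefilters help precisely because the non-uniform interior structure of real diffracted disks otherwise biases the correlation peak.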

  6. PCAN: Probabilistic Correlation Analysis of Two Non-normal Data Sets

    PubMed Central

    Zoh, Roger S.; Mallick, Bani; Ivanov, Ivan; Baladandayuthapani, Veera; Manyam, Ganiraju; Chapkin, Robert S.; Lampe, Johanna W.; Carroll, Raymond J.

    2016-01-01

    Summary: Most cancer research now involves one or more assays profiling various biological molecules, e.g., messenger RNA and micro RNA, in samples collected on the same individuals. The main interest with these genomic data sets lies in the identification of a subset of features that are active in explaining the dependence between platforms. To quantify the strength of the dependency between two variables, correlation is often preferred. However, expression data obtained from next-generation sequencing platforms are integer with very low counts for some important features. In this case, the sample Pearson correlation is not a valid estimate of the true correlation matrix, because the sample correlation estimate between two features/variables with low counts will often be close to zero, even when the natural parameters of the Poisson distribution are, in actuality, highly correlated. We propose a model-based approach to correlation estimation between two non-normal data sets, via a method we call Probabilistic Correlations ANalysis, or PCAN. PCAN takes into consideration the distributional assumption about both data sets and suggests that correlations estimated at the model natural parameter level are more appropriate than correlations estimated directly on the observed data. We demonstrate through a simulation study that PCAN outperforms other standard approaches in estimating the true correlation between the natural parameters. We then apply PCAN to the joint analysis of a microRNA (miRNA) and a messenger RNA (mRNA) expression data set from a squamous cell lung cancer study, finding a large number of negative correlation pairs when compared to the standard approaches. PMID:27037601

  7. PCAN: Probabilistic correlation analysis of two non-normal data sets.

    PubMed

    Zoh, Roger S; Mallick, Bani; Ivanov, Ivan; Baladandayuthapani, Veera; Manyam, Ganiraju; Chapkin, Robert S; Lampe, Johanna W; Carroll, Raymond J

    2016-12-01

    Most cancer research now involves one or more assays profiling various biological molecules, e.g., messenger RNA and micro RNA, in samples collected on the same individuals. The main interest with these genomic data sets lies in the identification of a subset of features that are active in explaining the dependence between platforms. To quantify the strength of the dependency between two variables, correlation is often preferred. However, expression data obtained from next-generation sequencing platforms are integer with very low counts for some important features. In this case, the sample Pearson correlation is not a valid estimate of the true correlation matrix, because the sample correlation estimate between two features/variables with low counts will often be close to zero, even when the natural parameters of the Poisson distribution are, in actuality, highly correlated. We propose a model-based approach to correlation estimation between two non-normal data sets, via a method we call Probabilistic Correlations ANalysis, or PCAN. PCAN takes into consideration the distributional assumption about both data sets and suggests that correlations estimated at the model natural parameter level are more appropriate than correlations estimated directly on the observed data. We demonstrate through a simulation study that PCAN outperforms other standard approaches in estimating the true correlation between the natural parameters. We then apply PCAN to the joint analysis of a microRNA (miRNA) and a messenger RNA (mRNA) expression data set from a squamous cell lung cancer study, finding a large number of negative correlation pairs when compared to the standard approaches. © 2016, The International Biometric Society.
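The attenuation that motivates PCAN can be demonstrated with a small simulation: two features share an identical (hence perfectly correlated) Poisson log-rate, yet the Pearson correlation of the observed low counts falls far below 1:

```python
import numpy as np

rng = np.random.default_rng(3)

# Two features share one Poisson log-rate per sample, so their natural
# parameters are perfectly correlated; the resulting counts are small.
n = 2000
log_rate = rng.normal(-1.5, 1.0, n)      # shared natural parameter
x = rng.poisson(np.exp(log_rate))
y = rng.poisson(np.exp(log_rate))

r_counts = np.corrcoef(x, y)[0, 1]       # sample Pearson on observed counts
r_params = 1.0                            # correlation at the parameter level
```

The gap between `r_counts` and `r_params` is the bias PCAN addresses by estimating correlation at the natural-parameter level.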

  8. Factors associated with experiences of stigma in a sample of HIV-positive, methamphetamine-using men who have sex with men

    PubMed Central

    Semple, Shirley J.; Strathdee, Steffanie A.; Zians, Jim; Patterson, Thomas L.

    2012-01-01

    Background: While methamphetamine users report high rates of internalized or self-stigma, few studies have examined experiences of stigma (i.e., stigmatization by others) and its correlates. Methods: This study identified correlates of stigma experiences in a sample of 438 HIV-positive men who have sex with men (MSM) who were enrolled in a sexual risk reduction intervention in San Diego, CA. Results: Approximately 96% of the sample reported experiences of stigma related to their use of methamphetamine. In multiple regression analysis, experiences of stigma were associated with binge use of methamphetamine, injection drug use, increased anger symptoms, reduced emotional support, and lifetime treatment for methamphetamine use. Conclusions: These findings suggest that experiences of stigma are common among methamphetamine users and that interventions to address this type of stigma and its correlates may offer social, psychological, and health benefits to HIV-positive methamphetamine-using MSM. PMID:22572209

  9. Maternal Methadone Dose, Placental Methadone Concentrations, and Neonatal Outcomes

    PubMed Central

    de Castro, Ana; Jones, Hendreé E.; Johnson, Rolley E.; Gray, Teresa R.; Shakleya, Diaa M.; Huestis, Marilyn A.

    2015-01-01

    BACKGROUND Few investigations have used placenta as an alternative matrix to detect in utero drug exposure, despite its availability at the time of birth and the large amount of sample. Methadone-maintained opioid-dependent pregnant women provide a unique opportunity to examine the placental disposition of methadone and metabolite [2-ethylidene-1,5-dimethyl-3,3-diphenylpyrrolidine (EDDP)], to explore their correlations with maternal methadone dose and neonatal outcomes, and to test the ability to detect in utero exposure to illicit drugs. METHODS We calculated the correlations of placental methadone and EDDP concentrations and their correlations with maternal methadone doses and neonatal outcomes. Cocaine- and opiate-positive placenta results were compared with the results for meconium samples and for urine samples collected throughout gestation. RESULTS Positive correlations were found between placental methadone and EDDP concentrations (r = 0.685), and between methadone concentration and methadone dose at delivery (r = 0.542), mean daily dose (r = 0.554), mean third-trimester dose (r = 0.591), and cumulative daily dose (r = 0.639). The EDDP/methadone concentration ratio was negatively correlated with cumulative daily dose (r = 0.541) and positively correlated with peak neonatal abstinence syndrome (NAS) score (r = 0.513). Placental EDDP concentration was negatively correlated with newborn head circumference (r = 0.579). Cocaine and opiate use was detected in far fewer placenta samples than in thrice-weekly urine and meconium samples, a result suggesting a short detection window for placenta. CONCLUSIONS Quantitative methadone and EDDP measurement may predict NAS severity. The placenta reflects in utero drug exposure for a shorter time than meconium but may be useful when meconium is unavailable or if documentation of recent exposure is needed. PMID:21245372

  10. Correcting for the influence of sampling conditions on biomarkers of exposure to phenols and phthalates: a 2-step standardization method based on regression residuals.

    PubMed

    Mortamais, Marion; Chevrier, Cécile; Philippat, Claire; Petit, Claire; Calafat, Antonia M; Ye, Xiaoyun; Silva, Manori J; Brambilla, Christian; Eijkemans, Marinus J C; Charles, Marie-Aline; Cordier, Sylvaine; Slama, Rémy

    2012-04-26

    Environmental epidemiology and biomonitoring studies typically rely on biological samples to assay the concentration of non-persistent exposure biomarkers. Between-participant variations in the sampling conditions of these biological samples constitute a potential source of exposure misclassification, yet few studies have attempted to correct biomarker levels for this error. We aimed to assess the influence of sampling conditions on concentrations of urinary biomarkers of select phenols and phthalates, two widely produced families of chemicals, and to standardize biomarker concentrations on sampling conditions. Urine samples were collected between 2002 and 2006 among 287 pregnant women from the Eden and Pélagie cohorts, in which phthalate and phenol metabolite levels were assayed. We applied a 2-step standardization method based on regression residuals. First, the influence of sampling conditions (including sampling hour and duration of storage before freezing) and of creatinine levels on biomarker concentrations was characterized using adjusted linear regression models. In the second step, the model estimates were used to remove the variability in biomarker concentrations due to sampling conditions and to standardize concentrations as if all samples had been collected under the same conditions (e.g., the same hour of urine collection). Sampling hour was associated with the concentrations of several exposure biomarkers. After standardization for sampling conditions, median concentrations differed by −38% for 2,5-dichlorophenol to +80% for a metabolite of diisodecyl phthalate. However, at the individual level, standardized biomarker levels were strongly correlated (correlation coefficients above 0.80) with unstandardized measures. Sampling conditions, such as sampling hour, should be systematically collected in biomarker-based studies, in particular when the biomarker half-life is short. The 2-step standardization method based on regression residuals that we propose in order to limit the impact of heterogeneity in sampling conditions could be further tested in studies describing levels of biomarkers or their influence on health.
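    The two regression steps can be sketched on synthetic data (all variable names, effect sizes, and the reference conditions below are hypothetical illustrations, not the study's models):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300

# Hypothetical sampling conditions: hour of collection and days of storage.
hour = rng.uniform(7, 19, n)
storage = rng.uniform(0, 3, n)
true_exposure = rng.lognormal(0.0, 0.5, n)

# Observed log-biomarker = true signal + condition effects + assay noise.
log_obs = (np.log(true_exposure) + 0.05 * (hour - 12)
           - 0.1 * storage + rng.normal(0, 0.1, n))

# Step 1: regress log concentration on sampling conditions (OLS).
X = np.column_stack([np.ones(n), hour, storage])
beta, *_ = np.linalg.lstsq(X, log_obs, rcond=None)

# Step 2: remove the fitted condition effects and re-express every sample
# as if collected under reference conditions (noon, frozen immediately).
ref = np.array([1.0, 12.0, 0.0])
log_std = log_obs - X @ beta + ref @ beta
standardized = np.exp(log_std)
```

    After step 2 the standardized concentrations are, by construction, uncorrelated with the modeled sampling conditions, while remaining strongly correlated with the unstandardized measures, mirroring the pattern reported above.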

  11. An evaluation of Brix refractometry instruments for measurement of colostrum quality in dairy cattle.

    PubMed

    Bielmann, V; Gillan, J; Perkins, N R; Skidmore, A L; Godden, S; Leslie, K E

    2010-08-01

    Acquisition of high quality colostrum is an important factor influencing neonatal calf health. Many methods have been used to assess the Ig concentration of colostrum; however, improved, validated evaluation tools are needed. The aims of this study were to evaluate both optical and digital Brix refractometer instruments for the measurement of Ig concentration of colostrum as compared with the gold standard radial immunodiffusion assay laboratory assessment and to determine the correlation between Ig measurements taken from fresh and frozen colostrum samples for both Brix refractometer instruments. This research was completed using 288 colostrum samples from 3 different farms. It was concluded that the optical and digital Brix refractometers were highly correlated for both fresh and frozen samples (r=0.98 and r=0.97, respectively). Correlation between both refractometer instruments for fresh and frozen samples and the gold standard radial immunodiffusion assay were determined to be very similar, with a correlation coefficient between 0.71 and 0.74. Both instruments exhibited excellent test characteristics, indicating an appropriate cut-off point of 22% Brix score for the identification of good quality colostrum. Copyright (c) 2010 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
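    As a minimal illustration of the kind of comparison reported, the correlation with the reference assay and the 22% Brix cutoff can be computed as follows (the paired values are invented, not the study's data):

```python
import numpy as np

# Hypothetical paired measurements: Brix score vs. IgG (g/L) by radial
# immunodiffusion (the gold-standard assay named in the abstract).
brix = np.array([18.0, 21.5, 22.4, 25.1, 19.8, 24.0])
igg = np.array([35.0, 48.0, 52.0, 70.0, 41.0, 66.0])

r = np.corrcoef(brix, igg)[0, 1]   # Pearson correlation with the reference
good_quality = brix >= 22.0        # cutoff point identified by the study
```

    The same computation applied to fresh versus frozen aliquots would yield the between-instrument correlations the study reports.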

  12. Studying flow close to an interface by total internal reflection fluorescence cross-correlation spectroscopy: Quantitative data analysis

    NASA Astrophysics Data System (ADS)

    Schmitz, R.; Yordanov, S.; Butt, H. J.; Koynov, K.; Dünweg, B.

    2011-12-01

    Total internal reflection fluorescence cross-correlation spectroscopy (TIR-FCCS) has recently [S. Yordanov et al., Opt. Express 17, 21149 (2009)] been established as an experimental method to probe hydrodynamic flows near surfaces, on length scales of tens of nanometers. Its main advantage is that fluorescence occurs only for tracer particles close to the surface, thus resulting in high sensitivity. However, the measured correlation functions provide only rather indirect information about the flow parameters of interest, such as the shear rate and the slip length. In the present paper, we show how to combine detailed and fairly realistic theoretical modeling of the phenomena by Brownian dynamics simulations with accurate measurements of the correlation functions, in order to establish a quantitative method to retrieve the flow properties from the experiments. First, Brownian dynamics is used to sample highly accurate correlation functions for a fixed set of model parameters. Second, these parameters are varied systematically by means of an importance-sampling Monte Carlo procedure in order to fit the experiments. This provides the optimum parameter values together with their statistical error bars. The approach is well suited for massively parallel computers, which allows us to do the data analysis within moderate computing times. The method is applied to flow near a hydrophilic surface, where the slip length is observed to be smaller than 10 nm, and, within the limitations of the experiments and the model, indistinguishable from zero.
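    The second stage — systematically varying model parameters by Monte Carlo sampling to fit a measured correlation function and obtain error bars — can be sketched with a toy analytic model standing in for the Brownian dynamics simulations (all values and the exponential model are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical "measured" correlation function: exponential decay + noise.
t = np.linspace(0.0, 5.0, 50)
true_amp, true_rate = 1.0, 0.8
sigma = 0.02
data = true_amp * np.exp(-true_rate * t) + rng.normal(0, sigma, t.size)

def chi2(amp, rate):
    model = amp * np.exp(-rate * t)
    return np.sum(((data - model) / sigma) ** 2)

# Metropolis sampling of the parameter posterior ~ exp(-chi2 / 2),
# i.e. a systematic importance-sampled variation of the model parameters.
amp, rate = 0.8, 0.6
samples = []
for _ in range(20000):
    amp_p = amp + rng.normal(0, 0.02)
    rate_p = rate + rng.normal(0, 0.02)
    if np.log(rng.random()) < 0.5 * (chi2(amp, rate) - chi2(amp_p, rate_p)):
        amp, rate = amp_p, rate_p
    samples.append((amp, rate))

post = np.array(samples[5000:])  # discard burn-in
print(post.mean(axis=0), post.std(axis=0))  # optimum values + error bars
```

    The posterior mean recovers the generating parameters, and the posterior standard deviation plays the role of the statistical error bars mentioned in the abstract.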

  13. Dark Energy Survey Year 1 Results: Cross-Correlation Redshifts - Methods and Systematics Characterization

    DOE PAGES

    Gatti, M.

    2018-02-22

    We use numerical simulations to characterize the performance of a clustering-based method to calibrate photometric redshift biases. In particular, we cross-correlate the weak lensing (WL) source galaxies from the Dark Energy Survey Year 1 (DES Y1) sample with redMaGiC galaxies (luminous red galaxies with secure photometric redshifts) to estimate the redshift distribution of the former sample. The recovered redshift distributions are used to calibrate the photometric redshift bias of standard photo-z methods applied to the same source galaxy sample. We also apply the method to three photo-z codes run in our simulated data: Bayesian Photometric Redshift (BPZ), Directional Neighborhood Fitting (DNF), and Random Forest-based photo-z (RF). We characterize the systematic uncertainties of our calibration procedure, and find that these systematic uncertainties dominate our error budget. The dominant systematics are due to our assumption of unevolving bias and clustering across each redshift bin, and to differences between the shapes of the redshift distributions derived by clustering vs photo-z's. The systematic uncertainty in the mean redshift bias of the source galaxy sample is Δz ≲ 0.02, though the precise value depends on the redshift bin under consideration. Here, we discuss possible ways to mitigate the impact of our dominant systematics in future analyses.

  14. Dark Energy Survey Year 1 Results: Cross-Correlation Redshifts - Methods and Systematics Characterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gatti, M.

    We use numerical simulations to characterize the performance of a clustering-based method to calibrate photometric redshift biases. In particular, we cross-correlate the weak lensing (WL) source galaxies from the Dark Energy Survey Year 1 (DES Y1) sample with redMaGiC galaxies (luminous red galaxies with secure photometric redshifts) to estimate the redshift distribution of the former sample. The recovered redshift distributions are used to calibrate the photometric redshift bias of standard photo-z methods applied to the same source galaxy sample. We also apply the method to three photo-z codes run in our simulated data: Bayesian Photometric Redshift (BPZ), Directional Neighborhood Fitting (DNF), and Random Forest-based photo-z (RF). We characterize the systematic uncertainties of our calibration procedure, and find that these systematic uncertainties dominate our error budget. The dominant systematics are due to our assumption of unevolving bias and clustering across each redshift bin, and to differences between the shapes of the redshift distributions derived by clustering vs photo-z's. The systematic uncertainty in the mean redshift bias of the source galaxy sample is Δz ≲ 0.02, though the precise value depends on the redshift bin under consideration. Here, we discuss possible ways to mitigate the impact of our dominant systematics in future analyses.

  15. An open-chain imaginary-time path-integral sampling approach to the calculation of approximate symmetrized quantum time correlation functions.

    PubMed

    Cendagorta, Joseph R; Bačić, Zlatko; Tuckerman, Mark E

    2018-03-14

    We introduce a scheme for approximating quantum time correlation functions numerically within the Feynman path integral formulation. Starting with the symmetrized version of the correlation function expressed as a discretized path integral, we introduce a change of integration variables often used in the derivation of trajectory-based semiclassical methods. In particular, we transform to sum and difference variables between forward and backward complex-time propagation paths. Once the transformation is performed, the potential energy is expanded in powers of the difference variables, which allows us to perform the integrals over these variables analytically. The manner in which this procedure is carried out results in an open-chain path integral (in the remaining sum variables) with a modified potential that is evaluated using imaginary-time path-integral sampling rather than requiring the generation of a large ensemble of trajectories. Consequently, any number of path integral sampling schemes can be employed to compute the remaining path integral, including Monte Carlo, path-integral molecular dynamics, or enhanced path-integral molecular dynamics. We believe that this approach constitutes a different perspective in semiclassical-type approximations to quantum time correlation functions. Importantly, we argue that our approximation can be systematically improved within a cumulant expansion formalism. We test this approximation on a set of one-dimensional problems that are commonly used to benchmark approximate quantum dynamical schemes. We show that the method is at least as accurate as the popular ring-polymer molecular dynamics technique and linearized semiclassical initial value representation for correlation functions of linear operators in most of these examples and improves the accuracy of correlation functions of nonlinear operators.

  16. An open-chain imaginary-time path-integral sampling approach to the calculation of approximate symmetrized quantum time correlation functions

    NASA Astrophysics Data System (ADS)

    Cendagorta, Joseph R.; Bačić, Zlatko; Tuckerman, Mark E.

    2018-03-01

    We introduce a scheme for approximating quantum time correlation functions numerically within the Feynman path integral formulation. Starting with the symmetrized version of the correlation function expressed as a discretized path integral, we introduce a change of integration variables often used in the derivation of trajectory-based semiclassical methods. In particular, we transform to sum and difference variables between forward and backward complex-time propagation paths. Once the transformation is performed, the potential energy is expanded in powers of the difference variables, which allows us to perform the integrals over these variables analytically. The manner in which this procedure is carried out results in an open-chain path integral (in the remaining sum variables) with a modified potential that is evaluated using imaginary-time path-integral sampling rather than requiring the generation of a large ensemble of trajectories. Consequently, any number of path integral sampling schemes can be employed to compute the remaining path integral, including Monte Carlo, path-integral molecular dynamics, or enhanced path-integral molecular dynamics. We believe that this approach constitutes a different perspective in semiclassical-type approximations to quantum time correlation functions. Importantly, we argue that our approximation can be systematically improved within a cumulant expansion formalism. We test this approximation on a set of one-dimensional problems that are commonly used to benchmark approximate quantum dynamical schemes. We show that the method is at least as accurate as the popular ring-polymer molecular dynamics technique and linearized semiclassical initial value representation for correlation functions of linear operators in most of these examples and improves the accuracy of correlation functions of nonlinear operators.

  17. A step forward in the study of the electroerosion by optical methods

    NASA Astrophysics Data System (ADS)

    Aparicio, R.; Gale, M. F. Ruiz; Hogert, E. N.; Landau, M. R.; Gaggioli, y. N. G.

    2003-05-01

    This work develops two theoretical models of surfaces to explain the behavior of light scattered by samples that undergo some alteration. In the first model, the mean intensity scattered by the sample is evaluated, analyzing the curves obtained as a function of the eroded-to-total surface ratio. The theoretical results are compared with those obtained experimentally, showing a strong relation between the electroerosion level and the light scattered by the sample. The second model analyzes a surface with random changes in its roughness: a translucent surface whose roughness changes in a controlled way is studied, and the variation of the correlation coefficient as a function of the roughness variation is determined by the transmission speckle correlation method. The experimental values are compared with those obtained from this model. In summary, the first- and second-order statistical properties of the light transmitted or reflected by a sample with variable topography can be used as a parameter to analyze these morphologic changes.

  18. Body Adiposity Index Performance in Estimating Body Fat Percentage in Colombian College Students: Findings from the FUPRECOL—Adults Study

    PubMed Central

    Ramírez-Vélez, Robinson; Correa-Bautista, Jorge Enrique; González-Ruíz, Katherine; Vivas, Andrés; Triana-Reina, Héctor Reynaldo; Martínez-Torres, Javier; Prieto-Benavides, Daniel Humberto; Carrillo, Hugo Alejandro; Ramos-Sepúlveda, Jeison Alexander; Villa-González, Emilio; García-Hermoso, Antonio

    2017-01-01

    Recently, a body adiposity index (BAI = hip circumference (cm)/height (m)^1.5 − 18) was developed and validated in adult populations. The aim of this study was to evaluate the performance of BAI in estimating percentage body fat (BF%) in a sample of Colombian collegiate young adults. The participants comprised 903 volunteers (52% female, mean age = 21.4 ± 3.3 years). We used Lin's concordance correlation coefficient (ρc), linear regression, Bland–Altman agreement analysis, and the coefficient of determination (R²) between BAI and BF% measured by bioelectrical impedance analysis (BIA). The correlation between the two methods of estimating BF% was R² = 0.384, p < 0.001. A paired-sample t-test showed a difference between the methods (BIA BF% = 16.2 ± 3.1%, BAI BF% = 30.0 ± 5.4%; p < 0.001). The bias was 6.0 ± 6.2 BF% (95% confidence interval (CI) = −6.0 to 18.2), indicating that the BAI method overestimated BF% relative to the reference method. Lin's concordance correlation coefficient was poor (ρc = 0.014, 95% CI = −0.124 to 0.135; p = 0.414). In Colombian college students there was poor agreement between BAI- and BIA-based estimates of BF%, and BAI is not accurate in people with low or high body fat percentage levels. PMID:28106719
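    A minimal sketch of the index and a Bland–Altman style bias computation, using invented cohort values rather than the study's data:

```python
import numpy as np

# BAI as defined in the study: hip circumference (cm) / height (m)^1.5 - 18.
def bai(hip_cm, height_m):
    return hip_cm / height_m ** 1.5 - 18.0

rng = np.random.default_rng(3)
n = 200
height = rng.normal(1.70, 0.08, n)   # hypothetical cohort anthropometrics
hip = rng.normal(95.0, 6.0, n)
bf_bia = rng.normal(16.2, 3.1, n)    # reference BF% by BIA, per the abstract

bf_bai = bai(hip, height)

# Bland-Altman style agreement: mean difference and 95% limits of agreement.
diff = bf_bai - bf_bia
bias = diff.mean()
limits = (bias - 1.96 * diff.std(), bias + 1.96 * diff.std())
```

    With these synthetic inputs BAI sits well above the BIA reference, the same direction of overestimation the study reports.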

  19. The effects of sample preparation on measured concentrations of eight elements in edible tissues of fish from streams contaminated by lead mining

    USGS Publications Warehouse

    Schmitt, Christopher J.; Finger, Susan E.

    1987-01-01

    The influence of sample preparation on measured concentrations of eight elements in the edible tissues of two black basses (Centrarchidae), two catfishes (Ictaluridae), and the black redhorse, Moxostoma duquesnei (Catostomidae), from two rivers in southeastern Missouri contaminated by mining and related activities was investigated. Concentrations of Pb, Cd, Cu, Zn, Fe, Mn, Ba, and Ca were measured in two skinless, boneless samples of axial muscle from individual fish prepared in a clean room. One sample (normally-processed) was removed from each fish with a knife in a manner typically used by investigators to process fish for elemental analysis and presumably representative of methods employed by anglers when preparing fish for home consumption. A second sample (clean-processed) was then prepared from each normally-processed sample by cutting away all surface material with acid-cleaned instruments under ultraclean conditions. The samples were analyzed as a single group by atomic absorption spectrophotometry. Of the elements studied, only Pb regularly exceeded current guidelines for elemental contaminants in foods. Concentrations were high in black redhorse from contaminated sites, regardless of preparation method; for the other fishes, whether or not Pb guidelines were exceeded depended on preparation technique. Except for Mn and Ca, concentrations of all elements measured were significantly lower in clean- than in normally-processed tissue samples. Absolute differences in measured concentrations between clean- and normally-processed samples were most evident for Pb and Ba in bass and catfish and for Cd and Zn in redhorse. Regardless of preparation method, concentrations of Pb, Ca, Mn, and Ba in individual fish were closely correlated; samples that were high or low in one of these four elements were correspondingly high or low in the other three. In contrast, correlations between Zn, Fe, and Cd occurred only in normally-processed samples, suggesting that these correlations resulted from high concentrations on the surfaces of some samples. Concentrations of Pb and Ba in edible tissues of fish from contaminated sites were highly correlated with Ca content, which was probably determined largely by the amount of tissue other than muscle in the sample, because fish muscle contains relatively little Ca. Accordingly, variation within a group of similar samples can be reduced by normalizing Pb and Ba concentrations to a standard Ca concentration. When the sample size (N) is large, this can be accomplished statistically by analysis of covariance; when N is small, molar ratios of [Pb]/[Ca] and [Ba]/[Ca] can be computed. Without such adjustments, unrealistically large values of N are required to yield statistically reliable estimates of Pb concentrations in edible tissues. Investigators should acknowledge that reported concentrations of certain elements are only estimates and that, regardless of the care exercised during the collection, preparation, and analysis of samples, results should be interpreted with the awareness that contamination from external sources may have occurred.
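    The molar-ratio normalization suggested for small samples can be computed directly from mass concentrations (the fillet values below are hypothetical):

```python
# Normalizing Pb and Ba to Ca via molar ratios, as suggested for small N.
# Standard atomic masses (g/mol); concentrations assumed in ug/g wet weight.
ATOMIC_MASS = {"Pb": 207.2, "Ba": 137.33, "Ca": 40.08}

def molar_ratio(conc_ug_g, element, ca_ug_g):
    """Molar ratio [element]/[Ca] from mass concentrations."""
    moles_element = conc_ug_g / ATOMIC_MASS[element]
    moles_ca = ca_ug_g / ATOMIC_MASS["Ca"]
    return moles_element / moles_ca

# Hypothetical fillet sample: 0.8 ug/g Pb with 450 ug/g Ca.
pb_ca = molar_ratio(0.8, "Pb", 450.0)
print(pb_ca)
```

    Because Pb and Ba track Ca, the ratio is far more comparable across samples with differing amounts of non-muscle tissue than the raw Pb concentration is.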

  20. Evaluation of the validity of a rapid method for measuring high and low haemoglobin levels in whole blood donors.

    PubMed

    Shahshahani, Hayedeh J; Meraat, Nahid; Mansouri, Fatemeh

    2013-07-01

    Haemoglobin screening methods need to be highly sensitive to detect both low and high haemoglobin levels and avoid unnecessary rejection of potential blood donors. The aim of this study was to evaluate the accuracy of measurements by HemoCue in blood donors. Three hundred and fourteen randomly selected, prospective blood donors were studied. Single fingerstick blood samples were obtained to determine the donors' haemoglobin levels by HemoCue, while venous blood samples were drawn for measurement of the haemoglobin level by both HemoCue and an automated haematology analyser as the reference method. The sensitivity, specificity, predictive values and correlation between the reference method and HemoCue were assessed. Cases with a haemoglobin concentration in the range of 12.5-17.9 g/dL were accepted for blood donation. Analysis of paired results showed that haemoglobin levels measured by HemoCue were higher than those measured by the reference method. There was a significant correlation between the reference method and HemoCue for haemoglobin levels less than 12.5 g/dL. The correlation was less strong for increasing haemoglobin levels. Linear correlation was poor for haemoglobin levels over 18 g/dL. Thirteen percent of donors, who had haemoglobin levels close to the upper limit, were unnecessarily rejected. HemoCue is suitable for screening for anaemia in blood donors. Most donors at Yazd are males and a significant percentage of them have haemoglobin values close to the upper limit for acceptance as a blood donor; since these subjects could be unnecessarily rejected on the basis of HemoCue results and testing with this method is expensive, it is recommended that qualitative methods are used for primary screening and accurate quantitative methods used in clinically suspicious cases or when qualitative methods fail.

  1. Ultra-high performance liquid chromatography tandem mass spectrometry for the determination of five glycopeptide antibiotics in food and biological samples using solid-phase extraction.

    PubMed

    Deng, Fenfang; Yu, Hong; Pan, Xinhong; Hu, Guoyuan; Wang, Qiqin; Peng, Rongfei; Tan, Lei; Yang, Zhicong

    2018-02-23

    This paper describes the development and validation of an ultra-high performance liquid chromatography tandem mass spectrometry (UHPLC-MS/MS) method for the simultaneous determination of five glycopeptide antibiotics in food and biological samples. The target glycopeptide antibiotics were isolated from the samples by solvent extraction, and the extracts were cleaned with a tandem solid-phase extraction step using mixed strong cation exchange and hydrophilic/lipophilic balance cartridges. The analytes were then eluted with different solvents and quantified by UHPLC-MS/MS in positive ionization mode with multiple reaction monitoring. Under optimal conditions, good linearity was obtained for the five glycopeptide antibiotics in the concentration range of 1.0 μg/L to 20.0 μg/L, with correlation coefficients >0.998. Employing this method, the target glycopeptide antibiotics in food and biological samples were identified with recoveries of 83.0-102%, low matrix effects, and quantitation limits of 1.0 μg/kg in food and 2.0 μg/L in biological samples. Copyright © 2018 Elsevier B.V. All rights reserved.

  2. Advantage of multiple spot urine collections for estimating daily sodium excretion: comparison with two 24-h urine collections as reference.

    PubMed

    Uechi, Ken; Asakura, Keiko; Ri, Yui; Masayasu, Shizuko; Sasaki, Satoshi

    2016-02-01

    Several estimation methods for 24-h sodium excretion using spot urine samples have been reported, but accurate estimation at the individual level remains difficult. We aimed to clarify the most accurate method of estimating 24-h sodium excretion with different numbers of available spot urine samples. A total of 370 participants from throughout Japan collected multiple 24-h urine and spot urine samples independently. Participants were allocated randomly into a development and a validation dataset. Two estimation methods were established in the development dataset using the two 24-h sodium excretion samples as reference: the 'simple mean method' estimated excretion by multiplying the sodium-creatinine ratio by predicted 24-h creatinine excretion, whereas the 'regression method' employed linear regression analysis. The accuracy of the two methods was examined by comparing the estimated means and concordance correlation coefficients (CCC) in the validation dataset. Mean sodium excretion by the simple mean method with three spot urine samples was closest to that by 24-h collection (difference: -1.62 mmol/day). CCC with the simple mean method increased with the number of spot urine samples, at 0.20, 0.31, and 0.42 using one, two, and three samples, respectively. This method with three spot urine samples yielded a higher CCC than the regression method (0.40). When only one spot urine sample was available for each study participant, CCC was higher with the regression method (0.36). The simple mean method with three spot urine samples yielded the most accurate estimates of sodium excretion. When only one spot urine sample was available, the regression method was preferable.
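    The 'simple mean method' can be sketched as follows (units are kept consistent; the study's creatinine-prediction equation is not reproduced, so the predicted value and spot readings below are illustrative):

```python
import numpy as np

# 'Simple mean method': average the sodium-to-creatinine ratio over the
# available spot samples, then scale by predicted 24-h creatinine excretion.
# Consistent units: Na in mmol/L, creatinine in g/L, predicted creatinine
# in g/day -> estimated sodium excretion in mmol/day.
def simple_mean_sodium(spot_na, spot_cr, pred_cr_g_day):
    ratios = np.asarray(spot_na) / np.asarray(spot_cr)
    return ratios.mean() * pred_cr_g_day

# Three spot samples from one hypothetical participant:
est = simple_mean_sodium([120.0, 150.0, 135.0], [1.2, 1.6, 1.4], 1.4)
```

    Averaging the ratio over more spot samples is what drives the rising concordance (0.20, 0.31, 0.42 for one, two, and three samples) reported above.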

  3. Scanning electron microscopy of bone.

    PubMed

    Boyde, Alan

    2012-01-01

    This chapter describes methods for scanning electron microscope (SEM) imaging of bone and bone cells. Backscattered electron (BSE) imaging is by far the most useful in the bone field, followed by the secondary electron (SE) and energy-dispersive X-ray (EDX) analytical modes. The chapter considers preparing and imaging samples of unembedded bone having 3D detail in a 3D surface; topography-free, polished or micromilled, resin-embedded block surfaces; and resin casts of spaces in bone matrix. It covers methods for fixation, drying, looking at the undersides of bone cells, and coating. Maceration with alkaline bacterial pronase, hypochlorite, hydrogen peroxide, and sodium or potassium hydroxide to remove cells and unmineralised matrix is described in detail. Particular attention is given to methods for 3D BSE SEM imaging of bone samples, and recommendations are given for the types of resin embedding of bone for BSE imaging. Correlated confocal and SEM imaging of PMMA-embedded bone requires the use of glycerol to coverslip. Cathodoluminescence (CL) mode SEM imaging is an alternative for visualising fluorescent mineralising-front labels such as calcein and the tetracyclines. Making spatial casts from PMMA- or other resin-embedded samples is an important use of this material. Correlation with other imaging means, including microradiography and microtomography, is important. Shipping wet bone samples between labs is best done in glycerol. Environmental SEM (ESEM, controlled vacuum mode) is valuable in eliminating "charging" problems, which are common with complex, cancellous bone samples.

  4. A method to combine remotely sensed and in situ measurements: Program documentation

    NASA Technical Reports Server (NTRS)

    Peck, E. L.; Johnson, E. R.; Wong, M. Y.

    1984-01-01

    All user and programmer information required for using the correlation area method (CAM) program is presented. This program combines measurements of hydrologic variables from all measurement technologies to produce estimated areal mean values. The method accounts for sampling geometries and measurement accuracies and provides a measure of the accuracy of the estimated mean areal value.
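    The accuracy-weighting idea — combining estimates from different measurement technologies according to their measurement accuracies — can be sketched as an inverse-variance weighted mean; the CAM program additionally accounts for sampling geometries, which this toy sketch omits:

```python
# Inverse-variance weighted combination of areal estimates from several
# measurement technologies, returning the combined mean and its variance.
def combined_mean(estimates, variances):
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    mean = sum(w * e for w, e in zip(weights, estimates)) / total
    return mean, 1.0 / total

# A less precise remotely sensed estimate combined with an in situ gauge
# (hypothetical values: estimate, variance pairs).
m, var = combined_mean([12.0, 10.0], [4.0, 1.0])
```

    The combined variance (here 0.8) is smaller than either input variance, which is the "measure of the accuracy of the estimated mean areal value" the program reports.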

  5. Evaluation of Techniques for Measuring Microbial Hazards in Bathing Waters: A Comparative Study

    PubMed Central

    Schang, Christelle; Henry, Rebekah; Kolotelo, Peter A.; Prosser, Toby; Crosbie, Nick; Grant, Trish; Cottam, Darren; O’Brien, Peter; Coutts, Scott; Deletic, Ana; McCarthy, David T.

    2016-01-01

    Recreational water quality is commonly monitored by means of culture based faecal indicator organism (FIOs) assays. However, these methods are costly and time-consuming; a serious disadvantage when combined with issues such as non-specificity and user bias. New culture and molecular methods have been developed to counter these drawbacks. This study compared industry-standard IDEXX methods (Colilert and Enterolert) with three alternative approaches: 1) TECTA™ system for E. coli and enterococci; 2) US EPA’s 1611 method (qPCR based enterococci enumeration); and 3) Next Generation Sequencing (NGS). Water samples (233) were collected from riverine, estuarine and marine environments over the 2014–2015 summer period and analysed by the four methods. The results demonstrated that E. coli and coliform densities, inferred by the IDEXX system, correlated strongly with the TECTA™ system. The TECTA™ system had further advantages in faster turnaround times (~12 hrs from sample receipt to result compared to 24 hrs); no staff time required for interpretation and less user bias (results are automatically calculated, compared to subjective colorimetric decisions). The US EPA Method 1611 qPCR method also showed significant correlation with the IDEXX enterococci method; but had significant disadvantages such as highly technical analysis and higher operational costs (330% of IDEXX). The NGS method demonstrated statistically significant correlations between IDEXX and the proportions of sequences belonging to FIOs, Enterobacteriaceae, and Enterococcaceae. While costs (3,000% of IDEXX) and analysis time (300% of IDEXX) were found to be significant drawbacks of NGS, rapid technological advances in this field will soon see it widely adopted. PMID:27213772

  6. Digital Correlation Microwave Polarimetry: Analysis and Demonstration

    NASA Technical Reports Server (NTRS)

    Piepmeier, J. R.; Gasiewski, A. J.; Krebs, Carolyn A. (Technical Monitor)

    2000-01-01

The design, analysis, and demonstration of a digital-correlation microwave polarimeter for use in Earth remote sensing are presented. We begin with an analysis of three-level digital correlation and develop the correlator transfer function and radiometric sensitivity. A fifth-order polynomial regression is derived for inverting the digital correlation coefficient into the analog statistic. In addition, the effects of quantizer threshold asymmetry and hysteresis are discussed. A two-look unpolarized calibration scheme is developed for identifying correlation offsets. The developed theory and calibration method are verified using a 10.7 GHz and a 37.0 GHz polarimeter. The polarimeters are based upon 1-GS/s three-level digital correlators and measure the first three Stokes parameters. Through experiment, the radiometric sensitivity is shown to approach the theoretical value derived earlier in the paper, and the two-look unpolarized calibration method compares successfully with results from a polarimetric scheme. Finally, sample data from an aircraft experiment demonstrate that the polarimeter is highly useful for ocean wind-vector measurement.
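The bias that three-level quantization introduces into a measured correlation coefficient, which the fifth-order polynomial regression above is designed to invert, can be illustrated with a minimal simulation. The signal model, threshold value, and sample count below are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two zero-mean, unit-variance Gaussian signals with true correlation rho
rho = 0.3
n = 200_000
x = rng.standard_normal(n)
y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)

# Three-level quantizer: map each sample to {-1, 0, +1} using an assumed
# comparator threshold v0 (in units of the signal standard deviation)
v0 = 0.6
qx = np.where(x > v0, 1, np.where(x < -v0, -1, 0))
qy = np.where(y > v0, 1, np.where(y < -v0, -1, 0))

# Digital correlation coefficient computed from the quantized streams.
# It is biased low relative to the analog statistic rho, which is why an
# inverse transfer function (such as the paper's polynomial regression)
# is needed to recover the analog correlation.
r_digital = np.mean(qx * qy) / np.sqrt(np.mean(qx**2) * np.mean(qy**2))
print(r_digital, rho)
```

With these settings the digital estimate comes out noticeably below the true correlation, mirroring the compression that the correlator transfer function describes.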

  7. A new method of regional CBF measurement using one point arterial sampling based on microsphere model with I-123 IMP SPECT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Odano, I.; Takahashi, N.; Ohkubo, M.

    1994-05-01

We developed a new method for quantitative measurement of rCBF with iodine-123 IMP based on the microsphere model; it is accurate, simpler, and less invasive than the continuous-withdrawal method. IMP is assumed to behave as a chemical microsphere in the brain, so regional CBF can be measured from continuous withdrawal of arterial blood under the microsphere model as F = Cb(t)/(∫Ca(t)dt × N), where F is rCBF (mL/100 g/min), Cb(t) is the brain activity concentration, ∫Ca(t)dt is the total activity of the arterial whole blood withdrawn, and N is the fraction of ∫Ca(t)dt that is true tracer activity. We analyzed 14 patients. A dose of 222 MBq of IMP was injected i.v. over 1 min, and arterial blood was withdrawn continuously from 0 to 5 min (∫Ca(t)dt), after which single arterial blood samples (one-point Ca(t)) were obtained at 5, 6, 7, 8, 9 and 10 min. The value of ∫Ca(t)dt was then inferred mathematically from the one-point Ca(t). Examining the correlation between ∫Ca(t)dt × N and one-point Ca(t), and the % error of one-point Ca(t) relative to ∫Ca(t)dt × N, the minimum % error was 8.1% and the maximum correlation coefficient was 0.943, both obtained at 6 min. We conclude that 6 min is the best time to take the arterial blood sample when estimating ∫Ca(t)dt × N by the one-point sampling method. IMP SPECT studies were performed with a ring-type SPECT scanner. Compared with rCBF measured by the Xe-133 method, this method showed a significant correlation (r=0.773). The one-point Ca(t) method provides a quick and easy measurement of rCBF without an indwelling catheter and without octanol treatment of arterial blood.
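The microsphere-model equation can be evaluated directly once the three quantities are known. The sketch below uses illustrative numbers, not values from the study.

```python
# Microsphere-model rCBF: F = Cb(t) / (integral Ca(t)dt * N).
# All numeric values below are illustrative, not taken from the study.

def rcbf(brain_activity, arterial_integral, true_tracer_fraction):
    """rCBF (mL/100 g/min) from the microsphere model."""
    return brain_activity / (arterial_integral * true_tracer_fraction)

# Example: brain concentration 50 kBq/100 g, integrated arterial activity
# 1.2 kBq*min/mL, octanol-extractable (true tracer) fraction 0.85
f = rcbf(50.0, 1.2, 0.85)
print(round(f, 1))  # ≈ 49.0 mL/100 g/min
```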

  8. Simple equation for calculation of plasma clearance for evaluation of renal function without urine collection in rats.

    PubMed

    Liu, Xiang; Peng, Dejun; Tian, Hao; Lu, Chengyu

    2017-01-01

To develop an equation for the evaluation of renal function in rats using three dilutions of plasma samples, and to validate this method by comparison with a reference method. The investigation was conducted in Sprague-Dawley (SD) rats after delivery of three doses of iohexol, with blood samples collected before and after dosing using a quantitative blood collection method. Plasma iohexol concentrations were detected by high performance liquid chromatography (HPLC). The extraction recovery of iohexol from plasma was >97.30% and the calibration curve was linear (r² = 0.9997) over iohexol concentrations ranging from 10 to 1000 µg/mL. The method had an RE of <9.310% and intra- and inter-day RSDs of <5.137% and <3.693%, respectively. The plasma clearance values obtained from the equation correlated closely (r = 0.763) with those obtained using the reference method. This correlation between the results of the method under investigation and the reference method indicates that the new equation can be used for preliminary assessment of renal function in rats. © 2016 Asian Pacific Society of Nephrology.
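Plasma clearance of a marker such as iohexol is classically computed as dose divided by the area under the concentration-time curve (AUC). A minimal sketch under assumed mono-exponential kinetics; the dose, rate constant, and concentrations are illustrative, not values from the study.

```python
import numpy as np

# Plasma clearance CL = Dose / AUC. Synthetic mono-exponential decay;
# all numbers below are illustrative assumptions.
dose_mg = 3235.0   # administered iohexol dose, mg (assumed)
k = 0.01           # elimination rate constant, 1/min (assumed)
c0 = 600.0         # back-extrapolated initial concentration, ug/mL (assumed)

t = np.linspace(0.0, 600.0, 601)   # sampling times, min
c = c0 * np.exp(-k * t)            # plasma concentration, ug/mL

# Trapezoidal AUC (ug*min/mL), then clearance in mL/min (dose mg -> ug)
auc = np.sum((c[:-1] + c[1:]) * 0.5 * np.diff(t))
cl = dose_mg * 1000.0 / auc
print(round(cl, 1))
```

For truly mono-exponential data the AUC approaches c0/k, so the trapezoidal estimate here lands close to the analytic clearance.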

  9. Fish fins as non-lethal surrogates for muscle tissues in freshwater food web studies using stable isotopes.

    PubMed

    Hette Tronquart, Nicolas; Mazeas, Laurent; Reuilly-Manenti, Liana; Zahm, Amandine; Belliard, Jérôme

    2012-07-30

Dorsal white muscle is the standard tissue analysed in fish trophic studies using stable isotope analyses. However, sampling white muscle often implies the sacrifice of fish. We therefore examined whether non-lethal sampling of fin tissue can substitute for muscle sampling in food web studies. Analysing muscle and fin δ15N and δ13C values of 466 European freshwater fish (14 species) with an elemental analyser coupled to an isotope ratio mass spectrometer, we compared the isotope values of the two tissues. Correlations between fin and muscle isotope ratios were examined for all fish together and separately for 12 species. We further proposed four methods of estimating muscle isotope ratios from fin values and quantified the errors made using these muscle surrogates. Despite significant differences between the isotope values of the two tissues, fin and muscle isotopic signals are strongly correlated. Muscle values estimated with raw fin isotope ratios (1st method) carry an error of ca. 1‰ for both isotopes. In comparison, species-specific (2nd method) or general (3rd method) correlations provide meaningful corrections of fin isotope ratios (errors <0.6‰), whereas relationships established for Australian tropical fish give only poor muscle estimates (errors >0.8‰). There is little chance that a global model can be created. However, the 2nd and 3rd methods of estimating muscle values from fin isotope ratios should provide an acceptable level of error for studies of European freshwater food webs. We thus recommend that future studies use fin tissue as a non-lethal surrogate for muscle. Copyright © 2012 John Wiley & Sons, Ltd.
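The raw-surrogate (1st) and general-correlation (3rd) methods can be contrasted with a small simulation: regressing muscle values on fin values and predicting from the fitted line should shrink the error relative to using raw fin values directly. The offset, slope, and scatter below are assumed for illustration, not the study's fitted parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic delta15N values: muscle is offset from fin with scatter
# (model parameters are illustrative assumptions, not from the study)
fin = rng.uniform(8.0, 16.0, 200)
muscle = 1.0 + 0.95 * fin + rng.normal(0.0, 0.4, 200)

# 1st method: use raw fin values as muscle surrogates
err_raw = np.mean(np.abs(muscle - fin))

# 3rd method ("general correlation"): regress muscle on fin, then predict
slope, intercept = np.polyfit(fin, muscle, 1)
pred = intercept + slope * fin
err_corrected = np.mean(np.abs(muscle - pred))

print(round(err_raw, 2), round(err_corrected, 2))
```

The corrected error is bounded below by the residual scatter, while the raw error also absorbs the systematic fin–muscle offset, matching the ordering reported in the abstract.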

  10. High 5-hydroxymethylfurfural concentrations are found in Malaysian honey samples stored for more than one year.

    PubMed

    Khalil, M I; Sulaiman, S A; Gan, S H

    2010-01-01

5-Hydroxymethylfurfural (HMF) content is an indicator of the purity of honey. High concentrations of HMF in honey indicate overheating, poor storage conditions and old honey. This study investigated the HMF content of nine Malaysian honey samples, as well as the correlation of HMF formation with physicochemical properties of honey. Based on the recommendation of the International Honey Commission, three methods for the determination of HMF were used: (1) high performance liquid chromatography (HPLC), (2) the White spectrophotometric method and (3) the Winkler spectrophotometric method. HPLC and White spectrophotometric results yielded similar values, whereas the Winkler method gave higher readings. The physicochemical properties of honey (pH, free acids, lactones and total acids) showed significant correlation with HMF content and may provide parameters for quick assessments of honey quality. The HMF content of fresh Malaysian honey samples stored for 3-6 months (2.80–24.87 mg/kg) was within the internationally recommended value (80 mg/kg for tropical honeys), while honey samples stored for longer periods (12-24 months) contained much higher HMF concentrations (128.19–1131.76 mg/kg). Therefore, it is recommended that honey should generally be consumed within one year, regardless of the type. Copyright (c) 2010 Elsevier Ltd. All rights reserved.

  11. Comparing two fish sampling standards over time: largely congruent results but with caveats

    USGS Publications Warehouse

    Yule, Daniel L.; Evrard, Lori M.; Cachera, Sébastien; Colon, Michel; Guillard, Jean

    2013-01-01

1. We sampled Lake Bourget (surface area = 44 km2) using CEN standard gillnet and provisional standard acoustic survey methods over 3 years (2005, 2010 and 2011) as the fish community responded to re-oligotrophication. A total of 16 species were caught in benthic gillnets and three species in pelagic gillnets. 2. Lake Bourget results were consistent with a recent study (Emmrich et al., Freshwater Biology, 57, 2012, 2436) showing strong correspondence between average biomass-per-unit-effort (BPUE) in standard benthic gillnets and average acoustic volume backscattering when smaller lakes (0.25–5.45 km2) were treated as sample units. 3. The BPUE of whitefish (Coregonus lavaretus), perch (Perca fluviatilis) and roach (Rutilus rutilus) measured by benthic gillnets all declined significantly with increasing bathymetric depth; 93% of nets set at depths >50 m caught zero fish. 4. Pelagic gillnetting indicated that catches of small fish increased significantly after 2005. 5. Both surveys showed whitefish biomass increased significantly during the study, but whitefish ≥25 cm were poorly represented in benthic gillnet catches. Contrary to the acoustic findings, the BPUE of perch and roach in benthic gillnets did not vary significantly over time. This metric is insensitive to changes in size structure, in that a high catch of small fish and a low catch of large fish in different years can yield similar average BPUE estimates. 6. We examined correlations between BPUE in benthic gillnets and acoustic measurements at fine spatial scales by averaging acoustic backscattering within buffers of varying size (250–2000 m) around individual gillnets and by averaging samples collected from lake quadrants. Correlations at fine scales were generally poor, and the quadrant correlation was significant in only 1 year. The lack of correlation can be explained, in part, by the two gears sampling different components of the fish community. Conversely, in pelagic habitat, where the fish community was simpler, BPUE in pelagic nets was strongly correlated with acoustic backscattering. 7. With respect to large lakes like Lake Bourget, we hypothesise that the congruence in average biomass measurements provided by these two survey methods occurs because the different community components respond similarly to a common driver such as lake trophic status (or possibly multiple drivers operating in synergy).

  12. Recommendations for choosing an analysis method that controls Type I error for unbalanced cluster sample designs with Gaussian outcomes.

    PubMed

    Johnson, Jacqueline L; Kreidler, Sarah M; Catellier, Diane J; Murray, David M; Muller, Keith E; Glueck, Deborah H

    2015-11-30

    We used theoretical and simulation-based approaches to study Type I error rates for one-stage and two-stage analytic methods for cluster-randomized designs. The one-stage approach uses the observed data as outcomes and accounts for within-cluster correlation using a general linear mixed model. The two-stage model uses the cluster specific means as the outcomes in a general linear univariate model. We demonstrate analytically that both one-stage and two-stage models achieve exact Type I error rates when cluster sizes are equal. With unbalanced data, an exact size α test does not exist, and Type I error inflation may occur. Via simulation, we compare the Type I error rates for four one-stage and six two-stage hypothesis testing approaches for unbalanced data. With unbalanced data, the two-stage model, weighted by the inverse of the estimated theoretical variance of the cluster means, and with variance constrained to be positive, provided the best Type I error control for studies having at least six clusters per arm. The one-stage model with Kenward-Roger degrees of freedom and unconstrained variance performed well for studies having at least 14 clusters per arm. The popular analytic method of using a one-stage model with denominator degrees of freedom appropriate for balanced data performed poorly for small sample sizes and low intracluster correlation. Because small sample sizes and low intracluster correlation are common features of cluster-randomized trials, the Kenward-Roger method is the preferred one-stage approach. Copyright © 2015 John Wiley & Sons, Ltd.
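The two-stage approach can be checked by simulation under the null hypothesis. The sketch below uses a balanced design, where the paper shows the cluster-means analysis is exact, so the empirical Type I error rate should sit near the nominal 0.05. Cluster counts, cluster size, and the intracluster correlation are assumed values for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Balanced cluster-randomized design under the null: both arms share the
# same data-generating model. Parameters below are illustrative.
n_clusters, cluster_size, icc = 8, 20, 0.05
sigma_b, sigma_w = np.sqrt(icc), np.sqrt(1 - icc)  # total variance = 1

def arm_means():
    # Stage 1: reduce each cluster to its mean
    b = rng.normal(0.0, sigma_b, n_clusters)  # random cluster effects
    y = b[:, None] + rng.normal(0.0, sigma_w, (n_clusters, cluster_size))
    return y.mean(axis=1)

def rejects():
    # Stage 2: two-sample t-test on cluster means (equal-variance pooled)
    a, c = arm_means(), arm_means()
    na = nb = n_clusters
    sp2 = ((na - 1) * a.var(ddof=1) + (nb - 1) * c.var(ddof=1)) / (na + nb - 2)
    t = (a.mean() - c.mean()) / np.sqrt(sp2 * (1 / na + 1 / nb))
    return abs(t) > 2.1448  # critical value: two-sided alpha=0.05, df=14

rate = np.mean([rejects() for _ in range(4000)])
print(rate)  # empirical Type I error, expected near 0.05
```

With equal cluster sizes the cluster means are i.i.d. normal within each arm, so the pooled t-test is exact and the simulated rejection rate hovers at the nominal level; the paper's inflation concerns arise once cluster sizes become unequal.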

  13. Improvements to sample processing and measurement to enable more widespread environmental application of tritium.

    PubMed

    Moran, James; Alexander, Thomas; Aalseth, Craig; Back, Henning; Mace, Emily; Overman, Cory; Seifert, Allen; Freeburg, Wilcox

    2017-08-01

Previous measurements have demonstrated the wealth of information that tritium (T) can provide on environmentally relevant processes. We present modifications to sample preparation approaches that enable T measurement by proportional counting on small sample sizes equivalent to 120 mg of water and demonstrate the accuracy of these methods on a suite of standardized water samples. We identify a current quantification limit of 92.2 TU which, combined with our small sample sizes, correlates to as little as 0.00133 Bq of total T activity. This enhanced method should provide the analytical flexibility needed to address persistent knowledge gaps in our understanding of both natural and artificial T behavior in the environment. Copyright © 2017. Published by Elsevier Ltd.

  14. Improvements to sample processing and measurement to enable more widespread environmental application of tritium

    DOE PAGES

    Moran, James; Alexander, Thomas; Aalseth, Craig; ...

    2017-01-26

Previous measurements have demonstrated the wealth of information that tritium (T) can provide on environmentally relevant processes. Here, we present modifications to sample preparation approaches that enable T measurement by proportional counting on small sample sizes equivalent to 120 mg of water and demonstrate the accuracy of these methods on a suite of standardized water samples. We also identify a current quantification limit of 92.2 TU which, combined with our small sample sizes, correlates to as little as 0.00133 Bq of total T activity. Furthermore, this enhanced method should provide the analytical flexibility needed to address persistent knowledge gaps in our understanding of both natural and artificial T behavior in the environment.

  15. Simulating and assessing boson sampling experiments with phase-space representations

    NASA Astrophysics Data System (ADS)

    Opanchuk, Bogdan; Rosales-Zárate, Laura; Reid, Margaret D.; Drummond, Peter D.

    2018-04-01

The search for new, application-specific quantum computers designed to outperform any classical computer is driven by the ending of Moore's law and the potential for quantum advantage. Photonic networks are promising examples, with experimental demonstrations and the potential to solve problems believed classically impossible. This introduces a challenge: how does one design or understand such photonic networks? One must be able to calculate observables using general methods capable of treating arbitrary inputs, dissipation, and noise. We develop complex phase-space software for simulating these photonic networks, and apply it to boson sampling experiments. Our techniques give sampling errors orders of magnitude lower than experimental correlation measurements for the same number of samples. We show that these techniques remove systematic errors in previous algorithms for estimating correlations, with large improvements in errors in some cases. In addition, we obtain a scalable channel-combination strategy for assessment of boson sampling devices.

  16. A novel PFIB sample preparation protocol for correlative 3D X-ray CNT and FIB-TOF-SIMS tomography.

    PubMed

    Priebe, Agnieszka; Audoit, Guillaume; Barnes, Jean-Paul

    2017-02-01

We present a novel sample preparation method that allows correlative 3D X-ray Computed Nano-Tomography (CNT) and Focused Ion Beam Time-Of-Flight Secondary Ion Mass Spectrometry (FIB-TOF-SIMS) tomography to be performed on the same sample. In addition, our approach ensures that samples stay structurally and chemically unmodified between subsequent experiments. The main principle is based on modifying the topography of the X-ray CNT experimental setup before FIB-TOF-SIMS measurements by incorporating a square washer around the sample. This affects the distribution of extraction field lines and therefore influences the trajectories of secondary ions, which are now guided more efficiently towards the detector. As a result, secondary ion detection is significantly improved and higher, i.e. statistically better, signals are obtained. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. Occurrence of Legionella in wastewater treatment plants linked to wastewater characteristics.

    PubMed

    Caicedo, C; Beutel, S; Scheper, T; Rosenwinkel, K H; Nogueira, R

    2016-08-01

In recent years, the occurrence of Legionella in wastewater treatment plants (WWTP) has often been reported. However, until now there is limited knowledge about the factors that promote Legionella's growth in such systems. The aim of this study was to investigate the chemical wastewater parameters that might be correlated to the concentration of Legionella spp. in WWTP receiving industrial effluents. For this purpose, samples were collected at different processes in three WWTP. In 100% of the samples taken from the activated sludge tanks, Legionella spp. were detected at varying concentrations (4.8 to 5.6 log GU/mL) by the quantitative real-time polymerase chain reaction method, but not by the culture method. Statistical analysis with various parameters yielded positive correlations of Legionella spp. concentration with particulate chemical oxygen demand, Kjeldahl nitrogen and protein concentration. Amino acids were quantified in wastewater and activated sludge samples at concentrations that may not support the growth of Legionella, suggesting that in activated sludge tanks this bacterium multiplied in protozoan hosts.

  18. Measurement of rivaroxaban and apixaban in serum samples of patients

    PubMed Central

    Harenberg, Job; Krämer, Sandra; Du, Shanshan; Zolfaghari, Shabnam; Schulze, Astrid; Krämer, Roland; Weiss, Christel; Wehling, Martin; Lip, Gregory Y H

    2014-01-01

Background The determination of rivaroxaban and apixaban from serum samples of patients may be beneficial in specific clinical situations when additional blood sampling for plasma, and thus determination of factor Xa activity, is not feasible or results are not plausible. Materials and methods The primary aim of this study was to compare the concentrations of rivaroxaban and apixaban in serum with those measured in plasma. Secondary aims were to assess the performance of three different chromogenic methods and the concentrations in patients on treatment with rivaroxaban 10 mg od (n = 124) or 20 mg od (n = 94) or apixaban 5 mg bid (n = 52), measured at different times. Results Concentrations of rivaroxaban and apixaban in serum were about 20–25% higher than in plasma samples, with a high correlation (r = 0.79775–0.94662) across all assays (all P < 0.0001). The intraclass correlation coefficients were about 0.90 for rivaroxaban and 0.55 for apixaban. Mean rivaroxaban concentrations were higher at 2 and 3 h than at 1 and 12 h after administration, measured from both plasma and serum samples (all P-values < 0.05), and did not differ between 1 and 12 h (plasma and serum). Conclusions The results indicate that rivaroxaban and apixaban concentrations can be determined specifically from serum samples. PMID:24931429

  19. The Correlation between Organizational Commitment and Occupational Burnout among the Physical Education Teachers: The Mediating Role of Self-Efficacy

    ERIC Educational Resources Information Center

    Yildirim, Irfan

    2015-01-01

    The aim of the current study was to examine the correlation between organizational commitment and occupational burnout among the physical education teachers and to determine the mediating role of their self-efficacy perceptions in this relational status. This was a relational study and conducted with cross-sectional method. Sample group was…

  20. Childhood Physical and Sexual Abuse: Prevalence and Correlates among Adolescents Living in Rural Taiwan

    ERIC Educational Resources Information Center

    Yen, Cheng-Fang; Yang, Mei-Sang; Yang, Ming-Jen; Su, Yi-Ching; Wang, Mei-Hua; Lan, Chu-Mei

    2008-01-01

    Objective: The aims of this cross-sectional survey study were to examine the prevalence and correlates of childhood physical and sexual abuse in adolescents living in the rural areas of Taiwan. Method: A sample of indigenous (n = 756) and non-indigenous (n = 928) adolescents was randomly selected from junior high schools in the rural areas of…

  1. Correlates to Human Papillomavirus Vaccination Status and Willingness to Vaccinate in Low-Income Philadelphia High School Students

    ERIC Educational Resources Information Center

    Bass, Sarah B.; Leader, Amy; Shwarz, Michelle; Greener, Judith; Patterson, Freda

    2015-01-01

    Background: Little is known about the correlates of human papillomavirus (HPV) vaccination or willingness to be vaccinated in urban, minority adolescents. Methods: Using responses to the 2013 Youth Risk Behavior Survey in Philadelphia, a random sample of high schools provided weighted data representing 20,941 9th to 12th graders. Stratified by…

  2. Principal Selection: A National Study of Selection Criteria and Procedures

    ERIC Educational Resources Information Center

    Palmer, Brandon

    2017-01-01

    Despite empirical evidence correlating the role of the principal with student achievement, researchers have seldom scrutinized principal selection methods over the past 60 years. This mixed methods study investigated the processes by which school principals are selected. A national sample of top-level school district administrators was used to…

  3. Principal Selection: A National Study of Selection Criteria and Procedures

    ERIC Educational Resources Information Center

    Palmer, Brandon

    2016-01-01

    Despite empirical evidence correlating the role of the principal with student achievement, researchers have seldom scrutinized principal selection methods over the past 60 years. This mixed methods study investigated the processes by which school principals are selected. A national sample of top-level school district administrators was used to…

  4. The High School & Beyond Data Set: Academic Self-Concept Measures.

    ERIC Educational Resources Information Center

    Strein, William

    A series of confirmatory factor analyses using both LISREL VI (maximum likelihood method) and LISCOMP (weighted least squares method using covariance matrix based on polychoric correlations) and including cross-validation on independent samples were applied to items from the High School and Beyond data set to explore the measurement…

  5. Development of an Analytical Method for the Determination of Amoxicillin in Commercial Drugs and Wastewater Samples, and Assessing its Stability in Simulated Gastric Digestion.

    PubMed

    Unutkan, Tugçe; Bakirdere, Sezgin; Keyf, Seyfullah

    2018-01-01

A highly sensitive analytical HPLC-UV method was developed for the determination of amoxicillin in drugs and wastewater samples at a single wavelength (230 nm). In order to substantially predict the in vivo behavior of amoxicillin, drug samples were subjected to simulated gastric conditions. The calibration plot of the method was linear from 0.050 to 500 mg L⁻¹ with a correlation coefficient of 0.9999. The limit of detection and limit of quantitation were found to be 16 and 54 μg L⁻¹, respectively. The percentage recovery of amoxicillin in wastewater was found to be 97.0 ± 1.6%. The method was successfully applied for the qualitative and quantitative determination of amoxicillin in drug samples including tablets and suspensions. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
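Detection and quantitation limits of this kind are commonly derived from the calibration curve as 3.3σ/S and 10σ/S, where σ is the residual standard deviation and S the slope. A sketch with synthetic calibration data; the standards, slope, and noise level are assumptions for illustration, not the study's figures.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic HPLC-UV calibration: peak area vs concentration (mg/L).
# Standard levels, sensitivity, and noise are illustrative assumptions.
conc = np.array([0.05, 0.5, 5.0, 50.0, 250.0, 500.0])   # mg/L
area = 12.0 * conc + 3.0 + rng.normal(0.0, 1.5, conc.size)

# Least-squares calibration line (np.polyfit returns [slope, intercept])
slope, intercept = np.polyfit(conc, area, 1)
resid = area - (slope * conc + intercept)
sigma = resid.std(ddof=2)           # residual standard deviation

lod = 3.3 * sigma / slope           # limit of detection, mg/L
loq = 10.0 * sigma / slope          # limit of quantitation, mg/L
print(round(lod, 3), round(loq, 3))
```

This is the ICH-style calibration-curve approach; in practice σ may instead be taken from blank replicates or the intercept's standard error.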

  6. [The effect of an exercise program to strengthen pelvic floor muscles in multiparous women].

    PubMed

    Assis, Thaís Rocha; Sá, Ana Claudia Antonio Maranhão; Amaral, Waldemar Naves do; Batista, Elicéia Marcia; Formiga, Cibelle Kayenne Martins Roberto; Conde, Délio Marques

    2013-01-01

To investigate the effect of an individualized and supervised exercise program for the pelvic floor muscles (PFM) in the postpartum period of multiparous women, and to verify the correlation between two methods used to assess PFM strength. An open clinical trial was performed with puerperal, multiparous women aged 18 to 35 years. The sample consisted of 23 puerperal women divided into two groups: Intervention Group (IG, n=11) and Control Group (CG, n=12). The puerperal women in IG participated in an eight-week PFM exercise program, twice a week. The puerperal women in CG did not receive any recommendations regarding exercise. PFM strength was assessed using digital vaginal palpation and a perineometer. The statistical analysis was performed using the following tests: Fisher's exact, χ², Student's t, Kolmogorov-Smirnov for two samples, and Pearson's correlation coefficient. Significance was defined as p<0.05. The participants' mean age was 24 ± 4.5 years in IG and 25.3 ± 4 years in CG (p=0.4). After the exercise program, a significant difference was found between the groups in both modalities of muscle strength assessment (p<0.001). The two muscle strength assessment methods showed a significant correlation in both assessments (first assessment: r=0.889, p<0.001; second assessment: r=0.925, p<0.001). The exercise program promoted a significant improvement in PFM strength. Good correlation was observed between digital vaginal palpation and the perineometer, which indicates that vaginal palpation can be used in clinical practice, since it is an inexpensive method that demonstrated significant correlation with an objective method, i.e. the perineometer.

  7. Manure sampling procedures and nutrient estimation by the hydrometer method for gestation pigs.

    PubMed

    Zhu, Jun; Ndegwa, Pius M; Zhang, Zhijian

    2004-05-01

Three manure agitation procedures were examined in this study (vertical mixing, horizontal mixing, and no mixing) to determine the efficacy of producing a representative manure sample. The total solids content of manure from gestation pigs was found to be well correlated with the total nitrogen (TN) and total phosphorus (TP) concentrations in the manure, with highly significant correlation coefficients of 0.988 and 0.994, respectively. Linear correlations were observed between the TN and TP contents and the manure specific gravity (correlation coefficients: 0.991 and 0.987, respectively). Therefore, it may be inferred that the nutrients in pig manure can be estimated with reasonable accuracy by measuring the liquid manure specific gravity. A rapid testing method for manure nutrient contents (TN and TP) using a soil hydrometer was also evaluated. The results showed that the estimating error increased from ±10% to ±30% with decreasing TN (from 1000 to 100 ppm) and TP (from 700 to 50 ppm) concentrations in the manure. Data also showed that the hydrometer readings had to be taken within 10 s after mixing to avoid reading drift in specific gravity due to the settling of manure solids.

  8. Identifying microRNA/mRNA dysregulations in ovarian cancer

    PubMed Central

    2012-01-01

Background MicroRNAs are a class of noncoding RNA molecules that co-regulate the expression of multiple genes via mRNA transcript degradation or translation inhibition. Since they often target entire pathways, they may be better drug targets than genes or proteins. MicroRNAs are known to be dysregulated in many tumours and associated with aggressive or poor-prognosis phenotypes. Since they regulate mRNA in a tissue-specific manner, their functional mRNA targets are poorly understood. In previous work, we developed a method to identify direct mRNA targets of microRNA using an anti-correlation signature in patient-matched microRNA/mRNA expression data. This method, applied to clear cell Renal Cell Carcinoma (ccRCC), revealed many new regulatory pathways compromised in ccRCC. In the present paper, we apply this method to identify dysregulated microRNA/mRNA mechanisms in ovarian cancer using data from The Cancer Genome Atlas (TCGA). Methods TCGA microarray data were normalized, and samples whose class labels (tumour or normal) were ambiguous with respect to consensus ensemble K-means clustering were removed. Significantly anti-correlated and correlated genes/microRNAs differentially expressed between tumour and normal samples were identified. TargetScan was used to identify gene targets of microRNA. Results We identified novel microRNA/mRNA mechanisms in ovarian cancer. For example, the expression level of RAD51AP1 was found to be strongly anti-correlated with the expression of hsa-miR-140-3p, which was significantly down-regulated in the tumour samples. The anti-correlation signature was present separately in the tumour and normal samples, suggesting a direct causal dysregulation of RAD51AP1 by hsa-miR-140-3p in the ovary. Other pairs of potential biological relevance include hsa-miR-145/E2F3, hsa-miR-139-5p/TOP2A, and hsa-miR-133a/GCLC. We also identified sets of positively correlated microRNA/mRNA pairs that most likely result from indirect regulatory mechanisms. Conclusions Our findings identify novel microRNA/mRNA relationships that can be verified experimentally. We identify both generic microRNA/mRNA regulation mechanisms in the ovary and specific microRNA/mRNA controls that are turned on or off in ovarian tumours. Our results suggest that the disease process uses specific mechanisms that may be significant for their utility as early-detection biomarkers or in the development of microRNA therapies for treating ovarian cancers. The positively correlated microRNA/mRNA pairs suggest the existence of novel regulatory mechanisms that proceed via intermediate states (indirect regulation) in ovarian tumorigenesis. PMID:22452920

  9. Neuroanatomical correlates of time perspective: A voxel-based morphometry study.

    PubMed

    Chen, Zhiyi; Guo, Yiqun; Feng, Tingyong

    2018-02-26

Previous studies indicated that time perspective can affect many behaviors, such as decisions, risk taking, substance abuse and health behaviors. However, very little is known about the neural substrates of time perspective (TP). To address this question, we characterized different dimensions of TP (Past, Present, and Future) using the standardized Zimbardo Time Perspective Inventory (ZTPI), and quantified gray matter volume (GMV) using the voxel-based morphometry (VBM) method across two independent samples. Our whole-brain analysis (sample 1, N=150) revealed that Past-Negative TP was positively correlated with the GMV of a cluster in LPFC, whereas Past-Positive TP was negatively correlated with GMV in OFC, and Future TP was negatively correlated with GMV in mPFC. Moreover, the two present scales (Present-Hedonistic and Present-Fatalistic TPs) were positively correlated with the GMV of regions in MTG and precuneus, respectively. We further examined the reliability of these correlations between multidimensional TPs and neuroanatomical structures in another independent sample (sample 2, N=58). Results verified our findings that GMV in LPFC could predict Past-Negative TP, GMV in OFC could predict Past-Positive TP, GMV in MTG could predict Present-Hedonistic TP, GMV in precuneus could predict Present-Fatalistic TP, and GMV in mPFC could predict Future TP. Thus, our findings suggest the existence of a selective neural basis underlying TPs and provide stable biomarkers for multidimensional TPs. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Comparison of Fecal Collection Methods for Microbiota Studies in Bangladesh

    PubMed Central

    Chen, Jun; Kibriya, Muhammad G.; Chen, Yu; Islam, Tariqul; Eunes, Mahbubul; Ahmed, Alauddin; Naher, Jabun; Rahman, Anisur; Amir, Amnon; Shi, Jianxin; Abnet, Christian C.; Nelson, Heidi; Knight, Rob; Chia, Nicholas; Ahsan, Habibul; Sinha, Rashmi

    2017-01-01

    ABSTRACT To our knowledge, fecal microbiota collection methods have not been evaluated in low- and middle-income countries. Therefore, we evaluated five different fecal sample collection methods for technical reproducibility, stability, and accuracy within the Health Effects of Arsenic Longitudinal Study (HEALS) in Bangladesh. Fifty participants from the HEALS provided fecal samples in the clinic which were aliquoted into no solution, 95% ethanol, RNAlater, postdevelopment fecal occult blood test (FOBT) cards, and fecal immunochemical test (FIT) tubes. Half of the aliquots were frozen immediately at −80°C (day 0) and the remaining samples were left at ambient temperature for 96 h and then frozen (day 4). Intraclass correlation coefficients (ICC) were calculated for the relative abundances of the top three phyla, for two alpha diversity measures, and for four beta diversity measures. The duplicate samples had relatively high ICCs for technical reproducibility at day 0 and day 4 (range, 0.79 to 0.99). The FOBT card and samples preserved in RNAlater and 95% ethanol had the highest ICCs for stability over 4 days. The FIT tube had lower stability measures overall. In comparison to the “gold standard” method using immediately frozen fecal samples with no solution, the ICCs for many of the microbial metrics were low, but the rank order appeared to be preserved as seen by the Spearman correlation. The FOBT cards, 95% ethanol, and RNAlater were effective fecal preservatives. These fecal collection methods are optimal for future cohort studies, particularly in low- and middle-income countries. IMPORTANCE The collection of fecal samples in prospective cohort studies is essential to provide the opportunity to study the effect of the human microbiota on numerous health conditions. However, these collection methods have not been adequately tested in low- and middle-income countries. 
We present estimates of technical reproducibility, stability at ambient temperature for 4 days, and accuracy comparing a “gold standard” for fecal samples in no solution, 95% ethanol, RNAlater, postdevelopment fecal occult blood test cards, and fecal immunochemical test tubes in a study conducted in Bangladesh. Fecal occult blood test cards and fecal samples stored in 95% ethanol or RNAlater adequately preserve fecal samples in this setting. Therefore, new studies in low- and middle-income countries should include collection of fecal samples using fecal occult blood test cards, 95% ethanol, or RNAlater for prospective cohort studies. PMID:28258145
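As a minimal sketch of the reproducibility metric used above: a one-way random-effects intraclass correlation coefficient, ICC(1), computed from duplicate aliquots. The duplicate values below are invented for illustration; the study's actual ICCs were computed from its own microbiome metrics (phylum abundances, alpha and beta diversity).

```python
# Hedged sketch: ICC(1) (one-way random effects) for n subjects each
# measured k = 2 times, as a stand-in for the technical-reproducibility
# ICCs in the abstract. All numbers below are hypothetical.

def icc_oneway(pairs):
    """ICC(1) from a list of (replicate1, replicate2) tuples."""
    k = 2
    n = len(pairs)
    grand = sum(a + b for a, b in pairs) / (n * k)
    # between-subject and within-subject mean squares
    ms_between = k * sum(((a + b) / k - grand) ** 2 for a, b in pairs) / (n - 1)
    ms_within = sum((a - (a + b) / 2) ** 2 + (b - (a + b) / 2) ** 2
                    for a, b in pairs) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# duplicate relative abundances for one phylum (hypothetical)
dups = [(0.52, 0.50), (0.31, 0.33), (0.44, 0.45), (0.60, 0.58), (0.25, 0.27)]
icc = icc_oneway(dups)
```

High agreement between duplicates drives the ICC toward 1, matching the 0.79 to 0.99 range reported for day-0 and day-4 duplicates.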

  11. Determination of Dornic acidity as a method to select donor milk in a milk bank.

    PubMed

    Vázquez-Román, Sara; Garcia-Lara, Nadia Raquel; Escuder-Vieco, Diana; Chaves-Sánchez, Fernando; De la Cruz-Bertolo, Javier; Pallas-Alonso, Carmen Rosa

    2013-02-01

    Dornic acidity may be an indirect measure of a milk's bacterial content and quality. There are no uniform milk acceptance criteria among different human milk banks. The main aim of this study is to report the correlation between Dornic acidity and bacterial growth in donor milk, in order to validate the Dornic acidity value as an adequate method to select milk prior to its pasteurization. From 105 pools, 4-mL samples of human milk were collected. Dornic acidity was measured, and cultures on blood agar and MacConkey agar were performed. Based on Dornic acidity degrees, we classified milk into three quality categories: top quality (acidity <4°D), intermediate (acidity between 4°D and 7°D), and milk unsuitable for consumption (acidity ≥8°D). Spearman's correlation coefficient was used for statistical analysis. Seventy percent of the samples had Dornic acidity under 4°D, and 88% had a value under 8°D. A weak positive correlation was observed between bacterial growth in milk and Dornic acidity. The overall discrimination performance of Dornic acidity was higher for predicting growth of Gram-negative organisms. In milk with Dornic acidity of ≥4°D, the measurement has a sensitivity of 100% for detecting all samples with Gram-negative bacterial growth of over 10^5 colony-forming units/mL. The correlation between Dornic acidity and bacterial growth in donor milk is weak but positive. The measurement of Dornic acidity could be considered a simple and economical method to select milk to pasteurize in a human milk bank based on quality and safety criteria.
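The statistic behind the reported weak positive correlation is Spearman's rank correlation. A self-contained sketch with tie-aware ranking follows; the acidity and bacterial-count values are invented, not the study's data.

```python
# Hedged sketch: Spearman rank correlation between Dornic acidity (°D)
# and bacterial counts. All data below are hypothetical.

def ranks(xs):
    """Ranks 1..n, averaging ranks over ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of positions i..j, 1-based
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

acidity = [2, 3, 3, 4, 5, 6, 8, 9]                   # °D (hypothetical)
log_cfu = [3.1, 2.8, 3.5, 3.9, 3.6, 4.2, 4.8, 4.5]   # log10 CFU/mL
rho = spearman(acidity, log_cfu)
```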

  12. Woodstove smoke and CO emissions: comparison of reference methods with the VIP sampler.

    PubMed

    Jaasma, D R; Champion, M C; Shelton, J W

    1990-06-01

    A new field sampler has been developed for measuring the particulate matter (PM) and carbon monoxide emissions of woodburning stoves. Particulate matter is determined by carbon balance and the workup of a sample train which is similar to a room-temperature EPA Method 5G train. A steel tank, initially evacuated, serves as the motive force for sampling and also accumulates a gas sample for post-test analysis of time-averaged stack CO and CO2 concentrations. Workup procedures can be completed within 72 hours of sampler retrieval. The system has been compared to reference methods in two laboratory test series involving six different woodburning appliances and two independent laboratories. The correlation of field sampler emission rates and reference method rates is strong.

  13. Woodstove smoke and CO emissions: Comparison of reference methods with the VIP sampler

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jaasma, D.R.; Champion, M.C.; Shelton, J.W.

    1990-06-01

    A new field sampler has been developed for measuring the particulate matter (PM) and carbon monoxide emissions of woodburning stoves. Particulate matter is determined by carbon balance and the workup of a sample train which is similar to a room-temperature EPA Method 5G train. A steel tank, initially evacuated, serves as the motive force for sampling and also accumulates a gas sample for post-test analysis of time-averaged stack CO and CO2 concentrations. Workup procedures can be completed within 72 hours of sampler retrieval. The system has been compared to reference methods in two laboratory test series involving six different woodburning appliances and two independent laboratories. The correlation of field sampler emission rates and reference method rates is strong.

  14. Accurate mask-based spatially regularized correlation filter for visual tracking

    NASA Astrophysics Data System (ADS)

    Gu, Xiaodong; Xu, Xinping

    2017-01-01

    Recently, discriminative correlation filter (DCF)-based trackers have achieved extremely successful results in many competitions and benchmarks. These methods exploit a periodic assumption on the training samples to learn a classifier efficiently. However, this assumption produces unwanted boundary effects, which severely degrade tracking performance. Correlation filters with limited boundaries and spatially regularized DCFs were proposed to reduce boundary effects, but those methods used a fixed mask or a predesigned weight function, respectively, which is unsuitable for large appearance variation. We propose an accurate mask-based spatially regularized correlation filter for visual tracking. Our augmented objective can reduce the boundary effect even under large appearance variation. In our algorithm, the masking matrix is converted into a regularization function that acts on the correlation filter in the frequency domain, which makes the algorithm converge quickly. Our online tracking algorithm performs favorably against state-of-the-art trackers on the OTB-2015 benchmark in terms of efficiency, accuracy, and robustness.

  15. Electrical and magnetic properties of rock and soil

    USGS Publications Warehouse

    Scott, J.H.

    1983-01-01

    Field and laboratory measurements have been made to determine the electrical conductivity, dielectric constant, and magnetic permeability of rock and soil in areas of interest in studies of electromagnetic pulse propagation. Conductivity is determined by making field measurements of apparent resistivity at very low frequencies (0-20 cps) and interpreting the true resistivity of layers at various depths by curve-matching methods. Interpreted resistivity values are converted to corresponding conductivity values which are assumed to be applicable at 10^2 cps, an assumption considered valid because the conductivity of rock and soil is nearly constant at frequencies below 10^2 cps. Conductivity is estimated at higher frequencies (up to 10^6 cps) by using statistical correlations of three parameters obtained from laboratory measurements of rock and soil samples: conductivity at 10^2 cps, frequency, and conductivity measured over the range 10^2 to 10^6 cps. Conductivity may also be estimated in this frequency range by using field measurements of water content and correlations of laboratory sample measurements of the three parameters: water content, frequency, and conductivity measured over the range 10^2 to 10^6 cps. This method is less accurate because nonrandom variation of ion concentration in natural pore water introduces error. Dielectric constant is estimated in a similar manner from field-derived conductivity values applicable at 10^2 cps and statistical correlations of three parameters obtained from laboratory measurements of samples: conductivity measured at 10^2 cps, frequency, and dielectric constant measured over the frequency range 10^2 to 10^6 cps. 
    Dielectric constant may also be estimated from field measurements of water content and correlations of laboratory sample measurements of the three parameters: water content, frequency, and dielectric constant measured from 10^2 to 10^6 cps, but again, this method is less accurate because of variation of the ion concentration of pore water. Special laboratory procedures are used to measure the conductivity and dielectric constant of rock and soil samples. Electrode polarization errors are minimized by using an electrode system that is electrochemically reversible with ions in pore water.

  16. Measuring solids concentration in stormwater runoff: comparison of analytical methods.

    PubMed

    Clark, Shirley E; Siu, Christina Y S

    2008-01-15

    Stormwater suspended solids typically are quantified using one of two methods: aliquot/subsample analysis (total suspended solids [TSS]) or whole-sample analysis (suspended solids concentration [SSC]). Interproject comparisons are difficult because of inconsistencies in the methods and in their application. To address this concern, the suspended solids content has been measured using both methodologies in many current projects, but the question remains about how to compare these values with historical water-quality data where the analytical methodology is unknown. This research was undertaken to determine the effect of analytical methodology on the relationship between these two methods of determination of the suspended solids concentration, including the effect of aliquot selection/collection method and of particle size distribution (PSD). The results showed that SSC was best able to represent the known sample concentration and that the results were independent of the sample's PSD. Correlations between the results and the known sample concentration could be established for TSS samples, but they were highly dependent on the sample's PSD and on the aliquot collection technique. These results emphasize the need to report not only the analytical method but also the particle size information on the solids in stormwater runoff.

  17. Phase quantification by X-ray photoemission valence band analysis applied to mixed phase TiO2 powders

    NASA Astrophysics Data System (ADS)

    Breeson, Andrew C.; Sankar, Gopinathan; Goh, Gregory K. L.; Palgrave, Robert G.

    2017-11-01

    A method of quantitative phase analysis using valence band X-ray photoelectron spectra is presented and applied to the analysis of TiO2 anatase-rutile mixtures. The valence band spectra of pure TiO2 polymorphs were measured, and these spectral shapes used to fit valence band spectra from mixed phase samples. Given the surface sensitive nature of the technique, this yields a surface phase fraction. Mixed phase samples were prepared from high and low surface area anatase and rutile powders. In the samples studied here, the surface phase fraction of anatase was found to be linearly correlated with photocatalytic activity of the mixed phase samples, even for samples with very different anatase and rutile surface areas. We apply this method to determine the surface phase fraction of P25 powder. This method may be applied to other systems where a surface phase fraction is an important characteristic.
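The fitting step above can be illustrated with a least-squares sketch: model a mixed-phase valence band spectrum as a convex combination f·anatase + (1−f)·rutile, where f is the surface anatase phase fraction. This is a simplified stand-in for the paper's spectral fitting, and the five-point "spectra" are invented vectors on a common binding-energy grid.

```python
# Hedged sketch: closed-form least-squares phase fraction for a
# two-component mixture, clipped to [0, 1]. Spectra are illustrative.

def phase_fraction(mixed, anatase, rutile):
    """f minimizing ||mixed - (f*anatase + (1-f)*rutile)||^2."""
    d = [a - r for a, r in zip(anatase, rutile)]
    num = sum((m - r) * di for m, r, di in zip(mixed, rutile, d))
    den = sum(di * di for di in d)
    return min(1.0, max(0.0, num / den))

# illustrative reference "spectra" (intensity vs. binding energy)
anatase = [0.1, 0.8, 1.0, 0.4, 0.1]
rutile = [0.2, 0.5, 0.7, 0.9, 0.3]
mixed = [0.3 * a + 0.7 * r for a, r in zip(anatase, rutile)]
f = phase_fraction(mixed, anatase, rutile)
```

Because XPS is surface sensitive, f recovered this way is a surface phase fraction, which is the quantity the paper correlates with photocatalytic activity.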

  18. Headspace solid-phase microextraction (HS-SPME) and liquid-liquid extraction (LLE): comparison of the performance in classification of ecstasy tablets. Part 2.

    PubMed

    Bonadio, Federica; Margot, Pierre; Delémont, Olivier; Esseiva, Pierre

    2008-11-20

    Headspace solid-phase microextraction (HS-SPME) is assessed as an alternative to the liquid-liquid extraction (LLE) currently used for 3,4-methylenedioxymethamphetamine (MDMA) profiling. Both methods were compared by evaluating their performance in discriminating and classifying samples. For this purpose, 62 different seizures were analysed using both extraction techniques followed by gas chromatography-mass spectrometry (GC-MS). A previously validated method provided data for HS-SPME, whereas LLE data were collected applying a harmonized methodology developed and used in the European project CHAMP. After suitable pre-treatment, similarities between sample pairs were studied using the Pearson correlation. Both methods make it possible to distinguish samples coming from the same pre-tabletting batch from samples coming from different pre-tabletting batches. This finding supports the use of HS-SPME as an effective alternative to LLE, with additional advantages in sample preparation and a solvent-free process.

  19. Variations in the detection of ZAP-70 in chronic lymphocytic leukemia: Comparison with IgV(H) mutation analysis.

    PubMed

    Sheikholeslami, M R; Jilani, I; Keating, M; Uyeji, J; Chen, K; Kantarjian, H; O'Brien, S; Giles, F; Albitar, M

    2006-07-15

    Lack of immunoglobulin heavy chain gene (IgV(H)) mutation in patients with chronic lymphocytic leukemia (CLL) is associated with rapid disease progression and shorter survival. The zeta-chain (T-cell receptor) associated protein kinase 70 kDa (ZAP-70) has been reported to be a surrogate marker for IgV(H) mutation status, and its expression in leukemic cells correlates with unmutated IgV(H). However, ZAP-70 detection by flow cytometry varies significantly depending on the antibodies used, the method of performing the assay, and the condition of the cells in the specimen. The clinical value of ZAP-70 testing when samples are shipped under poorly controlled conditions is not known. Furthermore, testing in a research environment may differ from testing in a routine clinical laboratory. We validated an assay for ZAP-70 by comparing results with clinical outcome and IgV(H) mutation status. Using stored samples, we show significant correlation between ZAP-70 expression and clinical outcome, as well as IgV(H) mutation, at a cut-off point of 15%. While positive samples (>15% positivity) remained positive when kept in the laboratory environment for 48 h after initial testing, results obtained from samples from CLL patients tested after shipping at room temperature for routine testing showed no correlation with IgV(H) mutation status when the 15% cut-off was used. In these samples, a cut-off point of 10% correlated with IgV(H) mutation (P = 0.0001). These data suggest that although ZAP-70 positivity correlates with IgV(H) mutation status and survival, variations in sample handling and preparation may influence results. We show that IgV(H) mutation results, unlike ZAP-70, remained correlated with CD38 expression and beta-2 microglobulin in shipped samples, and that ZAP-70 testing should not be used as the sole criterion for stratifying patients for therapy. (c) 2006 International Society for Analytical Cytology.

  20. Multiple data sources improve DNA-based mark-recapture population estimates of grizzly bears.

    PubMed

    Boulanger, John; Kendall, Katherine C; Stetz, Jeffrey B; Roon, David A; Waits, Lisette P; Paetkau, David

    2008-04-01

    A fundamental challenge to estimating population size with mark-recapture methods is heterogeneous capture probabilities and subsequent bias of population estimates. Confronting this problem usually requires substantial sampling effort that can be difficult to achieve for some species, such as carnivores. We developed a methodology that uses two data sources to deal with heterogeneity and applied this to DNA mark-recapture data from grizzly bears (Ursus arctos). We improved population estimates by incorporating additional DNA "captures" of grizzly bears obtained by collecting hair from unbaited bear rub trees concurrently with baited, grid-based, hair snag sampling. We consider a Lincoln-Petersen estimator with hair snag captures as the initial session and rub tree captures as the recapture session and develop an estimator in program MARK that treats hair snag and rub tree samples as successive sessions. Using empirical data from a large-scale project in the greater Glacier National Park, Montana, USA, area and simulation modeling we evaluate these methods and compare the results to hair-snag-only estimates. Empirical results indicate that, compared with hair-snag-only data, the joint hair-snag-rub-tree methods produce similar but more precise estimates if capture and recapture rates are reasonably high for both methods. Simulation results suggest that estimators are potentially affected by correlation of capture probabilities between sample types in the presence of heterogeneity. Overall, closed population Huggins-Pledger estimators showed the highest precision and were most robust to sparse data, heterogeneity, and capture probability correlation among sampling types. Results also indicate that these estimators can be used when a segment of the population has zero capture probability for one of the methods. We propose that this general methodology may be useful for other species in which mark-recapture data are available from multiple sources.
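The two-session estimator named above can be sketched in its simplest closed form; the counts below are invented, and the paper's actual analysis used program MARK with Huggins-Pledger models, so this is only the conceptual starting point. The Chapman bias-corrected variant is our addition, not named in the abstract.

```python
# Hedged sketch: Lincoln-Petersen abundance estimation with hair-snag
# captures as session 1 and rub-tree captures as session 2.
# All counts are hypothetical.

def lincoln_petersen(n1, n2, m2):
    """Naive estimator: n1 bears identified at hair snags, n2 at rub
    trees, m2 identified by both methods."""
    return n1 * n2 / m2

def chapman(n1, n2, m2):
    """Bias-corrected variant (an assumption of ours, not the paper's
    named estimator); behaves better for small m2."""
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

n_hat = lincoln_petersen(100, 80, 20)
n_hat_chapman = chapman(100, 80, 20)
```

Heterogeneous capture probabilities, and correlation of capture probability between the two methods, bias such estimators; that is precisely the issue the paper's simulations probe.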

  1. Determination of Efavirenz in Human Dried Blood Spots by Reversed-Phase High Performance Liquid Chromatography with UV Detection

    PubMed Central

    Hoffman, Justin T; Rossi, Steven S; Espina-Quinto, Rowena; Letendre, Scott; Capparelli, Edmund V

    2013-01-01

    Background Previously published methods for determination of efavirenz (EFV) in human dried blood spots (DBS) employ costly and complex liquid chromatography/mass spectrometry. We describe the validation and evaluation of a simple and inexpensive high-performance liquid chromatography (HPLC) method for EFV quantification in human DBS and dried plasma spots (DPS), using ultraviolet (UV) detection appropriate for resource-limited settings. Methods 100 μl of heparinized whole blood or plasma was spotted onto blood collection cards, dried, punched, and eluted. Eluates are injected onto a C-18 reversed phase HPLC column. EFV is separated isocratically using a potassium phosphate and ACN mobile phase. UV detection is at 245 nm. Quantitation uses external calibration standards. Following validation, the method was evaluated using whole blood and plasma from HIV-positive patients undergoing EFV therapy. Results Mean recovery of drug from dried blood spots is 91.5%. The method is linear over the validated concentration range of 0.3125–20.0 μg/mL. A good correlation (Spearman r=0.96) between paired plasma and DBS EFV concentrations from the clinical samples was observed, and hematocrit level was not found to be a significant determinant of the EFV DBS level. The mean observed CDBS/Cplasma ratio was 0.68. A good correlation (Spearman r=0.96) between paired plasma and DPS EFV concentrations from the clinical samples was observed. The mean percent deviation of DPS samples from plasma samples is 1.68%. Conclusions Dried whole blood spot or dried plasma spot sampling is well suited for monitoring EFV therapy in resource-limited settings, particularly when high sensitivity is not essential. PMID:23503446

  2. Maximum a posteriori decoder for digital communications

    NASA Technical Reports Server (NTRS)

    Altes, Richard A. (Inventor)

    1997-01-01

    A system and method for decoding by identification of the most likely phase coded signal corresponding to received data. The present invention has particular application to communication with signals that experience spurious random phase perturbations. The generalized estimator-correlator uses a maximum a posteriori (MAP) estimator to generate phase estimates for correlation with incoming data samples and for correlation with mean phases indicative of unique hypothesized signals. The result is a MAP likelihood statistic for each hypothesized transmission, wherein the highest value statistic identifies the transmitted signal.

  3. Robust spectral-domain optical coherence tomography speckle model and its cross-correlation coefficient analysis

    PubMed Central

    Liu, Xuan; Ramella-Roman, Jessica C.; Huang, Yong; Guo, Yuan; Kang, Jin U.

    2013-01-01

    In this study, we propose a generic speckle simulation for the optical coherence tomography (OCT) signal, obtained by convolving the point spread function (PSF) of the OCT system with a numerically synthesized random sample field. We validated our model and used the simulation method to study the statistical properties of cross-correlation coefficients (XCC) between A-scans, which our group has recently applied in transverse motion analysis. The simulation results show that oversampling is essential for accurate motion tracking; exponential decay of the OCT signal leads to an underestimate of motion, which can be corrected; and lateral heterogeneity of the sample leads to an overestimate of motion for the few pixels corresponding to structural boundaries. PMID:23456001
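The statistic under study, the cross-correlation coefficient between A-scans, is a normalized (Pearson-type) correlation of two intensity profiles. A minimal sketch with invented A-scan vectors:

```python
# Hedged sketch: normalized cross-correlation coefficient (XCC) between
# two OCT A-scans. The intensity profiles below are hypothetical.

def xcc(a, b):
    """Pearson-type correlation of two equal-length A-scan profiles;
    1 for identical scans, near 0 for decorrelated speckle."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a)
           * sum((y - mb) ** 2 for y in b)) ** 0.5
    return num / den

ascan1 = [1.0, 3.0, 2.0, 5.0, 4.0, 2.5]
ascan2 = [1.1, 2.9, 2.2, 4.8, 4.1, 2.4]   # slightly perturbed copy
c = xcc(ascan1, ascan2)
```

Transverse motion between acquisitions decorrelates successive A-scans, so the decay of XCC with displacement is what carries the motion information.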

  4. Impact of non-uniform correlation structure on sample size and power in multiple-period cluster randomised trials.

    PubMed

    Kasza, J; Hemming, K; Hooper, R; Matthews, Jns; Forbes, A B

    2017-01-01

    Stepped wedge and cluster randomised crossover trials are examples of cluster randomised designs conducted over multiple time periods that are being used with increasing frequency in health research. Recent systematic reviews of both of these designs indicate that the within-cluster correlation is typically taken into account in the analysis of data using a random intercept mixed model, implying a constant correlation between any two individuals in the same cluster no matter how far apart in time they are measured: within-period and between-period intra-cluster correlations are assumed to be identical. Recently proposed extensions allow the within- and between-period intra-cluster correlations to differ, although these methods require that all between-period intra-cluster correlations are identical, which may not be appropriate in all situations. Motivated by a proposed intensive care cluster randomised trial, we propose an alternative correlation structure for repeated cross-sectional multiple-period cluster randomised trials in which the between-period intra-cluster correlation is allowed to decay depending on the distance between measurements. We present results for the variance of treatment effect estimators for varying amounts of decay, investigating the consequences of the variation in decay on sample size planning for stepped wedge, cluster crossover and multiple-period parallel-arm cluster randomised trials. We also investigate the impact of assuming constant between-period intra-cluster correlations instead of decaying between-period intra-cluster correlations. Our results indicate that in certain design configurations, including the one corresponding to the proposed trial, a correlation decay can have an important impact on variances of treatment effect estimators, and hence on sample size and power. An R Shiny app allows readers to interactively explore the impact of correlation decay.
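The decaying structure described above can be written down directly: the correlation between two subjects in the same cluster measured in periods t and s is the intra-cluster correlation damped geometrically by the period separation. A sketch, with illustrative (not the paper's) icc and decay values:

```python
# Hedged sketch: cluster-period correlation matrix with a geometrically
# decaying between-period intra-cluster correlation. icc and decay
# values are illustrative only.

def period_corr_matrix(n_periods, icc, decay):
    """Entry (t, s) = icc * decay**|t - s|.  decay = 1 recovers the
    constant (uniform) between-period structure; decay < 1 lets the
    between-period correlation fall off with separation in time."""
    return [[icc * decay ** abs(t - s) for s in range(n_periods)]
            for t in range(n_periods)]

R = period_corr_matrix(4, 0.05, 0.8)
```

Plugging such a matrix into the design's variance formula is how assuming a constant structure (decay = 1) when the truth decays misstates the variance of the treatment effect estimator, and hence the required sample size.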

  5. Sports Participation and Positive Correlates in African American, Latino, and White Girls

    PubMed Central

    Duncan, Susan C.; Strycker, Lisa A.; Chaumeton, Nigel R.

    2015-01-01

    Purpose The purpose of the study was to examine relations among sports participation and positive correlates across African American, Latino, and white girls. Positive correlate variables were self-perceptions (self-worth, body attractiveness, athletic competence), less depression, and participation in extracurricular activities. Methods The sample comprised 372 girls (mean age = 12.03 years). Data were analyzed using multiple-sample structural equation models, controlling for age and income. Results Across all ethnic groups, greater sports participation was significantly related to higher self-worth, body attractiveness, and athletic competence, and to more extracurricular activity. Among Latino and white girls only, greater sports participation also was related to less depression. There were significant age and income influences on the positive correlates. Conclusions Findings confirm the existence of significant relationships between organized sports participation and positive correlates among early adolescent African American, Latino, and white girls. Despite a few ethnic differences in relationships, the current study revealed more similarities than differences. PMID:26692758

  6. Impact of distal mutations on the network of coupled motions correlated to hydride transfer in dihydrofolate reductase.

    PubMed

    Wong, Kim F; Selzer, Tzvia; Benkovic, Stephen J; Hammes-Schiffer, Sharon

    2005-05-10

    A comprehensive analysis of the network of coupled motions correlated to hydride transfer in dihydrofolate reductase is presented. Hybrid quantum/classical molecular dynamics simulations are combined with a rank correlation analysis method to extract thermally averaged properties that vary along the collective reaction coordinate according to a prescribed target model. Coupled motions correlated to hydride transfer are identified throughout the enzyme. Calculations for wild-type dihydrofolate reductase and a triple mutant, along with the associated single and double mutants, indicate that each enzyme system samples a unique distribution of coupled motions correlated to hydride transfer. These coupled motions provide an explanation for the experimentally measured nonadditivity effects in the hydride transfer rates for these mutants. This analysis illustrates that mutations distal to the active site can introduce nonlocal structural perturbations and significantly impact the catalytic rate by altering the conformational motions of the entire enzyme and the probability of sampling conformations conducive to the catalyzed reaction.

  7. Demonstration of correlative atomic force and transmission electron microscopy using actin cytoskeleton

    PubMed Central

    Yamada, Yutaro; Konno, Hiroki; Shimabukuro, Katsuya

    2017-01-01

    In this study, we present a new technique called correlative atomic force and transmission electron microscopy (correlative AFM/TEM) in which a targeted region of a sample can be observed under AFM and TEM. The ultimate goal of developing this new technique is to provide a technical platform to expand the fields of AFM application to complex biological systems such as cell extracts. Recent advances in the time resolution of AFM have enabled detailed observation of the dynamic nature of biomolecules. However, specifying molecular species by AFM alone remains a challenge. Here, we demonstrate correlative AFM/TEM using actin filaments as a test sample, and further show that immuno-electron microscopy (immuno-EM), to specify molecules, can be integrated into this technique. Therefore, it is now possible to specify molecules captured under AFM by subsequent observation using immuno-EM. In conclusion, correlative AFM/TEM can be a versatile method to investigate complex biological systems at the molecular level. PMID:28828286

  8. Determination of Oebalus pugnax (Hemiptera: Pentatomidae) spatial pattern in rice and development of visual sampling methods and population sampling plans.

    PubMed

    Espino, L; Way, M O; Wilson, L T

    2008-02-01

    Commercial rice, Oryza sativa L., fields in southeastern Texas were sampled during 2003 and 2004, and visual samples were compared with sweep net samples. Fields were sampled at different stages of panicle development, at different times of day, and by different operators. Significant differences were found between perimeter and within-field sweep net samples, indicating that samples taken 9 m from the field margin overestimate within-field Oebalus pugnax (F.) (Hemiptera: Pentatomidae) populations. Time of day did not significantly affect the number of O. pugnax caught with the sweep net; however, there was a trend toward capturing more insects during the morning than the afternoon. For all sampling methods evaluated during this study, O. pugnax was found to have an aggregated spatial pattern at most densities. When comparing sweep net with visual sampling methods, one sweep of the "long stick" and two sweeps of the "sweep stick" correlated well with the sweep net (r^2 = 0.639 and r^2 = 0.815, respectively). This relationship was not affected by time of day of sampling, stage of panicle development, type of planting, or operator. Relative cost-reliability, which incorporates probability of adoption, indicates the visual methods are more cost-reliable than the sweep net for sampling O. pugnax.
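Calibrating a visual count against sweep-net counts, as done above, reduces to simple least squares with the coefficient of determination as the reported statistic. A sketch with invented counts (the study's own field data produced its r^2 values):

```python
# Hedged sketch: simple least-squares calibration of one sampling
# method against another, returning slope, intercept, and r^2.
# All counts below are hypothetical.

def ols_r2(x, y):
    """Fit y ~ a + b*x; return (slope, intercept, r2)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    slope = sxy / sxx
    return slope, my - slope * mx, sxy * sxy / (sxx * syy)

visual = [1, 4, 6, 10, 11, 17]      # O. pugnax per visual sample unit
sweep_net = [2, 5, 8, 12, 15, 20]   # per sweep-net sample
slope, intercept, r2 = ols_r2(visual, sweep_net)
```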

  9. School Principals' Leadership Behaviours and Its Relation with Teachers' Sense of Self-Efficacy

    ERIC Educational Resources Information Center

    Mehdinezhad, Vali; Mansouri, Masoumeh

    2016-01-01

    The aim of this study was to investigate the relationship between school principals' leadership behaviours and teachers' sense of self-efficacy. The research method was descriptive and correlational. A sample of 254 teachers was randomly selected using proportional sampling. For data collection, the Teachers' Sense of Efficacy Scale of…

  10. Affective Structures among Students and Its Relationship with Academic Burnout with Emphasis on Gender

    ERIC Educational Resources Information Center

    Bikar, Somaye; Marziyeh, Afsaneh; Pourghaz, Abdolwahab

    2018-01-01

    This study aimed to determine the relationship between affective structures and academic burnout among male and female third-grade high school students in Zahedan in the academic year 2016-2017. The current descriptive-correlational study had a sample including 362 students selected using a multistage cluster sampling method. To collect data,…

  11. Characterization of the Theta to Beta Ratio in ADHD: Identifying Potential Sources of Heterogeneity

    ERIC Educational Resources Information Center

    Loo, Sandra K.; Cho, Alexander; Hale, T. Sigi; McGough, James; McCracken, James; Smalley, Susan L.

    2013-01-01

    Objective: The goal of this study is to characterize the theta to beta ratio (THBR) obtained from electroencephalogram (EEG) measures, in a large sample of community and clinical participants with regard to (a) ADHD diagnosis and subtypes, (b) common psychiatric comorbidities, and (c) cognitive correlates. Method: The sample includes 871…

  12. Miniaturized Sample Preparation and Rapid Detection of Arsenite in Contaminated Soil Using a Smartphone.

    PubMed

    Siddiqui, Mohd Farhan; Kim, Soocheol; Jeon, Hyoil; Kim, Taeho; Joo, Chulmin; Park, Seungkyung

    2018-03-04

    Conventional methods for analyzing heavy metal contamination in soil and water generally require laboratory-based instruments, complex procedures, skilled personnel, and a significant amount of time. With advances in computing and multitasking performance, smartphone-based sensors potentially allow laboratory-based analytical processes to be transitioned into simple, field-applicable methods. In the present work, we demonstrate a novel miniaturized setup for simultaneous sample preparation and smartphone-based optical sensing of arsenic As(III) in contaminated soil. A colorimetric detection protocol utilizing aptamers, gold nanoparticles, and NaCl was optimized and tested on a PDMS chip, achieving high sensitivity with a limit of detection of 0.71 ppm (in the sample) and a correlation coefficient of 0.98. The performance of the device is further demonstrated through comparative analysis of arsenic-spiked soil samples against a standard laboratory method, with good agreement: a correlation coefficient of 0.9917 and an average difference of 0.37 ppm were achieved experimentally. With an Android application on the device to run the experiment, the whole process from sample preparation to detection is completed within 3 hours without the need for skilled personnel. The approximate cost of the setup is around 1 USD, and it weighs 55 g. Therefore, the presented method offers a simple, rapid, portable, and cost-effective means for on-site sensing of arsenic in soil. Combined with the geolocation information available in smartphones, the system will allow nationwide monitoring of soil contamination status.
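
    A detection limit like the one reported can be illustrated with a standard calibration-curve calculation, here using the common 3.3·SD/slope (ICH-style) convention, which may differ from the paper's exact method. The concentrations and signals below are hypothetical:

```python
import numpy as np

# Hypothetical colorimetric calibration: As(III) concentration (ppm)
# vs. mean colour signal extracted from smartphone images (assumed data).
conc   = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])
signal = np.array([0.02, 0.11, 0.21, 0.40, 0.83, 1.60])

# Linear calibration curve and its correlation coefficient.
slope, intercept = np.polyfit(conc, signal, 1)
pred = intercept + slope * conc
r = np.corrcoef(conc, signal)[0, 1]

# ICH-style limit of detection: 3.3 x (residual SD) / slope.
sd_resid = np.sqrt(np.sum((signal - pred) ** 2) / (len(conc) - 2))
lod = 3.3 * sd_resid / slope
```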

  13. Vibrational spectroscopic characterization of growth bands in Porites coral from South China Sea

    NASA Astrophysics Data System (ADS)

    Song, Yinxian; Yu, Kefu; Ayoko, Godwin A.; Frost, Ray L.; Shi, Qi; Feng, Yuexing; Zhao, Jianxin

    2013-08-01

    A series of samples from different growth bands of Porites coral skeleton were studied using Raman and infrared reflectance methods. The Raman spectra proved that skeleton samples from different growth bands have the same mineral phase, aragonite, but a band at 133 cm-1 for the top layer shows a transition from ˜120 cm-1 for vaterite to ˜141 cm-1 for aragonite. It is inferred that vaterite is the precursor of aragonite in the coral skeleton. The positional shifts in the infrared spectra of the skeleton samples from growth bands correlate significantly with their minor element (Li, Mg, Sr, Mn, Fe and U) contents. Mg, Sr and U in particular have significant negative correlations with the position of the antisymmetric stretching band ν3 at ˜1469 cm-1. Li shows a high negative correlation with the ν2 band (˜855 cm-1), while Sr and Mn show similar negative correlations with the ν4 band (˜712 cm-1). Mn also shows a negative correlation with the ν1 band (˜1082 cm-1). A significantly negative correlation is observed for U with the ν1 + ν4 band (˜1786 cm-1). However, Fe shows positive correlations with the ν1, ν2, ν3, ν4 and ν1 + ν4 band shifts, especially a significant correlation with the ν1 band (˜1082 cm-1). New insights into the characteristics of coral at different growth bands of the skeleton are given in the present work.

  14. Monitoring of an antigen manufacturing process.

    PubMed

    Zavatti, Vanessa; Budman, Hector; Legge, Raymond; Tamer, Melih

    2016-06-01

    Fluorescence spectroscopy in combination with multivariate statistical methods was employed as a tool for monitoring the manufacturing process of pertactin (PRN), one of the virulence factors of Bordetella pertussis utilized in whooping cough vaccines. Fluorophores such as amino acids and co-enzymes were detected throughout the process. The fluorescence data collected at different stages of the fermentation and purification process were treated employing principal component analysis (PCA). Through PCA, it was feasible to identify sources of variability in PRN production. Then, partial least squares (PLS) regression was employed to correlate the fluorescence spectra obtained from pure PRN samples with the final protein content measured by a Kjeldahl test on these samples. Given that a statistically significant correlation was found between fluorescence and PRN levels, this approach could be further used as a method to predict the final protein content.
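
    The chemometric pipeline described (PCA to find spectral variability, then regression of protein content on the spectra) can be sketched with numpy alone. Here the "spectra" are synthetic, and principal-component regression stands in for the paper's PLS step; this is an illustration of the workflow, not the authors' analysis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for fluorescence spectra: 30 batches x 50 wavelengths,
# where protein content drives part of the spectral shape (assumed data).
n, p = 30, 50
protein = rng.uniform(1.0, 5.0, n)            # "Kjeldahl" protein content
basis = np.sin(np.linspace(0, np.pi, p))      # one spectral component
X = np.outer(protein, basis) + 0.1 * rng.standard_normal((n, p))

# PCA via SVD on mean-centred spectra: leading components capture the
# dominant sources of spectral variability.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:3].T                        # first three PC scores

# Regress protein content on the PC scores (principal-component
# regression, used here as a simple stand-in for PLS).
A = np.column_stack([np.ones(n), scores])
coef, *_ = np.linalg.lstsq(A, protein, rcond=None)
pred = A @ coef
r2 = 1 - np.sum((protein - pred) ** 2) / np.sum((protein - protein.mean()) ** 2)
```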

  15. Combining land use information and small stream sampling with PCR-based methods for better characterization of diffuse sources of human fecal pollution.

    PubMed

    Peed, Lindsay A; Nietch, Christopher T; Kelty, Catherine A; Meckes, Mark; Mooney, Thomas; Sivaganesan, Mano; Shanks, Orin C

    2011-07-01

    Diffuse sources of human fecal pollution allow for the direct discharge of waste into receiving waters with minimal or no treatment. Traditional culture-based methods are commonly used to characterize fecal pollution in ambient waters; however, these methods do not discern between human and other animal sources of fecal pollution, making it difficult to identify diffuse pollution sources. Human-associated quantitative real-time PCR (qPCR) methods in combination with low-order headwatershed sampling, precipitation information, and high-resolution geographic information system land use data can be useful for identifying diffuse sources of human fecal pollution in receiving waters. To test this assertion, this study monitored nine headwatersheds, potentially impacted by faulty septic systems and leaky sanitary sewer lines, over a two-year period. Human fecal pollution was measured using three different human-associated qPCR methods, and a significant positive correlation was seen between the abundance of human-associated genetic markers and septic systems following wet weather events. In contrast, a negative correlation was observed with sanitary sewer line densities, suggesting septic systems are the predominant diffuse source of human fecal pollution in the study area. These results demonstrate the advantages of combining water sampling, climate information, land-use computer-based modeling, and molecular biology disciplines to better characterize diffuse sources of human fecal pollution in environmental waters.

  16. Nondestructive evaluation of hydrogel mechanical properties using ultrasound

    PubMed Central

    Walker, Jason M.; Myers, Ashley M.; Schluchter, Mark D.; Goldberg, Victor M.; Caplan, Arnold I.; Berilla, Jim A.; Mansour, Joseph M.; Welter, Jean F.

    2012-01-01

    The feasibility of using ultrasound technology as a noninvasive, nondestructive method for evaluating the mechanical properties of engineered weight-bearing tissues was evaluated. A fixture was designed to accurately and reproducibly position the ultrasound transducer normal to the test sample surface. Agarose hydrogels were used as phantoms for cartilage to explore the feasibility of establishing correlations between ultrasound measurements and commonly used mechanical tissue assessments. The hydrogels were fabricated in 1–10% concentrations with a 2–10 mm thickness. For each concentration and thickness, six samples were created, for a total of 216 gel samples. Speed of sound was determined from the time difference between peak reflections and the known height of each sample. Modulus was computed from the speed of sound using elastic and poroelastic models. All ultrasonic measurements were made using a 15 MHz ultrasound transducer. The elastic modulus was also determined for each sample from a mechanical unconfined compression test. Analytical comparison and statistical analysis of the ultrasound and mechanical testing data were carried out. A correlation between estimates of compressive modulus from ultrasonic and mechanical measurements was found, but the correlation depended on the model used to estimate the modulus from ultrasonic measurements. A stronger correlation with mechanical measurements was found using the poroelastic rather than the elastic model. Results from this preliminary testing will be used to guide further studies of native and engineered cartilage. PMID:21773854
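
    The pulse-echo arithmetic described above is compact enough to sketch directly. The height, delay, and density values below are assumed for illustration, and rho·c² is the simple elastic estimate rather than the paper's poroelastic model:

```python
# Minimal pulse-echo sketch (illustrative numbers, not the study's data).
# The time between the top- and bottom-surface reflections covers twice
# the sample height, giving the speed of sound in the gel.
height = 5.0e-3          # m, known gel thickness (assumed)
dt     = 6.6e-6          # s, delay between peak reflections (assumed)
rho    = 1050.0          # kg/m^3, approximate hydrogel density (assumed)

c = 2.0 * height / dt    # speed of sound in the gel (m/s)

# Simple elastic estimate: the ultrasonically probed (longitudinal)
# modulus scales as rho * c^2.
modulus = rho * c ** 2   # Pa
```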

  17. A statistical method to calculate blood contamination in the measurement of salivary hormones in healthy women.

    PubMed

    Behr, Guilherme A; Patel, Jay P; Coote, Marg; Moreira, Jose C F; Gelain, Daniel P; Steiner, Meir; Frey, Benicio N

    2017-05-01

    Previous studies have reported that salivary concentrations of certain hormones correlate with their respective serum levels. However, most of these studies did not control for potential blood contamination in saliva. In the present study we developed a statistical method to test the amount of blood contamination that needs to be avoided in saliva samples for the following hormones: cortisol, estradiol, progesterone, testosterone and oxytocin. Saliva and serum samples were collected from 38 healthy, medication-free women (mean age = 33.8 ± 7.3 yr; range = 19–45). Serum and salivary hormonal levels and the amount of transferrin in saliva samples were determined using enzyme immunoassays. Salivary transferrin levels did not correlate with salivary cortisol or estradiol (up to 3 mg/dl), but they were positively correlated with salivary testosterone, progesterone and oxytocin (p < 0.05). After controlling for blood contamination, only cortisol (r = 0.65, P < 0.001) and progesterone levels (r = 0.57, P = 0.002) displayed a positive correlation between saliva and serum. Our analyses suggest that transferrin levels higher than 0.80, 0.92 and 0.64 mg/dl should be avoided for testosterone, progesterone and oxytocin salivary analyses, respectively. We recommend that salivary transferrin is measured in research involving salivary hormones in order to determine the level of blood contamination that might affect specific hormonal salivary concentrations. Copyright © 2016 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.

  18. qpure: A Tool to Estimate Tumor Cellularity from Genome-Wide Single-Nucleotide Polymorphism Profiles

    PubMed Central

    Song, Sarah; Nones, Katia; Miller, David; Harliwong, Ivon; Kassahn, Karin S.; Pinese, Mark; Pajic, Marina; Gill, Anthony J.; Johns, Amber L.; Anderson, Matthew; Holmes, Oliver; Leonard, Conrad; Taylor, Darrin; Wood, Scott; Xu, Qinying; Newell, Felicity; Cowley, Mark J.; Wu, Jianmin; Wilson, Peter; Fink, Lynn; Biankin, Andrew V.; Waddell, Nic; Grimmond, Sean M.; Pearson, John V.

    2012-01-01

    Tumour cellularity, the relative proportion of tumour and normal cells in a sample, affects the sensitivity of mutation detection, copy number analysis, cancer gene expression and methylation profiling. Tumour cellularity is traditionally estimated by pathological review of sectioned specimens; however this method is both subjective and prone to error due to heterogeneity within lesions and cellularity differences between the sample viewed during pathological review and tissue used for research purposes. In this paper we describe a statistical model to estimate tumour cellularity from SNP array profiles of paired tumour and normal samples using shifts in SNP allele frequency at regions of loss of heterozygosity (LOH) in the tumour. We also provide qpure, a software implementation of the method. Our experiments showed that there is a medium correlation of 0.42 (p-value = 0.0001) between tumour cellularity estimated by qpure and pathology review. Interestingly there is a high correlation of 0.87 (p-value < 2.2e-16) between cellularity estimates by qpure and deep Ion Torrent sequencing of known somatic KRAS mutations, and a weaker correlation of 0.32 (p-value = 0.004) between Ion Torrent sequencing and pathology review. This suggests that qpure may be a more accurate predictor of tumour cellularity than pathology review. qpure can be downloaded from https://sourceforge.net/projects/qpure/. PMID:23049875
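
    The kind of mixture model behind purity-from-LOH estimates can be sketched in an idealized single-SNP form (an illustration, not qpure's actual implementation): at a germline-heterozygous SNP where the tumour has lost the B allele, a sample with tumour fraction p mixes one B copy from each normal cell with none from each tumour cell, so BAF = (1 − p)/(2 − p), which inverts to p = (1 − 2·BAF)/(1 − BAF).

```python
# Idealized single-SNP mixture model for tumour purity at an LOH region
# (illustrative; not qpure's actual implementation).
def purity_from_baf(baf):
    """Tumour cellularity implied by the minor-allele frequency at LOH.

    BAF = (1 - p) / (2 - p)  =>  p = (1 - 2*BAF) / (1 - BAF)
    """
    return (1.0 - 2.0 * baf) / (1.0 - baf)

# A pure-normal sample keeps BAF = 0.5 (p = 0); a pure tumour drives the
# lost allele's BAF to 0 (p = 1).
p_mixed = purity_from_baf(0.25)   # BAF shifted from 0.5 down to 0.25
```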

  19. Surveying Mercury Levels in Hair, Blood and Urine of under 7-Year Old Children from a Coastal City in China

    PubMed Central

    Chen, Guixia; Chen, Xiaoxin; Yan, Chonghuai; Wu, Xingdong; Zeng, Guozhang

    2014-01-01

    Aim: The average mercury load in children under 7-years old was determined in a populated but not overly industrial coastal area in China. Methods: 395 blood samples, 1072 urine samples, and 581 hair samples were collected from 1076 children, aged 0 to 6 years, from eight representative communities of Xiamen, China. Mercury levels in the samples were surveyed. Results: The 95% upper limits of mercury in blood, urine, and hair for the children were 2.30, 1.50 and 2100.00 μg/kg, respectively. Levels tended to increase with age. Correlation analyses showed that mercury levels in blood and urine correlated with those in hair (n = 132), r = 0.49, p < 0.0001 and r = 0.20, p = 0.0008; however, blood mercury levels did not correlate with urine levels (n = 284), r = 0.07, p = 0.35. Conclusions: Surveying the average mercury load in children 0 to 6 years, and the 95% upper limit value of mercury in their blood, urine, and hair should help guide risk assessment and health management for children. PMID:25419876

  20. Sequential Measurement of Intermodal Variability in Public Transportation PM2.5 and CO Exposure Concentrations.

    PubMed

    Che, W W; Frey, H Christopher; Lau, Alexis K H

    2016-08-16

    A sequential measurement method is demonstrated for quantifying the variability in exposure concentration during public transportation. This method was applied in Hong Kong by measuring PM2.5 and CO concentrations along a route connecting 13 transportation-related microenvironments within 3-4 h. The study design takes into account ventilation, proximity to local sources, area-wide air quality, and meteorological conditions. Portable instruments were compacted into a backpack to facilitate measurement under crowded transportation conditions and to quantify personal exposure by sampling at nose level. The route included stops next to three roadside monitors to enable comparison of fixed site and exposure concentrations. PM2.5 exposure concentrations were correlated with the roadside monitors, despite differences in averaging time, detection method, and sampling location. Although highly correlated in temporal trend, PM2.5 concentrations varied significantly among microenvironments, with mean concentration ratios versus roadside monitor ranging from 0.5 for MTR train to 1.3 for bus terminal. Measured inter-run variability provides insight regarding the sample size needed to discriminate between microenvironments with increased statistical significance. The study results illustrate the utility of sequential measurement of microenvironments and policy-relevant insights for exposure mitigation and management.

  1. Use of 16S rRNA sequencing and quantitative PCR to correlate venous leg ulcer bacterial bioburden dynamics with wound expansion, antibiotic therapy, and healing

    PubMed Central

    Sprockett, Daniel D.; Ammons, Christine G.; Tuttle, Marie S.

    2016-01-01

    Clinical diagnosis of infection in chronic wounds is currently limited to subjective clinical signs and culture-based methods that underestimate the complexity of wound microbial bioburden as revealed by DNA-based microbial identification methods. Here, we use 16S rRNA next generation sequencing and quantitative polymerase chain reaction to characterize weekly changes in bacterial load, community structure, and diversity associated with a chronic venous leg ulcer over the 15-week course of treatment and healing. Our DNA-based methods and detailed sampling scheme reveal that the bacterial bioburden of the wound is unexpectedly dynamic, including changes in the bacterial load and community structure that correlate with wound expansion, antibiotic therapy, and healing. We demonstrate that these multidimensional changes in bacterial bioburden can be summarized using swabs taken prior to debridement, and therefore, can be more easily collected serially than debridement or biopsy samples. Overall, this case illustrates the importance of detailed clinical indicators and longitudinal sampling to determine the pathogenic significance of chronic wound microbial dynamics and guide best use of antimicrobials for improvement of healing outcomes. PMID:25902876

  2. Using a genetic algorithm to abbreviate the Psychopathic Personality Inventory-Revised (PPI-R).

    PubMed

    Eisenbarth, Hedwig; Lilienfeld, Scott O; Yarkoni, Tal

    2015-03-01

    Some self-report measures of personality and personality disorders, including the widely used Psychopathic Personality Inventory-Revised (PPI-R), are lengthy and time-intensive. In recent work, we introduced an automated genetic algorithm (GA)-based method for abbreviating psychometric measures. In Study 1, we used this approach to generate a short (40-item) version of the PPI-R using 3 large-N German student samples (total N = 1,590). The abbreviated measure displayed high convergent correlations with the original PPI-R, and outperformed an alternative measure constructed using a conventional approach. Study 2 tested the convergent and discriminant validity of this short version in a fourth student sample (N = 206) using sensation-seeking and sensitivity to reward and punishment scales, again demonstrating similar convergent and discriminant validity for the PPI-R-40 compared with the full version. In a fifth community sample of North American participants acquired using Amazon Mechanical Turk, the PPI-R-40 showed similarly high convergent correlations, demonstrating stability across language, culture, and data-collection method. Taken together, these studies suggest that the GA approach is a viable method for abbreviating measures of psychopathy, and perhaps personality measures in general. 2015 APA, all rights reserved

  3. Direct determination of neonicotinoid insecticides in an analytically challenging crop such as Chinese chives using selective ELISAs.

    PubMed

    Watanabe, Eiki; Miyake, Shiro

    2018-06-05

    Easy-to-use commercial kit-based enzyme-linked immunosorbent assays (ELISAs) have been used to detect the neonicotinoids dinotefuran, clothianidin and imidacloprid in Chinese chives, which are considered a troublesome matrix for chromatographic techniques. Based on their high water solubility, water was used as an extractant. Matrix interference could be substantially avoided simply by diluting the sample extracts. Average recoveries of insecticides from spiked samples were 85-113%, with relative standard deviations of <15%. The concentrations of insecticides detected in the spiked samples with the proposed ELISA methods correlated well with those obtained by the reference high-performance liquid chromatography (HPLC) method. The residues analyzed by the ELISA methods were consistently 1.24 times those found by the HPLC method, attributable to loss of analyte during sample clean-up for HPLC analyses. It was revealed that the ELISA methods can be applied easily to pesticide residue analysis in troublesome matrices such as Chinese chives.

  4. Profiling of adrenocorticotropic hormone and arginine vasopressin in human pituitary gland and tumor thin tissue sections using droplet-based liquid-microjunction surface-sampling-HPLC-ESI-MS-MS.

    PubMed

    Kertesz, Vilmos; Calligaris, David; Feldman, Daniel R; Changelian, Armen; Laws, Edward R; Santagata, Sandro; Agar, Nathalie Y R; Van Berkel, Gary J

    2015-08-01

    Described here are the results from the profiling of the proteins arginine vasopressin (AVP) and adrenocorticotropic hormone (ACTH) from normal human pituitary gland and pituitary adenoma tissue sections, using a fully automated droplet-based liquid-microjunction surface-sampling-HPLC-ESI-MS-MS system for spatially resolved sampling, HPLC separation, and mass spectrometric detection. Excellent correlation was found between the protein distribution data obtained with this method and data obtained with matrix-assisted laser desorption/ionization (MALDI) chemical imaging analyses of serial sections of the same tissue. The protein distributions correlated with the visible anatomic pattern of the pituitary gland. AVP was most abundant in the posterior pituitary gland region (neurohypophysis), and ACTH was dominant in the anterior pituitary gland region (adenohypophysis). The relative amounts of AVP and ACTH sampled from a series of ACTH-secreting and non-secreting pituitary adenomas correlated with histopathological evaluation. ACTH was readily detected at significantly higher levels in regions of ACTH-secreting adenomas and in normal anterior adenohypophysis compared with non-secreting adenoma and neurohypophysis. AVP was mostly detected in normal neurohypophysis, as expected. This work reveals that a fully automated droplet-based liquid-microjunction surface-sampling system coupled to HPLC-ESI-MS-MS can be readily used for spatially resolved sampling, separation, detection, and semi-quantitation of physiologically relevant peptide and protein hormones, including AVP and ACTH, directly from human tissue. In addition, the relative simplicity, rapidity, and specificity of this method support the potential of this basic technology, with further advancement, for assisting surgical decision-making.

  5. Validation of a point-of-care (POC) lactate testing device for fetal scalp blood sampling during labor: clinical considerations, practicalities and realities.

    PubMed

    Reif, Philipp; Lakovschek, Ioanna; Tappauf, Carmen; Haas, Josef; Lang, Uwe; Schöll, Wolfgang

    2014-06-01

    Although fetal blood sampling for pH is well established, the use of lactate has not been widely adopted. This study validated the performance and utility of a handheld point-of-care (POC) lactate device in comparison with the lactate and pH values obtained by the ABL 800 blood gas analyzer. The clinical performance and influences on accuracy and decision-making criteria were assessed with freshly taken fetal scalp blood samples (n=57) and umbilical cord samples (n=310). A Bland-Altman plot was used for data plotting and analyzing the agreement between the two measurement devices, and correlation coefficients (R²) were determined using Passing-Bablok regression analysis. Sample processing errors were much lower with the test device (0.5% vs. 22.8%). Following a preclinical assessment and calibration offset alignment (0.5 mmol/L), the test POC device showed good correlation with the reference method for lactate FBS (R²=0.977, p<0.0001, 95% CI 0.959-0.988), arterial cord blood (R²=0.976, p<0.0001, 95% CI 0.967-0.983) and venous cord blood (R²=0.977, p<0.0001, 95% CI 0.968-0.984). A POC device which allows for a calibration adjustment to be made following preclinical testing can provide results that correlate closely with an incumbent lactate method such as a blood gas analyzer. The use of a POC lactate device can address the impracticality and reality of pH sample collection and testing failures experienced in day-to-day clinical practice. For the StatStrip Lactate meter we suggest using a lactate cut-off of 5.1 mmol/L for predicting fetal acidosis (pH<7.20).
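
    The Bland-Altman statistics used for such method comparisons reduce to the mean difference (bias) and its 1.96·SD limits of agreement. The paired readings below are illustrative, not the study's data:

```python
import numpy as np

# Hypothetical paired lactate readings (mmol/L): POC meter vs. blood gas
# analyzer (invented numbers, after an offset alignment).
poc = np.array([2.1, 3.0, 4.2, 5.1, 6.3, 7.0, 8.4])
ref = np.array([2.0, 3.2, 4.0, 5.3, 6.1, 7.2, 8.3])

# Bland-Altman statistics: bias and 95% limits of agreement.
diff = poc - ref
bias = diff.mean()
sd = diff.std(ddof=1)
loa_low, loa_high = bias - 1.96 * sd, bias + 1.96 * sd
```

    On a Bland-Altman plot, `diff` is plotted against the pairwise means, with horizontal lines at `bias`, `loa_low`, and `loa_high`.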

  6. Relationships between magnetic susceptibility and heavy metals in urban topsoils in the arid region of Isfahan, central Iran

    NASA Astrophysics Data System (ADS)

    Karimi, Rezvan; Ayoubi, Shamsollah; Jalalian, Ahmad; Sheikh-Hosseini, Ahmad Reza; Afyuni, Majid

    2011-05-01

    Recently, methods based on magnetometry have been proposed as a proxy for assessing the heavy metal pollution of soils. A total of 113 topsoil samples were collected from public parks and green strips along the rim of roads with high-density traffic within the city of Isfahan, central Iran. The magnetic susceptibility (χ) of the collected soil samples was measured at both low and high frequency (χlf and χhf) using the Bartington MS2 dual frequency sensor. As, Cd, Cr, Ba, Cu, Mn, Pb, Zn, Sr and V concentrations were measured in all collected soil samples. Significant correlations were found between Zn and Cu (0.85) and between Zn and Pb (0.84). The χfd value of urban topsoil varied from 0.45% to 7.7%. The low mean value of χfd indicated that the magnetic properties of the samples are predominantly contributed by multi-domain grains, rather than by super-paramagnetic particles. Lead, Cu, Zn, and Ba showed significant positive correlations with magnetic susceptibility, but As, Sr, Cd, Mn, Cr and V had no significant correlation with the magnetic susceptibility. The pollution load index (PLI) was computed to evaluate the soil environmental quality for the selected heavy metals, and a significant correlation was found between PLI and χlf. Moreover, the results of multiple regression analysis between χlf and heavy metal concentrations indicated that ln(Pb), V and ln(Cu) could explain approximately 54% of the total variability of χlf in the study area. These results indicate the potential of magnetometric methods to evaluate the heavy metal pollution of soils.
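
    A pollution load index of the kind used above is commonly computed as the geometric mean of per-metal contamination factors (concentration divided by a background value); PLI > 1 indicates pollution. The concentrations and baselines below are hypothetical:

```python
import numpy as np

# Hypothetical one-sample PLI: contamination factors CF_i = C_i / background_i
# for, say, Pb, Zn, Cu, Cd in mg/kg (invented values and baselines).
conc       = np.array([45.0, 120.0, 30.0, 0.8])
background = np.array([20.0, 70.0, 25.0, 0.3])

cf = conc / background
pli = np.exp(np.mean(np.log(cf)))   # geometric mean of the CFs
```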

  7. Salivary caffeine concentrations are comparable to plasma concentrations in preterm infants receiving extended caffeine therapy

    PubMed Central

    Liu, Xiaoxi; Rhein, Lawrence M.; Darnall, Robert A.; Corwin, Michael J.; McEntire, Betty L.; Ward, Robert M.; James, Laura P.; Sherwin, Catherine M. T.; Heeren, Timothy C.; Hunt, Carl E.

    2016-01-01

    Aims Caffeine concentrations in preterm infants are usually measured in the blood. However, salivary assays may provide a valid and practical alternative. The present study explored the validity and clinical utility of salivary caffeine concentrations as an alternative to blood concentrations and developed a novel plasma/salivary caffeine distribution model. Methods Paired salivary and plasma samples were obtained in 29 infants. Salivary samples were obtained using a commercially available salivary collection system. Caffeine concentrations in the saliva and plasma were determined using high‐performance liquid chromatography. A population pharmacokinetic (PK) model was developed using NONMEM 7.3. Results The mean (± standard deviation) gestational age (GA) at birth and birth weight were 27.9 ± 2.1 weeks and 1171.6 ± 384.9 g, respectively. Paired samples were obtained at a mean postmenstrual age (PMA) of 35.5 ± 1.1 weeks. The range of plasma caffeine concentrations was 9.5–54.1 μg ml−1, with a mean difference (95% confidence interval) between plasma and salivary concentrations of −0.18 μg ml−1 (−1.90, 1.54). Salivary and plasma caffeine concentrations were strongly correlated (Pearson's correlation coefficient = 0.87, P < 0.001). Caffeine PK in plasma and saliva was simultaneously described by a three‐compartment recirculation model. Current body weight, birth weight, GA, PMA and postnatal age were not significantly correlated with any PK parameter. Conclusions Salivary sampling provides an easy, non‐invasive method for measuring caffeine concentrations. Salivary concentrations correlate highly with plasma concentrations. Caffeine PK in saliva and plasma are well described by a three‐compartment recirculation model. PMID:27145974

  8. Neural Network and Nearest Neighbor Algorithms for Enhancing Sampling of Molecular Dynamics.

    PubMed

    Galvelis, Raimondas; Sugita, Yuji

    2017-06-13

    The free energy calculations of complex chemical and biological systems with molecular dynamics (MD) are inefficient due to multiple local minima separated by high-energy barriers. The minima can be escaped using an enhanced sampling method such as metadynamics, which applies a bias (i.e., importance sampling) along a set of collective variables (CVs), but the maximum number of CVs (or dimensions) is severely limited. We propose a high-dimensional bias potential method (NN2B) based on two machine learning algorithms: the nearest neighbor density estimator (NNDE) and the artificial neural network (ANN) for the bias potential approximation. The bias potential is constructed iteratively from short biased MD simulations, accounting for correlation among CVs. Our method is capable of achieving ergodic sampling and calculating the free energy of polypeptides with up to an 8-dimensional bias potential.

  9. Estimating uncertainty in respondent-driven sampling using a tree bootstrap method.

    PubMed

    Baraff, Aaron J; McCormick, Tyler H; Raftery, Adrian E

    2016-12-20

    Respondent-driven sampling (RDS) is a network-based form of chain-referral sampling used to estimate attributes of populations that are difficult to access using standard survey tools. Although it has grown quickly in popularity since its introduction, the statistical properties of RDS estimates remain elusive. In particular, the sampling variability of these estimates has been shown to be much higher than previously acknowledged, and even methods designed to account for RDS result in misleadingly narrow confidence intervals. In this paper, we introduce a tree bootstrap method for estimating uncertainty in RDS estimates based on resampling recruitment trees. We use simulations from known social networks to show that the tree bootstrap method not only outperforms existing methods but also captures the high variability of RDS, even in extreme cases with high design effects. We also apply the method to data from injecting drug users in Ukraine. Unlike other methods, the tree bootstrap depends only on the structure of the sampled recruitment trees, not on the attributes being measured on the respondents, so correlations between attributes can be estimated as well as variability. Our results suggest that it is possible to accurately assess the high level of uncertainty inherent in RDS.
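
    The tree bootstrap can be sketched on a toy recruitment forest: seeds are resampled with replacement, and within each resampled node its recruits are resampled with replacement, recursing down the tree. The data and code below are a minimal illustration, not the authors' implementation:

```python
import random

# Toy RDS recruitment forest: recruiter -> list of recruits (assumed data).
recruits = {
    "s1": ["a", "b"], "a": ["c"], "b": [], "c": [],
    "s2": ["d"], "d": ["e", "f"], "e": [], "f": [],
}
seeds = ["s1", "s2"]

def resample_tree(node, rng):
    """One bootstrap copy of the subtree rooted at `node` (as a multiset)."""
    out = [node]
    kids = recruits[node]
    for child in (rng.choices(kids, k=len(kids)) if kids else []):
        out.extend(resample_tree(child, rng))
    return out

def tree_bootstrap(rng):
    """One bootstrap replicate: resample seeds, then recruits recursively."""
    sample = []
    for seed in rng.choices(seeds, k=len(seeds)):
        sample.extend(resample_tree(seed, rng))
    return sample

rng = random.Random(42)
boot = [tree_bootstrap(rng) for _ in range(200)]
sizes = [len(b) for b in boot]   # replicate-to-replicate variability
```

    In practice an estimator (e.g. of an attribute proportion) is recomputed on each replicate, and the spread of those estimates gives the confidence interval; note the resampling depends only on tree structure, as the abstract emphasizes.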

  10. Report on objective ride quality evaluation

    NASA Technical Reports Server (NTRS)

    Wambold, J. C.; Park, W. H.

    1974-01-01

    The correlation of absorbed power as an objective ride measure to the subjective evaluation for the bus data was investigated. For some individual bus rides the correlations were poor, but when a sufficient number of rides was used to give a reasonable sample base, an excellent correlation was obtained. The following logarithmic function was derived: S = 1.7245 ln(39.6849 AP), where S = the subjective rating of the ride and AP = the absorbed power in watts. A six-degree-of-freedom method developed for aircraft data was completed. Preliminary correlation of absorbed power with ISO standards further enhances the bus ride and absorbed power correlation numbers, since the AP values obtained are of the same order of magnitude for both correlations. While it would then appear that one could just use ISO standards, there is no way to add the effect of three degrees of freedom. The absorbed power provides a method of adding the effects due to the three major directions plus pitch and roll.
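
    The fitted relation can be coded directly; note that, being logarithmic, doubling the absorbed power adds a constant increment to the rating:

```python
import math

# The report's fitted ride-quality relation: S = 1.7245 * ln(39.6849 * AP),
# mapping absorbed power AP (watts) to a subjective rating S.
def subjective_rating(absorbed_power_watts):
    return 1.7245 * math.log(39.6849 * absorbed_power_watts)

s1 = subjective_rating(0.1)   # rating at 0.1 W
s2 = subjective_rating(0.2)   # doubling AP adds 1.7245 * ln(2)
```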

  11. Gibbs sampling on large lattice with GMRF

    NASA Astrophysics Data System (ADS)

    Marcotte, Denis; Allard, Denis

    2018-02-01

    Gibbs sampling is routinely used to sample truncated Gaussian distributions. These distributions naturally occur when associating latent Gaussian fields to category fields obtained by discrete simulation methods like multipoint, sequential indicator simulation and object-based simulation. The latent Gaussians are often used in data assimilation and history matching algorithms. When Gibbs sampling is applied on a large lattice, the computing cost can become prohibitive. The usual practice of using local neighborhoods is unsatisfactory, as it can diverge and it does not reproduce exactly the desired covariance. A better approach is to use Gaussian Markov Random Fields (GMRF), which make it possible to compute the conditional distributions at any point without computing and inverting the full covariance matrix. As the GMRF is locally defined, it allows simultaneous updating of all points that do not share neighbors (coding sets). We propose a new simultaneous Gibbs updating strategy on coding sets that can be efficiently computed by convolution and applied with an acceptance/rejection method in the truncated case. We study empirically the speed of convergence, the effect of the choice of boundary conditions, of the correlation range and of GMRF smoothness. We show that the convergence is slower in the Gaussian case on the torus than for the finite case studied in the literature. However, in the truncated Gaussian case, we show that short-scale correlation is quickly restored and the conditioning categories at each lattice point imprint the long-scale correlation. Hence our approach makes it practical to apply Gibbs sampling on large 2D or 3D lattices with the desired GMRF covariance.
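
    The coding-set idea can be sketched with a checkerboard update on a torus, assuming a simple first-order conditional autoregression (each site conditionally normal around a·(mean of its 4 neighbours)); black cells share no neighbours, so they update simultaneously, then the white cells. This is an illustration of the scheme, not the authors' code:

```python
import numpy as np

# Checkerboard ("coding set") Gibbs sweep for a first-order GMRF on a
# torus (illustrative conditional model; a < 1 keeps the field proper).
rng = np.random.default_rng(1)
n, a, sigma = 64, 0.9, 1.0
x = rng.standard_normal((n, n))
ii, jj = np.indices((n, n))
black = (ii + jj) % 2 == 0          # black/white coding sets

def neighbour_mean(x):
    # Mean of the 4 nearest neighbours with periodic (torus) boundaries.
    return (np.roll(x, 1, 0) + np.roll(x, -1, 0)
            + np.roll(x, 1, 1) + np.roll(x, -1, 1)) / 4.0

for _ in range(100):
    for mask in (black, ~black):
        # All sites in one coding set are conditionally independent given
        # the other set, so they can be updated in a single vectorized draw.
        mu = a * neighbour_mean(x)
        x[mask] = mu[mask] + sigma * rng.standard_normal((n, n))[mask]

# The update enforces positive spatial correlation with the neighbour mean.
nbr_corr = np.corrcoef(x.ravel(), neighbour_mean(x).ravel())[0, 1]
```

    In the truncated case described in the record, each vectorized draw would additionally pass through an acceptance/rejection step against the category constraints.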

  12. Comparison of enzyme-linked immunosorbent assay and rapid chemiluminescent analyser in the detection of myeloperoxidase and proteinase 3 autoantibodies.

    PubMed

    Pucar, Phillippa A; Hawkins, Carolyn A; Randall, Katrina L; Li, Candice; McNaughton, Euan; Cook, Matthew C

    2017-06-01

    Antibodies to myeloperoxidase (MPO) and proteinase 3 (PR3) are vital in the diagnosis and management of ANCA-associated vasculitis. A chemiluminescent immunoassay (CLIA; Quanta Flash) provides MPO and PR3 antibody results in 30 minutes, much faster than enzyme-linked immunosorbent assay (ELISA). We compared the performance of ELISA (Orgentec) and CLIA (Quanta Flash) for MPO and PR3 antibody quantitation on 303 samples, comprising 196 consecutive samples received in a single diagnostic laboratory over a 3 month period and 107 samples collected from 42 known vasculitis patients over a 40 month period. We observed a correlation between the two methods using Spearman correlation coefficients (MPO, r_s = 0.63, p < 0.01; PR3, r_s = 0.69, p < 0.01). There was agreement between the two methods in determining a positive or negative result. In the vasculitis cohort, CLIA performed well at clinically important stages of disease: diagnosis (eight samples, all positive by both assays) and disease relapse (correlation for MPO and PR3 antibody quantitation r_s = 0.84, p = 0.03 and r_s = 0.78, p < 0.01, respectively). Three samples were discordant at clinical relapse, testing positive by CLIA, including one high positive associated with a relapse requiring a change in treatment. In summary, CLIA appears to be at least as accurate as ELISA for measurement of MPO and PR3 antibodies. Copyright © 2017. Published by Elsevier B.V.

  13. Correlations Decrease with Propagation of Spiking Activity in the Mouse Barrel Cortex

    PubMed Central

    Ranganathan, Gayathri Nattar; Koester, Helmut Joachim

    2011-01-01

    Propagation of suprathreshold spiking activity through neuronal populations is important for the function of the central nervous system. Neural correlations have an impact on cortical function, particularly on the signaling of information and the propagation of spiking activity. We therefore measured the change in correlations as suprathreshold spiking activity propagated between recurrent neuronal networks of the mammalian cerebral cortex. Using optical methods, we recorded spiking activity simultaneously from large samples of neurons in two neural populations. The results indicate that correlations decreased as spiking activity propagated from layer 4 to layer 2/3 in the rodent barrel cortex. PMID:21629764

  14. Simplified pupal surveys of Aedes aegypti (L.) for entomologic surveillance and dengue control.

    PubMed

    Barrera, Roberto

    2009-07-01

    Pupal surveys of Aedes aegypti (L.) are useful indicators of risk for dengue transmission, although sample sizes for reliable estimation can be large. This study explores two methods for making pupal surveys more practical yet reliable, using data from 10 pupal surveys conducted in Puerto Rico during 2004-2008. The number of pupae per person for each sampling followed a negative binomial distribution, thus showing aggregation. The first method found a common aggregation parameter (k) for the negative binomial distribution, a finding that enabled the application of a sequential sampling method requiring few samples to determine whether the number of pupae/person was above a vector density threshold for dengue transmission. The second approach used the finding that the mean number of pupae/person is correlated with the proportion of pupa-infested households, and calculated equivalent threshold proportions of pupa-positive households. A sequential sampling program was also developed for this method to determine whether observed proportions of infested households were above threshold levels. These methods can be used to validate entomological thresholds for dengue transmission.
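    The link between the two survey methods can be made concrete under the negative binomial model. Assuming a common aggregation parameter k, the zero-class probability of the negative binomial gives the expected proportion of pupa-positive sampling units for a given mean (a textbook identity; the numeric values below are invented for illustration, not thresholds from the study):

```python
def prop_infested(mean_pupae, k):
    """Expected proportion of pupa-positive units under a negative binomial
    with mean `mean_pupae` and aggregation parameter `k`:
        P = 1 - (k / (k + m))**k
    i.e. one minus the probability of observing zero pupae. This converts a
    pupae/person threshold into an equivalent proportion-infested threshold.
    """
    return 1.0 - (k / (k + mean_pupae)) ** k

# Illustrative values: a mean of 0.5 pupae/unit with strong aggregation
# (k = 0.3) corresponds to roughly a quarter of units being pupa-positive.
print(round(prop_infested(0.5, 0.3), 3))
```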

  15. Determination of albumin in bronchoalveolar lavage fluid by flow-injection fluorometry using chromazurol S.

    PubMed

    Sato, Takaji; Saito, Yoshihiro; Chikuma, Masahiko; Saito, Yutaka; Nagai, Sonoko

    2008-03-01

    A highly sensitive flow injection fluorometric method for the determination of albumin was developed and applied to the determination of albumin in human bronchoalveolar lavage fluid (BALF). The method is based on the binding of chromazurol S (CAS) to albumin. The calibration curve was linear in the range of 5-200 microg/ml of albumin. A highly linear correlation (r=0.986) was observed between the albumin levels in BALF samples (n=25) determined by the proposed method and by a conventional fluorometric method using CAS (the CAS manual method). IgG interference was lower in the CAS flow injection method than in the CAS manual method. The albumin level in BALF collected from healthy volunteers (n=10) was 58.5+/-13.1 microg/ml. The albumin levels in BALF samples obtained from patients with sarcoidosis and idiopathic pulmonary fibrosis were elevated. These findings show that the determination of albumin levels in BALF samples is useful for investigating lung diseases and that the CAS flow injection method is promising for the determination of trace albumin in BALF samples, because it is sensitive and precise.

  16. Comparison of the acetyl bromide spectrophotometric method with other analytical lignin methods for determining lignin concentration in forage samples.

    PubMed

    Fukushima, Romualdo S; Hatfield, Ronald D

    2004-06-16

    Present analytical methods to quantify lignin in herbaceous plants are not totally satisfactory. A spectrophotometric method, acetyl bromide soluble lignin (ABSL), has been employed to determine lignin concentration in a range of plant materials. In this work, lignin extracted with acidic dioxane was used to develop standard curves and to calculate the derived linear regression equation (slope equals absorptivity value or extinction coefficient) for determining the lignin concentration of respective cell wall samples. This procedure yielded lignin values that were different from those obtained with Klason lignin, acid detergent acid insoluble lignin, or permanganate lignin procedures. Correlations with in vitro dry matter or cell wall digestibility of samples were highest with data from the spectrophotometric technique. The ABSL method employing as standard lignin extracted with acidic dioxane has the potential to be employed as an analytical method to determine lignin concentration in a range of forage materials. It may be useful in developing a quick and easy method to predict in vitro digestibility on the basis of the total lignin content of a sample.

  17. Determination of Porosity in Shale by Double Headspace Extraction GC Analysis.

    PubMed

    Zhang, Chun-Yun; Li, Teng-Fei; Chai, Xin-Sheng; Xiao, Xian-Ming; Barnes, Donald

    2015-11-03

    This paper reports on a novel method for the rapid determination of shale porosity by double headspace extraction gas chromatography (DHE-GC). Ground core samples of shale were placed into headspace vials, and DHE-GC measurements of released methane gas were performed at a given time interval. A linear correlation between shale porosity and the ratio of consecutive GC signals was established both theoretically and experimentally by comparison with the results from the standard helium pycnometry method. The results showed that (a) the porosity of ground core samples of shale can be measured within 30 min; (b) the new method is not significantly affected by the particle size of the sample; (c) the uncertainties of the measured porosities of nine shale samples by the present method range from 0.31 to 0.46 p.u.; and (d) the results obtained by the DHE-GC method are in good agreement with those from the standard helium pycnometry method. In short, the new DHE-GC method is simple, rapid, and accurate, making it a valuable tool for shale gas-related research and applications.

  18. Sleep and Behavioral Correlates of Napping among Young Adults: A Survey of First-Year University Students in Madrid, Spain

    ERIC Educational Resources Information Center

    Vela-Bueno, Antonio; Fernandez-Mendoza, Julio; Olavarrieta-Bernardino, Sara; Vgontzas, Alexandros N.; Bixler, Edward O.; de la Cruz-Troca, Juan Jose; Rodriguez-Munoz, Alfredo; Olivan-Palacios, Jesus

    2008-01-01

    Objective: Between November 2002 and March 2003, the authors assessed the prevalence and correlates of napping among Spanish university students. Participants: The sample comprised 1,276 first-year university students; the mean age was 18.74 [plus or minus] 1.24 years, and 35.45% were men. Methods: The study was cross-sectional, and the students…

  19. A Study of Disruptive Behavior Disorders in Puerto Rican Youth: II. Baseline Prevalence, Comorbidity, and Correlates in Two Sites

    ERIC Educational Resources Information Center

    Bird, Hector R.; Davies, Mark; Duarte, Cristiane S.; Shen, Sa; Loeber, Rolf; Canino, Glorisa J.

    2006-01-01

    Objective: This is the second of two associated articles. The prevalence, correlates, and comorbidities of disruptive behavior disorders (DBDs) in two populations are reported. Method: Probability community samples of Puerto Rican boys and girls ages 5-13 years in San Juan, and the south Bronx in New York City are included (n = 2,491). The…

  20. Temporal Changes in the Correlates of U.S. Adolescent Electronic Cigarette Use and Utilization in Tobacco Cessation, 2011 to 2013

    ERIC Educational Resources Information Center

    Lippert, Adam M.

    2017-01-01

    Objective. To examine temporal changes in the correlates of experimental and current e-cigarette use and associations with tobacco quit attempts. Method. Repeated cross-sectional analyses of data from the 2011 (n = 17,741), 2012 (n = 23,194), and 2013 (n = 16,858) National Youth Tobacco Surveys--a nationally representative sample of U.S. middle…

  1. On the Assessment of Psychometric Adequacy in Correlation Matrices.

    ERIC Educational Resources Information Center

    Dziuban, Charles D.; Shirkey, Edwin C.

    Three techniques for assessing the adequacy of correlation matrices for factor analysis were applied to four examples from the literature. The methods compared were: (1) inspection of the off-diagonal elements of the anti-image covariance matrix S²R⁻¹S²; (2) the Measure of Sampling Adequacy (M.S.A.); and (3)…

  2. Simultaneous quantification of withanolides in Withania somnifera by a validated high-performance thin-layer chromatographic method.

    PubMed

    Srivastava, Pooja; Tiwari, Neerja; Yadav, Akhilesh K; Kumar, Vijendra; Shanker, Karuna; Verma, Ram K; Gupta, Madan M; Gupta, Anil K; Khanuja, Suman P S

    2008-01-01

    This paper describes a sensitive, selective, specific, robust, and validated densitometric high-performance thin-layer chromatographic (HPTLC) method for the simultaneous determination of 3 key withanolides, namely, withaferin-A, 12-deoxywithastramonolide, and withanolide-A, in Ashwagandha (Withania somnifera) plant samples. The separation was performed on aluminum-backed silica gel 60F254 HPTLC plates using dichloromethane-methanol-acetone-diethyl ether (15 + 1 + 1 + 1, v/v/v/v) as the mobile phase. The withanolides were quantified by densitometry in the reflection/absorption mode at 230 nm. Precise and accurate quantification could be performed in the linear working concentration range of 66-330 ng/band with good correlation (r2 = 0.997, 0.999, and 0.996, respectively). The method was validated for recovery, precision, accuracy, robustness, limit of detection, limit of quantitation, and specificity according to International Conference on Harmonization guidelines. Specificity of quantification was confirmed using retention factor (Rf) values, UV-Vis spectral correlation, and electrospray ionization mass spectra of marker compounds in sample tracks.

  3. Evaluation of the antioxidant power of honey, propolis and royal jelly by amperometric flow injection analysis.

    PubMed

    Buratti, S; Benedetti, S; Cosio, M S

    2007-02-28

    This paper describes the applicability of a flow injection system, operating with an amperometric detector, for rapid and simple measurement of the antioxidant power of honey, propolis, and royal jelly. The proposed method evaluates the reducing power of selected antioxidant compounds and does not require the use of free radicals or oxidants. Twelve honey, 12 propolis, and 4 royal jelly samples of different botanical and geographical origin were evaluated by the electrochemical method, and the data were compared with those obtained by the DPPH assay. Since a good correlation was found (R(2)=0.92), the proposed electrochemical method can be successfully employed for the direct, rapid, and simple monitoring of the antioxidant power of honeybee products. Furthermore, the total phenolic content of the samples was determined by the Folin-Ciocalteau procedure, and the characteristic antioxidant activities showed a good correlation with phenolics (R(2)=0.96 for propolis and 0.90 for honey).

  4. Batch experiments versus soil pore water extraction--what makes the difference in isoproturon (bio-)availability?

    PubMed

    Folberth, Christian; Suhadolc, Metka; Scherb, Hagen; Munch, Jean Charles; Schroll, Reiner

    2009-10-01

    Two approaches to determining pesticide (bio-)availability in soils, (i) batch experiments with "extraction with an excess of water" (EEW) and (ii) the recently introduced "soil pore water (PW) extraction" of pesticide-incubated soil samples, were compared with regard to the sorption behavior of the model compound isoproturon in soils. A significant correlation between TOC and the adsorbed pesticide amount was found when using the EEW approach. In contrast, there was no correlation between TOC and adsorbed isoproturon when using the in situ PW extraction method. Furthermore, when comparing the distribution coefficients (K(d)) for both methods, sorption was higher at all concentrations in the EEW method. Overall, sorption in incubated soil samples at an identical water tension (-15 kPa) and soil density (1.3 g cm(-3)) appears to be controlled by a complex combination of sorption-driving soil parameters. Isoproturon bioavailability was found to be governed in different soils by binding strength and availability of sorption sites as well as water content, whereas the dominance of any one of these factors seems to depend on the individual composition and characteristics of the respective soil sample. Using multiple linear regression analysis, we furthermore obtained indications that the soil pore structure is affected by the EEW method due to disaggregation, resulting in a higher availability of pesticide sorption sites than in undisturbed soil samples. Therefore, it can be concluded that isoproturon sorption is overestimated when using the EEW method, which should be taken into account when using data from this approach or similar batch techniques for risk assessment analysis.

  5. Pearson correlation estimation for irregularly sampled time series

    NASA Astrophysics Data System (ADS)

    Rehfeld, K.; Marwan, N.; Heitzig, J.; Kurths, J.

    2012-04-01

    Many applications in the geosciences call for the joint and objective analysis of irregular time series. For automated processing, robust measures of linear and nonlinear association are needed. Up to now, the standard approach would have been to reconstruct the time series on a regular grid, using linear or spline interpolation. Interpolation, however, comes with systematic side-effects, as it increases the auto-correlation in the time series. We have searched for the best method to estimate Pearson correlation for irregular time series, i.e. the one with the lowest estimation bias and variance. We adapted a kernel-based approach, using Gaussian weights. Pearson correlation is calculated, in principle, as a mean over products of previously centralized observations. In the regularly sampled case, observations in both time series were observed at the same time and thus the allocation of measurement values into pairs of products is straightforward. In the irregularly sampled case, however, measurements were not necessarily observed at the same time. Now, the key idea of the kernel-based method is to calculate weighted means of products, with the weight depending on the time separation between the observations. If the lagged correlation function is desired, the weights depend on the absolute difference between observation time separation and the estimation lag. To assess the applicability of the approach we used extensive simulations to determine the extent of interpolation side-effects with increasing irregularity of time series. We compared different approaches, based on (linear) interpolation, the Lomb-Scargle Fourier Transform, the sinc kernel and the Gaussian kernel. We investigated the role of kernel bandwidth and signal-to-noise ratio in the simulations. We found that the Gaussian kernel approach offers significant advantages and low Root-Mean Square Errors for regular, slightly irregular and very irregular time series. 
We therefore conclude that it is a good (linear) similarity measure that is appropriate for irregular time series with skewed inter-sampling time distributions.
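    The kernel-based estimator described above can be sketched as follows; the function name, normalization, and test signal are our assumptions rather than the authors' code. Products of centered, standardized observations are averaged with Gaussian weights on the time separation between the two measurements:

```python
import numpy as np

def gaussian_kernel_corr(tx, x, ty, y, h):
    """Gaussian-kernel Pearson correlation for two irregularly sampled
    series (tx, x) and (ty, y). Each pairwise product of standardized
    observations is weighted by exp(-dt**2 / (2*h**2)), where dt is the
    time separation and h the kernel bandwidth.
    """
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    dt = tx[:, None] - ty[None, :]        # all pairwise time separations
    w = np.exp(-dt**2 / (2.0 * h**2))     # Gaussian weights
    return np.sum(w * np.outer(x, y)) / np.sum(w)

# Usage: two noisy copies of the same slow signal on different irregular grids.
rng = np.random.default_rng(1)
tx = np.sort(rng.uniform(0, 50, 200))
ty = np.sort(rng.uniform(0, 50, 200))
x = np.sin(0.5 * tx) + 0.1 * rng.normal(size=200)
y = np.sin(0.5 * ty) + 0.1 * rng.normal(size=200)
print(gaussian_kernel_corr(tx, x, ty, y, h=0.5))
```

For a lagged correlation function, the same weights would instead depend on the difference between the observed time separation and the estimation lag, as described in the abstract.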

  6. A Fast Multiple Sampling Method for Low-Noise CMOS Image Sensors With Column-Parallel 12-bit SAR ADCs.

    PubMed

    Kim, Min-Kyu; Hong, Seong-Kwan; Kwon, Oh-Kyong

    2015-12-26

    This paper presents a fast multiple sampling method for low-noise CMOS image sensor (CIS) applications with column-parallel successive approximation register analog-to-digital converters (SAR ADCs). The 12-bit SAR ADC using the proposed multiple sampling method decreases the A/D conversion time by repeatedly converting a pixel output to 4 bits after the first 12-bit A/D conversion, reducing the noise of the CIS by one over the square root of the number of samplings. The area of the 12-bit SAR ADC is reduced by using a 10-bit capacitor digital-to-analog converter (DAC) with four scaled reference voltages. In addition, a simple up/down counter-based digital processing logic is proposed to perform the complex calculations required for multiple sampling and digital correlated double sampling. To verify the proposed multiple sampling method, a 256 × 128 pixel array CIS with 12-bit SAR ADCs was fabricated using a 0.18 μm CMOS process. The measurement results show that the proposed multiple sampling method reduces each A/D conversion time from 1.2 μs to 0.45 μs and random noise from 848.3 μV to 270.4 μV, achieving a dynamic range of 68.1 dB and an SNR of 39.2 dB.
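    The quoted one-over-square-root-of-N noise scaling is easy to verify numerically. The following is an illustration of the averaging statistics only, not a model of the sensor's circuit:

```python
import numpy as np

rng = np.random.default_rng(7)

def multi_sample_readout(true_level, read_noise, n_samples):
    """Average of n repeated conversions of the same pixel level, each
    corrupted by independent Gaussian read noise. Averaging reduces the
    random noise by roughly 1/sqrt(n_samples)."""
    reads = true_level + rng.normal(0.0, read_noise, size=n_samples)
    return reads.mean()

# Empirical noise of the averaged readout over many trials, n = 16:
trials = np.array([multi_sample_readout(1.0, 1.0, 16) for _ in range(20000)])
print(trials.std())   # close to 1/sqrt(16) = 0.25
```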

  7. Yukon River King Salmon - Ichthyophonus Pilot Study

    USGS Publications Warehouse

    Kocan, R.M.; Hershberger, P.K.

    2001-01-01

    A method for non-lethal sampling of adult spawning Chinook salmon for Ichthyophonus was developed using known infected fish and live returning spawners. The method consisted of taking punch biopsies of skin and muscle and culturing the biopsy tissue in vitro. A 100% correlation was observed between known infected fish and cultured biopsy tissue.

  8. Spin-echo based diagonal peak suppression in solid-state MAS NMR homonuclear chemical shift correlation spectra

    NASA Astrophysics Data System (ADS)

    Wang, Kaiyu; Zhang, Zhiyong; Ding, Xiaoyan; Tian, Fang; Huang, Yuqing; Chen, Zhong; Fu, Riqiang

    2018-02-01

    The feasibility of using the spin-echo based diagonal peak suppression method in solid-state MAS NMR homonuclear chemical shift correlation experiments is demonstrated. A complete phase cycling is designed in such a way that in the indirect dimension only the spin diffused signals are evolved, while all signals not involved in polarization transfer are refocused for cancellation. A data processing procedure is further introduced to reconstruct this acquired spectrum into a conventional two-dimensional homonuclear chemical shift correlation spectrum. A uniformly 13C, 15N labeled Fmoc-valine sample and the transmembrane domain of a human protein, LR11 (sorLA), in native Escherichia coli membranes have been used to illustrate the capability of the proposed method in comparison with standard 13C-13C chemical shift correlation experiments.

  9. Population models and simulation methods: The case of the Spearman rank correlation.

    PubMed

    Astivia, Oscar L Olvera; Zumbo, Bruno D

    2017-11-01

    The purpose of this paper is to highlight the importance of a population model in guiding the design and interpretation of simulation studies used to investigate the Spearman rank correlation. The Spearman rank correlation has been known for over a hundred years to applied researchers and methodologists alike and is one of the most widely used non-parametric statistics. Still, certain misconceptions can be found, either explicitly or implicitly, in the published literature because a population definition for this statistic is rarely discussed within the social and behavioural sciences. By relying on copula distribution theory, a population model is presented for the Spearman rank correlation, and its properties are explored both theoretically and in a simulation study. Through the use of the Iman-Conover algorithm (which allows the user to specify the rank correlation as a population parameter), simulation studies from previously published articles are explored, and it is found that many of the conclusions purported in them regarding the nature of the Spearman correlation would change if the data-generation mechanism better matched the simulation design. More specifically, issues such as small sample bias and lack of power of the t-test and r-to-z Fisher transformation disappear when the rank correlation is calculated from data sampled where the rank correlation is the population parameter. A proof for the consistency of the sample estimate of the rank correlation is shown as well as the flexibility of the copula model to encompass results previously published in the mathematical literature. © 2017 The British Psychological Society.
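    The idea of a population parameter for the Spearman correlation can be illustrated with a Gaussian copula, for which the population Spearman correlation has the closed form (6/π)·arcsin(ρ/2) in terms of the Pearson correlation ρ. The sketch below (ours, not the paper's code) checks that the sample estimate converges to this population value:

```python
import numpy as np

rng = np.random.default_rng(11)

def spearman(x, y):
    """Sample Spearman rank correlation: Pearson correlation of the ranks."""
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return np.corrcoef(rx, ry)[0, 1]

# For a bivariate Gaussian (Gaussian copula) with Pearson correlation rho,
# the population Spearman correlation is (6/pi) * arcsin(rho/2).
rho = 0.8
pop_spearman = (6.0 / np.pi) * np.arcsin(rho / 2.0)

z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=20000)
print(pop_spearman, spearman(z[:, 0], z[:, 1]))
```

This is the sense in which simulation conclusions change when the data-generating mechanism is matched to the statistic: here the rank correlation, not the Pearson correlation, is the natural population parameter being estimated.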

  10. Efficient Strategies for Estimating the Spatial Coherence of Backscatter

    PubMed Central

    Hyun, Dongwoon; Crowley, Anna Lisa C.; Dahl, Jeremy J.

    2017-01-01

    The spatial coherence of ultrasound backscatter has been proposed to reduce clutter in medical imaging, to measure the anisotropy of the scattering source, and to improve the detection of blood flow. These techniques rely on correlation estimates that are obtained using computationally expensive strategies. In this study, we assess existing spatial coherence estimation methods and propose three computationally efficient modifications: a reduced kernel, a downsampled receive aperture, and the use of an ensemble correlation coefficient. The proposed methods are implemented in simulation and in vivo studies. Reducing the kernel to a single sample improved computational throughput and improved axial resolution. Downsampling the receive aperture was found to have negligible effect on estimator variance, and improved computational throughput by an order of magnitude for a downsample factor of 4. The ensemble correlation estimator demonstrated lower variance than the currently used average correlation. Combining the three methods, the throughput was improved 105-fold in simulation with a downsample factor of 4 and 20-fold in vivo with a downsample factor of 2. PMID:27913342
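    The difference between averaging per-pair correlations and using an ensemble correlation coefficient can be sketched as follows; the exact pooling used here is our reading of the method, not the authors' implementation:

```python
import numpy as np

def average_corr(pairs):
    """Mean of per-pair Pearson correlations (the conventional estimator)."""
    return np.mean([np.corrcoef(a, b)[0, 1] for a, b in pairs])

def ensemble_corr(pairs):
    """Ensemble correlation coefficient: pool the cross-product numerators
    and the variance denominators across all pairs before dividing, rather
    than averaging the per-pair ratios. Pooling lowers estimator variance."""
    num = sum(np.dot(a - a.mean(), b - b.mean()) for a, b in pairs)
    den = np.sqrt(sum(np.dot(a - a.mean(), a - a.mean()) for a, _ in pairs) *
                  sum(np.dot(b - b.mean(), b - b.mean()) for _, b in pairs))
    return num / den

# Usage: 50 short correlated records with true correlation 0.6.
rng = np.random.default_rng(3)
pairs = []
for _ in range(50):
    z = rng.normal(size=(2, 30))
    pairs.append((z[0], 0.6 * z[0] + 0.8 * z[1]))
print(average_corr(pairs), ensemble_corr(pairs))
```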

  11. Relations among questionnaire and experience sampling measures of inner speech: a smartphone app study

    PubMed Central

    Alderson-Day, Ben; Fernyhough, Charles

    2015-01-01

    Inner speech is often reported to be a common and central part of inner experience, but its true prevalence is unclear. Many questionnaire-based measures appear to lack convergent validity and it has been claimed that they overestimate inner speech in comparison to experience sampling methods (which involve collecting data at random timepoints). The present study compared self-reporting of inner speech collected via a general questionnaire and experience sampling, using data from a custom-made smartphone app (Inner Life). Fifty-one university students completed a generalized self-report measure of inner speech (the Varieties of Inner Speech Questionnaire, VISQ) and responded to at least seven random alerts to report on incidences of inner speech over a 2-week period. Correlations and pairwise comparisons were used to compare generalized endorsements and randomly sampled scores for each VISQ subscale. Significant correlations were observed between general and randomly sampled measures for only two of the four VISQ subscales, and endorsements of inner speech with evaluative or motivational characteristics did not correlate at all across different measures. Endorsement of inner speech items was significantly lower for random sampling compared to generalized self-report, for all VISQ subscales. Exploratory analysis indicated that specific inner speech characteristics were also related to anxiety and future-oriented thinking. PMID:25964773

  12. Recent advancement in the field of two-dimensional correlation spectroscopy

    NASA Astrophysics Data System (ADS)

    Noda, Isao

    2008-07-01

    The recent advancement in the field of 2D correlation spectroscopy is reviewed with the emphasis on a number of papers published during the last two years. Topics covered by this comprehensive review include books, review articles, and noteworthy developments in the theory and applications of 2D correlation spectroscopy. New 2D correlation techniques are discussed, such as kernel analysis and augmented 2D correlation, model-based correlation, moving window analysis, global phase angle, covariance and correlation coefficient mapping, sample-sample correlation, hybrid and hetero correlation, pretreatment and transformation of data, and 2D correlation combined with other chemometrics techniques. Perturbation methods of both static (e.g., temperature, composition, pressure and stress, spatial distribution and orientation) and dynamic types (e.g., rheo-optical and acoustic, chemical reactions and kinetics, H/D exchange, sorption and diffusion) currently in use are examined. Analytical techniques most commonly employed in 2D correlation spectroscopy are IR, Raman, and NIR, but the growing use of other probes is also noted, including fluorescence, emission, Raman optical activity and vibrational circular dichroism, X-ray absorption and scattering, NMR, mass spectrometry, and even chromatography. The field of applications for 2D correlation spectroscopy is very diverse, encompassing synthetic polymers, liquid crystals, Langmuir-Blodgett films, proteins and peptides, natural polymers and biomaterials, pharmaceuticals, food and agricultural products, water, solutions, inorganic, organic, hybrid or composite materials, and many more.

  13. Effect of Inoculation Techniques and Relative Humidity on the Growth of Molds on the Surfaces of Yellow Layer Cakes

    PubMed Central

    Fustier, Patrick; Lafond, Alain; Champagne, Claude P.; Lamarche, François

    1998-01-01

    Four inoculation techniques were compared for initiation of growth on cake surfaces: spot, air cabinet, spray (atomizer), and talc addition methods. Molds were isolated from commercial cakes and were identified as Aspergillus sydowii, Aspergillus ochraceus, Penicillium funiculosum, and Eurotium herbariorum. Cake surfaces were inoculated with mold spores and incubated under three equilibrium relative humidity (ERH) levels: 97, 85, and 75%. Random contamination by spores in a ventilated air cabinet was the simplest method of inoculation, but standard deviations in the inoculation rates (20% on a relative scale) were almost twice those observed with the other methods. The spot method was the most reproducible. Cake samples inoculated in the air cabinet had colony counts 10 times lower than those obtained for potato dextrose agar plates at 97% ERH, which was not the case with the spray and talc methods. Growth of molds was much slower in the samples incubated in 75% relative humidity, with all methods. Colony counts were generally similar in systems adjusted at 85 to 97% ERH but were lower for samples incubated at 75% ERH. In comparisons of the shelf life estimates obtained by the various inoculation methods, a correlation coefficient (r2) of 0.70 was obtained between the spot method and the other methods of inoculation, while talc, air cabinet, and spray shelf life data were correlated better (r2 ≈ 0.97). The spot method appeared to be the method of choice in consideration of ease of use, precision, and the ability to enable the study of the effects of the environment on mold-free shelf life as well as on the rate of growth of molds on cakes. PMID:16349479

  14. [Preliminary study on effective components of Tripterygium wilfordii for liver toxicity based on spectrum-effect correlation analysis].

    PubMed

    Zhao, Xiao-Mei; Pu, Shi-Biao; Zhao, Qing-Guo; Gong, Man; Wang, Jia-Bo; Ma, Zhi-Jie; Xiao, Xiao-He; Zhao, Kui-Jun

    2016-08-01

    In this paper, the spectrum-effect correlation analysis method was used to explore the main components of Tripterygium wilfordii responsible for liver toxicity, and to provide a reference for improving the quality control of T. wilfordii. The Chinese medicine T. wilfordii was taken as the study object, and LC-Q-TOF-MS was used to characterize the chemical components in T. wilfordii samples from different areas; the main components were initially identified by reference to the literature. With normal human hepatocytes (the LO2 cell line) as the carrier, acetaminophen as the positive control, and cell inhibition rate as the testing index, simple correlation analysis and multivariate linear correlation analysis were used to screen the main components of T. wilfordii responsible for liver toxicity. As a result, 10 main components were identified, and the spectrum-effect correlation analysis showed that triptolide may be the toxic component, consistent with previous results in the traditional literature. The multivariate linear correlation analysis further suggested that tripterine and demethylzeylasteral may contribute substantially to liver toxicity. T. wilfordii samples of different varieties or origins showed large differences in quality: samples from southwest China showed lower liver toxicity, while those from Hunan and Anhui provinces showed higher liver toxicity. This study provides data support for the further rational use of T. wilfordii and for research on its hepatotoxic ingredients. Copyright© by the Chinese Pharmaceutical Association.

  15. Analytical Determinations of the Phenolic Content of Dissolved Organic Matter

    NASA Astrophysics Data System (ADS)

    Pagano, T.; Kenny, J. E.

    2010-12-01

    Indicators suggest that the amount of dissolved organic matter (DOM) in natural waters is increasing. Climate change has been proposed as a potential contributor to the trend, and under this mechanism, the phenolic content of DOM may also be increasing. We have explored the possibility of assessing the phenolic character of DOM using fluorescence spectroscopy as a more convenient alternative to wet chemistry methods. In this work, parallel factor analysis (PARAFAC) was applied to fluorescence excitation emission matrices (EEMs) of humic samples in an attempt to analyze their phenolic content. The PARAFAC results were correlated with phenol concentrations derived from the Folin-Ciocalteu reagent-based method. The reagent-based method showed that the phenolic content of five International Humic Substance Society (IHSS) DOM samples varies from approximately 5 to 22 ppm Tannic Acid Equivalents (TAE) in phenol concentration. A five-component PARAFAC fit was applied to the EEMs of the IHSS sample dataset, and correlations of the PARAFAC scores with phenol concentrations from the reagent-based method indicated that components C1 (R2=0.78), C4 (R2=0.82), and C5 (R2=0.88) have the highest probability of containing phenolic groups. Furthermore, when the scores of components C4 and C5 were summed, the correlation improved (R2=0.99). Likewise, when the scores of C1, C4, and C5 were summed, the correlation (R2=0.89) was stronger than for any of the individual components. Since the reagent-based method provides an indicator of “total phenol” amount, regardless of the exact molecular structure of C1, C4, and C5, it seems reasonable that each of these components individually contributes a portion of the summed “total phenol” profile, and that the sum of their phenol-related spectral parts represents a larger portion of the “total phenol” index. However, when the sum of all five components was plotted against the reagent-based phenol concentrations, the correlation was quite poor (R2=0.10, essentially no correlation) owing to the considerable influence of the largely non-phenolic components C2 (R2=0.23) and C3 (R2=0.35). The results show the potential for PARAFAC analysis of multidimensional fluorescence data to be a tool for monitoring the phenolic content of DOM. Applications include assessing the potential for formation of disinfection byproducts in the treatment of drinking water and monitoring the impact of climate change on the phenolic character of DOM.
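    The score-correlation step described in this record can be sketched numerically. The example below uses made-up PARAFAC scores and phenol concentrations (all values hypothetical, not the study's data): it computes R² between individual component scores, or their sums, and reagent-based concentrations.

```python
import numpy as np

def r_squared(x, y):
    """Coefficient of determination for a simple linear relationship of y with x."""
    r = np.corrcoef(x, y)[0, 1]
    return r ** 2

# Hypothetical PARAFAC scores (rows: 5 samples, columns: components C1..C5)
scores = np.array([
    [0.9, 0.2, 0.1, 0.8, 0.7],
    [1.4, 0.5, 0.3, 1.1, 1.0],
    [2.1, 0.4, 0.6, 1.9, 1.6],
    [2.8, 0.9, 0.2, 2.4, 2.2],
    [3.6, 0.3, 0.8, 3.0, 2.9],
])
phenol_ppm = np.array([5.0, 9.0, 13.0, 18.0, 22.0])  # invented TAE concentrations

# R2 for each individual component, then for summed component scores
individual = {f"C{i + 1}": r_squared(scores[:, i], phenol_ppm) for i in range(5)}
summed_c4_c5 = r_squared(scores[:, 3] + scores[:, 4], phenol_ppm)
```

    With real data, comparing the individual R² values against those of summed scores is what identifies which components plausibly share the "total phenol" signal.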

  16. Comparison of Feature Selection Techniques in Machine Learning for Anatomical Brain MRI in Dementia.

    PubMed

    Tohka, Jussi; Moradi, Elaheh; Huttunen, Heikki

    2016-07-01

    We present a comparative split-half resampling analysis of various data-driven feature selection and classification methods for whole-brain voxel-based classification analysis of anatomical magnetic resonance images. We compared support vector machines (SVMs), with or without filter-based feature selection, several embedded feature selection methods, and stability selection. While comparisons of the accuracy of various classification methods have been reported previously, the variability of the out-of-training-sample classification accuracy and of the set of selected features due to independent training and test sets has not previously been addressed in a brain imaging context. We studied two classification problems: 1) Alzheimer's disease (AD) vs. normal control (NC) and 2) mild cognitive impairment (MCI) vs. NC classification. In AD vs. NC classification, the variability in the test accuracy due to the subject sample did not vary between different methods and exceeded the variability due to different classifiers. In MCI vs. NC classification, particularly with a large training set, embedded feature selection methods outperformed SVM-based ones, with the difference in the test accuracy exceeding the test accuracy variability due to the subject sample. The filter and embedded methods produced divergent feature patterns for MCI vs. NC classification, which, together with the good generalization performance, suggests the utility of embedded feature selection for this problem. The stability of the feature sets was strongly correlated with the number of features selected, weakly correlated with the stability of classification accuracy, and uncorrelated with the average classification accuracy.
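    The split-half resampling scheme can be sketched as follows. This is a toy illustration on synthetic data (not the study's MRI features), substituting a simple t-statistic filter and a nearest-centroid classifier for the actual SVM and embedded methods: it estimates the variability of out-of-training-sample accuracy and the stability of the selected feature sets across independent splits.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in: 60 subjects x 50 voxels; the first 5 voxels carry group signal
n, p, informative = 60, 50, 5
y = np.repeat([0, 1], n // 2)
X = rng.normal(size=(n, p))
X[y == 1, :informative] += 1.5

def select_and_classify(train_idx, test_idx, k=5):
    """t-statistic filter feature selection + nearest-centroid classification."""
    Xtr, ytr = X[train_idx], y[train_idx]
    t = np.abs(Xtr[ytr == 0].mean(0) - Xtr[ytr == 1].mean(0)) / (Xtr.std(0) + 1e-9)
    feats = np.argsort(t)[-k:]                      # keep the k largest |t| voxels
    c0 = Xtr[ytr == 0][:, feats].mean(0)
    c1 = Xtr[ytr == 1][:, feats].mean(0)
    Xte = X[test_idx][:, feats]
    pred = (np.linalg.norm(Xte - c1, axis=1) < np.linalg.norm(Xte - c0, axis=1)).astype(int)
    return (pred == y[test_idx]).mean(), set(feats.tolist())

accs, featsets = [], []
for _ in range(20):                                 # split-half resampling
    idx = rng.permutation(n)
    acc, f = select_and_classify(idx[: n // 2], idx[n // 2:])
    accs.append(acc)
    featsets.append(f)

acc_sd = float(np.std(accs))                        # accuracy variability across splits
stability = float(np.mean([len(a & b) / len(a | b)  # Jaccard overlap of feature sets
                           for a, b in zip(featsets, featsets[1:])]))
```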

  17. Simultaneous determination of glucose, triglycerides, urea, cholesterol, albumin and total protein in human plasma by Fourier transform infrared spectroscopy: direct clinical biochemistry without reagents.

    PubMed

    Jessen, Torben E; Höskuldsson, Agnar T; Bjerrum, Poul J; Verder, Henrik; Sørensen, Lars; Bratholm, Palle S; Christensen, Bo; Jensen, Lene S; Jensen, Maria A B

    2014-09-01

    Direct measurement of chemical constituents in complex biologic matrices without the use of analyte-specific reagents could be a step toward the simplification of clinical biochemistry. Problems related to reagents, such as production errors, improper handling, and lot-to-lot variations, would be eliminated, as would errors occurring during assay execution. We describe and validate a reagent-free method for direct measurement of six analytes in human plasma based on Fourier transform infrared spectroscopy (FTIR). Blood plasma is analyzed without any sample preparation. The FTIR spectrum of the raw plasma is recorded in a sampling cuvette specially designed for measurement of aqueous solutions. For each analyte, a mathematical calibration process is performed by a stepwise selection of wavelengths giving the optimal least-squares correlation between the measured FTIR signal and the analyte concentration measured by conventional clinical reference methods. The developed calibration algorithms are subsequently evaluated for their capability to predict the concentrations of the six analytes in blinded patient samples. The correlations between the six FTIR methods and the corresponding reference methods were 0.87
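    The stepwise wavelength-selection idea can be sketched as a greedy forward search. The example below uses synthetic spectra (the signal wavelengths and noise levels are invented, not the paper's FTIR data): at each step it adds the wavelength that most reduces the least-squares residual of the regression against the reference-method concentrations.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic calibration set: 40 "plasma spectra" x 120 wavelengths; the analyte
# contributes absorbance at three known wavelength indices with decreasing weight
n_samples, n_wl = 40, 120
conc = rng.uniform(2.0, 10.0, n_samples)            # reference-method concentrations
spectra = rng.normal(0.0, 0.05, (n_samples, n_wl))  # baseline noise
for wl, w in [(10, 0.8), (45, 0.5), (90, 0.3)]:
    spectra[:, wl] += w * conc

def stepwise_select(X, y, n_steps=3):
    """Greedy forward selection: at each step add the wavelength that most
    reduces the least-squares residual of y regressed on the chosen columns."""
    chosen = []
    for _ in range(n_steps):
        best_wl, best_rss = None, np.inf
        for wl in range(X.shape[1]):
            if wl in chosen:
                continue
            A = np.column_stack([X[:, chosen + [wl]], np.ones(len(y))])  # + intercept
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            rss = float(np.sum((A @ coef - y) ** 2))
            if rss < best_rss:
                best_wl, best_rss = wl, rss
        chosen.append(best_wl)
    return chosen

selected = stepwise_select(spectra, conc)
```

    The strongest signal wavelength is found first; later steps pick whichever columns best explain the remaining residual.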

  18. UHPLC-high-resolution mass spectrometry determination of bisphenol A and plastic additives released by polycarbonate tableware: influence of ageing and surface damage.

    PubMed

    Bignardi, Chiara; Cavazza, Antonella; Laganà, Carmen; Salvadeo, Paola; Corradini, Claudio

    2015-10-01

    A new UHPLC-ESI-Orbitrap method for the identification and quantitative determination of bisphenol A and some common additives employed in plastic manufacturing has been developed and validated. The method was applied to evaluate migration from 14 samples of tableware of different age and degree of surface damage, in both ethanol and isooctane (used as food simulants according to the EU plastics regulation). Bisphenol A, three UV light absorbers, and one whitening agent were detected and quantified. Data were analyzed with the aim of exploring a possible correlation between the release of bisphenol A and additives, ageing, and surface integrity. A high correlation was found between sample age, surface damage, and bisphenol A migration, while the release of additives was not correlated with the other parameters. The data showed for the first time that the release of bisphenol A appears to be connected more to ageing than to the occurrence of scratches and cracks. Graphical Abstract: Bisphenol A and additives released by polycarbonate tableware: influence of ageing and surface damage.

  19. The use of CT density changes at internal tissue interfaces to correlate internal organ motion with an external surrogate

    NASA Astrophysics Data System (ADS)

    Gaede, Stewart; Carnes, Gregory; Yu, Edward; Van Dyk, Jake; Battista, Jerry; Lee, Ting-Yim

    2009-01-01

    The purpose of this paper is to describe a non-invasive method to monitor the motion of internal organs affected by respiration without using external markers or spirometry, to test the correlation with external markers, and to calculate any time shift between the datasets. Ten lung cancer patients were CT scanned with a GE LightSpeed Plus 4-Slice CT scanner operating in a ciné mode. We retrospectively reconstructed the raw CT data to obtain consecutive 0.5 s reconstructions at 0.1 s intervals to increase image sampling. We defined regions of interest containing tissue interfaces, including tumour/lung interfaces that move due to breathing on multiple axial slices and measured the mean CT number versus respiratory phase. Tumour motion was directly correlated with external marker motion, acquired simultaneously, using the sample coefficient of determination, r2. Only three of the ten patients showed correlation higher than r2 = 0.80 between tumour motion and external marker position. However, after taking into account time shifts (ranging between 0 s and 0.4 s) between the two data sets, all ten patients showed correlation better than r2 = 0.8. This non-invasive method for monitoring the motion of internal organs is an effective tool that can assess the use of external markers for 4D-CT imaging and respiratory-gated radiotherapy on a patient-specific basis.

  20. The use of CT density changes at internal tissue interfaces to correlate internal organ motion with an external surrogate.

    PubMed

    Gaede, Stewart; Carnes, Gregory; Yu, Edward; Van Dyk, Jake; Battista, Jerry; Lee, Ting-Yim

    2009-01-21

    The purpose of this paper is to describe a non-invasive method to monitor the motion of internal organs affected by respiration without using external markers or spirometry, to test the correlation with external markers, and to calculate any time shift between the datasets. Ten lung cancer patients were CT scanned with a GE LightSpeed Plus 4-Slice CT scanner operating in a ciné mode. We retrospectively reconstructed the raw CT data to obtain consecutive 0.5 s reconstructions at 0.1 s intervals to increase image sampling. We defined regions of interest containing tissue interfaces, including tumour/lung interfaces that move due to breathing on multiple axial slices and measured the mean CT number versus respiratory phase. Tumour motion was directly correlated with external marker motion, acquired simultaneously, using the sample coefficient of determination, r2. Only three of the ten patients showed correlation higher than r2 = 0.80 between tumour motion and external marker position. However, after taking into account time shifts (ranging between 0 s and 0.4 s) between the two data sets, all ten patients showed correlation better than r2 = 0.8. This non-invasive method for monitoring the motion of internal organs is an effective tool that can assess the use of external markers for 4D-CT imaging and respiratory-gated radiotherapy on a patient-specific basis.
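    The time-shift correction described in both versions of this record amounts to searching for the lag that maximises r² between the internal-motion and external-marker traces. A minimal sketch with a simulated breathing waveform (the 0.1 s sampling interval follows the study; the traces and the 0.3 s lag are invented):

```python
import numpy as np

# External marker trace and an internal-motion trace that lags it by 0.3 s,
# both sampled at 0.1 s intervals
dt = 0.1
t = np.arange(0, 20, dt)
external = np.sin(2 * np.pi * t / 4.0)              # ~4 s breathing period
true_shift = 3                                       # samples, i.e. 0.3 s
internal = np.roll(external, true_shift) \
    + np.random.default_rng(2).normal(0, 0.05, t.size)

def best_time_shift(ref, sig, max_lag=8):
    """Return the lag (in samples) maximising r2 between ref and a shifted sig."""
    best_lag, best_r2 = 0, -1.0
    for lag in range(0, max_lag + 1):
        r = np.corrcoef(ref[: ref.size - lag], sig[lag:])[0, 1]
        if r ** 2 > best_r2:
            best_lag, best_r2 = lag, r ** 2
    return best_lag, best_r2

lag, r2 = best_time_shift(external, internal)        # lag * dt gives shift in seconds
```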

  1. Correlation of parents' religious behavior with family's emotional relations and students' self-actualization.

    PubMed

    Poorsheikhali, Fatemah; Alavi, Hamid Reza

    2015-02-01

    The main goal of this research is to study the relationship between parents' religious behavior, emotional relations inside the family, and self-actualization of male and female high school students of district 2 in Kerman city. The research method is descriptive and correlational. Questionnaires on parents' religious behavior, emotional relations inside the family, and students' self-actualization were used. After the questionnaires were collected, data were analyzed with SPSS, MINITAB, and EXCEL software. The sample consisted of 309 students and their parents, selected by stratified and then two-stage cluster sampling. 1.29% of students had low self-actualization, 17.15% had average, and 81.55% had high self-actualization. The results also showed that 9.4% of emotional relations in families were undesirable, 55.3% were relatively desirable, and 35.3% were desirable. Moreover, 2.27% of parents' religious behavior was inappropriate, 29.13% was relatively appropriate, and 68.61% was appropriate. The main results of the research are as follows: (1) There is a significant positive correlation between parents' religious behavior and emotional relations inside students' families. (2) There is no significant correlation between parents' religious behavior and students' self-actualization. (3) There is a significant positive correlation between emotional relations inside the family and students' self-actualization.

  2. The Eysenckian personality factors and their correlations with academic performance.

    PubMed

    Poropat, Arthur E

    2011-03-01

    BACKGROUND. The relationship between personality and academic performance has long been explored, and a recent meta-analysis established that measures of the five-factor model (FFM) dimension of Conscientiousness have similar validity to intelligence measures. Although currently dominant, the FFM is only one of the currently accepted models of personality, and has limited theoretical support. In contrast, the Eysenckian personality model was developed to assess a specific theoretical model and is still commonly used in educational settings and research. AIMS. This meta-analysis assessed the validity of the Eysenckian personality measures for predicting academic performance. SAMPLE. Statistics were obtained for correlations with Psychoticism, Extraversion, and Neuroticism (20-23 samples; N from 8,013 to 9,191), with smaller aggregates for the Lie scale (7 samples; N= 3,910). METHODS. The Hunter-Schmidt random effects method was used to estimate population correlations between the Eysenckian personality measures and academic performance. Moderating effects were tested using weighted least squares regression. RESULTS. Significant but modest validities were reported for each scale. Neuroticism and Extraversion had relationships with academic performance that were consistent with previous findings, while Psychoticism appears to be linked to academic performance because of its association with FFM Conscientiousness. Age and educational level moderated correlations with Neuroticism and Extraversion, and gender had no moderating effect. Correlations varied significantly based on the measurement instrument used. CONCLUSIONS. The Eysenckian scales do not add to the prediction of academic performance beyond that provided by FFM scales. Several measurement problems afflict the Eysenckian scales, including low to poor internal reliability and complex factor structures. 
In particular, the measurement and validity problems of Psychoticism mean its continued use in academic settings is unjustified. © 2010 The Author. British Journal of Educational Psychology. © 2010 The British Psychological Society.

  3. Phonetic Measures of Reduced Tongue Movement Correlate with Negative Symptom Severity in Hospitalized Patients with First-Episode Schizophrenia-Spectrum Disorders

    PubMed Central

    Covington, Michael A.; Lunden, S.L. Anya; Cristofaro, Sarah L.; Wan, Claire Ramsay; Bailey, C. Thomas; Broussard, Beth; Fogarty, Robert; Johnson, Stephanie; Zhang, Shayi; Compton, Michael T.

    2012-01-01

    Background Aprosody, or flattened speech intonation, is a recognized negative symptom of schizophrenia, though it has rarely been studied from a linguistic/phonological perspective. To bring the latest advances in computational linguistics to the phenomenology of schizophrenia and related psychotic disorders, a clinical first-episode psychosis research team joined with a phonetics/computational linguistics team to conduct a preliminary, proof-of-concept study. Methods Video recordings from a semi-structured clinical research interview were available from 47 first-episode psychosis patients. Audio tracks of the video recordings were extracted, and after review of quality, 25 recordings were available for phonetic analysis. These files were de-noised and a trained phonologist extracted a 1-minute sample of each patient’s speech. WaveSurfer 1.8.5 was used to create, from each speech sample, a file of formant values (F0, F1, F2, where F0 is the fundamental frequency and F1 and F2 are resonance bands indicating the moment-by-moment shape of the oral cavity). Variability in these phonetic indices was correlated with severity of Positive and Negative Syndrome Scale negative symptom scores using Pearson correlations. Results A measure of variability of tongue front-to-back position—the standard deviation of F2—was statistically significantly correlated with the severity of negative symptoms (r=−0.446, p=0.03). Conclusion This study demonstrates a statistically significant and meaningful correlation between negative symptom severity and phonetically measured reductions in tongue movements during speech in a sample of first-episode patients just initiating treatment. Further studies of negative symptoms, applying computational linguistics methods, are warranted. PMID:23102940

  4. Adenosine Triphosphate Quantification Correlates Poorly with Microbial Contamination of Duodenoscopes.

    PubMed

    Olafsdottir, Lovisa B; Wright, Sharon B; Smithey, Anne; Heroux, Riley; Hirsch, Elizabeth B; Chen, Alice; Lane, Benjamin; Sawhney, Mandeep S; Snyder, Graham M

    2017-06-01

    OBJECTIVE The aim of this study was to quantify the correlation between adenosine triphosphate (ATP) measurements and bacterial cultures from duodenoscopes for evaluation of contamination following high-level disinfection. DESIGN Duodenoscopes used for any intended endoscopic retrograde cholangiopancreatography (ERCP) procedure were included. Microbiologic and ATP data were collected concomitantly and in the same manner from ERCP duodenoscopes. SETTING A high-volume endoscopy unit at a tertiary referral acute-care facility. METHODS Duodenoscopes were sampled for ATP and bacterial contamination in a contemporaneous and highly standardized fashion using a "flush-brush-flush" method for the working channel (WC) and a dry flocked swab for the elevator mechanism (EM). Specimens were processed for any aerobic bacterial growth (colony-forming units, CFU). Growth of CFU>0 and an ATP relative light unit (RLU) reading >0 were each considered a contaminated result. Frequencies of discordance between the ATP and CFU results were calculated for WC and EM measurements using 2×2 contingency tables. The Spearman correlation coefficient was used to calculate the relatedness of bacterial contamination and ATP as continuous measurements. RESULTS The Spearman correlation coefficient did not demonstrate significant relatedness between ATP and CFU for either the WC or EM site. Among 390 duodenoscope sampling events, ATP and CFU assessments of contamination were discordant in 82 of 390 WC measurements (21%) and 331 of 390 EM measurements (84.9%). The EM was frequently and markedly positive by ATP measurement. CONCLUSION ATP measurements correlate poorly with a microbiologic standard assessing duodenoscope contamination, particularly for EM sampling. ATP may reflect biological material other than viable aerobic bacteria and may not serve as an adequate marker of bacterial contamination. Infect Control Hosp Epidemiol 2017;38:678-684.
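    The two analyses described in the METHODS of this record, a Spearman correlation of the continuous measurements and a 2×2 tabulation of the dichotomised results, can be sketched as follows. All data here are simulated stand-ins, not the study's measurements, and the Spearman implementation omits tie correction for brevity.

```python
import numpy as np

def spearman(x, y):
    """Spearman rank correlation (no tie correction; adequate for illustration)."""
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return float(np.corrcoef(rx, ry)[0, 1])

rng = np.random.default_rng(3)

# Hypothetical paired measurements from 390 sampling events: culture colony
# counts and ATP light-unit readings drawn independently (i.e. uncorrelated)
cfu = rng.poisson(0.5, 390).astype(float)
atp = rng.gamma(2.0, 50.0, 390) * rng.binomial(1, 0.8, 390)

rho = spearman(atp, cfu)                 # continuous-measurement correlation

# Dichotomise (any signal / any growth) and tabulate concordance
atp_pos, cfu_pos = atp > 0, cfu > 0
table = np.array([
    [np.sum(atp_pos & cfu_pos),  np.sum(atp_pos & ~cfu_pos)],
    [np.sum(~atp_pos & cfu_pos), np.sum(~atp_pos & ~cfu_pos)],
])
discordant_frac = (table[0, 1] + table[1, 0]) / table.sum()
```

    Because the simulated ATP and CFU values are independent, the Spearman coefficient stays near zero while the discordance fraction can still be large, which mirrors the study's qualitative finding.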

  5. Phytoforensics: Trees as bioindicators of potential indoor exposure via vapor intrusion.

    PubMed

    Wilson, Jordan L; Samaranayake, V A; Limmer, Matt A; Burken, Joel G

    2018-01-01

    Human exposure to volatile organic compounds (VOCs) via vapor intrusion (VI) is an emerging public health concern with notable detrimental impacts on public health. Phytoforensics, plant sampling to semi-quantitatively delineate subsurface contamination, provides a potential non-invasive screening approach to detect VI potential, and plant sampling is effective and also time- and cost-efficient. Existing VI assessment methods are time- and resource-intensive, invasive, and require access into residential and commercial buildings to drill holes through basement slabs to install sampling ports or require substantial equipment to install groundwater or soil vapor sampling outside the home. Tree-core samples collected in 2 days at the PCE Southeast Contamination Site in York, Nebraska were analyzed for tetrachloroethene (PCE) and results demonstrated positive correlations with groundwater, soil, soil-gas, sub-slab, and indoor-air samples collected over a 2-year period. Because tree-core samples were not collocated with other samples, interpolated surfaces of PCE concentrations were estimated so that comparisons could be made between pairs of data. Results indicate moderate to high correlation with average indoor-air and sub-slab PCE concentrations over long periods of time (months to years) to an interpolated tree-core PCE concentration surface, with Spearman's correlation coefficients (ρ) ranging from 0.31 to 0.53 that are comparable to the pairwise correlation between sub-slab and indoor-air PCE concentrations (ρ = 0.55, n = 89). Strong correlations between soil-gas, sub-slab, and indoor-air PCE concentrations and an interpolated tree-core PCE concentration surface indicate that trees are valid indicators of potential VI and human exposure to subsurface environment pollutants. 
    The speed and non-invasive nature of tree sampling are notable advantages: even with fewer than 60 trees in the vicinity of the source area, roughly 12 hours of tree-core sampling with minimal equipment at the PCE Southeast Contamination Site was sufficient to delineate vapor intrusion potential in the study area and offered delineation comparable to traditional sub-slab sampling performed at 140 properties over a period of approximately 2 years.

  6. Phytoforensics: Trees as bioindicators of potential indoor exposure via vapor intrusion

    USGS Publications Warehouse

    Wilson, Jordan L.; Samaranayake, V.A.; Limmer, Matthew A.; Burken, Joel G.

    2018-01-01

    Human exposure to volatile organic compounds (VOCs) via vapor intrusion (VI) is an emerging public health concern with notable detrimental impacts on public health. Phytoforensics, plant sampling to semi-quantitatively delineate subsurface contamination, provides a potential non-invasive screening approach to detect VI potential, and plant sampling is effective and also time- and cost-efficient. Existing VI assessment methods are time- and resource-intensive, invasive, and require access into residential and commercial buildings to drill holes through basement slabs to install sampling ports or require substantial equipment to install groundwater or soil vapor sampling outside the home. Tree-core samples collected in 2 days at the PCE Southeast Contamination Site in York, Nebraska were analyzed for tetrachloroethene (PCE) and results demonstrated positive correlations with groundwater, soil, soil-gas, sub-slab, and indoor-air samples collected over a 2-year period. Because tree-core samples were not collocated with other samples, interpolated surfaces of PCE concentrations were estimated so that comparisons could be made between pairs of data. Results indicate moderate to high correlation with average indoor-air and sub-slab PCE concentrations over long periods of time (months to years) to an interpolated tree-core PCE concentration surface, with Spearman’s correlation coefficients (ρ) ranging from 0.31 to 0.53 that are comparable to the pairwise correlation between sub-slab and indoor-air PCE concentrations (ρ = 0.55, n = 89). Strong correlations between soil-gas, sub-slab, and indoor-air PCE concentrations and an interpolated tree-core PCE concentration surface indicate that trees are valid indicators of potential VI and human exposure to subsurface environment pollutants. 
    The speed and non-invasive nature of tree sampling are notable advantages: even with fewer than 60 trees in the vicinity of the source area, roughly 12 hours of tree-core sampling with minimal equipment at the PCE Southeast Contamination Site was sufficient to delineate vapor intrusion potential in the study area and offered delineation comparable to traditional sub-slab sampling performed at 140 properties over a period of approximately 2 years.

  7. Phytoforensics: Trees as bioindicators of potential indoor exposure via vapor intrusion

    PubMed Central

    2018-01-01

    Human exposure to volatile organic compounds (VOCs) via vapor intrusion (VI) is an emerging public health concern with notable detrimental impacts on public health. Phytoforensics, plant sampling to semi-quantitatively delineate subsurface contamination, provides a potential non-invasive screening approach to detect VI potential, and plant sampling is effective and also time- and cost-efficient. Existing VI assessment methods are time- and resource-intensive, invasive, and require access into residential and commercial buildings to drill holes through basement slabs to install sampling ports or require substantial equipment to install groundwater or soil vapor sampling outside the home. Tree-core samples collected in 2 days at the PCE Southeast Contamination Site in York, Nebraska were analyzed for tetrachloroethene (PCE) and results demonstrated positive correlations with groundwater, soil, soil-gas, sub-slab, and indoor-air samples collected over a 2-year period. Because tree-core samples were not collocated with other samples, interpolated surfaces of PCE concentrations were estimated so that comparisons could be made between pairs of data. Results indicate moderate to high correlation with average indoor-air and sub-slab PCE concentrations over long periods of time (months to years) to an interpolated tree-core PCE concentration surface, with Spearman’s correlation coefficients (ρ) ranging from 0.31 to 0.53 that are comparable to the pairwise correlation between sub-slab and indoor-air PCE concentrations (ρ = 0.55, n = 89). Strong correlations between soil-gas, sub-slab, and indoor-air PCE concentrations and an interpolated tree-core PCE concentration surface indicate that trees are valid indicators of potential VI and human exposure to subsurface environment pollutants. 
    The speed and non-invasive nature of tree sampling are notable advantages: even with fewer than 60 trees in the vicinity of the source area, roughly 12 hours of tree-core sampling with minimal equipment at the PCE Southeast Contamination Site was sufficient to delineate vapor intrusion potential in the study area and offered delineation comparable to traditional sub-slab sampling performed at 140 properties over a period of approximately 2 years. PMID:29451904

  8. Pearson's chi-square test and rank correlation inferences for clustered data.

    PubMed

    Shih, Joanna H; Fay, Michael P

    2017-09-01

    Pearson's chi-square test has been widely used in testing for association between two categorical responses. Spearman rank correlation and Kendall's tau are often used for measuring and testing association between two continuous or ordered categorical responses. However, the established statistical properties of these tests are valid only when pairs of responses are independent, that is, when each sampling unit has only one pair of responses. When each sampling unit consists of a cluster of paired responses, the assumption of independent pairs is violated. In this article, we apply the within-cluster resampling technique to U-statistics to form new tests and rank-based correlation estimators for possibly tied clustered data. We develop large-sample properties of the newly proposed tests and estimators and evaluate their performance by simulations. The proposed methods are applied to a data set collected from a PET/CT imaging study for illustration. Published 2017. This article is a U.S. Government work and is in the public domain in the USA.
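    The core of within-cluster resampling, as applied here to rank correlation, can be sketched simply: repeatedly draw one response pair per cluster, compute an ordinary Spearman correlation on the resulting independent pairs, and average the estimates. The data below are toy clusters, not the paper's, and the Spearman implementation omits tie correction.

```python
import numpy as np

def spearman(x, y):
    """Spearman rank correlation (no tie correction; adequate for illustration)."""
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return float(np.corrcoef(rx, ry)[0, 1])

rng = np.random.default_rng(4)

# Toy clustered data: 30 clusters (e.g. subjects) of 2-5 paired responses each,
# with a shared latent value inducing positive association within each pair
clusters = []
for _ in range(30):
    m = int(rng.integers(2, 6))
    u = rng.normal(size=m)
    clusters.append((u + rng.normal(0, 0.5, m),     # response 1
                     u + rng.normal(0, 0.5, m)))    # response 2

def wcr_spearman(clusters, n_resamples=200):
    """Within-cluster resampling: one randomly chosen pair per cluster per
    resample, Spearman on the independent pairs, averaged over resamples."""
    estimates = []
    for _ in range(n_resamples):
        xs, ys = [], []
        for cx, cy in clusters:
            j = int(rng.integers(len(cx)))          # same index for both responses
            xs.append(cx[j])
            ys.append(cy[j])
        estimates.append(spearman(np.array(xs), np.array(ys)))
    return float(np.mean(estimates))

rho_wcr = wcr_spearman(clusters)
```

    Drawing one pair per cluster restores the independence that the ordinary estimator assumes; averaging over resamples recovers the information discarded in any single draw.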

  9. Performance evaluation of the new hematology analyzer Sysmex XN-series.

    PubMed

    Seo, J Y; Lee, S-T; Kim, S-H

    2015-04-01

    The Sysmex XN-series is a new automated hematology analyzer designed to improve the accuracy of cell counts and the specificity of flagging events. The basic characteristics and the performance of the new measurement channels of the XN were evaluated and compared with the Sysmex XE-2100 and the manual method. The fluorescent platelet count (PLT-F) was compared with the flow cytometric method. The low-WBC mode and body fluid mode were also evaluated. For workflow analysis, 1005 samples were analyzed on both the XN and the XE-2100, and manual review rates were compared. All parameters measured by the XN correlated well with the XE-2100. PLT-F showed better correlation with the flow cytometric method (r² = 0.80) than the optical platelet count (r² = 0.73) for platelet counts <70 × 10⁹/L. The low-WBC mode reported accurate leukocyte differentials for samples with a WBC count <0.5 × 10⁹/L. Relatively good correlation was found for WBC counts between the manual method and the body fluid mode (r = 0.88). The XN generated fewer flags than the XE-2100, while the sensitivities of both instruments were comparable. The XN provided reliable results for low cell counts and reduced manual blood film reviews, while maintaining a proper level of diagnostic sensitivity. © 2014 John Wiley & Sons Ltd.

  10. A new experimental correlation for non-Newtonian behavior of COOH-DWCNTs/antifreeze nanofluid

    NASA Astrophysics Data System (ADS)

    Izadi, Farhad; Ranjbarzadeh, Ramin; Kalbasi, Rasool; Afrand, Masoud

    2018-04-01

    In this paper, the rheological behavior of a nano-antifreeze consisting of 50 vol.% water, 50 vol.% ethylene glycol, and different quantities of functionalized double-walled carbon nanotubes has been investigated experimentally. Initially, nano-antifreeze samples were prepared with solid volume fractions of 0.05, 0.1, 0.2, 0.4, 0.6, 0.8, and 1% using the two-step method. Then, the dynamic viscosity of the nano-antifreeze samples was measured at different shear rates and temperatures. At this stage, the results showed that the base fluid exhibited Newtonian behavior, while the behavior of all nano-antifreeze samples was non-Newtonian. Since the behavior of the samples followed the power-law model, its constants, the consistency index and the power-law index, were determined from the measured viscosities and shear rates by curve fitting. The obtained values showed that the consistency index increased with increasing volume fraction and decreased with increasing temperature. Moreover, the power-law index was less than 1 for all samples, indicating shear-thinning behavior. Lastly, new correlations were suggested to estimate the consistency index and power-law index.
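    Extracting the power-law constants by curve fitting reduces to a linear least-squares fit in log-log coordinates, since the power-law model for apparent viscosity, μ = K·γ̇^(n−1), gives log μ = log K + (n−1)·log γ̇. A sketch with invented viscosity data (the K and n values and shear rates are illustrative only, not the paper's measurements):

```python
import numpy as np

# Synthetic power-law fluid: mu = K * gamma**(n - 1); n < 1 means shear thinning.
K_true, n_true = 0.35, 0.82                          # consistency & power-law index
shear_rates = np.array([10.0, 20.0, 40.0, 80.0, 160.0])   # 1/s
rng = np.random.default_rng(5)
viscosity = K_true * shear_rates ** (n_true - 1) \
    * np.exp(rng.normal(0, 0.01, shear_rates.size))  # small multiplicative noise

# Linear least squares in log-log coordinates:
# log(mu) = log(K) + (n - 1) * log(gamma)
slope, intercept = np.polyfit(np.log(shear_rates), np.log(viscosity), 1)
n_fit = slope + 1.0
K_fit = float(np.exp(intercept))
```

    Repeating this fit at each volume fraction and temperature yields the K and n tables from which the paper's correlations could then be built.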

  11. Fingerprint chromatogram analysis of Pseudostellaria heterophylla (Miq.) Pax root by high performance liquid chromatography.

    PubMed

    Han, Chao; Chen, Junhui; Chen, Bo; Lee, Frank Sen-Chun; Wang, Xiaoru

    2006-09-01

    A simple and reliable high performance liquid chromatographic (HPLC) method has been developed and validated for the fingerprinting of extracts from the root of Pseudostellaria heterophylla (Miq.) Pax. HPLC with gradient elution was performed on an authentic reference standard of powdered P. heterophylla (Miq.) Pax root and 11 plant samples of the root were collected from different geographic locations. The HPLC chromatograms have been standardized through the selection and identification of reference peaks and the normalization of retention times and peak intensities of all the common peaks. The standardized HPLC fingerprints show high stability and reproducibility, and thus can be used effectively for the screening analysis or quality assessment of the root or its derived products. Similarity index calculations based on cosine angle values or correlation methods have been performed on the HPLC fingerprints. As a group, the fingerprints of the P. heterophylla (Miq.) Pax samples studied are highly correlated with closely similar fingerprints. Within the group, the samples can be further divided into subgroups based on hierarchical clustering analysis (HCA). Sample grouping based on HCA coincides nicely with those based on the geographical origins of the samples. The HPLC fingerprinting techniques thus have high potential in authentication or source-tracing types of applications.
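    A cosine-angle similarity index of the kind used in this record is straightforward to compute from standardized peak-intensity vectors. The fingerprints below are invented for illustration (common-peak intensities after retention-time normalization, not the paper's chromatograms):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two fingerprint intensity vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical peak-intensity fingerprints over six common peaks
reference = np.array([1.00, 0.45, 0.30, 0.80, 0.12, 0.55])
sample_a = np.array([0.95, 0.50, 0.28, 0.78, 0.10, 0.60])   # similar profile
sample_b = np.array([0.20, 0.90, 0.85, 0.15, 0.70, 0.05])   # divergent profile

sim_a = cosine_similarity(reference, sample_a)
sim_b = cosine_similarity(reference, sample_b)
```

    Samples whose pairwise similarities cluster near 1 against the reference would group together, which is the basis for the hierarchical clustering step described above.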

  12. [A thermodynamic study on bovine spermatozoa by microcalorimetry after Percoll density-gradient centrifugation - experimental probe of its utility in andrology].

    PubMed

    Fischer, C; Scherfer-Brähler, V; Müller-Schlösser, F; Schröder-Printzen, I; Weidner, W

    2007-05-01

    Microcalorimetric measurements can be used to record exothermic or endothermic summation effects of a great variety of biological processes. The aim of the present study was to examine the usefulness of the microcalorimetry method for characterising the biological activity of spermatozoa. The heat flows of bovine fresh-sperm and cryosperm samples were measured after Percoll density-gradient centrifugation in a 4-channel microcalorimeter. Various calibration times, sample volumes, and sperm concentrations were tested and analysed. Sperm concentration was recorded by a computer-assisted sperm analysis (CASA) system. Using a calibration time of 15 minutes, the heat signal of the fresh and cryosperm samples showed a characteristic peak after 39.5 min and 38.1 min (mean), respectively, with a significant correlation to sample volume and sperm concentration (p < 0.05). The best results were obtained with a sample volume of 1 mL and a sperm concentration of more than 50 × 10⁶/mL. With microcalorimetric measurements the biological activity of spermatozoa could be recorded reproducibly, opening the way to automated ejaculate analysis in the future. More investigations are necessary to correlate microcalorimetric parameters with semen function.

  13. Conditioning and Robustness of RNA Boltzmann Sampling under Thermodynamic Parameter Perturbations.

    PubMed

    Rogers, Emily; Murrugarra, David; Heitsch, Christine

    2017-07-25

    Understanding how RNA secondary structure prediction methods depend on the underlying nearest-neighbor thermodynamic model remains a fundamental challenge in the field. Minimum free energy (MFE) predictions are known to be "ill conditioned" in that small changes to the thermodynamic model can result in significantly different optimal structures. Hence, the best practice is now to sample from the Boltzmann distribution, which generates a set of suboptimal structures. Although the structural signal of this Boltzmann sample is known to be robust to stochastic noise, the conditioning and robustness under thermodynamic perturbations have yet to be addressed. We present here a mathematically rigorous model for conditioning inspired by numerical analysis, as well as a biologically inspired definition for robustness under thermodynamic perturbation. We demonstrate the strong correlation between conditioning and robustness and use this tight relationship to define quantitative thresholds for well versus ill conditioning. The resulting thresholds demonstrate that the majority of sequences are at least sample robust, which verifies the assumption that sampling improves conditioning over the MFE prediction. Furthermore, because we find no correlation between conditioning and MFE accuracy, the presence of both well- and ill-conditioned sequences indicates the continued need both for thermodynamic model refinements and for alternate RNA structure prediction methods beyond the physics-based ones. Copyright © 2017. Published by Elsevier Inc.

  14. Digital photography provides a fast, reliable, and noninvasive method to estimate anthocyanin pigment concentration in reproductive and vegetative plant tissues.

    PubMed

    Del Valle, José C; Gallardo-López, Antonio; Buide, Mª Luisa; Whittall, Justen B; Narbona, Eduardo

    2018-03-01

    Anthocyanin pigments have become a model trait for evolutionary ecology, as they often provide adaptive benefits for plants. Anthocyanins have traditionally been quantified biochemically or, more recently, using spectral reflectance. However, both methods require destructive sampling and can be labor intensive and challenging with small samples. Recent advances in digital photography and image processing make it the method of choice for measuring color in the wild. Here, we use digital images as a quick, noninvasive method to estimate relative anthocyanin concentrations in species exhibiting color variation. Using a consumer-level digital camera and a free image processing toolbox, we extracted RGB values from digital images to generate color indices. We tested petals, stems, pedicels, and calyces of six species, which contain different types of anthocyanin pigments and exhibit different pigmentation patterns. Color indices were assessed by their correlation to biochemically determined anthocyanin concentrations. For comparison, we also calculated color indices from spectral reflectance and tested their correlation with anthocyanin concentration. Indices perform differently depending on the nature of the color variation. For both digital images and spectral reflectance, the most accurate estimates of anthocyanin concentration emerge from the anthocyanin content-chroma ratio, anthocyanin content-chroma basic, and strength-of-green indices. Color indices derived from both digital images and spectral reflectance correlate strongly with biochemically determined anthocyanin concentration; however, the estimates from digital images performed better than spectral reflectance in terms of r² and normalized root-mean-square error. This was particularly noticeable in a species with striped petals, although for striped calyces both methods showed a comparable relationship with anthocyanin concentration. Using digital images brings new opportunities to accurately quantify anthocyanin concentrations in both floral and vegetative tissues. This method is efficient, completely noninvasive, applicable to both uniform and patterned coloration, and works with samples of any size.
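    As a rough illustration of an RGB-derived color index, the sketch below computes the green channel's share of total intensity. The abstract names a "strength of green" index but does not give its formula, so this ratio and the example RGB triples are assumptions for illustration only:

    ```python
    def strength_of_green(r, g, b):
        """Green channel's share of total RGB intensity; anthocyanin-rich
        (red/purple) tissue is expected to score lower. The formula is an
        assumed illustration, not the paper's exact index."""
        total = r + g + b
        return g / total if total else 0.0

    # Hypothetical mean RGB values sampled from two petal photographs
    red_petal = strength_of_green(200, 40, 60)     # pigmented petal
    pale_petal = strength_of_green(180, 190, 170)  # nearly unpigmented petal
    ```

    An index of this kind would then be calibrated against biochemically determined anthocyanin concentrations, as the study describes.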

  15. Linear model correction: A method for transferring a near-infrared multivariate calibration model without standard samples.

    PubMed

    Liu, Yan; Cai, Wensheng; Shao, Xueguang

    2016-12-05

    Calibration transfer is essential for practical applications of near-infrared (NIR) spectroscopy because the spectra may be measured on different instruments and the differences between instruments must be corrected. Most calibration transfer methods require standard samples, measured on both instruments (named the master and slave instruments), to construct the transfer model. In this work, a method named linear model correction (LMC) is proposed for calibration transfer without standard samples. The method is based on the fact that, for samples with similar physical and chemical properties, the spectra measured on different instruments are linearly correlated. Consequently, the coefficients of the linear models constructed from the spectra measured on different instruments are similar in profile. Therefore, using a constrained optimization method, the coefficients of the master model can be transferred into those of the slave model with only a few spectra measured on the slave instrument. Two NIR datasets of corn and plant leaf samples measured with different instruments are used to test the performance of the method. The results show that, for both datasets, the spectra can be correctly predicted using the transferred partial least squares (PLS) models. Because standard samples are not required, the method may be more useful in practical applications. Copyright © 2016 Elsevier B.V. All rights reserved.
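    The core idea of LMC, that master-model coefficients can be mapped onto a slave model using only a few slave-instrument spectra, can be caricatured with a single fitted scale factor. The paper's method is a constrained optimization over the full coefficient profile; the one-parameter version below is a deliberately simplified sketch with invented numbers:

    ```python
    def transfer_scale(master_coefs, slave_spectra, slave_refs):
        """Fit one scale factor alpha minimizing the squared error between
        alpha * (master-model predictions on slave spectra) and the slave
        reference values, then return the rescaled coefficients.
        A one-parameter simplification of LMC's constrained optimization."""
        preds = [sum(c * x for c, x in zip(master_coefs, s)) for s in slave_spectra]
        alpha = sum(p * y for p, y in zip(preds, slave_refs)) / sum(p * p for p in preds)
        return [alpha * c for c in master_coefs]

    # Invented example: slave responses are uniformly 1.25x the master predictions,
    # so the fitted coefficients should be the master coefficients scaled by 1.25
    slave_coefs = transfer_scale([0.5, 1.0],
                                 [[1, 1], [2, 0], [0, 3]],
                                 [1.875, 1.25, 3.75])
    ```

    In the real method the slave coefficients are allowed to deviate from a pure rescaling, subject to constraints that keep their profile similar to the master's.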

  16. Assessing SOC labile fractions through respiration test, density-size fractionation and thermal analysis - A comparison of methods

    NASA Astrophysics Data System (ADS)

    Soucemarianadin, Laure; Cécillon, Lauric; Chenu, Claire; Baudin, François; Nicolas, Manuel; Savignac, Florence; Barré, Pierre

    2017-04-01

    Soil organic matter (SOM) is the biggest terrestrial carbon reservoir, storing 3 to 4 times more carbon than the atmosphere. However, despite its major importance for climate regulation, SOM dynamics remains insufficiently understood. For instance, there is still no widely accepted method to assess SOM lability. Soil respiration tests and particulate organic matter (POM) obtained by different fractionation schemes have been used for decades and are now considered classical estimates of very labile and labile soil organic carbon (SOC), respectively. But the pertinence of these methods for characterizing SOM turnover can be questioned. Moreover, they are very time-consuming and their reproducibility might be an issue. Alternative ways of determining the labile SOC component are thus needed. Thermal analyses have been used to characterize SOM, among which Rock-Eval 6 (RE6) analysis of soil has shown promising results in the determination of SOM biogeochemical stability (Gregorich et al., 2015; Barré et al., 2016). Using a large set of samples of French forest soils representing contrasted pedoclimatic conditions, including deep samples (up to 1 m depth), we compared different techniques used for SOM lability assessment. We explored whether results from a soil respiration test (10-week laboratory incubations), SOM size-density fractionation and RE6 thermal analysis were comparable and how they were correlated. Sets of 222 (respiration test and RE6), 103 (SOM fractionation and RE6) and 93 (all three methods) forest soil samples were analyzed and compared. The comparison of the three methods (n = 93) using a principal component analysis separated samples from the surface (0-10 cm) and deep (40-80 cm) layers, highlighting a clear effect of depth on the short-term persistence of SOC.
A correlation analysis demonstrated that, for these samples, the two classical methods of labile SOC determination (respiration and SOM fractionation) were only weakly positively correlated (Spearman's ρ = 0.26, n = 93). Similarly, soil respiration had only a weak negative correlation (Spearman's ρ = -0.24, n = 93; ρ = -0.33, n = 222) with the RE6 parameter T50 CH pyrolysis. This parameter, previously used as an indicator of labile SOC (Gregorich et al., 2015), represents the temperature at which 50% of the OM was pyrolyzed to effluents (mainly hydrocarbons) during the pyrolysis phase of RE6. Conversely, POC content (% of total SOC) showed a stronger negative correlation with T50 CH pyrolysis (ρ = -0.66, n = 93; ρ = -0.65, n = 103) and was positively and negatively correlated with the hydrogen index, HI (mg HC/g TOC; ρ = 0.56/0.53), and the oxygen index, OI (mg CO2/g TOC; ρ = -0.63/-0.62), respectively. Our results show that the RE6 results are consistent with the respiration and fractionation results: SOC with a higher respiration rate and higher POC content burns at a lower temperature. RE6 thermal analysis could therefore be viewed as a useful, fast and cost-effective alternative to the more time-consuming methods used to determine SOM fractions. Barré, P. et al. Biogeochemistry 2016, 130, 1-12. Gregorich, E.G. et al. Soil Biol. Biochem. 2015, 91, 182-191.
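    Spearman's ρ, used throughout the correlation analysis above, is simply the Pearson correlation computed on the ranks of the two variables (with ties sharing their average rank). A minimal self-contained implementation:

    ```python
    def rank(values):
        """1-based average ranks; tied values share the mean of their ranks."""
        order = sorted(range(len(values)), key=lambda i: values[i])
        ranks = [0.0] * len(values)
        i = 0
        while i < len(order):
            j = i
            while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1
            for k in range(i, j + 1):
                ranks[order[k]] = avg
            i = j + 1
        return ranks

    def spearman_rho(x, y):
        """Pearson correlation of the ranks of x and y."""
        rx, ry = rank(x), rank(y)
        mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
        cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
        sx = sum((a - mx) ** 2 for a in rx) ** 0.5
        sy = sum((b - my) ** 2 for b in ry) ** 0.5
        return cov / (sx * sy)

    # Perfectly monotone data gives rho = 1
    rho = spearman_rho([1, 2, 3, 4], [10, 20, 30, 40])
    ```

    Because it works on ranks, ρ is appropriate here where the relationships between respiration, POC content and thermal parameters need not be linear.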

  17. Influence of Head Teachers' General and Instructional Supervisory Practices on Teachers' Work Performance in Secondary Schools in Entebbe Municipality, Wakiso District, Uganda

    ERIC Educational Resources Information Center

    Jared, Nzabonimpa Buregeya

    2011-01-01

    The study examined the influence of secondary school head teachers' general and instructional supervisory practices on teachers' work performance. Qualitative and quantitative methods with a descriptive-correlational research approach were used in the study. A purposive sampling technique alongside a random sampling technique was used to select the…

  18. Relationship of Pupils' Spatial Perception and Ability with Their Performance in Geography

    ERIC Educational Resources Information Center

    Likouri, Anna-Aikaterini; Klonari, Aikaterini; Flouris, George

    2017-01-01

    The aim of this study was to investigate the correlation between pupils' spatial perception and abilities and their performance in geography. The sample comprised 600 sixth-grade pupils from various areas of Greece, selected by the cluster sampling method. The study results showed that: a) the vast majority of pupils showed low spatial ability; b) there…

  19. Determination of consolidation properties using electrical resistivity

    NASA Astrophysics Data System (ADS)

    Kibria, Golam; Hossain, Sahadat; Khan, Mohammad Sadik

    2018-05-01

    Electrical conductivity is an indirect method used to evaluate pore structures and their influence on the macro- and microscale behavior of soils. Although this method can provide useful information about the consolidation properties of soil samples, few studies have been conducted to identify correlations between electrical and consolidation properties. The current study presents the electrical resistivity responses of clayey samples at different consolidation stages. The consolidation properties of four soil specimens were measured in conjunction with electrical conductivity. Scanning electron microscope (SEM) analyses were performed on soil samples before and after consolidation to identify the changes in fabric morphology due to the application of loads. It was observed that the electrical conductivity of the samples decreased with increasing pressure, and the trends of variation were similar to e vs. logP curves. Although a linear correlation exists between electrical conductivity and void ratio, the relationship depends on the structural changes in the clay particles. Therefore, changes in fabric structure were analyzed using SEM images, and the results showed that the aspect ratio of the particles increased by as much as 18.3% after consolidation. Based on the investigation, the coefficients of consolidation and one-dimensional strain were determined using electrical resistivity measurements.

  20. Application of end-tidal carbon dioxide monitoring via distal gas samples in ventilated neonates.

    PubMed

    Jin, Ziying; Yang, Maoying; Lin, Ru; Huang, Wenfang; Wang, Jiangmei; Hu, Zhiyong; Shu, Qiang

    2017-08-01

    Previous research has suggested correlations between the end-tidal partial pressure of carbon dioxide (PETCO2) and the partial pressure of arterial carbon dioxide (PaCO2) in mechanically ventilated patients, but both the relationship between PETCO2 and PaCO2 and whether PETCO2 accurately reflects PaCO2 in neonates and infants are still controversial. This study evaluated remote sampling of PETCO2 via an epidural catheter within an endotracheal tube to determine the procedure's clinical safety and efficacy in the perioperative management of neonates. Abdominal surgery was performed under general anesthesia in 86 full-term newborns (age 1-30 days, weight 2.55-4.0 kg, American Society of Anesthesiologists class I or II). The infants were divided into 2 groups (n = 43 each), and carbon dioxide (CO2) gas samples were collected either from the conventional position (the proximal end) or a modified position (the distal end) of the epidural catheter. The PETCO2 measured with the new method was significantly higher than that measured with the traditional method, and the difference between PETCO2 and PaCO2 was also reduced. The accuracy of the measured PETCO2 increased from 78.7% to 91.5% when the modified sampling method was used. The correlation between PETCO2 and PaCO2 was moderate (0.596) with the traditional measurement and significantly higher (0.960) in the modified sampling group; thus, the PETCO2 value was closer to that of PaCO2. PETCO2 detected via the modified carbon dioxide monitoring had better accuracy and correlation with PaCO2 in neonates. Copyright © 2017. Published by Elsevier B.V.

  1. [Optic density of gastric content in the rapid assessment of pulmonary maturity of the newborn].

    PubMed

    Brandell, L; Sepúlveda, W H; Araneda, H; Mangiamarchi, M

    1991-01-01

    The authors studied the correlation between optical density at 650 nm (OD650) in paired samples of amniotic fluid and newborn gastric aspirate obtained from 50 pregnant women who underwent cesarean delivery and from their respective newborns. There was a good correlation between both samples (r = 0.92), which demonstrates that OD650 is a reliable method for studying neonatal pulmonary maturity, especially in high-risk cases in which no prenatal study is available. None of the infants with mature readings (OD650 ≥ 0.10) developed respiratory distress syndrome (RDS), whereas 6 out of 13 infants with immature readings (OD650 < 0.10) did.

  2. Cancer-Related Fatigue and Its Associations with Depression and Anxiety: A Systematic Review

    PubMed Central

    Brown, Linda F.; Kroenke, Kurt

    2010-01-01

    Background: Fatigue is an important symptom in cancer and has been shown to be associated with psychological distress. Objectives: This review assesses evidence regarding associations of cancer-related fatigue (CRF) with depression and anxiety. Methods: Database searches yielded 59 studies reporting correlation coefficients or odds ratios. Results: The combined sample size was 12,103. The average correlation of fatigue with depression, weighted by sample size, was 0.56, and with anxiety, 0.46. Thirty-one instruments were used to assess fatigue, suggesting a lack of consensus on measurement. Conclusion: This review confirms the association of fatigue with depression and anxiety. Directionality needs to be better delineated in longitudinal studies. PMID:19855028
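    The sample-size-weighted average correlation reported above can be reproduced with a simple weighted mean. The per-study values below are invented, and note that meta-analyses often average Fisher z-transformed coefficients instead; the plain weighted mean shown here just matches the abstract's wording:

    ```python
    def weighted_mean_correlation(correlations, sample_sizes):
        """Average of per-study correlation coefficients, weighted by each
        study's sample size (plain weighted mean)."""
        total_n = sum(sample_sizes)
        return sum(r * n for r, n in zip(correlations, sample_sizes)) / total_n

    # Invented per-study fatigue-depression correlations and sample sizes
    avg_r = weighted_mean_correlation([0.60, 0.40], [300, 100])  # about 0.55
    ```

    Weighting by sample size keeps one small, noisy study from moving the pooled estimate as much as a large one.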

  3. Transcultural adaptation and psychometric properties of Spanish version of Pregnancy Physical Activity Questionnaire: the PregnActive project.

    PubMed

    Oviedo-Caro, Miguel Ángel; Bueno-Antequera, Javier; Munguía-Izquierdo, Diego

    2018-03-19

    The aim was to transculturally adapt the Pregnancy Physical Activity Questionnaire (PPAQ) into Spanish and analyze its psychometric properties. The PPAQ was transculturally adapted into Spanish. Test-retest reliability was evaluated in a subsample of 109 pregnant women. Validity was evaluated in a sample of 208 pregnant women who answered the questionnaire and wore a multi-sensor monitor for 7 valid days. The reliability (intraclass correlation coefficient), concordance (concordance correlation coefficient), correlation (Pearson correlation coefficient), agreement (Bland-Altman plots) and relative activity levels (Jonckheere-Terpstra test) between both administrations and methods were examined. Intraclass correlation coefficients between both administrations were good for all categories except transportation. A low but significant correlation was found for total activity (light and above), whereas no correlation was found for other intensities between both methods. Relative activity levels analysis showed a significant linear trend for increased total activity between both methods. The Spanish version of the PPAQ is a brief and easily interpretable questionnaire with good reliability and ability to rank individuals, but poor validity compared with the multi-sensor monitor. The PPAQ provides information on pregnancy-specific activities, allowing the physical activity levels of pregnant women to be established and health promotion interventions to be adapted. Copyright © 2018 SESPAS. Published by Elsevier España, S.L.U. All rights reserved.

  4. Raman spectroscopy as a tool to understand Kerogen production potential

    NASA Astrophysics Data System (ADS)

    Khatibi, S.; Ostadhassan, M.; Mohammed, R. A.; Alexeyev, A.

    2017-12-01

    Much attention has been given to unconventional reservoirs, specifically oil shale in North America, during the last decades. Understanding kerogen properties in terms of maturity and production potential is crucial for unconventional reservoirs, since the amount of hydrocarbon generated is a function of the kerogen type and content in the formation and of the magnitude and duration of the heat and pressure applied. This study presents a fast, non-destructive method to determine kerogen properties in terms of Rock-Eval parameters by means of Raman spectroscopy. Samples were gathered from the upper and lower Bakken formation, with different maturities at different depths. Raman spectroscopy, a powerful nondestructive analytical tool for molecular reconstruction, was employed to obtain the Raman spectra of the different samples. In the next step, Rock-Eval pyrolysis was performed for each sample and different measurements were made. Then, in an original approach, correlations between Rock-Eval parameters and Raman spectroscopy results were established to understand how kerogen production potential is reflected in the Raman response. Results showed that the maturity-related parameters (Ro, Tmax), S1 (oil already generated in the rock), S2 (potential hydrocarbon) and OSI (oil saturation index, an indication of potential oil flow zones) can be correlated to band separation, D band intensity, G band intensity and the G/D intensity ratio, respectively. The proposed method provides a fast, nondestructive way to evaluate kerogen quality, even in the field, without any special sample preparation.

  5. Data for factor analysis of hydro-geochemical characteristics of groundwater resources in Iranshahr.

    PubMed

    Biglari, Hamed; Saeidi, Mehdi; Karimyan, Kamaleddin; Narooie, Mohammad Reza; Sharafi, Hooshmand

    2018-08-01

    Detection of hydrogeological and hydro-geochemical changes affecting the quality of aquifer water is very important. The aim of this study was to perform a factor analysis of the hydro-geochemical characteristics of Iranshahr groundwater resources during the warm and cool seasons. In this study, 248 samples (with two repetitions) of groundwater resources were collected by a cluster-random sampling method during 2017 in the villages of Iranshahr city. After transferring the samples to the laboratory, the concentrations of 13 important chemical parameters were determined according to standard methods for water and wastewater examination. The results indicated that 45.45% of the correlations between parameters decreased significantly and 55.55% increased significantly with the transition from warm to cold seasons. According to the factor analysis, three factors were identified as influencing the chemical composition of these resources: hydro-geochemical processes in the ground, recharge of the resources by surface water and sewage, and human activities. The highest growth rate of 0.37 was observed between phosphate and nitrate ions, while the lowest trend of -0.33 was seen between fluoride ion and the calcium and chloride ions. A significant increase in the correlation between magnesium and nitrate ions from warm to cold seasons also indicates a strong seasonal effect on the relation between these two parameters.

  6. Development of a rapid optic bacteria detecting system based on ATP bioluminescence

    NASA Astrophysics Data System (ADS)

    Liu, Jun Tao; Luo, JinPing; Liu, XiaoHong; Cai, XinXia

    2014-12-01

    A rapid optical bacteria-detecting system based on the principle of adenosine triphosphate (ATP) bioluminescence is presented in this paper. The system consists of a bioluminescence-based biosensor and a high-sensitivity optical meter. A photon-counting photomultiplier tube (PMT) module was used to improve the detection sensitivity, and a NIOS II/f processor based on a field-programmable gate array (FPGA) was used to control the system. In this work, Micrococcus luteus was chosen as the test sample. Several M. luteus suspensions with different concentrations were tested by both the T2011 system and the plate counting method. By comparing the two groups of results, a calibration curve was obtained from the bioluminescence intensity for M. luteus in the range of 2.3×10² to 2.3×10⁶ CFU/mL, with a good correlation coefficient of 0.960. An impacting air microorganism sampler was used to capture airborne bacteria, and 8 samples were collected in different places. The total bacteria count results of the 8 samples by T2011 were between 10 and 2×10³ CFU/mL, consistent with the plate counting method, which indicated counts between 10 and 3×10³ CFU/mL. Because the total airborne bacteria counts were small, the correlation coefficient was poorer; still, no significant difference was found between T2011 and the plate counting method by statistical analysis.
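    A calibration curve relating bioluminescence intensity to bacterial concentration over four orders of magnitude is typically fitted in log-log space. The abstract does not state the fitting details, so the counts and light readings below are invented for illustration; the sketch just shows an ordinary least-squares fit with its correlation coefficient:

    ```python
    import math

    def linfit_r(xs, ys):
        """Ordinary least-squares fit y = a*x + b plus the Pearson correlation r."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        sxx = sum((x - mx) ** 2 for x in xs)
        syy = sum((y - my) ** 2 for y in ys)
        sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        a = sxy / sxx
        return a, my - a * mx, sxy / math.sqrt(sxx * syy)

    # Invented counts (CFU/mL) and relative light units spanning the cited range
    counts = [2.3e2, 2.3e3, 2.3e4, 2.3e5, 2.3e6]
    rlu = [15, 160, 1400, 16000, 150000]
    slope, intercept, r = linfit_r([math.log10(c) for c in counts],
                                   [math.log10(v) for v in rlu])
    ```

    Fitting in log-log space keeps the few very high-concentration points from dominating the fit, which matters when the dynamic range spans 10² to 10⁶ CFU/mL.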

  7. Macro elemental analysis of food samples by nuclear analytical technique

    NASA Astrophysics Data System (ADS)

    Syahfitri, W. Y. N.; Kurniawati, S.; Adventini, N.; Damastuti, E.; Lestiani, D. D.

    2017-06-01

    Energy-dispersive X-ray fluorescence (EDXRF) spectrometry is a non-destructive, rapid, multi-elemental, accurate, and environmentally friendly analysis technique compared with other detection methods, and is thus applicable to food inspection. The macro elements calcium and potassium are important nutrients required by the human body for optimal physiological function, so the determination of Ca and K content in various foods is needed. The aim of this work is to demonstrate the applicability of EDXRF to food analysis. The analytical performance of non-destructive EDXRF was compared with that of two other analytical techniques, neutron activation analysis (NAA) and atomic absorption spectrometry (AAS). The comparison served to cross-check the analysis results and to overcome the limitations of the three methods. The results showed that the Ca contents found in food using EDXRF and AAS were not significantly different (p-value 0.9687), whereas the p-value for K between EDXRF and NAA was 0.6575. The correlation between the results was also examined: the Pearson correlations for Ca and K were 0.9871 and 0.9558, respectively. Method validation using SRM NIST 1548a Typical Diet was also applied. The results showed good agreement between methods; therefore, EDXRF can be used as an alternative method for the determination of Ca and K in food samples.

  8. Methods of Reverberation Mapping. I. Time-lag Determination by Measures of Randomness

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chelouche, Doron; Pozo-Nuñez, Francisco; Zucker, Shay, E-mail: doron@sci.haifa.ac.il, E-mail: francisco.pozon@gmail.com, E-mail: shayz@post.tau.ac.il

    A class of methods for measuring time delays between astronomical time series is introduced in the context of quasar reverberation mapping, which is based on measures of randomness or complexity of the data. Several distinct statistical estimators are considered that do not rely on polynomial interpolations of the light curves nor on their stochastic modeling, and do not require binning in correlation space. Methods based on von Neumann’s mean-square successive-difference estimator are found to be superior to those using other estimators. An optimized von Neumann scheme is formulated, which better handles sparsely sampled data and outperforms current implementations of discrete correlation function methods. This scheme is applied to existing reverberation data of varying quality, and consistency with previously reported time delays is found. In particular, the size–luminosity relation of the broad-line region in quasars is recovered with a scatter comparable to that obtained by other works, yet with fewer assumptions made concerning the process underlying the variability. The proposed method for time-lag determination is particularly relevant for irregularly sampled time series, and in cases where the process underlying the variability cannot be adequately modeled.
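    The von Neumann mean-square successive-difference idea can be sketched as follows: shift the echo light curve back by a trial lag, merge the two curves in time order, and pick the lag that makes the merged curve smoothest. This toy version omits the optimizations described in the paper, and the sine-wave light curves are synthetic:

    ```python
    import math

    def von_neumann(flux):
        """Mean-square successive difference of a flux sequence."""
        return sum((flux[i + 1] - flux[i]) ** 2
                   for i in range(len(flux) - 1)) / (len(flux) - 1)

    def estimate_lag(t1, f1, t2, f2, trial_lags):
        """Shift the echo curve back by each trial lag, merge both curves in
        time order, and return the lag giving the smoothest merged curve."""
        best_lag, best_v = None, float("inf")
        for lag in trial_lags:
            times = list(t1) + [t - lag for t in t2]
            fluxes = list(f1) + list(f2)
            merged = sorted(zip(times, fluxes))
            v = von_neumann([f for _, f in merged])
            if v < best_v:
                best_lag, best_v = lag, v
        return best_lag

    # Synthetic driving light curve and an echo delayed by 5 time units,
    # sampled on offset grids to mimic unevenly matched sampling
    t1 = [float(t) for t in range(40)]
    f1 = [math.sin(t / 5) for t in t1]
    t2 = [t + 0.5 for t in range(40)]
    f2 = [math.sin((t - 5) / 5) for t in t2]
    ```

    Note that nothing here interpolates or bins the light curves; the statistic is computed directly on the merged samples, which is what makes the approach attractive for irregular sampling.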

  9. External validity of the pediatric cardiac quality of life inventory

    PubMed Central

    Marino, Bradley S.; Drotar, Dennis; Cassedy, Amy; Davis, Richard; Tomlinson, Ryan S.; Mellion, Katelyn; Mussatto, Kathleen; Mahony, Lynn; Newburger, Jane W.; Tong, Elizabeth; Cohen, Mitchell I.; Helfaer, Mark A.; Kazak, Anne E.; Wray, Jo; Wernovsky, Gil; Shea, Judy A.; Ittenbach, Richard

    2012-01-01

    Purpose The Pediatric Cardiac Quality of Life Inventory (PCQLI) is a disease-specific, health-related quality of life (HRQOL) measure for pediatric heart disease (HD). The purpose of this study was to demonstrate the external validity of PCQLI scores. Methods The PCQLI development site (Development sample) and six geographically diverse centers in the United States (Composite sample) recruited pediatric patients with acquired or congenital HD. Item response option variability, scores [Total (TS); Disease Impact (DI) and Psychosocial Impact (PI) subscales], patterns of correlation, and internal consistency were compared between samples. Results A total of 3,128 patients and parent participants (1,113 Development; 2,015 Composite) were analyzed. Response option variability patterns of all items in both samples were acceptable. Inter-sample score comparisons revealed no differences. Median item–total (Development, 0.57; Composite, 0.59) and item–subscale (Development, DI 0.58, PI 0.59; Composite, DI 0.58, PI 0.56) correlations were moderate. Subscale–subscale (0.79 for both samples) and subscale–total (Development, DI 0.95, PI 0.95; Composite, DI 0.95, PI 0.94) correlations and internal consistency (Development, TS 0.93, DI 0.90, PI 0.84; Composite, TS 0.93, DI 0.89, PI 0.85) were high in both samples. Conclusion PCQLI scores are externally valid across the US pediatric HD population and may be used for multi-center HRQOL studies. PMID:21188538

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jenkins, T.F.; Thorne, P.G.; Myers, K.F.

    Salting-out solvent extraction (SOE) was compared with cartridge and membrane solid-phase extraction (SPE) for preconcentration of nitroaromatics, nitramines, and aminonitroaromatics prior to determination by reversed-phase high-performance liquid chromatography. The solid phases used were manufacturer-cleaned materials, Porapak RDX for the cartridge method and Empore SDB-RPS for the membrane method. Thirty-three groundwater samples from the Naval Surface Warfare Center, Crane, Indiana, were analyzed using the direct analysis protocol specified in SW846 Method 8330, and the results were compared with analyses conducted after preconcentration using SOE with acetonitrile, cartridge-based SPE, and membrane-based SPE. For high-concentration samples, analytical results from the three preconcentration techniques were compared with results from the direct analysis protocol; good recovery of all target analytes was achieved by all three preconcentration methods. For low-concentration samples, results from the two SPE methods were correlated with results from the SOE method; very similar data were obtained by the SOE and SPE methods, even at concentrations well below 1 microgram/L.

  11. Development and validation of a matrix solid-phase dispersion method to determine acrylamide in coffee and coffee substitutes.

    PubMed

    Soares, Cristina M Dias; Alves, Rita C; Casal, Susana; Oliveira, M Beatriz P P; Fernandes, José Oliveira

    2010-04-01

    The present study describes the development and validation of a new method based on a matrix solid-phase dispersion (MSPD) sample preparation procedure followed by GC-MS for the determination of acrylamide levels in coffee (ground and brewed) and coffee substitute samples. Samples were dispersed in C(18) sorbent and the mixture was packed into a preconditioned custom-made ISOLUTE bilayered SPE column (C(18)/Multimode; 1 g + 1 g). Acrylamide was subsequently eluted with water, derivatized with bromine, and quantified by GC-MS in SIM mode. The MSPD/GC-MS method presented a LOD of 5 microg/kg and a LOQ of 10 microg/kg. Intra- and interday precisions ranged from 2% to 4% and 4% to 10%, respectively. To evaluate the performance of the method, 11 samples of ground and brewed coffee and coffee substitutes were simultaneously analyzed by the developed method and by a previously validated method based on a liquid-extraction (LE) procedure; the results showed a high correlation between the two methods.

  12. Comparison of five techniques for the detection of Renibacterium salmoninarum in adult coho salmon.

    USGS Publications Warehouse

    Pascho, R.J.; Elliott, D.G.; Mallett, R.W.; Mulcahy, D.

    1987-01-01

    Samples of kidney, spleen, coelomic fluid, and blood from 56 sexually mature coho salmon Oncorhynchus kisutch were examined for infection by Renibacterium salmoninarum by five methods. The overall prevalence (all sample types combined) of R. salmoninarum in the fish was 100% by the enzyme-linked immunosorbent assay, 86% by the combined results of the direct fluorescent antibody and the direct filtration-fluorescent antibody techniques, 39% by culture, 11% by counterimmunoelectrophoresis, and 5% by agarose gel immunodiffusion. There was a significant positive correlation (P < 0.001) between the enzyme-linked immunosorbent assay absorbance levels and the counts by fluorescent antibody techniques for kidney, spleen, and coelomic fluid, and significant positive correlations (P < 0.001) in enzyme-linked immunosorbent assay absorbance levels for all four of the sample types.

  13. The MIXR sample or: how I learned to stop worrying and love multiwavelength catalogue cross-correlations

    NASA Astrophysics Data System (ADS)

    Mingo, Beatriz; Watson, Mike; Stewart, Gordon; Rosen, Simon; Blain, Andrew; Hardcastle, Martin; Mateos, Silvia; Carrera, Francisco; Ruiz, Angel; Pineau, Francois-Xavier

    2016-08-01

    We cross-match 3XMM, WISE and FIRST/NVSS to create the largest-to-date mid-IR, X-ray, and radio (MIXR) sample of galaxies and AGN. We use MIXR to triage sources, efficiently and accurately pre-classifying them as star-forming galaxies or AGN, and to highlight biases and shortcomings in current AGN sample selection methods, paving the way for the next generation of instruments. Our results highlight key questions in AGN science, such as the need for a re-definition of the radio-loud/radio-quiet classification, and our observed lack of correlation between the kinetic (jet) and radiative (luminosity) output in AGN, which has dramatic potential consequences for our current understanding of AGN accretion, variability and feedback.

  14. Sample-averaged biexciton quantum yield measured by solution-phase photon correlation.

    PubMed

    Beyler, Andrew P; Bischof, Thomas S; Cui, Jian; Coropceanu, Igor; Harris, Daniel K; Bawendi, Moungi G

    2014-12-10

    The brightness of nanoscale optical materials such as semiconductor nanocrystals is currently limited in high excitation flux applications by inefficient multiexciton fluorescence. We have devised a solution-phase photon correlation measurement that can conveniently and reliably measure the average biexciton-to-exciton quantum yield ratio of an entire sample without user selection bias. This technique can be used to investigate the multiexciton recombination dynamics of a broad scope of synthetically underdeveloped materials, including those with low exciton quantum yields and poor fluorescence stability. Here, we have applied this method to measure weak biexciton fluorescence in samples of visible-emitting InP/ZnS and InAs/ZnS core/shell nanocrystals, and to demonstrate that a rapid CdS shell growth procedure can markedly increase the biexciton fluorescence of CdSe nanocrystals.

  15. A Novel Method to Handle the Effect of Uneven Sampling Effort in Biodiversity Databases

    PubMed Central

    Pardo, Iker; Pata, María P.; Gómez, Daniel; García, María B.

    2013-01-01

    How reliable are results on the spatial distribution of biodiversity based on databases? Many studies have demonstrated the uncertainty introduced into such analyses by sampling effort bias, and the need to quantify it. Although a number of methods are available for this purpose, little is known about their statistical limitations and discrimination capability, which could seriously constrain their use. We assess for the first time the discrimination capacity of two widely used methods and a proposed new one (FIDEGAM), all based on species accumulation curves, under different scenarios of sampling exhaustiveness, using Receiver Operating Characteristic (ROC) analyses. Additionally, we examine to what extent the output of each method represents the sampling completeness in a simulated scenario where the true species richness is known. Finally, we apply FIDEGAM to a real situation and explore the spatial patterns of plant diversity in a National Park. FIDEGAM showed an excellent capability to discriminate between well and poorly sampled areas regardless of sampling exhaustiveness, whereas the other methods failed. Accordingly, FIDEGAM values were strongly correlated with the true percentage of species detected in the simulated scenario, whereas sampling completeness estimated with the other methods showed no relationship, due to null discrimination capability. Quantifying sampling effort is necessary to account for the uncertainty in biodiversity analyses; however, not all proposed methods are equally reliable. Our comparative analysis demonstrated that FIDEGAM was the most accurate discriminator method in all scenarios of sampling exhaustiveness and can therefore be efficiently applied to most databases to enhance the reliability of biodiversity analyses. PMID:23326357
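    The discrimination analysis above rests on the ROC framework: a method's completeness score is treated as a classifier of well- versus poorly sampled cells, and the area under the ROC curve (AUC) measures how reliably it ranks the former above the latter. A minimal sketch with hypothetical scores (FIDEGAM itself is not reproduced here):

```python
def roc_auc(scores_pos, scores_neg):
    """AUC as the probability that a randomly chosen positive case
    (well-sampled cell) outranks a randomly chosen negative case
    (poorly sampled cell); ties count one half."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical completeness scores (0-1) for grid cells whose true
# sampling status is known from a simulation.
well = [0.91, 0.84, 0.88, 0.79, 0.95]   # truly well-sampled cells
poor = [0.35, 0.52, 0.41, 0.60, 0.28]   # truly poorly sampled cells
auc = roc_auc(well, poor)   # 1.0 here: every well cell outranks every poor cell
```

    An AUC of 1.0 means perfect discrimination, while 0.5 means the score is no better than chance, which is essentially the "null discrimination capability" the authors report for the competing methods.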

  17. Evaluation of potentially nonlethal sampling methods for monitoring mercury concentrations in smallmouth bass (Micropterus dolomieu)

    USGS Publications Warehouse

    Schmitt, C.J.; Brumbaugh, W.G.

    2007-01-01

    We evaluated three potentially nonlethal alternatives to fillet sampling for the determination of mercury (Hg) concentrations in smallmouth bass (Micropterus dolomieu). Fish (n = 62, 226-464 mm total length) from six sites in southern Missouri were captured by electrofishing. Blood samples (1 mL) from each fish were obtained by caudal venipuncture with a heparinized needle and syringe. Biopsy needle (10 mm x 14 gauge; three cuts per fish; 10-20 mg total dry weight) and biopsy punch (7 mm x 5 mm in diameter, one plug per fish, 30-50 mg dry weight) samples were obtained from the area beneath the dorsal fin. Fillet samples were obtained from the opposite side of the fish. All samples were freeze-dried and analyzed for total Hg by combustion amalgamation atomic absorption spectrophotometry. Mean relative standard deviations (RSDs) of triplicate samples were similar for all four methods (2.2-2.4%), but the range of RSDs was greater for blood (0.4-5.5%) than for the muscle methods (1.8-4.0%). Total Hg concentrations in muscle were 0.0200-0.8809 μg/g wet weight; concentrations in plug, needle, and fillet samples from each fish were nearly identical. Blood Hg concentrations were 0.0006-0.0812 μg/mL and were highly correlated with muscle concentrations; regressions between log-transformed blood and fillet Hg concentrations were linear and statistically significant (p < 0.01), explaining 91-93% of the total variation. Correlations between fillet Hg concentrations and fish size and age were weak; together they explained roughly 37% of the total variation, and the relations differed among sites. Overall, any of the alternative methods could provide satisfactory estimates of fillet Hg in smallmouth bass; however, both blood and plug sampling with disposable instruments were easier to perform than needle sampling. The biopsy needle was the most difficult to use, especially on smaller fish, and its relative expense necessitates reuse and, consequently, thorough cleaning between fish to prevent cross-contamination. © 2007 Springer Science+Business Media, LLC.
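    The blood-to-fillet relationship described above is a linear regression on log-transformed concentrations. A minimal sketch with hypothetical paired Hg values (the actual study data are not reproduced here):

```python
import math

def ols(xs, ys):
    """Ordinary least squares fit y = intercept + slope*x, plus r^2."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    syy = sum((y - my) ** 2 for y in ys)
    slope = sxy / sxx
    return slope, my - slope * mx, sxy * sxy / (sxx * syy)

# Hypothetical paired blood (ug/mL) and fillet (ug/g) Hg concentrations.
blood = [0.001, 0.004, 0.010, 0.025, 0.060]
fillet = [0.03, 0.09, 0.22, 0.45, 0.85]
slope, intercept, r2 = ols([math.log10(b) for b in blood],
                           [math.log10(f) for f in fillet])

# Predict fillet Hg nonlethally from a blood sample of 0.010 ug/mL:
pred = 10 ** (intercept + slope * math.log10(0.010))
```

    The r² from the log-log fit plays the role of the 91-93% of variation explained in the study.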

  18. Subaperture correlation based digital adaptive optics for full field optical coherence tomography.

    PubMed

    Kumar, Abhishek; Drexler, Wolfgang; Leitgeb, Rainer A

    2013-05-06

    This paper proposes a sub-aperture correlation based numerical phase correction method for interferometric full field imaging systems, provided the complex object field information can be extracted. The method corrects for the wavefront aberration at the pupil/Fourier transform plane without the need for any adaptive optics, spatial light modulators (SLMs) or additional cameras. We show that this method does not require knowledge of any system parameters. In a simulation study, we consider a full field swept source OCT (FF SSOCT) system to show the working principle of the algorithm. Experimental results are presented for a technical and a biological sample as a proof of principle.

  19. Tracing footprints of environmental events in tree ring chemistry using neutron activation analysis

    NASA Astrophysics Data System (ADS)

    Sahin, Dagistan

    The aim of this study is to identify environmental effects on tree-ring chemistry. It is known that industrial pollution, volcanic eruptions, dust storms, acid rain and similar events can cause substantial changes in soil chemistry. Establishing whether a particular group of trees is sensitive to these changes in the soil environment and registers them in the elemental chemistry of contemporary growth rings is the overriding goal of any Dendrochemistry research. In this study, elemental concentrations were measured in tree-ring samples of eleven absolutely dated modern forest trees grown in the Mediterranean region of Turkey, collected and dated by the Malcolm and Carolyn Wiener Laboratory for Aegean and Near Eastern Dendrochronology at Cornell University. Correlations between measured elemental concentrations in the tree-ring samples were analyzed using statistical tests to answer two questions. Does the current concentration of a particular element depend on any other element within the tree? And are there any elements showing correlated abnormal concentration changes across the majority of the trees? Based on the detailed analysis results, the low mobility of sodium and bromine, positive correlations between calcium, zinc and manganese, and positive correlations between the trace elements lanthanum, samarium, antimony, and gold within tree-rings were recognized. Moreover, zinc, lanthanum, samarium and bromine showed strong, positive correlations among the trees and were identified as possible environmental signature elements. The new Dendrochemistry information found in this study would also be useful in explaining tree physiology and elemental chemistry in Pinus nigra species grown in Turkey. Elemental concentrations in tree-ring samples were measured using Neutron Activation Analysis (NAA) at the Pennsylvania State University Radiation Science and Engineering Center (RSEC).
    Through this study, advanced methodological, computational and experimental NAA capabilities were developed to ensure acceptable accuracy and certainty in the elemental concentration measurements in tree-ring samples. Two independent NAA analysis methods were used: the well-known k-zero method and a novel method developed in this study, called the Multi-isotope Iterative Westcott (MIW) method. The MIW method uses reaction rate probabilities for a group of isotopes, which can be calculated by a neutronic simulation or measured by experimentation, and determines the representative values for the neutron flux and neutron flux characterization parameters based on the Westcott convention. Elemental concentration calculations for standard reference material and tree-ring samples were then performed using the MIW and k-zero analysis methods of NAA, and the results were cross-verified. In the computational part of this study, a detailed burnup-coupled neutronic simulation was developed to analyze real-time neutronic changes in a TRIGA Mark III reactor core, in this case the Penn State Breazeale Reactor (PSBR) core. To the best of the author's knowledge, this is the first burnup-coupled neutronic simulation with realistic time steps and a full fuel temperature profile for a TRIGA reactor using coupling between the Monte Carlo Utility for Reactor Evolutions (MURE) code and the Monte Carlo N-Particle (MCNP) code. The simulation aimed for high fidelity and flexibility in order to replicate real core operation through the day. This approach resulted in enhanced accuracy in the neutronic representation of the PSBR core with respect to previous neutronic simulation models. An important contribution was also made to the NAA experimentation practices employed in Dendrochemistry studies at the RSEC: automated laboratory control and analysis software for NAA measurements in the RSEC Radionuclide Applications Laboratory was developed.
    Detailed laboratory procedures covering the preparation, handling and measurement of tree-ring samples in the Radionuclide Applications Laboratory were also written as part of this study.

  20. Preventing probe induced topography correlated artifacts in Kelvin Probe Force Microscopy.

    PubMed

    Polak, Leo; Wijngaarden, Rinke J

    2016-12-01

    Kelvin Probe Force Microscopy (KPFM) on samples with rough surface topography can be hindered by topography correlated artifacts. We show that, with the proper experimental configuration and using homogeneously metal coated probes, we are able to obtain amplitude modulation (AM) KPFM results on a gold coated sample with rough topography that are free from such artifacts. By inducing tip inhomogeneity through contact with the sample, clear potential variations appear in the KPFM image, which correlate with the surface topography and, thus, are probe induced artifacts. We find that switching to frequency modulation (FM) KPFM with such altered probes does not remove these artifacts. We also find that the induced tip inhomogeneity causes a lift height dependence of the KPFM measurement, which can therefore be used as a check for the presence of probe induced topography correlated artifacts. We attribute the observed effects to a work function difference between the tip and the rest of the probe and describe a model for such inhomogeneous probes that predicts lift height dependence and topography correlated artifacts for both AM and FM-KPFM methods. This work demonstrates that using a probe with a homogeneous work function and preventing tip changes is essential for KPFM on non-flat samples. From the three investigated probe coatings, PtIr, Au and TiN, the latter appears to be the most suitable, because of its better resistance against coating damage. Copyright © 2016 Elsevier B.V. All rights reserved.

  1. A portable x-ray fluorescence instrument for analyzing dust wipe samples for lead: evaluation with field samples.

    PubMed

    Sterling, D A; Lewis, R D; Luke, D A; Shadel, B N

    2000-06-01

    Dust wipe samples collected in the field were tested by nondestructive X-ray fluorescence (XRF) followed by laboratory analysis with flame atomic absorption spectrophotometry (FAAS). Data were analyzed for precision and accuracy of measurement. Replicate samples with the XRF show high precision with an intraclass correlation coefficient (ICC) of 0.97 (P<0.0001) and an overall coefficient of variation of 11.6%. Paired comparison indicates no statistical difference (P=0.272) between XRF and FAAS analysis. Paired samples are highly correlated with an R(2) ranging between 0.89 for samples that contain paint chips and 0.93 for samples that do not contain paint chips. The ICC for absolute agreement between XRF and laboratory results was 0.95 (P<0.0001). The relative error over the concentration range of 25 to 14,200 microgram Pb is -12% (95% CI, -18 to -5). The XRF appears to be an excellent method for rapid on-site evaluation of dust wipes for clearance and risk assessment purposes, although there are indications of some confounding when paint chips are present. Copyright 2000 Academic Press.
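    Replicate precision of the kind reported above is typically summarized as the relative standard deviation (coefficient of variation) of repeated readings on the same wipe. A minimal sketch with hypothetical triplicate XRF readings:

```python
from statistics import mean, stdev

def rsd_percent(replicates):
    """Relative standard deviation (coefficient of variation), in %."""
    return 100.0 * stdev(replicates) / mean(replicates)

# Hypothetical triplicate XRF readings (ug Pb) for three dust wipes.
wipes = [[98, 105, 101], [410, 395, 402], [31, 29, 33]]
per_wipe = [rsd_percent(w) for w in wipes]   # precision of each wipe
overall = mean(per_wipe)                     # overall CV across wipes
```

    The study's reported overall coefficient of variation of 11.6% is this kind of summary computed over all field replicates.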

  2. Comparison of milk culture, direct and nested polymerase chain reaction (PCR) with fecal culture based on samples from dairy herds infected with Mycobacterium avium subsp. paratuberculosis

    PubMed Central

    Gao, Anli; Odumeru, Joseph; Raymond, Melinda; Hendrick, Steven; Duffield, Todd; Mutharia, Lucy

    2009-01-01

    Mycobacterium avium subsp. paratuberculosis (MAP) is the etiologic agent of Johne’s disease in cattle and other farm ruminants, and is also a suspected pathogen of Crohn’s disease in humans. Development of diagnostic methods for MAP infection has been a challenge over the last few decades. The objective of this study was to investigate the relationship between different methods for detection of MAP in milk and fecal samples. A total of 134 milk samples and 110 feces samples were collected from 146 individual cows in 14 MAP-infected herds in southwestern Ontario. Culture, IS900 polymerase chain reaction (PCR) and nested PCR methods were used for detecting MAP in milk; results were compared with those of fecal culture. A significant relationship was found between milk culture, direct PCR, and nested PCR (P < 0.05). The fecal culture results were not related to any of the 3 assay methods used for the milk samples (P > 0.10). Although fecal culture showed a higher sensitivity than the milk culture method, the difference was not significant (P = 0.2473). The number of MAP colony-forming units (CFU) isolated by culture from fecal samples was, on average, higher than that isolated from milk samples (P = 0.0083). There was no significant correlation between the number of CFU cultured from milk and from feces (Pearson correlation coefficient = 0.1957, N = 63, P = 0.1243). The animals with high numbers of CFU in milk culture may not be detected by fecal culture at all, and vice versa. A significant proportion (29% to 41%) of the positive animals would be missed if only 1 culture method, instead of both milk and feces, were to be used for diagnosis. This suggests that the shedding of MAP in feces and milk is not synchronized. Most of the infected cows were low-level shedders. The proportion of low-level shedders may even be underestimated because MAP is killed during decontamination, thus reducing the chance of detection. 
Therefore, to identify suspected Johne’s-infected animals using the tests in this study, both milk and feces samples should be collected in duplicate to enhance the diagnostic rate. The high MAP kill rate identified in the culture methods during decontamination may be compensated for by using the nested PCR method, which had a higher sensitivity than the IS900 PCR method used. PMID:19337397

  3. High-performance liquid chromatographic separation of human haemoglobins. Simultaneous quantitation of foetal and glycated haemoglobins.

    PubMed

    Bisse, E; Wieland, H

    1988-12-29

    A high-performance liquid chromatographic system, which uses a weak cation exchanger (PolyCATA) together with Bis-Tris buffer (pH 6.47-7.0) and sodium acetate gradients, is described. Samples from adults and newborns were analysed, and the separation of many minor and major, normal and abnormal haemoglobin (Hb) variants was greatly improved. The method allows the separation of minor foetal haemoglobin (HbF) variants and the simultaneous quantitation of HbF and glycated HbA. HbF values correlated well with those obtained by the alkali denaturation method (r = 0.997). The glycated haemoglobin (HbA1c) levels measured in patients with high HbF concentrations correlated with the total glycated haemoglobin determined by bioaffinity chromatography (r = 0.973). The procedure is useful for diagnostic applications and affords an effective and sensitive way of examining blood samples for haemoglobin abnormalities.

  4. Analytical performances of the Diazyme ADA assay on the Cobas® 6000 system.

    PubMed

    Delacour, Hervé; Sauvanet, Christophe; Ceppa, Franck; Burnat, Pascal

    2010-12-01

    To evaluate the analytical performance of the Diazyme ADA assay on the Cobas® 6000 system for pleural fluid sample analysis, imprecision, linearity, calibration curve stability, interference, and correlation studies were completed. The Diazyme ADA assay demonstrated excellent precision (CV<4%) over the analytical measurement range (0.5-117 U/L). Bilirubin above 50 μmol/L and haemoglobin above 177 μmol/L interfered with the test, inducing a negative and a positive interference, respectively. The Diazyme ADA assay correlated well with the Giusti method (r(2)=0.93) but exhibited a negative bias (~ -30%). The Diazyme ADA assay on the Cobas® 6000 system represents a rapid, accurate, precise and reliable method for the determination of ADA activity in pleural fluid samples. Copyright © 2010 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.

  5. Correlation Between Hot Spots and 3-d Defect Structure in Single and Polycrystalline High-explosive Materials

    NASA Astrophysics Data System (ADS)

    Hawkins, Cameron; Tschuaner, Oliver; Fussell, Zachary; Smith, Jesse

    2017-06-01

    A novel approach that spatially identifies inhomogeneities from microscale (defects, conformational disorder) to mesoscale (voids, inclusions) is developed using synchrotron x-ray methods: tomography, Lang topography, and micro-diffraction mapping. These techniques provide a non-destructive method for characterization of mm-sized samples prior to shock experiments. These characterization maps can be used to correlate continuum level measurements in shock compression experiments to the mesoscale and microscale structure. Specifically examined is a sample of C4. We show extensive conformational disorder in gamma-RDX, which is the main component. Further, we observe that the minor HMX component in C4 contains at least two different phases: alpha- and beta-HMX. This work supported by National Security Technologies, LLC, under Contract No. DE-AC52-06NA25946 with the U.S. Department of Energy and by the Site-Directed Research and Development Program. DOE/NV/25946-3071.

  6. Modified cross sample entropy and surrogate data analysis method for financial time series

    NASA Astrophysics Data System (ADS)

    Yin, Yi; Shang, Pengjian

    2015-09-01

    For researching multiscale behaviors from the angle of entropy, we propose a modified cross sample entropy (MCSE) and combine it with surrogate data analysis in order to compute entropy differences between the original dynamics and surrogate series (MCSDiff). MCSDiff is applied to simulated signals to demonstrate its accuracy and then employed for US and Chinese stock markets. We illustrate the presence of multiscale behavior in the MCSDiff results and reveal the synchrony contained in the original financial time series, which reflects intrinsic relations that are destroyed by surrogate data analysis. Furthermore, the multifractal behaviors of the cross-correlations between these financial time series are investigated by the multifractal detrended cross-correlation analysis (MF-DCCA) method, since multifractal analysis is a multiscale analysis. We explore the multifractal properties of the cross-correlations between the US and Chinese markets and show the distinctiveness of NQCI and HSI among the markets in their respective regions. It can be concluded that the weaker cross-correlation between US markets provides evidence of a better inner mechanism in the US stock markets than in the Chinese stock markets. Studying the multiscale features and properties of financial time series can provide valuable information for understanding the inner mechanism of financial markets.
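    The classical cross sample entropy that MCSE builds on compares template vectors drawn from two series and reports -ln(A/B), where B counts matching templates of length m and A those of length m+1. A minimal sketch of the classical (unmodified) statistic with toy series; the MCSE modification itself is not specified in the abstract and is not reproduced here:

```python
import math

def cross_sampen(u, v, m=2, r=0.2):
    """Classical cross sample entropy -ln(A/B): B counts template
    pairs of length m from u and v within Chebyshev tolerance r,
    and A counts the same for length m + 1."""
    def count(mm):
        total = 0
        for i in range(len(u) - mm + 1):
            for j in range(len(v) - mm + 1):
                if max(abs(u[i + k] - v[j + k]) for k in range(mm)) <= r:
                    total += 1
        return total
    b, a = count(m), count(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")

# Toy return-like series; comparing a series against itself gives a
# low (high-synchrony) cross-SampEn value.
x = [0.1, -0.2, 0.15, 0.05, -0.1, 0.2, -0.05, 0.1, -0.15, 0.05]
value = cross_sampen(x, x)
```

    Surrogate analysis as in MCSDiff would shuffle one series (destroying cross-dependence while keeping the distribution) and take the entropy difference between original and surrogate pairs.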

  7. The relationship between job stress and burnout levels of oncology nurses

    PubMed Central

    Tuna, Rujnan; Baykal, Ülkü

    2014-01-01

    Objective: Job stress and burnout levels of oncology nurses increase day by day, in connection with the rapidly increasing number of cancer cases worldwide as well as in Turkey. The purpose of this study was to establish the job stress and burnout levels of oncology nurses and the relationship between them. Methods: The sample of this descriptive study comprised 189 nurses, selected by a nonprobability sampling method, employed by 11 hospitals in Istanbul. A 20-question survey form, the Job Stressors Scale and the Maslach Burnout Inventory (MBI) were used during data collection. Data were evaluated using percentage, Kruskal–Wallis, Mann–Whitney U and Spearman correlation analyses. Results: There was a weak positive correlation between the “Work Role Ambiguity” subdimension of the Job Stressors Scale and the “Emotional Exhaustion” and “Personal Accomplishment” subdimensions, whereas weak-to-medium positive correlations were found between the “Work Role Conflict” subdimension and the “Emotional Exhaustion” and “Depersonalization” subdimensions. A weak negative correlation was found between the “Work Role Overload” subdimension and the “Emotional Exhaustion” and “Depersonalization” subdimensions. Conclusion: A significant relationship was established between the subdimensions of job stress and burnout; many of the oncology nurses who participated in the study wanted to change their units, which is consistent with the high attrition rate. PMID:27981080

  8. Examining spectral properties of Landsat 8 OLI for predicting above-ground carbon of Labanan Forest, Berau

    NASA Astrophysics Data System (ADS)

    Suhardiman, A.; Tampubolon, B. A.; Sumaryono, M.

    2018-04-01

    Many studies have revealed significant correlations between satellite image properties and forest attributes such as stand volume, biomass or carbon stock. However, further study is still relevant due to advances in remote sensing technology as well as improvements in methods of data analysis. In this study, the properties of three vegetation indices derived from Landsat 8 OLI were tested against above-ground carbon stock data from 50 circular sample plots (30-meter radius) from a ground survey in the PT. Inhutani I forest concession in Labanan, Berau, East Kalimantan. Correlation analysis using the Pearson method exhibited promising results, with coefficients of correlation (r) higher than 0.5. Regression analysis was then carried out to develop mathematical models describing the relationship between sample plot data and each vegetation index image. Power and exponential models demonstrated good results for all vegetation indices. To choose the most adequate model for predicting above-ground carbon (AGC), the Bayesian Information Criterion (BIC) was applied. The lowest BIC value (i.e., -376.41), obtained by the Transformed Vegetation Index (TVI), indicates that the formula AGC = 9.608 × TVI^21.54 is the best predictor of AGC in the study area.
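    A power model AGC = a·VI^b can be fitted by ordinary least squares on the log-log transform, and candidate models compared by BIC. A minimal sketch using one common RSS-based BIC definition (the paper's exact BIC formula is not stated) and hypothetical plot data:

```python
import math

def fit_power(vi, agc):
    """Fit AGC = a * VI**b by OLS on the log-log transform; return
    (a, b, BIC) with BIC = n*ln(RSS/n) + k*ln(n), k = 2 parameters."""
    lx = [math.log(v) for v in vi]
    ly = [math.log(y) for y in agc]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(lx, ly))
         / sum((x - mx) ** 2 for x in lx))
    a = math.exp(my - b * mx)
    rss = sum((y - a * v ** b) ** 2 for v, y in zip(vi, agc))
    bic = n * math.log(rss / n) + 2 * math.log(n) if rss > 0 else float("-inf")
    return a, b, bic

# Hypothetical TVI values and plot-level carbon stocks (Mg/ha).
tvi = [0.62, 0.68, 0.71, 0.75, 0.80]
agc = [40.0, 75.0, 100.0, 150.0, 240.0]
a, b, bic = fit_power(tvi, agc)   # compare candidate models by lowest BIC
```

    Repeating the fit for each vegetation index and keeping the lowest-BIC model mirrors the selection procedure described above.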

  9. Flow-injection chemiluminescence determination of melamine in urine and plasma.

    PubMed

    Tang, Xiaoshuang; Shi, Xiyan; Tang, Yuhai; Yue, Zhongjin; He, Qiqi

    2012-01-01

    A novel flow-injection chemiluminescence method for the determination of melamine in urine and plasma was developed. It was found that melamine can remarkably enhance chemiluminescence emission from the luminol-K(3) Fe(CN)(6) system in an alkaline medium. Under the optimum conditions, chemiluminescence intensity had a good linear relationship with the concentration of melamine in the range 9.0 × 10(-9) -7.0 × 10(-6) g/mL, with a correlation coefficient of 0.9992. The detection limit (3σ) was 3.5 ng/mL. The method has been applied to determine the concentration of melamine in samples using liquid-liquid extraction. Average recoveries of melamine were 102.6% in urine samples and 95.1% in plasma samples. The method provided a reproducible and stable approach for the sensitive detection of melamine in urine and plasma samples. Copyright © 2011 John Wiley & Sons, Ltd.
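    A 3σ detection limit of the kind quoted above is conventionally derived from the blank noise and the slope of the linear calibration curve, LOD = 3σ_blank/slope. A minimal sketch with hypothetical calibration points and blank noise (not the study's actual data):

```python
def ols_line(xs, ys):
    """Least-squares calibration line; returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Hypothetical CL intensities across the reported linear range (g/mL).
conc = [1e-8, 5e-8, 1e-7, 5e-7, 1e-6]
signal = [12.0, 55.0, 108.0, 530.0, 1070.0]
slope, intercept = ols_line(conc, signal)

sigma_blank = 1.3               # hypothetical sd of repeated blank signals
lod = 3 * sigma_blank / slope   # 3-sigma detection limit, in g/mL
```

    With these illustrative numbers the LOD lands in the low ng/mL range, the same order as the 3.5 ng/mL reported.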

  10. An Efficient Local Correlation Matrix Decomposition Approach for the Localization Implementation of Ensemble-Based Assimilation Methods

    NASA Astrophysics Data System (ADS)

    Zhang, Hongqin; Tian, Xiangjun

    2018-04-01

    Ensemble-based data assimilation methods often use the so-called localization scheme to improve the representation of the ensemble background error covariance (Be). Extensive research has been undertaken to reduce the computational cost of these methods by using the localized ensemble samples to localize Be by means of a direct decomposition of the local correlation matrix C. However, the computational costs of the direct decomposition of the local correlation matrix C are still extremely high due to its high dimension. In this paper, we propose an efficient local correlation matrix decomposition approach based on the concept of alternating directions. This approach is intended to avoid direct decomposition of the correlation matrix. Instead, we first decompose the correlation matrix into 1-D correlation matrices in the three coordinate directions, then construct their empirical orthogonal function decomposition at low resolution. This procedure is followed by the 1-D spline interpolation process to transform the above decompositions to the high-resolution grid. Finally, an efficient correlation matrix decomposition is achieved by computing the very similar Kronecker product. We conducted a series of comparison experiments to illustrate the validity and accuracy of the proposed local correlation matrix decomposition approach. The effectiveness of the proposed correlation matrix decomposition approach and its efficient localization implementation of the nonlinear least-squares four-dimensional variational assimilation are further demonstrated by several groups of numerical experiments based on the Advanced Research Weather Research and Forecasting model.
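    The core idea of the decomposition above is that a separable correlation matrix can be assembled from small 1-D direction matrices via Kronecker products, so the full high-dimensional matrix is never decomposed directly. A minimal 2-D sketch with toy correlation matrices (the paper's EOF and spline-interpolation steps are omitted):

```python
def kron(A, B):
    """Kronecker product of two matrices stored as lists of rows."""
    return [[a * b for a in row_a for b in row_b]
            for row_a in A for row_b in B]

# Toy 1-D correlation matrices for two coordinate directions.
Cx = [[1.0, 0.5],
      [0.5, 1.0]]
Cy = [[1.0, 0.8, 0.3],
      [0.8, 1.0, 0.8],
      [0.3, 0.8, 1.0]]

# 6x6 separable approximation to the full 2-D correlation matrix;
# a 3-D field would use kron(Cx, kron(Cy, Cz)).
C = kron(Cx, Cy)
```

    The payoff is that eigen-decompositions of the small per-direction matrices combine (again via Kronecker products) into a decomposition of the full matrix at a fraction of the cost.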

  11. Random matrix theory analysis of cross-correlations in the US stock market: Evidence from Pearson’s correlation coefficient and detrended cross-correlation coefficient

    NASA Astrophysics Data System (ADS)

    Wang, Gang-Jin; Xie, Chi; Chen, Shou; Yang, Jiao-Jiao; Yang, Ming-Yan

    2013-09-01

    In this study, we first build two empirical cross-correlation matrices in the US stock market by two different methods, namely the Pearson’s correlation coefficient and the detrended cross-correlation coefficient (DCCA coefficient). Then, combining the two matrices with the method of random matrix theory (RMT), we mainly investigate the statistical properties of cross-correlations in the US stock market. We choose the daily closing prices of 462 constituent stocks of S&P 500 index as the research objects and select the sample data from January 3, 2005 to August 31, 2012. In the empirical analysis, we examine the statistical properties of cross-correlation coefficients, the distribution of eigenvalues, the distribution of eigenvector components, and the inverse participation ratio. From the two methods, we find some new results of the cross-correlations in the US stock market in our study, which are different from the conclusions reached by previous studies. The empirical cross-correlation matrices constructed by the DCCA coefficient show several interesting properties at different time scales in the US stock market, which are useful to the risk management and optimal portfolio selection, especially to the diversity of the asset portfolio. It will be an interesting and meaningful work to find the theoretical eigenvalue distribution of a completely random matrix R for the DCCA coefficient because it does not obey the Marčenko-Pastur distribution.
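    Under RMT, the eigenvalues of a purely random correlation matrix fall inside the Marčenko-Pastur band λ± = σ²(1 ± √(N/T))², so empirical eigenvalues above the upper bound carry genuine correlation structure. A minimal sketch (the sample length of ~1930 trading days is an assumption for the 2005-2012 window):

```python
import math

def mp_bounds(n_assets, n_obs, sigma2=1.0):
    """Marcenko-Pastur eigenvalue bounds for the correlation matrix
    of n_assets series observed over n_obs time points."""
    q = n_assets / n_obs
    return (sigma2 * (1 - math.sqrt(q)) ** 2,
            sigma2 * (1 + math.sqrt(q)) ** 2)

# 462 S&P 500 constituents; ~1930 trading days is an assumed length.
lo, hi = mp_bounds(462, 1930)
# Empirical eigenvalues above `hi` reflect genuine market/sector
# structure rather than estimation noise.
```

    As the abstract notes, this band is only known for the Pearson-based matrix; the analogous theoretical distribution for the DCCA-coefficient matrix remains an open question.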

  12. Comparing interval estimates for small sample ordinal CFA models

    PubMed Central

    Natesan, Prathiba

    2015-01-01

    Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased; this can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small-sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied, along with two Bayesian prior specifications, informative and relatively less informative. Undercoverage of confidence intervals and underestimation of standard errors were common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more often positively than negatively biased; that is, most intervals that did not contain the true value lay above it. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for both prior specifications were closer to the average standard errors of the estimates, and the coverage of Bayesian credibility intervals was closer to nominal, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the statistical uncertainty that comes with the data (e.g., a small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing the coverage and bias of interval estimates, and how ignoring them can be misleading. Therefore, editors and policymakers should continue to emphasize the inclusion of interval estimates in research. PMID:26579002
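    "Coverage" here means the fraction of replications in which an interval estimate contains the true parameter. The study's CFA machinery aside, the idea can be illustrated with a much simpler estimator: a Monte Carlo check of the Fisher-z confidence interval for a correlation at a small sample size. The true correlation, sample size, and replication count below are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
rho, n, reps = 0.5, 20, 4000          # true correlation, small n, replications
cov_mat = np.array([[1.0, rho], [rho, 1.0]])
chol = np.linalg.cholesky(cov_mat)

hits = 0
for _ in range(reps):
    x = rng.standard_normal((n, 2)) @ chol.T   # bivariate normal sample
    r = np.corrcoef(x, rowvar=False)[0, 1]
    z = np.arctanh(r)                          # Fisher z-transform
    se = 1.0 / np.sqrt(n - 3)
    lo, hi = np.tanh(z - 1.96 * se), np.tanh(z + 1.96 * se)
    hits += (lo <= rho <= hi)

coverage = hits / reps   # should sit near the nominal 0.95
```

Undercoverage (coverage well below 0.95), as the study reports for the non-Bayesian CFA interval estimates, would show up in such a simulation as a depressed hit rate.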

  13. Improving the Accuracy of Extracting Surface Water Quality Levels (SWQLs) Using Remote Sensing and Artificial Neural Network: a Case Study in the Saint John River, Canada

    NASA Astrophysics Data System (ADS)

    Sammartano, G.; Spanò, A.

    2017-09-01

    Delineating accurate surface water quality levels (SWQLs) presents a great challenge to researchers. Existing methods of assessing surface water quality only provide individual concentrations at monitoring stations without providing overall SWQLs, so their results are often difficult for decision-makers to interpret. Conversely, the water quality index (WQI) can simplify the surface water quality assessment process and make it accessible to decision-makers. However, in most cases the WQI reflects inaccurate SWQLs because of the lack of representative water samples, which are costly and time-consuming to collect. To solve this problem, we introduce a cost-effective method that combines Landsat-8 imagery and artificial intelligence to develop models that derive representative water samples by correlating concentrations in ground-truth water samples with satellite spectral information. When the method was validated, the correlation between concentrations in ground-truth water samples and concentrations predicted by the developed models reached a high coefficient of determination (R2 > 0.80). The predicted concentrations over each pixel of the study area were then used as input to the WQI developed by the Canadian Council of Ministers of the Environment to extract accurate SWQLs, for drinking purposes, in the Saint John River. The results indicated SWQLs of 67 (Fair) and 59 (Marginal) for the lower and middle basins of the river, respectively. These findings demonstrate the potential of our approach for surface water quality management.
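    The CCME WQI applied here combines three factors: F1 (scope: percentage of variables with at least one guideline exceedance), F2 (frequency: percentage of failed tests), and F3 (amplitude: normalized sum of excursions beyond the guidelines). A sketch of the published formula follows; the variable names, guideline limits, and monitoring values are invented, and real guidelines may be minima rather than maxima, which this simplified version does not handle.

```python
import numpy as np

def ccme_wqi(values, limits):
    """CCME Water Quality Index (simplified: 'do not exceed' guidelines only).
    values: dict var -> array of measured concentrations
    limits: dict var -> maximum allowed concentration (guideline)"""
    failed_vars = [v for v in values if np.any(values[v] > limits[v])]
    n_tests = sum(len(values[v]) for v in values)
    n_failed = sum(int(np.sum(values[v] > limits[v])) for v in values)

    F1 = 100.0 * len(failed_vars) / len(values)   # scope
    F2 = 100.0 * n_failed / n_tests               # frequency
    # amplitude: sum of excursions (value/limit - 1), normalized per test
    exc = sum(float(np.sum(np.clip(values[v] / limits[v] - 1.0, 0, None)))
              for v in values)
    nse = exc / n_tests
    F3 = nse / (0.01 * nse + 0.01)
    return 100.0 - np.sqrt(F1**2 + F2**2 + F3**2) / 1.732

# Hypothetical monitoring data (mg/L or NTU); limits are invented
vals = {"nitrate": np.array([8.0, 12.0, 9.0, 11.0]),
        "turbidity": np.array([0.5, 0.8, 0.4, 0.6])}
lims = {"nitrate": 10.0, "turbidity": 1.0}
wqi = ccme_wqi(vals, lims)
```

In the CCME categorization, scores of 65-79 are "Fair" and 45-64 are "Marginal", which is how the abstract's values of 67 and 59 map to the reported levels.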

  14. Determination of efavirenz in human dried blood spots by reversed-phase high-performance liquid chromatography with UV detection.

    PubMed

    Hoffman, Justin T; Rossi, Steven S; Espina-Quinto, Rowena; Letendre, Scott; Capparelli, Edmund V

    2013-04-01

    Previously published methods for determination of efavirenz (EFV) in human dried blood spots (DBS) use costly and complex liquid chromatography/mass spectrometry. We describe the validation and evaluation of a simple and inexpensive high-performance liquid chromatography method with ultraviolet detection for EFV quantification in human DBS and dried plasma spots (DPS), appropriate for resource-limited settings. One hundred microliters of heparinized whole blood or plasma are spotted onto blood collection cards, dried, punched, and eluted. Eluates are injected onto a C-18 reversed-phase high-performance liquid chromatography column, and EFV is separated isocratically using a potassium phosphate and acetonitrile mobile phase. Ultraviolet detection is at 245 nm, and quantitation uses external calibration standards. Following validation, the method was evaluated using whole blood and plasma from HIV-positive patients undergoing EFV therapy. Mean recovery of drug from DBS is 91.5%, and the method is linear over the validated concentration range of 0.3125-20.0 μg/mL. A good correlation (Spearman r = 0.96) between paired plasma and DBS EFV concentrations from the clinical samples was observed, and hematocrit level was not found to be a significant determinant of the EFV DBS level. The mean observed C DBS/C plasma ratio was 0.68. A good correlation (Spearman r = 0.96) between paired plasma and DPS EFV concentrations was also observed, and the mean percent deviation of DPS samples from plasma samples is 1.68%. Dried whole blood spot or dried plasma spot sampling is well suited for monitoring EFV therapy in resource-limited settings, particularly when high sensitivity is not essential.
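    The validation step reported above is a Spearman rank correlation between paired plasma and spot concentrations. As a minimal sketch of that check, the following computes Spearman's r as the Pearson correlation of ranks (adequate when values are distinct); the paired concentrations are invented, loosely mimicking the 0.68 spot/plasma ratio.

```python
import numpy as np

def spearman_r(x, y):
    """Spearman rank correlation (no tie correction; assumes distinct values)."""
    rx = np.argsort(np.argsort(x)).astype(float)  # ranks 0..n-1
    ry = np.argsort(np.argsort(y)).astype(float)
    return float(np.corrcoef(rx, ry)[0, 1])

# Hypothetical paired EFV concentrations (ug/mL): plasma vs dried blood spot
plasma = np.array([1.2, 2.5, 3.8, 0.9, 4.6, 2.0, 3.1])
dbs = 0.68 * plasma + np.array([0.05, -0.03, 0.02, 0.01, -0.04, 0.03, -0.02])
r = spearman_r(plasma, dbs)
```

Because Spearman's r depends only on rank order, it tolerates the proportional (rather than one-to-one) relationship between spot and plasma levels that the 0.68 ratio implies.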

  15. Comparing interval estimates for small sample ordinal CFA models.

    PubMed

    Natesan, Prathiba

    2015-01-01

    Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased; this can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small-sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied, along with two Bayesian prior specifications, informative and relatively less informative. Undercoverage of confidence intervals and underestimation of standard errors were common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more often positively than negatively biased; that is, most intervals that did not contain the true value lay above it. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for both prior specifications were closer to the average standard errors of the estimates, and the coverage of Bayesian credibility intervals was closer to nominal, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the statistical uncertainty that comes with the data (e.g., a small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing the coverage and bias of interval estimates, and how ignoring them can be misleading. Therefore, editors and policymakers should continue to emphasize the inclusion of interval estimates in research.

  16. Laser-induced differential normalized fluorescence method for cancer diagnosis

    DOEpatents

    Vo-Dinh, Tuan; Panjehpour, Masoud; Overholt, Bergein F.

    1996-01-01

    An apparatus and method for cancer diagnosis are disclosed. The diagnostic method includes the steps of irradiating a tissue sample with monochromatic excitation light, producing a laser-induced fluorescence spectrum from the emission radiation generated by interaction of the excitation light with the tissue sample, and dividing the intensity at each wavelength of the laser-induced fluorescence spectrum by the integrated area under the spectrum to produce a normalized spectrum. A mathematical difference between the normalized spectrum and the average of a reference set of normalized spectra corresponding to normal tissues is calculated, which amplifies small changes in the weak signals from malignant tissues for improved analysis. The calculated differential normalized spectrum is then correlated to a specific condition of the tissue sample.
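    The two numerical steps of the patented method (area normalization, then subtraction of a reference average) can be sketched directly. The spectra below are synthetic Gaussians, not real tissue fluorescence, and the 620 nm "tumor shoulder" is an invented feature used only to show how the differencing amplifies a weak signal.

```python
import numpy as np

def differential_normalized_spectrum(sample, reference_set, wavelengths):
    """Normalize each spectrum by its integrated area, then subtract the
    mean normalized spectrum of the reference (normal-tissue) set."""
    dw = wavelengths[1] - wavelengths[0]
    def normalize(spec):
        area = np.sum(spec) * dw          # Riemann-sum integrated intensity
        return spec / area
    ref_mean = np.mean([normalize(s) for s in reference_set], axis=0)
    return normalize(sample) - ref_mean

# Synthetic spectra: a main emission band at 520 nm; the "tumor" spectrum
# adds a weak shoulder at 620 nm
wl = np.linspace(400, 700, 301)
normal = [np.exp(-0.5 * ((wl - 520) / 40) ** 2) * a for a in (0.8, 1.0, 1.2)]
tumor = (np.exp(-0.5 * ((wl - 520) / 40) ** 2)
         + 0.1 * np.exp(-0.5 * ((wl - 620) / 20) ** 2))

dnf = differential_normalized_spectrum(tumor, normal, wl)
# Area normalization cancels overall intensity differences (the three normal
# spectra collapse onto one curve), so the weak 620 nm shoulder stands out.
```

Note that the differential spectrum integrates to zero by construction, since both terms are normalized to unit area; diagnostic information lives in where it deviates from zero.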

  17. Laser-induced differential normalized fluorescence method for cancer diagnosis

    DOEpatents

    Vo-Dinh, T.; Panjehpour, M.; Overholt, B.F.

    1996-12-03

    An apparatus and method for cancer diagnosis are disclosed. The diagnostic method includes the steps of irradiating a tissue sample with monochromatic excitation light, producing a laser-induced fluorescence spectrum from the emission radiation generated by interaction of the excitation light with the tissue sample, and dividing the intensity at each wavelength of the laser-induced fluorescence spectrum by the integrated area under the spectrum to produce a normalized spectrum. A mathematical difference between the normalized spectrum and the average of a reference set of normalized spectra corresponding to normal tissues is calculated, which amplifies small changes in the weak signals from malignant tissues for improved analysis. The calculated differential normalized spectrum is then correlated to a specific condition of the tissue sample. 5 figs.

  18. Determination of alcohol and extract concentration in beer samples using a combined method of near-infrared (NIR) spectroscopy and refractometry.

    PubMed

    Castritius, Stefan; Kron, Alexander; Schäfer, Thomas; Rädle, Matthias; Harms, Diedrich

    2010-12-22

    A new approach combining near-infrared (NIR) spectroscopy and refractometry was developed in this work to determine the concentrations of alcohol and real extract in various beer samples. Partial least-squares (PLS) regression was used as the multivariate calibration method to evaluate the correlation between the spectroscopy/refractometry data and the alcohol/extract concentrations. This multivariate combination of spectroscopy and refractometry improved the precision of the alcohol determination compared to spectroscopy alone, because high extract concentrations, especially in nonalcoholic beer samples, affect the spectral data. For the NIR calibration, two mathematical pretreatments (first-order derivation and linear baseline correction) were applied to eliminate light-scattering effects, and a sample grouping of the refractometry data was applied to increase the accuracy of the determined concentrations. The root mean squared errors of validation (RMSEV) for the alcohol concentration were 0.23 Mas% (method A), 0.12 Mas% (method B), and 0.19 Mas% (method C), and for the extract concentration 0.11 Mas% for all three methods.
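    PLS regression is the core of this calibration: it extracts a few latent components that maximize covariance between the spectral matrix and the concentration, which is why it copes with hundreds of collinear wavelength channels. Below is a minimal single-response PLS1 (NIPALS) sketch, not the authors' calibration; the "spectra" are synthetic, generated from two latent concentrations standing in for alcohol and extract.

```python
import numpy as np

def pls1_fit(X, y, n_comp):
    """Minimal PLS1 via NIPALS for a single response; returns a predictor."""
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xk, yk = X - x_mean, y - y_mean
    W, P, q = [], [], []
    for _ in range(n_comp):
        w = Xk.T @ yk                     # weight: covariance direction
        w /= np.linalg.norm(w)
        t = Xk @ w                        # scores
        tt = t @ t
        p = Xk.T @ t / tt                 # X loadings
        qk = (yk @ t) / tt                # y loading
        Xk = Xk - np.outer(t, p)          # deflate X
        yk = yk - qk * t                  # deflate y
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    B = W @ np.linalg.solve(P.T @ W, q)   # regression vector
    return lambda Xnew: y_mean + (Xnew - x_mean) @ B

# Synthetic "NIR spectra" driven by two latent concentrations
rng = np.random.default_rng(2)
conc = rng.uniform(0, 5, size=(40, 2))        # e.g. alcohol, extract
loadings = rng.standard_normal((2, 100))      # pure-component "spectra"
spectra = conc @ loadings + 0.01 * rng.standard_normal((40, 100))
alcohol = conc[:, 0]

predict = pls1_fit(spectra, alcohol, n_comp=2)
resid = predict(spectra) - alcohol            # small: 2 components suffice
```

With two latent factors driving the spectra, two PLS components recover the alcohol signal almost exactly; in practice the component count is chosen by cross-validation, much as the RMSEV values above compare the three calibration variants.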

  19. Measurement of plasma unbound unconjugated bilirubin.

    PubMed

    Ahlfors, C E

    2000-03-15

    A method is described for measuring the unconjugated fraction of the unbound bilirubin concentration in plasma by combining the peroxidase method for determining unbound bilirubin with a diazo method for measuring conjugated and unconjugated bilirubin. The accuracy of the unbound bilirubin determination is improved by decreasing sample dilution, eliminating interference by conjugated bilirubin, monitoring changes in bilirubin concentration using diazo derivatives, and correcting for rate-limiting dissociation of bilirubin from albumin. The unbound unconjugated bilirubin concentration by the combined method in plasma from 20 jaundiced newborns was significantly greater than and poorly correlated with the unbound bilirubin determined by the existing peroxidase method (r = 0.7), possibly due to differences in sample dilution between the methods. The unbound unconjugated bilirubin was an unpredictable fraction of the unbound bilirubin in plasma samples from patients with similar total bilirubin concentrations but varying levels of conjugated bilirubin. A bilirubin-binding competitor was readily detected at a sample dilution typically used for the combined test but not at the dilution used for the existing peroxidase method. The combined method is ideally suited to measuring unbound unconjugated bilirubin in jaundiced human newborns or animal models of kernicterus. Copyright 2000 Academic Press.

  20. Assessment of Nonverbal and Verbal Apraxia in Patients with Parkinson's Disease

    PubMed Central

    Olchik, Maira Rozenfeld; Shumacher Shuh, Artur Francisco; Rieder, Carlos R. M.

    2015-01-01

    Objective. To assess the presence of nonverbal and verbal apraxia in patients with Parkinson's disease (PD) and analyze the correlation between these conditions and patient age, education, duration of disease, and PD stage, as well as to evaluate the correlation between the two types of apraxia and the frequency and types of verbal apraxic errors made by patients in the sample. Method. This was an observational prevalence study. The sample comprised 45 patients with PD seen at the Movement Disorders Clinic of the Clinical Hospital of Porto Alegre, Brazil. Patients were evaluated using the Speech Apraxia Assessment Protocol, and PD stages were classified according to the Hoehn and Yahr scale. Results. The rate of nonverbal apraxia and verbal apraxia in the present sample was 24.4%. Verbal apraxia was significantly correlated with education (p ≤ 0.05). The most frequent type of verbal apraxic error was omission (70.8%). The analysis of manner and place of articulation showed that most errors occurred during the production of trill (57.7%) and dentoalveolar (92%) phonemes, respectively. Conclusion. Patients with PD presented nonverbal and verbal apraxia and made several verbal apraxic errors. Verbal apraxia was correlated with education levels. PMID:26543663
