Sample records for set variation analysis

  1. Multiscale 3D Shape Analysis using Spherical Wavelets

    PubMed Central

    Nain, Delphine; Haker, Steven; Bobick, Aaron; Tannenbaum, Allen

    2013-01-01

    Shape priors attempt to represent biological variations within a population. When variations are global, Principal Component Analysis (PCA) can be used to learn major modes of variation, even from a limited training set. However, when significant local variations exist, PCA typically cannot represent such variations from a small training set. To address this issue, we present a novel algorithm that learns shape variations from data at multiple scales and locations using spherical wavelets and spectral graph partitioning. Our results show that when the training set is small, our algorithm significantly improves the approximation of shapes in a testing set over PCA, which tends to oversmooth data. PMID:16685992

  2. Multiscale 3D shape analysis using spherical wavelets.

    PubMed

    Nain, Delphine; Haker, Steven; Bobick, Aaron; Tannenbaum, Allen R

    2005-01-01

    Shape priors attempt to represent biological variations within a population. When variations are global, Principal Component Analysis (PCA) can be used to learn major modes of variation, even from a limited training set. However, when significant local variations exist, PCA typically cannot represent such variations from a small training set. To address this issue, we present a novel algorithm that learns shape variations from data at multiple scales and locations using spherical wavelets and spectral graph partitioning. Our results show that when the training set is small, our algorithm significantly improves the approximation of shapes in a testing set over PCA, which tends to oversmooth data.
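
    The two records above describe using PCA to learn modes of shape variation from a limited training set. As a generic, hedged illustration of that PCA baseline (not the authors' spherical-wavelet method), the sketch below learns modes from flattened landmark coordinates and synthesizes new shapes from mode weights; the array layout, function names, and toy data are assumptions made for the example.

    ```python
    import numpy as np

    def pca_shape_modes(shapes, n_modes=5):
        """Learn PCA modes of variation from a (small) training set of shapes.

        shapes: (n_samples, n_points * 3) array, each row a flattened set of
        corresponding 3-D surface points (hypothetical layout).
        """
        mean = shapes.mean(axis=0)
        centered = shapes - mean
        # SVD of the centered data gives the principal modes of variation.
        U, s, Vt = np.linalg.svd(centered, full_matrices=False)
        eigvals = (s ** 2) / (len(shapes) - 1)   # variance explained by each mode
        return mean, Vt[:n_modes], eigvals[:n_modes]

    def synthesize(mean, modes, eigvals, b):
        """Generate a new shape from mode weights b (in standard-deviation units)."""
        return mean + (b * np.sqrt(eigvals)) @ modes

    # Toy usage: 10 training shapes of 100 corresponding 3-D points each.
    rng = np.random.default_rng(0)
    train = rng.normal(size=(10, 300))
    mean, modes, eigvals = pca_shape_modes(train, n_modes=3)
    new_shape = synthesize(mean, modes, eigvals, b=np.array([2.0, 0.0, -1.0]))
    ```

    With only 10 training shapes there are at most 9 non-trivial modes after centering, which is the small-training-set limitation these records discuss.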

  3. Sensitivity analysis for parametric generalized implicit quasi-variational-like inclusions involving P-η-accretive mappings

    NASA Astrophysics Data System (ADS)

    Kazmi, K. R.; Khan, F. A.

    2008-01-01

    In this paper, using the proximal-point mapping technique of P-η-accretive mappings and the property of the fixed-point set of set-valued contractive mappings, we study the behavior and sensitivity analysis of the solution set of a parametric generalized implicit quasi-variational-like inclusion involving a P-η-accretive mapping in a real uniformly smooth Banach space. Further, under suitable conditions, we discuss the Lipschitz continuity of the solution set with respect to the parameter. The technique and results presented in this paper can be viewed as an extension of the techniques and corresponding results given in [R.P. Agarwal, Y.-J. Cho, N.-J. Huang, Sensitivity analysis for strongly nonlinear quasi-variational inclusions, Appl. Math. Lett. 13 (2002) 19-24; S. Dafermos, Sensitivity analysis in variational inequalities, Math. Oper. Res. 13 (1988) 421-434; X.-P. Ding, Sensitivity analysis for generalized nonlinear implicit quasi-variational inclusions, Appl. Math. Lett. 17 (2) (2004) 225-235; X.-P. Ding, Parametric completely generalized mixed implicit quasi-variational inclusions involving h-maximal monotone mappings, J. Comput. Appl. Math. 182 (2) (2005) 252-269; X.-P. Ding, C.L. Luo, On parametric generalized quasi-variational inequalities, J. Optim. Theory Appl. 100 (1999) 195-205; Z. Liu, L. Debnath, S.M. Kang, J.S. Ume, Sensitivity analysis for parametric completely generalized nonlinear implicit quasi-variational inclusions, J. Math. Anal. Appl. 277 (1) (2003) 142-154; R.N. Mukherjee, H.L. Verma, Sensitivity analysis of generalized variational inequalities, J. Math. Anal. Appl. 167 (1992) 299-304; M.A. Noor, Sensitivity analysis framework for general quasi-variational inclusions, Comput. Math. Appl. 44 (2002) 1175-1181; M.A. Noor, Sensitivity analysis for quasivariational inclusions, J. Math. Anal. Appl. 236 (1999) 290-299; J.Y. Park, J.U. Jeong, Parametric generalized mixed variational inequalities, Appl. Math. Lett. 17 (2004) 43-48].

  4. Landsat analysis of tropical forest succession employing a terrain model

    NASA Technical Reports Server (NTRS)

    Barringer, T. H.; Robinson, V. B.; Coiner, J. C.; Bruce, R. C.

    1980-01-01

    Landsat multispectral scanner (MSS) data have yielded a dual classification of rain forest and shadow in an analysis of a semi-deciduous forest on Mindoro Island, Philippines. Both a spatial terrain model, using a fifth-order polynomial trend surface analysis for quantitatively estimating the general spatial variation in the data set, and a spectral terrain model, based on the MSS data, have been set up. A discriminant analysis, using both sets of data, has suggested that shadowing effects may be due primarily to local variations in the spectral regions and can therefore be compensated for through the decomposition of the spatial variation in both elevation and MSS data.
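
    The record refers to a fifth-order polynomial trend surface fitted to estimate the general spatial variation in the data set. A minimal sketch of that generic technique (ordinary least squares on polynomial terms in x and y) is shown below; the variable names and toy data are hypothetical, not the study's data.

    ```python
    import numpy as np

    def trend_surface(x, y, z, order=5):
        """Fit z ~ polynomial(x, y) of the given total order by least squares."""
        def design(xq, yq):
            cols = [np.ones_like(xq)]
            for d in range(1, order + 1):
                for i in range(d + 1):               # terms x^(d-i) * y^i
                    cols.append(xq ** (d - i) * yq ** i)
            return np.column_stack(cols)

        A = design(x, y)
        coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
        return coeffs, lambda xq, yq: design(xq, yq) @ coeffs

    # Toy usage: elevation samples at scattered (x, y) locations.
    rng = np.random.default_rng(1)
    x, y = rng.uniform(0, 1, 200), rng.uniform(0, 1, 200)
    z = 100 + 30 * x - 20 * y ** 2 + rng.normal(scale=1.0, size=200)
    coeffs, predict = trend_surface(x, y, z, order=5)
    residual = z - predict(x, y)   # local variation left after removing the trend
    ```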

  5. Does a strategy to promote shared decision-making reduce medical practice variation in the choice of either single or double embryo transfer after in vitro fertilisation? A secondary analysis of a randomised controlled trial.

    PubMed

    Brabers, Anne E M; van Dijk, Liset; Groenewegen, Peter P; van Peperstraten, Arno M; de Jong, Judith D

    2016-05-06

    The hypothesis that shared decision-making (SDM) reduces medical practice variations is increasingly common, but no evidence is available. We aimed to elaborate further on this, and to perform a first exploratory analysis to examine this hypothesis. This analysis, based on a limited data set, examined how SDM is associated with variation in the choice of single embryo transfer (SET) or double embryo transfer (DET) after in vitro fertilisation (IVF). We examined variation between and within hospitals. A secondary analysis of a randomised controlled trial. 5 hospitals in the Netherlands. 222 couples (woman aged <40 years) on a waiting list for a first IVF cycle, who could choose between SET and DET (ie, ≥2 embryos available). SDM via a multifaceted strategy aimed to empower couples in deciding how many embryos should be transferred. The strategy consisted of decision aid, support of IVF nurse and the offer of reimbursement for an extra treatment cycle. Control group received standard IVF care. Difference in variation due to SDM in the choice of SET or DET, both between and within hospitals. There was large variation in the choice of SET or DET between hospitals in the control group. Lower variation between hospitals was observed in the group with SDM. Within most hospitals, variation in the choice of SET or DET appeared to increase due to SDM. Variation particularly increased in hospitals where mainly DET was chosen in the control group. Although based on a limited data set, our study gives a first insight that including patients' preferences through SDM results in less variation between hospitals, and indicates another pattern of variation within hospitals. Variation that results from patient preferences could be potentially named the informed patient rate. Our results provide the starting point for further research. NCT00315029; Post-results. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  6. Introductions in Research Articles: Variation across Disciplines.

    ERIC Educational Resources Information Center

    Samraj, B.

    2002-01-01

    Reports on an analysis of research article introductions from two related fields, Wildlife Behavior and Conservation Biology, using Swales' (1990) "Genre Analysis: English in Academic and Research Settings." Results of the analysis reveal disciplinary variation in the structure of this genre, which has important pedagogical implications.…

  7. Multi-disciplinary optimization of aeroservoelastic systems

    NASA Technical Reports Server (NTRS)

    Karpel, Mordechay

    1991-01-01

    New methods were developed for efficient aeroservoelastic analysis and optimization. The main target was to develop a method for investigating large structural variations using a single set of modal coordinates. This task was accomplished by basing the structural modal coordinates on normal modes calculated with a set of fictitious masses loading the locations of anticipated structural changes. The following subject areas are covered: (1) modal coordinates for aeroelastic analysis with large local structural variations; and (2) time simulation of flutter with large stiffness changes.

  8. Analysis of a New Variational Model to Restore Point-Like and Curve-Like Singularities in Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aubert, Gilles, E-mail: gaubert@unice.fr; Blanc-Feraud, Laure, E-mail: Laure.Blanc-Feraud@inria.fr; Graziani, Daniele, E-mail: Daniele.Graziani@inria.fr

    2013-02-15

    The paper is concerned with the analysis of a new variational model to restore point-like and curve-like singularities in biological images. To this aim we investigate the variational properties of a suitable energy which governs these pathologies. Finally in order to realize numerical experiments we minimize, in the discrete setting, a regularized version of this functional by fast descent gradient scheme.

  9. A survey of tools for variant analysis of next-generation genome sequencing data

    PubMed Central

    Pabinger, Stephan; Dander, Andreas; Fischer, Maria; Snajder, Rene; Sperk, Michael; Efremova, Mirjana; Krabichler, Birgit; Speicher, Michael R.; Zschocke, Johannes

    2014-01-01

    Recent advances in genome sequencing technologies provide unprecedented opportunities to characterize individual genomic landscapes and identify mutations relevant for diagnosis and therapy. Specifically, whole-exome sequencing using next-generation sequencing (NGS) technologies is gaining popularity in the human genetics community due to the moderate costs, manageable data amounts and straightforward interpretation of analysis results. While whole-exome and, in the near future, whole-genome sequencing are becoming commodities, data analysis still poses significant challenges and led to the development of a plethora of tools supporting specific parts of the analysis workflow or providing a complete solution. Here, we surveyed 205 tools for whole-genome/whole-exome sequencing data analysis supporting five distinct analytical steps: quality assessment, alignment, variant identification, variant annotation and visualization. We report an overview of the functionality, features and specific requirements of the individual tools. We then selected 32 programs for variant identification, variant annotation and visualization, which were subjected to hands-on evaluation using four data sets: one set of exome data from two patients with a rare disease for testing identification of germline mutations, two cancer data sets for testing variant callers for somatic mutations, copy number variations and structural variations, and one semi-synthetic data set for testing identification of copy number variations. Our comprehensive survey and evaluation of NGS tools provides a valuable guideline for human geneticists working on Mendelian disorders, complex diseases and cancers. PMID:23341494

  10. Quantifying and visualizing variations in sets of images using continuous linear optimal transport

    NASA Astrophysics Data System (ADS)

    Kolouri, Soheil; Rohde, Gustavo K.

    2014-03-01

    Modern advancements in imaging devices have enabled us to explore the subcellular structure of living organisms and extract vast amounts of information. However, interpreting the biological information mined in the captured images is not a trivial task. Utilizing predetermined numerical features is usually the only hope for quantifying this information. Nonetheless, direct visual or biological interpretation of results obtained from these selected features is non-intuitive and difficult. In this paper, we describe an automatic method for modeling visual variations in a set of images, which allows for direct visual interpretation of the most significant differences, without the need for predefined features. The method is based on a linearized version of the continuous optimal transport (OT) metric, which provides a natural linear embedding for the image data set, in which a linear combination of images leads to a visually meaningful image. This enables us to apply linear geometric data analysis techniques such as principal component analysis and linear discriminant analysis in the linearly embedded space and visualize the most prominent modes, as well as the most discriminant modes of variation, in the dataset. Using the continuous OT framework, we are able to analyze variations in shape and texture in a set of images utilizing each image at full resolution, which otherwise cannot be done with existing methods. The proposed method is applied to a set of nuclei images segmented from Feulgen-stained liver tissues in order to investigate the major visual differences in chromatin distribution of Fetal-Type Hepatoblastoma (FHB) cells compared to normal cells.
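
    As a hedged toy analogue of the linear optimal transport embedding described above: in one dimension, the optimal transport map from a reference density to a sample density is the sample's inverse CDF composed with the reference's CDF, and PCA can then be applied to those maps. The sketch below shows only this 1-D idea; the paper's continuous 2-D image formulation is not reproduced, and all names and toy densities are assumptions.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    def lot_embedding_1d(densities, reference, grid):
        """Toy 1-D linear optimal transport embedding.

        Each density (and the reference) is a nonnegative array on `grid`
        normalised to sum to 1.  The embedding of a density p is the transport
        map T pushing the reference onto p, which in 1-D is F_p^{-1} o F_ref.
        """
        F_ref = np.cumsum(reference)
        maps = []
        for p in densities:
            F_p = np.cumsum(p)
            # Inverse CDF of p evaluated at the reference CDF values.
            maps.append(np.interp(F_ref, F_p, grid))
        return np.array(maps)

    # Toy usage: Gaussian bumps with varying location and width.
    grid = np.linspace(-5, 5, 400)
    rng = np.random.default_rng(2)

    def bump(mu, s):
        d = np.exp(-0.5 * ((grid - mu) / s) ** 2)
        return d / d.sum()

    densities = [bump(rng.uniform(-2, 2), rng.uniform(0.5, 1.5)) for _ in range(30)]
    emb = lot_embedding_1d(densities, bump(0.0, 1.0), grid)
    modes = PCA(n_components=2).fit(emb)       # principal modes of variation
    print(modes.explained_variance_ratio_)
    ```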

  11. gsSKAT: Rapid gene set analysis and multiple testing correction for rare-variant association studies using weighted linear kernels.

    PubMed

    Larson, Nicholas B; McDonnell, Shannon; Cannon Albright, Lisa; Teerlink, Craig; Stanford, Janet; Ostrander, Elaine A; Isaacs, William B; Xu, Jianfeng; Cooney, Kathleen A; Lange, Ethan; Schleutker, Johanna; Carpten, John D; Powell, Isaac; Bailey-Wilson, Joan E; Cussenot, Olivier; Cancel-Tassin, Geraldine; Giles, Graham G; MacInnis, Robert J; Maier, Christiane; Whittemore, Alice S; Hsieh, Chih-Lin; Wiklund, Fredrik; Catalona, William J; Foulkes, William; Mandal, Diptasri; Eeles, Rosalind; Kote-Jarai, Zsofia; Ackerman, Michael J; Olson, Timothy M; Klein, Christopher J; Thibodeau, Stephen N; Schaid, Daniel J

    2017-05-01

    Next-generation sequencing technologies have afforded unprecedented characterization of low-frequency and rare genetic variation. Due to low power for single-variant testing, aggregative methods are commonly used to combine observed rare variation within a single gene. Causal variation may also aggregate across multiple genes within relevant biomolecular pathways. Kernel-machine regression and adaptive testing methods for aggregative rare-variant association testing have been demonstrated to be powerful approaches for pathway-level analysis, although these methods tend to be computationally intensive at high-variant dimensionality and require access to complete data. An additional analytical issue in scans of large pathway definition sets is multiple testing correction. Gene set definitions may exhibit substantial genic overlap, and the impact of the resultant correlation in test statistics on Type I error rate control for large agnostic gene set scans has not been fully explored. Herein, we first outline a statistical strategy for aggregative rare-variant analysis using component gene-level linear kernel score test summary statistics as well as derive simple estimators of the effective number of tests for family-wise error rate control. We then conduct extensive simulation studies to characterize the behavior of our approach relative to direct application of kernel and adaptive methods under a variety of conditions. We also apply our method to two case-control studies, respectively, evaluating rare variation in hereditary prostate cancer and schizophrenia. Finally, we provide open-source R code for public use to facilitate easy application of our methods to existing rare-variant analysis results. © 2017 WILEY PERIODICALS, INC.
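
    The record mentions simple estimators of the effective number of tests for family-wise error rate control over correlated gene-set statistics. As a hedged illustration, the sketch below uses a Galwey-style eigenvalue estimator, which is not necessarily the estimator derived in the paper, and the simulated statistics are hypothetical.

    ```python
    import numpy as np

    def effective_number_of_tests(stat_matrix):
        """Galwey-style estimate of the effective number of independent tests.

        stat_matrix: (n_replicates, n_gene_sets) array of test statistics whose
        columns are correlated because gene sets share genes (toy layout).
        """
        R = np.corrcoef(stat_matrix, rowvar=False)          # correlation of test statistics
        eig = np.clip(np.linalg.eigvalsh(R), 0.0, None)     # non-negative eigenvalues
        return np.sum(np.sqrt(eig)) ** 2 / np.sum(eig)

    def fwer_threshold(alpha, m_eff):
        """Bonferroni-style per-test threshold using the effective test count."""
        return alpha / m_eff

    # Toy usage: 1000 simulated scans of 50 overlapping (hence correlated) gene sets.
    rng = np.random.default_rng(3)
    base = rng.normal(size=(1000, 10))
    stats = base @ rng.uniform(size=(10, 50)) + rng.normal(scale=0.5, size=(1000, 50))
    m_eff = effective_number_of_tests(stats)
    print(m_eff, fwer_threshold(0.05, m_eff))
    ```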

  12. SU-F-J-180: A Reference Data Set for Testing Two Dimension Registration Algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dankwa, A; Castillo, E; Guerrero, T

    Purpose: To create and characterize a reference data set for testing image registration algorithms that transform a portal image (PI) to a digitally reconstructed radiograph (DRR). Methods: Anterior-posterior (AP) and lateral (LAT) projection and DRR image pairs from nine cases representing four different anatomical sites (head and neck, thoracic, abdominal, and pelvis) were selected for this study. Five experts will perform manual registration by placing landmark points (LMPs) on the DRR and finding their corresponding points on the PI using the computer-assisted manual point selection tool (CAMPST), a custom-made MATLAB software tool developed in house. The landmark selection process will be repeated on both the PI and the DRR in order to characterize inter- and intra-observer variations associated with the point selection process. Inter- and intra-observer variation in LMPs was assessed using Bland-Altman (B&A) analysis and one-way analysis of variance. We set our limit such that the absolute value of the mean difference between the readings should not exceed 3 mm. Later in this project we will test different two-dimensional (2D) image registration algorithms and quantify the uncertainty associated with their registration. Results: Using one-way analysis of variance (ANOVA), there were no variations within the readers. When Bland-Altman analysis was used, the variation within the readers was acceptable. The variation was higher in the PI compared to the DRR. Conclusion: The variation seen for the PI arises because, although the PI has much better spatial resolution, the poor resolution of the DRR makes it difficult to locate the actual corresponding anatomical feature on the PI. We expect this to become more evident when all the readers complete the point selection. Quantifying inter- and intra-observer variation indicates the degree of accuracy with which a manual registration can be done. Research supported by the William Beaumont Hospital Research Start Up Fund.
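
    A minimal sketch of the Bland-Altman agreement check described above, with the 3 mm acceptance limit on the absolute mean difference; the reader arrays and toy numbers are hypothetical.

    ```python
    import numpy as np

    def bland_altman(reader_a, reader_b, limit_mm=3.0):
        """Bland-Altman agreement between two readers' landmark positions (mm).

        reader_a, reader_b: 1-D arrays of paired measurements (e.g. one coordinate
        of each landmark point).
        """
        diff = reader_a - reader_b
        bias = diff.mean()
        loa = 1.96 * diff.std(ddof=1)             # 95% limits-of-agreement half-width
        return {
            "bias": bias,
            "loa_low": bias - loa,
            "loa_high": bias + loa,
            "within_limit": abs(bias) <= limit_mm,
        }

    # Toy usage: 40 landmark x-coordinates selected by two readers.
    rng = np.random.default_rng(4)
    truth = rng.uniform(0, 200, size=40)
    a = truth + rng.normal(scale=1.0, size=40)
    b = truth + rng.normal(scale=1.5, size=40) + 0.8   # second reader slightly biased
    print(bland_altman(a, b))
    ```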

  13. Investigation of priorities in water quality management based on correlations and variations.

    PubMed

    Boyacıoğlu, Hülya; Gündogdu, Vildan; Boyacıoğlu, Hayal

    2013-04-15

    The development of water quality assessment strategies investigating spatial and temporal changes caused by natural and anthropogenic phenomena is an important tool in management practices. This paper used cluster analysis, the water quality index method, sensitivity analysis and canonical correlation analysis to investigate priorities in pollution control activities. Data sets representing 22 surface water quality parameters were subjected to analysis. Results revealed that organic pollution was a serious threat to overall water quality in the region. In addition, oil and grease, lead and mercury were the critical variables violating the standard. In contrast to inorganic variables, organic and physical-inorganic chemical parameters were influenced by variations in physical conditions (discharge, temperature). This study showed that information produced from the variations and correlations in water quality data sets can be helpful for investigating priorities in water management activities. Moreover, statistical techniques and index methods are useful tools in the data-to-information transformation process. Copyright © 2013 Elsevier Ltd. All rights reserved.

  14. JOINT AND INDIVIDUAL VARIATION EXPLAINED (JIVE) FOR INTEGRATED ANALYSIS OF MULTIPLE DATA TYPES.

    PubMed

    Lock, Eric F; Hoadley, Katherine A; Marron, J S; Nobel, Andrew B

    2013-03-01

    Research in several fields now requires the analysis of datasets in which multiple high-dimensional types of data are available for a common set of objects. In particular, The Cancer Genome Atlas (TCGA) includes data from several diverse genomic technologies on the same cancerous tumor samples. In this paper we introduce Joint and Individual Variation Explained (JIVE), a general decomposition of variation for the integrated analysis of such datasets. The decomposition consists of three terms: a low-rank approximation capturing joint variation across data types, low-rank approximations for structured variation individual to each data type, and residual noise. JIVE quantifies the amount of joint variation between data types, reduces the dimensionality of the data, and provides new directions for the visual exploration of joint and individual structure. The proposed method represents an extension of Principal Component Analysis and has clear advantages over popular two-block methods such as Canonical Correlation Analysis and Partial Least Squares. A JIVE analysis of gene expression and miRNA data on Glioblastoma Multiforme tumor samples reveals gene-miRNA associations and provides better characterization of tumor types.
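
    A heavily simplified, hedged sketch of the JIVE idea for fixed, user-chosen ranks: estimate a joint low-rank structure from the stacked data blocks, then a low-rank individual structure per block from what remains. The published algorithm iterates and enforces orthogonality between joint and individual parts; this single pass, with hypothetical toy blocks, only illustrates the three-term decomposition.

    ```python
    import numpy as np

    def low_rank(X, r):
        """Best rank-r approximation of X via truncated SVD."""
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        return (U[:, :r] * s[:r]) @ Vt[:r]

    def jive_sketch(blocks, joint_rank, individual_ranks):
        """One-pass illustration of a JIVE-style decomposition.

        blocks: list of (p_k, n) arrays sharing the same n samples (columns).
        Returns per-block joint, individual, and residual components.
        """
        stacked = np.vstack(blocks)                  # (sum_k p_k, n)
        joint = low_rank(stacked, joint_rank)        # joint variation across blocks
        out, row = [], 0
        for X, r in zip(blocks, individual_ranks):
            J = joint[row:row + X.shape[0]]          # this block's share of the joint part
            I = low_rank(X - J, r)                   # structured individual variation
            out.append({"joint": J, "individual": I, "residual": X - J - I})
            row += X.shape[0]
        return out

    # Toy usage: two data types (e.g. expression and miRNA) on 50 common samples.
    rng = np.random.default_rng(5)
    common = rng.normal(size=(2, 50))
    X1 = rng.normal(size=(100, 2)) @ common + rng.normal(scale=0.3, size=(100, 50))
    X2 = rng.normal(size=(40, 2)) @ common + rng.normal(scale=0.3, size=(40, 50))
    parts = jive_sketch([X1, X2], joint_rank=2, individual_ranks=[1, 1])
    ```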

  15. Fine mapping on chromosome 13q32-34 and brain expression analysis implicates MYO16 in schizophrenia.

    PubMed

    Rodriguez-Murillo, Laura; Xu, Bin; Roos, J Louw; Abecasis, Gonçalo R; Gogos, Joseph A; Karayiorgou, Maria

    2014-03-01

    We previously reported linkage of schizophrenia and schizoaffective disorder to 13q32-34 in the European descent Afrikaner population from South Africa. The nature of genetic variation underlying linkage peaks in psychiatric disorders remains largely unknown and both rare and common variants may be contributing. Here, we examine the contribution of common variants located under the 13q32-34 linkage region. We used densely spaced SNPs to fine map the linkage peak region using both a discovery sample of 415 families and a meta-analysis incorporating two additional replication family samples. In a second phase of the study, we use one family-based data set with 237 families and independent case-control data sets for fine mapping of the common variant association signal using HapMap SNPs. We report a significant association with a genetic variant (rs9583277) within the gene encoding for the myosin heavy-chain Myr 8 (MYO16), which has been implicated in neuronal phosphoinositide 3-kinase signaling. Follow-up analysis of HapMap variation within MYO16 in a second set of Afrikaner families and additional case-control data sets of European descent highlighted a region across introns 2-6 as the most likely region to harbor common MYO16 risk variants. Expression analysis revealed a significant increase in the level of MYO16 expression in the brains of schizophrenia patients. Our results suggest that common variation within MYO16 may contribute to the genetic liability to schizophrenia.

  16. Recent variations in seasonality of temperature and precipitation in Canada, 1976-95

    NASA Astrophysics Data System (ADS)

    Whitfield, Paul H.; Bodtker, Karin; Cannon, Alex J.

    2002-11-01

    A previously reported analysis of rehabilitated monthly temperature and precipitation time series for several hundred stations across Canada showed generally spatially coherent patterns of variation between two decades (1976-85 and 1986-95). The present work expands that analysis to finer time scales and a greater number of stations. We demonstrate how the finer temporal resolution, at 5 day or 11 day intervals, increases the separation between clusters of recent variations in seasonal patterns of temperature and precipitation. We also expand the analysis by increasing the number of stations from only rehabilitated monthly data sets to rehabilitated daily sets, then to approximately 1500 daily observation stations. This increases the spatial density of data and allows a finer spatial resolution of patterns between the two decades. We also examine the success of clustering partial records, i.e. sites where the data record is incomplete. The intent of this study was to be consistent with previous work and explore how greater temporal and spatial detail in the climate data affects the resolution of patterns of recent climate variations. The variations we report for temperature and precipitation are taking place at different temporal and spatial scales. Further, the spatial patterns are much broader than local climate regions and ecozones, indicating that the differences observed may be the result of variations in atmospheric circulation.

  17. High-speed peak matching algorithm for retention time alignment of gas chromatographic data for chemometric analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Kevin J.; Wright, Bob W.; Jarman, Kristin H.

    2003-05-09

    A rapid retention time alignment algorithm was developed as a preprocessing utility to be used prior to chemometric analysis of large datasets of diesel fuel gas chromatographic profiles. Retention time variation from chromatogram to chromatogram has been a significant impediment to the use of chemometric techniques in the analysis of chromatographic data, due to the inability of current multivariate techniques to correctly model information that shifts from variable to variable within a dataset. The algorithm developed is shown to increase the efficacy of pattern recognition methods applied to a set of diesel fuel chromatograms by retaining chemical selectivity while reducing chromatogram-to-chromatogram retention time variations, and to do so on a time scale that makes analysis of large sets of chromatographic data practical.
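
    As a hedged illustration of retention-time alignment as a preprocessing step, the sketch below applies a simple global cross-correlation shift against a reference chromatogram. The algorithm described in the record (piecewise peak matching that preserves chemical selectivity) is more sophisticated; the signal names and toy traces here are assumptions.

    ```python
    import numpy as np

    def align_to_reference(chromatogram, reference, max_shift=50):
        """Shift a chromatogram along the time axis to best match a reference.

        Chooses the integer shift (within +/- max_shift points) maximising the
        cross-correlation with the reference, then applies it with zero padding.
        """
        def shifted(x, k):
            out = np.zeros_like(x)
            if k >= 0:
                out[k:] = x[:len(x) - k]
            else:
                out[:k] = x[-k:]
            return out

        best = max(range(-max_shift, max_shift + 1),
                   key=lambda k: np.dot(shifted(chromatogram, k), reference))
        return shifted(chromatogram, best), best

    # Toy usage: a two-peak reference trace and a copy shifted by 12 points.
    t = np.linspace(0, 60, 3000)
    reference = (np.exp(-0.5 * ((t - 20) / 0.3) ** 2)
                 + np.exp(-0.5 * ((t - 41) / 0.4) ** 2))
    sample = np.roll(reference, 12) + np.random.default_rng(6).normal(scale=0.01, size=t.size)
    aligned, shift = align_to_reference(sample, reference)
    print(shift)   # close to -12
    ```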

  18. Introduction to the Special Issue on Advancing Methods for Analyzing Dialect Variation.

    PubMed

    Clopper, Cynthia G

    2017-07-01

    Documenting and analyzing dialect variation is traditionally the domain of dialectology and sociolinguistics. However, modern approaches to acoustic analysis of dialect variation have their roots in Peterson and Barney's [(1952). J. Acoust. Soc. Am. 24, 175-184] foundational work on the acoustic analysis of vowels that was published in the Journal of the Acoustical Society of America (JASA) over 6 decades ago. Although Peterson and Barney (1952) were not primarily concerned with dialect variation, their methods laid the groundwork for the acoustic methods that are still used by scholars today to analyze vowel variation within and across languages. In more recent decades, a number of methodological advances in the study of vowel variation have been published in JASA, including work on acoustic vowel overlap and vowel normalization. The goal of this special issue was to honor that tradition by bringing together a set of papers describing the application of emerging acoustic, articulatory, and computational methods to the analysis of dialect variation in vowels and beyond.
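
    One standard vowel-normalization method of the kind alluded to above is Lobanov (z-score) normalization, which removes speaker-specific scale so that dialectal differences are easier to compare across talkers. A minimal sketch, with hypothetical formant arrays:

    ```python
    import numpy as np

    def lobanov_normalize(formants):
        """Lobanov (z-score) normalisation of one talker's formant measurements.

        formants: (n_tokens, 2) array of F1/F2 values in Hz for a single talker.
        Each formant is centred and scaled by that talker's own mean and SD,
        removing much of the physiological (vocal-tract) variation.
        """
        mean = formants.mean(axis=0)
        sd = formants.std(axis=0, ddof=1)
        return (formants - mean) / sd

    # Toy usage: two talkers producing comparable vowels with different ranges.
    rng = np.random.default_rng(7)
    talker_a = np.column_stack([rng.normal(500, 80, 30), rng.normal(1500, 200, 30)])
    talker_b = np.column_stack([rng.normal(650, 100, 30), rng.normal(1900, 260, 30)])
    norm_a, norm_b = lobanov_normalize(talker_a), lobanov_normalize(talker_b)
    ```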

  19. Definition of periprosthetic joint infection: is there a consensus?

    PubMed

    Parvizi, Javad; Jacovides, Christina; Zmistowski, Benjamin; Jung, Kwang Am

    2011-11-01

    The diagnosis of periprosthetic joint infection (PJI) continues to pose a challenge. While many diagnostic criteria have been proposed, a gold standard for diagnosis is lacking. Use of multiple diagnostic criteria within the joint arthroplasty community raises concerns in patient treatment and comparison of research pertaining to PJI. We (1) determined the variation in existing diagnostic criteria, (2) compared the existing criteria to a proposed new set of criteria that incorporates aspirate cell count analysis, and (3) investigated the variations between the existing criteria and the proposed criteria. We retrospectively identified 182 patients undergoing 192 revision knee arthroplasties who had a preoperative joint aspiration analysis at our institution between April 2002 and November 2009. We excluded 20 cases due to insufficient laboratory parameters, leaving 172 cases for analysis. We applied six previously published sets of diagnostic criteria for PJI to determine the variation in its incidence using each set of criteria. We then compared these diagnostic criteria to our proposed new criteria and investigated cases where disagreement occurred. We identified 41 cases (24%) in which at least one established criteria set classified the case as infected while at least one other criteria set classified the case as uninfected. With our proposed criteria, the infected/uninfected ratio was 92/80. The proposed criteria had a large variance in sensitivity (54%-100%), specificity (39%-100%), and accuracy (53%-100%) when using each of the established criteria sets as the reference standard. The discrepancy between definitions of infection complicates interpretation of the literature and the treatment of failed TKAs owing to PJI. Based on our findings, we suggest establishing a common set of diagnostic criteria utilizing aspirate analysis to improve the treatment of PJI and facilitate interpretation of the literature. Level III, diagnostic study. See the Guidelines for Authors for a complete description of levels of evidence.

  20. When things go pear shaped: contour variations of contacts

    NASA Astrophysics Data System (ADS)

    Utzny, Clemens

    2013-04-01

    Traditional control of critical dimensions (CD) on photolithographic masks considers the CD average and a measure for the CD variation such as the CD range or the standard deviation. Systematic CD deviations from the mean, such as CD signatures, are also subject to control. These measures are valid for mask quality verification as long as patterns across a mask exhibit only size variations and no shape variation. The issue of shape variations becomes especially important in the context of contact holes on EUV masks. For EUV masks the CD error budget is much smaller than for standard optical masks. This means that small deviations from the contact shape can impact EUV wafer prints in the sense that contact shape deformations induce asymmetric bridging phenomena. In this paper we present a detailed study of contact shape variations based on regular product data. Two data sets are analyzed: 1) contacts of varying target size and 2) a regularly spaced field of contacts. Here, the methods of statistical shape analysis are used to analyze CD-SEM-generated contour data. We demonstrate that contacts on photolithographic masks not only show size variations but also exhibit pronounced nontrivial shape variations. In our data sets we find pronounced shape variations which can be interpreted as asymmetrical shape squeezing and contact rounding. Thus we demonstrate the limitations of classic CD measures for describing the feature variations on masks. Furthermore, we show how the methods of statistical shape analysis can be used for quantifying the contour variations, thus paving the way to a new understanding of mask linearity and its specification.
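
    A generic, hedged sketch of one basic statistical-shape-analysis step for closed contours: remove location and scale, align each contour to a reference by an optimal rotation (ordinary Procrustes), then run PCA on the aligned coordinates so that shape variation (for example, squeezing) is separated from size variation. This is only an illustration with toy contours, not the paper's analysis of CD-SEM contour data.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    def procrustes_align(contour, reference):
        """Align a contour (n_points, 2) to a reference by translation, scaling,
        and rotation (ordinary Procrustes analysis)."""
        X = contour - contour.mean(axis=0)
        Y = reference - reference.mean(axis=0)
        X, Y = X / np.linalg.norm(X), Y / np.linalg.norm(Y)
        U, _, Vt = np.linalg.svd(Y.T @ X)      # optimal rotation via SVD
        return X @ (U @ Vt).T

    def shape_modes(contours, n_modes=2):
        """PCA modes of the aligned contour coordinates (pure shape variation)."""
        ref = contours[0]
        aligned = np.array([procrustes_align(c, ref).ravel() for c in contours])
        return PCA(n_components=n_modes).fit(aligned)

    # Toy usage: circles squeezed by varying amounts (size is removed by alignment).
    theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)
    rng = np.random.default_rng(8)
    contours = [np.column_stack([(1 + e) * np.cos(theta), (1 - e) * np.sin(theta)])
                for e in rng.uniform(0.0, 0.3, 25)]
    print(shape_modes(contours).explained_variance_ratio_)
    ```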

  1. Long memory in international financial markets trends and short movements during 2008 financial crisis based on variational mode decomposition and detrended fluctuation analysis

    NASA Astrophysics Data System (ADS)

    Lahmiri, Salim

    2015-11-01

    The purpose of this study is to investigate long-range dependence in the trends and short-term variations of stock market price and return series before, during, and after the 2008 financial crisis. Variational mode decomposition (VMD), a newly introduced technique for signal processing, is adopted to decompose stock market data into a finite set of modes so as to obtain long-term trends and short-term movements of stock market data. Then, detrended fluctuation analysis (DFA) and rescaled range (R/S) analysis are used to estimate the Hurst exponent of each variational mode obtained from VMD. For both price and return series, the empirical results from twelve international stock markets show evidence that long-term trends are persistent, whilst short-term variations are anti-persistent before, during, and after the 2008 financial crisis.
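
    A minimal, hedged DFA sketch (order-1 detrending) for estimating the scaling exponent of a single series. The paper applies DFA and R/S analysis to each mode extracted by VMD, which is not reproduced here; the white-noise input is only a sanity check (its exponent should be close to 0.5).

    ```python
    import numpy as np

    def dfa_exponent(x, scales=None):
        """Detrended fluctuation analysis with linear detrending.

        Returns the DFA scaling exponent: ~0.5 for uncorrelated noise,
        > 0.5 for persistent and < 0.5 for anti-persistent series.
        """
        x = np.asarray(x, dtype=float)
        profile = np.cumsum(x - x.mean())              # integrated series
        if scales is None:
            scales = np.unique(np.logspace(1, np.log10(len(x) // 4), 20).astype(int))
        flucts = []
        for s in scales:
            n_seg = len(profile) // s
            segs = profile[:n_seg * s].reshape(n_seg, s)
            t = np.arange(s)
            mse = [np.mean((seg - np.polyval(np.polyfit(t, seg, 1), t)) ** 2)
                   for seg in segs]                    # variance around local linear trend
            flucts.append(np.sqrt(np.mean(mse)))
        alpha, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
        return alpha

    # Toy usage: white noise should give an exponent close to 0.5.
    print(dfa_exponent(np.random.default_rng(9).normal(size=5000)))
    ```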

  2. Statistics, Uncertainty, and Transmitted Variation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wendelberger, Joanne Roth

    2014-11-05

    The field of Statistics provides methods for modeling and understanding data and making decisions in the presence of uncertainty. When examining response functions, variation present in the input variables will be transmitted via the response function to the output variables. This phenomenon can potentially have significant impacts on the uncertainty associated with results from subsequent analysis. This presentation will examine the concept of transmitted variation, its impact on designed experiments, and a method for identifying and estimating sources of transmitted variation in certain settings.
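
    As a hedged illustration of transmitted variation, the sketch below propagates variation in independent input variables through a response function by Monte Carlo and compares the result with a first-order (delta-method) approximation. The response function, means, and standard deviations are hypothetical.

    ```python
    import numpy as np

    def transmitted_variation(f, means, sds, n=100_000, seed=0):
        """Monte Carlo estimate of the output variance transmitted from
        independent, normally distributed inputs through f (row-wise)."""
        rng = np.random.default_rng(seed)
        X = rng.normal(means, sds, size=(n, len(means)))
        return f(X).var(ddof=1)

    def delta_method_variance(f, means, sds, h=1e-5):
        """First-order approximation: sum_i (df/dx_i)^2 * sd_i^2 at the means."""
        means = np.asarray(means, dtype=float)
        grad = np.empty(len(means))
        for i in range(len(means)):
            e = np.zeros(len(means)); e[i] = h
            fp = f((means + e)[None, :]).item()        # central finite differences
            fm = f((means - e)[None, :]).item()
            grad[i] = (fp - fm) / (2 * h)
        return float(np.sum(grad ** 2 * np.asarray(sds) ** 2))

    # Hypothetical response: y = x1 * exp(x2) + x3**2, evaluated row-wise.
    response = lambda X: X[:, 0] * np.exp(X[:, 1]) + X[:, 2] ** 2
    means, sds = [2.0, 0.1, 1.0], [0.05, 0.02, 0.10]
    print(transmitted_variation(response, means, sds))
    print(delta_method_variance(response, means, sds))
    ```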

  3. Fine Mapping on Chromosome 13q32–34 and Brain Expression Analysis Implicates MYO16 in Schizophrenia

    PubMed Central

    Rodriguez-Murillo, Laura; Xu, Bin; Roos, J Louw; Abecasis, Gonçalo R; Gogos, Joseph A; Karayiorgou, Maria

    2014-01-01

    We previously reported linkage of schizophrenia and schizoaffective disorder to 13q32–34 in the European descent Afrikaner population from South Africa. The nature of genetic variation underlying linkage peaks in psychiatric disorders remains largely unknown and both rare and common variants may be contributing. Here, we examine the contribution of common variants located under the 13q32–34 linkage region. We used densely spaced SNPs to fine map the linkage peak region using both a discovery sample of 415 families and a meta-analysis incorporating two additional replication family samples. In a second phase of the study, we use one family-based data set with 237 families and independent case–control data sets for fine mapping of the common variant association signal using HapMap SNPs. We report a significant association with a genetic variant (rs9583277) within the gene encoding for the myosin heavy-chain Myr 8 (MYO16), which has been implicated in neuronal phosphoinositide 3-kinase signaling. Follow-up analysis of HapMap variation within MYO16 in a second set of Afrikaner families and additional case–control data sets of European descent highlighted a region across introns 2–6 as the most likely region to harbor common MYO16 risk variants. Expression analysis revealed a significant increase in the level of MYO16 expression in the brains of schizophrenia patients. Our results suggest that common variation within MYO16 may contribute to the genetic liability to schizophrenia. PMID:24141571

  4. Population genetic variation in the tree fern Alsophila spinulosa (Cyatheaceae): effects of reproductive strategy.

    PubMed

    Wang, Ting; Su, Yingjuan; Li, Yuan

    2012-01-01

    Essentially all ferns can perform both sexual and asexual reproduction. Their populations represent suitable study objects to test the population genetic effects of different reproductive systems. Using the diploid homosporous fern Alsophila spinulosa as an example species, the main purpose of this study was to assess the relative impact of sexual and asexual reproduction on the level and structure of population genetic variation. Inter-simple sequence repeats analysis was conducted on 140 individuals collected from seven populations (HSG, LCH, BPC, MPG, GX, LD, and ZHG) in China. Seventy-four polymorphic bands discriminated a total of 127 multilocus genotypes. Character compatibility analysis revealed that 50.0 to 70.0% of the genotypes had to be deleted in order to obtain a tree-like structure in the data set from populations HSG, LCH, MPG, BPC, GX, and LD; and there was a gradual decrease of conflict in the data set when genotypes with the highest incompatibility counts were successively deleted. In contrast, in population ZHG, only 33.3% of genotypes had to be removed to achieve complete compatibility in the data set, which showed a sharp decline in incompatibility upon the deletion of those genotypes. All populations examined possessed similar levels of genetic variation. Population ZHG was not found to be more differentiated than the other populations. Sexual recombination is the predominant source of genetic variation in most of the examined populations of A. spinulosa. However, somatic mutation contributes most to the genetic variation in population ZHG. This change of the primary mode of reproduction does not cause a significant difference in the population genetic composition. Character compatibility analysis represents an effective approach to separate the role of sexual and asexual components in shaping the genetic pattern of fern populations.

  5. Effect of incubation temperature and time on the precision of data generated by antibiotic disc diffusion assays.

    PubMed

    Smith, P; Kronvall, G

    2015-07-01

    The influence on the precision of disc diffusion data of the conditions under which the tests were performed was examined by analysing multilaboratory data sets generated after incubation at 35 °C for 18 h, at 28 °C for 24 h and 22 °C for 24 h and 48 h. Analyses of these data sets demonstrated that precision was significantly and progressively decreased as the test temperature was reduced from 35 to 22 °C. Analysis of the data obtained at 22 °C also showed the precision was inversely related to the time of incubation. Temperature and time related decreases in precision were not related to differences in the mean zone sizes of the data sets obtained under these test conditions. Analysis of the zone data obtained at 28 and 22 °C as single laboratory sets demonstrated that reductions of incubation temperature resulted in significant increases in both intralaboratory and interlaboratory variation. Increases in incubation time at 22 °C were, however, associated with statistically significant increases in interlaboratory variation but not with any significant increase in intralaboratory variation. The significance of these observations for the establishment of the acceptable limits of precision of data sets that can be used for the setting of valid epidemiological cut-off values is discussed. © 2014 John Wiley & Sons Ltd.

  6. Parallel gene analysis with allele-specific padlock probes and tag microarrays

    PubMed Central

    Banér, Johan; Isaksson, Anders; Waldenström, Erik; Jarvius, Jonas; Landegren, Ulf; Nilsson, Mats

    2003-01-01

    Parallel, highly specific analysis methods are required to take advantage of the extensive information about DNA sequence variation and of expressed sequences. We present a scalable laboratory technique suitable to analyze numerous target sequences in multiplexed assays. Sets of padlock probes were applied to analyze single nucleotide variation directly in total genomic DNA or cDNA for parallel genotyping or gene expression analysis. All reacted probes were then co-amplified and identified by hybridization to a standard tag oligonucleotide array. The technique was illustrated by analyzing normal and pathogenic variation within the Wilson disease-related ATP7B gene, both at the level of DNA and RNA, using allele-specific padlock probes. PMID:12930977

  7. Development of a global model for atmospheric backscatter at CO2 wavelengths

    NASA Technical Reports Server (NTRS)

    Kent, G. S.; Wang, P. H.; Farrukh, U.; Deepak, A.; Patterson, E. M.

    1986-01-01

    The variation of the aerosol backscattering at 10.6 micrometers within the free troposphere was investigated and a model to describe this variation was developed. The analysis combines theoretical modeling with the results contained within three independent data sets. The data sets used were obtained by the SAGE I/SAM II satellite experiments, the GAMETAG flight series, and by direct backscatter measurements. The theoretical work includes use of a bimodal, two component aerosol model, and the study of the microphysical and associated optical changes occurring within an aerosol plume. A consistent picture is obtained that describes the variation of the aerosol backscattering function in the free troposphere with altitude, latitude, and season.

  8. K-Fold Crossvalidation in Canonical Analysis.

    ERIC Educational Resources Information Center

    Liang, Kun-Hsia; And Others

    1995-01-01

    A computer-assisted, K-fold cross-validation technique is discussed in the framework of canonical correlation analysis of randomly generated data sets. Analysis results suggest that this technique can effectively reduce the contamination of canonical variates and canonical correlations by sample-specific variance components. (Author/SLD)
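
    A hedged sketch of the K-fold idea applied to canonical correlation: fit the CCA on the training folds and evaluate the correlation of the canonical variates on the held-out fold, so that the shrinkage relative to the in-sample value reflects sample-specific variance. The data below are simulated and purely illustrative.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import CCA
    from sklearn.model_selection import KFold

    def cv_canonical_correlation(X, Y, k=5, seed=0):
        """First canonical correlation estimated in-sample and under K-fold CV."""
        in_sample, held_out = [], []
        for train, test in KFold(n_splits=k, shuffle=True, random_state=seed).split(X):
            cca = CCA(n_components=1).fit(X[train], Y[train])
            u_tr, v_tr = cca.transform(X[train], Y[train])
            u_te, v_te = cca.transform(X[test], Y[test])
            in_sample.append(np.corrcoef(u_tr[:, 0], v_tr[:, 0])[0, 1])
            held_out.append(np.corrcoef(u_te[:, 0], v_te[:, 0])[0, 1])
        return np.mean(in_sample), np.mean(held_out)

    # Toy usage: weakly related data show an optimistic in-sample correlation.
    rng = np.random.default_rng(10)
    X = rng.normal(size=(100, 6))
    Y = 0.3 * X[:, :2] + rng.normal(size=(100, 2))
    print(cv_canonical_correlation(X, Y))
    ```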

  9. Statistical analysis of activation and reaction energies with quasi-variational coupled-cluster theory

    NASA Astrophysics Data System (ADS)

    Black, Joshua A.; Knowles, Peter J.

    2018-06-01

    The performance of quasi-variational coupled-cluster (QV) theory applied to the calculation of activation and reaction energies has been investigated. A statistical analysis of results obtained for six different sets of reactions has been carried out, and the results have been compared to those from standard single-reference methods. In general, the QV methods lead to increased activation energies and larger absolute reaction energies compared to those obtained with traditional coupled-cluster theory.

  10. Statistical analysis of aerosol species, trace gasses, and meteorology in Chicago.

    PubMed

    Binaku, Katrina; O'Brien, Timothy; Schmeling, Martina; Fosco, Tinamarie

    2013-09-01

    Both canonical correlation analysis (CCA) and principal component analysis (PCA) were applied to atmospheric aerosol and trace gas concentrations and meteorological data collected in Chicago during the summer months of 2002, 2003, and 2004. Concentrations of ammonium, calcium, nitrate, sulfate, and oxalate particulate matter, as well as the meteorological parameters temperature, wind speed, wind direction, and humidity, were subjected to CCA and PCA. Ozone and nitrogen oxide mixing ratios were also included in the data set. The purpose of statistical analysis was to determine the extent of existing linear relationship(s), or lack thereof, between meteorological parameters and pollutant concentrations in addition to reducing dimensionality of the original data to determine sources of pollutants. In CCA, the first three canonical variate pairs derived were statistically significant at the 0.05 level. Canonical correlation between the first canonical variate pair was 0.821, while correlations of the second and third canonical variate pairs were 0.562 and 0.461, respectively. The first canonical variate pair indicated that increasing temperatures resulted in high ozone mixing ratios, while the second canonical variate pair showed the influence of wind speed and humidity on local ammonium concentrations. No new information was uncovered in the third variate pair. Canonical loadings were also interpreted for information regarding relationships between data sets. Four principal components (PCs), expressing 77.0% of the original data variance, were derived in PCA. Interpretation of PCs suggested significant production and/or transport of secondary aerosols in the region (PC1). Furthermore, photochemical production of ozone and the influence of wind speed on pollutants were expressed (PC2), along with an overall measure of local meteorology (PC3). In summary, CCA and PCA results combined were successful in uncovering linear relationships between meteorology and air pollutants in Chicago and aided in determining possible pollutant sources.
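
    As a hedged illustration of the CCA workflow described above (standardize the meteorology and pollutant blocks, extract canonical variate pairs, and interpret canonical loadings), with entirely hypothetical variable names and simulated data:

    ```python
    import numpy as np
    from sklearn.cross_decomposition import CCA
    from sklearn.preprocessing import StandardScaler

    # Hypothetical column layout: a meteorology block and a pollutant block.
    met_names = ["temperature", "wind_speed", "wind_dir", "humidity"]
    pol_names = ["ammonium", "calcium", "nitrate", "sulfate", "oxalate", "ozone", "NOx"]

    rng = np.random.default_rng(11)
    n = 150
    met = rng.normal(size=(n, len(met_names)))
    pol = (met @ rng.normal(scale=0.4, size=(len(met_names), len(pol_names)))
           + rng.normal(size=(n, len(pol_names))))

    met_z = StandardScaler().fit_transform(met)
    pol_z = StandardScaler().fit_transform(pol)

    cca = CCA(n_components=3).fit(met_z, pol_z)
    U, V = cca.transform(met_z, pol_z)

    # Canonical correlations of the three variate pairs.
    canon_r = [np.corrcoef(U[:, i], V[:, i])[0, 1] for i in range(3)]

    # Canonical loadings: correlation of each original variable with its own
    # variate, used to interpret which variables drive each canonical pair.
    met_loadings = np.corrcoef(met_z, U, rowvar=False)[:len(met_names), len(met_names):]
    pol_loadings = np.corrcoef(pol_z, V, rowvar=False)[:len(pol_names), len(pol_names):]
    print(np.round(canon_r, 3))
    print(np.round(met_loadings, 2))
    print(np.round(pol_loadings, 2))
    ```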

  11. Population subdivision and molecular sequence variation: theory and analysis of Drosophila ananassae data.

    PubMed

    Vogl, Claus; Das, Aparup; Beaumont, Mark; Mohanty, Sujata; Stephan, Wolfgang

    2003-11-01

    Population subdivision complicates analysis of molecular variation. Even if neutrality is assumed, three evolutionary forces need to be considered: migration, mutation, and drift. Simplification can be achieved by assuming that the process of migration among and drift within subpopulations is occurring fast compared to mutation and drift in the entire population. This allows a two-step approach in the analysis: (i) analysis of population subdivision and (ii) analysis of molecular variation in the migrant pool. We model population subdivision using an infinite island model, where we allow the migration/drift parameter Theta to vary among populations. Thus, central and peripheral populations can be differentiated. For inference of Theta, we use a coalescence approach, implemented via a Markov chain Monte Carlo (MCMC) integration method that allows estimation of allele frequencies in the migrant pool. The second step of this approach (analysis of molecular variation in the migrant pool) uses the estimated allele frequencies in the migrant pool for the study of molecular variation. We apply this method to a Drosophila ananassae sequence data set. We find little indication of isolation by distance, but large differences in the migration parameter among populations. The population as a whole seems to be expanding. A population from Bogor (Java, Indonesia) shows the highest variation and seems closest to the species center.

  12. An Application of Variational Theory to an Integrated Walrasian Model of Exchange, Consumption and Production

    NASA Astrophysics Data System (ADS)

    Donato, M. B.; Milasi, M.; Vitanza, C.

    2010-09-01

    An existence result of a Walrasian equilibrium for an integrated model of exchange, consumption and production is obtained. The equilibrium model is characterized in terms of a suitable generalized quasi-variational inequality; so the existence result comes from an original technique which takes into account tools of convex and set-valued analysis.

  13. Joint Clustering and Component Analysis of Correspondenceless Point Sets: Application to Cardiac Statistical Modeling.

    PubMed

    Gooya, Ali; Lekadir, Karim; Alba, Xenia; Swift, Andrew J; Wild, Jim M; Frangi, Alejandro F

    2015-01-01

    Construction of Statistical Shape Models (SSMs) from arbitrary point sets is a challenging problem due to significant shape variation and lack of explicit point correspondence across the training data set. In medical imaging, point sets can generally represent different shape classes that span healthy and pathological exemplars. In such cases, the constructed SSM may not generalize well, largely because the probability density function (pdf) of the point sets deviates from the underlying assumption of Gaussian statistics. To this end, we propose a generative model for unsupervised learning of the pdf of point sets as a mixture of distinctive classes. A Variational Bayesian (VB) method is proposed for making joint inferences on the labels of point sets, and the principal modes of variations in each cluster. The method provides a flexible framework to handle point sets with no explicit point-to-point correspondences. We also show that by maximizing the marginalized likelihood of the model, the optimal number of clusters of point sets can be determined. We illustrate this work in the context of understanding the anatomical phenotype of the left and right ventricles in heart. To this end, we use a database containing hearts of healthy subjects, patients with Pulmonary Hypertension (PH), and patients with Hypertrophic Cardiomyopathy (HCM). We demonstrate that our method can outperform traditional PCA in both generalization and specificity measures.

  14. Linkages Between Global Vegetation and Climate: An Analysis Based on NOAA Advanced Very High Resolution Radiometer Data. Degree awarded by Vrije Universiteit, Amsterdam, Netherlands

    NASA Technical Reports Server (NTRS)

    Los, Sietse Oene

    1998-01-01

    A monthly global 1 degree by 1 degree data set from 1982 until 1990 was derived from data collected by the Advanced Very High Resolution Radiometer on board the NOAA 7, 9, and 11 satellites. This data set was used to study the interactions between variations in climate and variations in the "greenness" of vegetation. Studies with the Colorado State University atmospheric general circulation model coupled to the Simple Biosphere model showed a large sensitivity of the hydrological balance to changes in vegetation at low latitudes. The depletion of soil moisture as a result of increased vegetation density provided a negative feedback in an otherwise positive association between increased vegetation, increased evaporation, and increased precipitation proposed by Charney and coworkers. Analysis of climate data showed, at temperate to high latitudes, a positive association between variation in land surface temperature, sea surface temperature and vegetation greenness. At low latitudes the data indicated a positive association between variations in sea surface temperature, rainfall and vegetation greenness. The variations in mid- to high latitude temperatures affected the global average greenness and this could provide an explanation for the increased carbon uptake by the terrestrial surface over the past couple of decades.

  15. Augmented classical least squares multivariate spectral analysis

    DOEpatents

    Haaland, David M.; Melgaard, David K.

    2004-02-03

    A method of multivariate spectral analysis, termed augmented classical least squares (ACLS), provides an improved CLS calibration model when unmodeled sources of spectral variation are contained in a calibration sample set. The ACLS methods use information derived from component or spectral residuals during the CLS calibration to provide an improved calibration-augmented CLS model. The ACLS methods are based on CLS so that they retain the qualitative benefits of CLS, yet they have the flexibility of PLS and other hybrid techniques in that they can define a prediction model even with unmodeled sources of spectral variation that are not explicitly included in the calibration model. The unmodeled sources of spectral variation may be unknown constituents, constituents with unknown concentrations, nonlinear responses, non-uniform and correlated errors, or other sources of spectral variation that are present in the calibration sample spectra. Also, since the various ACLS methods are based on CLS, they can incorporate the new prediction-augmented CLS (PACLS) method of updating the prediction model for new sources of spectral variation contained in the prediction sample set without having to return to the calibration process. The ACLS methods can also be applied to alternating least squares models. The ACLS methods can be applied to all types of multivariate data.
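
    This and the two following records describe the same augmented classical least squares idea. A generic, hedged sketch of the underlying mechanics: fit classical least squares (spectra ≈ concentrations × pure-component spectra), then augment the pure-component matrix with leading principal components of the calibration spectral residuals so that unmodeled variation is represented at prediction time. This is an illustration under toy assumptions, not the patented algorithm's exact formulation.

    ```python
    import numpy as np

    def cls_fit(C, A):
        """Classical least squares: A ≈ C @ K; solve for pure-component spectra K."""
        K, *_ = np.linalg.lstsq(C, A, rcond=None)
        return K

    def acls_augment(C, A, n_extra=2):
        """Augment the CLS spectra with principal components of the residuals so
        that unmodeled sources of spectral variation are represented."""
        K = cls_fit(C, A)
        residuals = A - C @ K
        _, _, Vt = np.linalg.svd(residuals, full_matrices=False)
        return np.vstack([K, Vt[:n_extra]])          # augmented spectra matrix

    def predict_concentrations(K_aug, A_new, n_components):
        """Estimate concentrations for new spectra; the extra rows absorb the
        residual (unmodeled) variation."""
        coefs, *_ = np.linalg.lstsq(K_aug.T, A_new.T, rcond=None)
        return coefs.T[:, :n_components]

    # Toy usage: 3 components, 200 wavelengths, plus an unmodeled baseline drift.
    rng = np.random.default_rng(12)
    wl = np.linspace(0, 1, 200)
    pure = np.array([np.exp(-0.5 * ((wl - c) / 0.05) ** 2) for c in (0.3, 0.5, 0.7)])
    C = rng.uniform(0, 1, size=(30, 3))
    A = C @ pure + np.outer(rng.normal(size=30), wl) + rng.normal(scale=0.01, size=(30, 200))
    K_aug = acls_augment(C, A, n_extra=1)
    C_new = rng.uniform(0, 1, size=(5, 3))
    A_new = C_new @ pure + np.outer(rng.normal(size=5), wl)
    print(predict_concentrations(K_aug, A_new, 3))
    ```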

  16. Augmented Classical Least Squares Multivariate Spectral Analysis

    DOEpatents

    Haaland, David M.; Melgaard, David K.

    2005-07-26

    A method of multivariate spectral analysis, termed augmented classical least squares (ACLS), provides an improved CLS calibration model when unmodeled sources of spectral variation are contained in a calibration sample set. The ACLS methods use information derived from component or spectral residuals during the CLS calibration to provide an improved calibration-augmented CLS model. The ACLS methods are based on CLS so that they retain the qualitative benefits of CLS, yet they have the flexibility of PLS and other hybrid techniques in that they can define a prediction model even with unmodeled sources of spectral variation that are not explicitly included in the calibration model. The unmodeled sources of spectral variation may be unknown constituents, constituents with unknown concentrations, nonlinear responses, non-uniform and correlated errors, or other sources of spectral variation that are present in the calibration sample spectra. Also, since the various ACLS methods are based on CLS, they can incorporate the new prediction-augmented CLS (PACLS) method of updating the prediction model for new sources of spectral variation contained in the prediction sample set without having to return to the calibration process. The ACLS methods can also be applied to alternating least squares models. The ACLS methods can be applied to all types of multivariate data.

  17. Augmented Classical Least Squares Multivariate Spectral Analysis

    DOEpatents

    Haaland, David M.; Melgaard, David K.

    2005-01-11

    A method of multivariate spectral analysis, termed augmented classical least squares (ACLS), provides an improved CLS calibration model when unmodeled sources of spectral variation are contained in a calibration sample set. The ACLS methods use information derived from component or spectral residuals during the CLS calibration to provide an improved calibration-augmented CLS model. The ACLS methods are based on CLS so that they retain the qualitative benefits of CLS, yet they have the flexibility of PLS and other hybrid techniques in that they can define a prediction model even with unmodeled sources of spectral variation that are not explicitly included in the calibration model. The unmodeled sources of spectral variation may be unknown constituents, constituents with unknown concentrations, nonlinear responses, non-uniform and correlated errors, or other sources of spectral variation that are present in the calibration sample spectra. Also, since the various ACLS methods are based on CLS, they can incorporate the new prediction-augmented CLS (PACLS) method of updating the prediction model for new sources of spectral variation contained in the prediction sample set without having to return to the calibration process. The ACLS methods can also be applied to alternating least squares models. The ACLS methods can be applied to all types of multivariate data.

  18. Between-centre variability in transfer function analysis, a widely used method for linear quantification of the dynamic pressure–flow relation: The CARNet study

    PubMed Central

    Meel-van den Abeelen, Aisha S.S.; Simpson, David M.; Wang, Lotte J.Y.; Slump, Cornelis H.; Zhang, Rong; Tarumi, Takashi; Rickards, Caroline A.; Payne, Stephen; Mitsis, Georgios D.; Kostoglou, Kyriaki; Marmarelis, Vasilis; Shin, Dae; Tzeng, Yu-Chieh; Ainslie, Philip N.; Gommer, Erik; Müller, Martin; Dorado, Alexander C.; Smielewski, Peter; Yelicich, Bernardo; Puppo, Corina; Liu, Xiuyun; Czosnyka, Marek; Wang, Cheng-Yen; Novak, Vera; Panerai, Ronney B.; Claassen, Jurgen A.H.R.

    2014-01-01

    Transfer function analysis (TFA) is a frequently used method to assess dynamic cerebral autoregulation (CA) using spontaneous oscillations in blood pressure (BP) and cerebral blood flow velocity (CBFV). However, controversies and variations exist in how research groups utilise TFA, causing high variability in interpretation. The objective of this study was to evaluate between-centre variability in TFA outcome metrics. 15 centres analysed the same 70 BP and CBFV datasets from healthy subjects (n = 50 rest; n = 20 during hypercapnia); 10 additional datasets were computer-generated. Each centre used their in-house TFA methods; however, certain parameters were specified to reduce a priori between-centre variability. Hypercapnia was used to assess discriminatory performance and synthetic data to evaluate effects of parameter settings. Results were analysed using the Mann–Whitney test and logistic regression. A large non-homogeneous variation was found in TFA outcome metrics between the centres. Logistic regression demonstrated that 11 centres were able to distinguish between normal and impaired CA with an AUC > 0.85. Further analysis identified TFA settings that are associated with large variation in outcome measures. These results indicate the need for standardisation of TFA settings in order to reduce between-centre variability and to allow accurate comparison between studies. Suggestions on optimal signal processing methods are proposed. PMID:24725709
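
    A hedged sketch of basic transfer function analysis of the BP-to-CBFV relation using Welch auto- and cross-spectra (gain, phase, and coherence, averaged over an illustrative low-frequency band). Window length and band limits are exactly the kind of settings the study found to vary between centres, so the values below are assumptions rather than recommendations.

    ```python
    import numpy as np
    from scipy.signal import welch, csd

    def transfer_function(bp, cbfv, fs, nperseg=512):
        """Estimate TFA gain, phase (degrees) and coherence from BP to CBFV."""
        f, Pbb = welch(bp, fs=fs, nperseg=nperseg)
        _, Pvv = welch(cbfv, fs=fs, nperseg=nperseg)
        _, Pbv = csd(bp, cbfv, fs=fs, nperseg=nperseg)
        H = Pbv / Pbb                                  # transfer function estimate
        coherence = np.abs(Pbv) ** 2 / (Pbb * Pvv)
        return f, np.abs(H), np.angle(H, deg=True), coherence

    def band_average(f, values, band=(0.07, 0.20)):
        """Average a TFA metric over a frequency band (illustrative LF band)."""
        mask = (f >= band[0]) & (f <= band[1])
        return values[mask].mean()

    # Toy usage: 5 minutes of beat-to-beat signals resampled at 4 Hz.
    fs, n = 4.0, 1200
    rng = np.random.default_rng(13)
    bp = rng.normal(size=n)
    cbfv = 0.8 * np.roll(bp, 2) + rng.normal(scale=0.5, size=n)   # lagged, noisy copy
    f, gain, phase, coh = transfer_function(bp, cbfv, fs)
    print(band_average(f, gain), band_average(f, phase), band_average(f, coh))
    ```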

  19. Genetic and epigenetic variation in the lineage specification of regulatory T cells

    PubMed Central

    Arvey, Aaron; van der Veeken, Joris; Plitas, George; Rich, Stephen S; Concannon, Patrick; Rudensky, Alexander Y

    2015-01-01

    Regulatory T (Treg) cells, which suppress autoimmunity and other inflammatory states, are characterized by a distinct set of genetic elements controlling their gene expression. However, the extent of genetic and associated epigenetic variation in the Treg cell lineage and its possible relation to disease states in humans remain unknown. We explored evolutionary conservation of regulatory elements and natural human inter-individual epigenetic variation in Treg cells to identify the core transcriptional control program of lineage specification. Analysis of single nucleotide polymorphisms in core lineage-specific enhancers revealed disease associations, which were further corroborated by high-resolution genotyping to fine map causal polymorphisms in lineage-specific enhancers. Our findings suggest that a small set of regulatory elements specify the Treg lineage and that genetic variation in Treg cell-specific enhancers may alter Treg cell function contributing to polygenic disease. DOI: http://dx.doi.org/10.7554/eLife.07571.001 PMID:26510014

  20. Principal components analysis of an evaluation of the hemiplegic subject based on the Bobath approach.

    PubMed

    Corriveau, H; Arsenault, A B; Dutil, E; Lepage, Y

    1992-01-01

    An evaluation based on the Bobath approach to treatment has previously been developed and partially validated. The purpose of the present study was to verify the content validity of this evaluation with the use of a statistical approach known as principal components analysis. Thirty-eight hemiplegic subjects participated in the study. Scores on each of six parameters (sensorium, active movements, muscle tone, reflex activity, postural reactions, and pain) were analyzed on three occasions across a 2-month period. Each analysis produced three factors that together contained 70% of the variation in the data set. The first component mainly reflected variations in mobility, the second mainly variations in muscle tone, and the third mainly variations in sensorium and pain. The results of this exploratory analysis highlight the fact that some of the parameters are not only important but also interrelated. These results seem to partially support the conceptual framework substantiating the Bobath approach to treatment.

  1. Multiscale 3-D shape representation and segmentation using spherical wavelets.

    PubMed

    Nain, Delphine; Haker, Steven; Bobick, Aaron; Tannenbaum, Allen

    2007-04-01

    This paper presents a novel multiscale shape representation and segmentation algorithm based on the spherical wavelet transform. This work is motivated by the need to compactly and accurately encode variations at multiple scales in the shape representation in order to drive the segmentation and shape analysis of deep brain structures, such as the caudate nucleus or the hippocampus. Our proposed shape representation can be optimized to compactly encode shape variations in a population at the needed scale and spatial locations, enabling the construction of more descriptive, nonglobal, nonuniform shape probability priors to be included in the segmentation and shape analysis framework. In particular, this representation addresses the shortcomings of techniques that learn a global shape prior at a single scale of analysis and cannot represent fine, local variations in a population of shapes in the presence of a limited dataset. Specifically, our technique defines a multiscale parametric model of surfaces belonging to the same population using a compact set of spherical wavelets targeted to that population. We further refine the shape representation by separating into groups wavelet coefficients that describe independent global and/or local biological variations in the population, using spectral graph partitioning. We then learn a prior probability distribution induced over each group to explicitly encode these variations at different scales and spatial locations. Based on this representation, we derive a parametric active surface evolution using the multiscale prior coefficients as parameters for our optimization procedure to naturally include the prior for segmentation. Additionally, the optimization method can be applied in a coarse-to-fine manner. We apply our algorithm to two different brain structures, the caudate nucleus and the hippocampus, of interest in the study of schizophrenia. We show: 1) a reconstruction task of a test set to validate the expressiveness of our multiscale prior and 2) a segmentation task. In the reconstruction task, our results show that for a given training set size, our algorithm significantly improves the approximation of shapes in a testing set over the Point Distribution Model, which tends to oversmooth data. In the segmentation task, our validation shows our algorithm is computationally efficient and outperforms the Active Shape Model algorithm, by capturing finer shape details.
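
    The coefficient-grouping step can be illustrated with an off-the-shelf spectral clustering of the wavelet coefficients, as sketched below; the affinity built from absolute correlations across the training shapes and the use of scikit-learn's SpectralClustering are simplifying assumptions, not the graph construction used by the authors.

        import numpy as np
        from sklearn.cluster import SpectralClustering

        def group_wavelet_coefficients(W, n_groups=4):
            # W: (n_shapes, n_coeffs) spherical-wavelet coefficients for a training
            # population. Coefficients that co-vary across shapes are grouped so a
            # separate prior distribution can be learned per group.
            C = np.corrcoef(W, rowvar=False)       # coefficient-by-coefficient correlation
            affinity = np.abs(C)                   # similarity in [0, 1]
            labels = SpectralClustering(n_clusters=n_groups,
                                        affinity='precomputed').fit_predict(affinity)
            return labels                          # group index per wavelet coefficient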

  2. Multiscale 3-D Shape Representation and Segmentation Using Spherical Wavelets

    PubMed Central

    Nain, Delphine; Haker, Steven; Bobick, Aaron

    2013-01-01

    This paper presents a novel multiscale shape representation and segmentation algorithm based on the spherical wavelet transform. This work is motivated by the need to compactly and accurately encode variations at multiple scales in the shape representation in order to drive the segmentation and shape analysis of deep brain structures, such as the caudate nucleus or the hippocampus. Our proposed shape representation can be optimized to compactly encode shape variations in a population at the needed scale and spatial locations, enabling the construction of more descriptive, nonglobal, nonuniform shape probability priors to be included in the segmentation and shape analysis framework. In particular, this representation addresses the shortcomings of techniques that learn a global shape prior at a single scale of analysis and cannot represent fine, local variations in a population of shapes in the presence of a limited dataset. Specifically, our technique defines a multiscale parametric model of surfaces belonging to the same population using a compact set of spherical wavelets targeted to that population. We further refine the shape representation by separating into groups wavelet coefficients that describe independent global and/or local biological variations in the population, using spectral graph partitioning. We then learn a prior probability distribution induced over each group to explicitly encode these variations at different scales and spatial locations. Based on this representation, we derive a parametric active surface evolution using the multiscale prior coefficients as parameters for our optimization procedure to naturally include the prior for segmentation. Additionally, the optimization method can be applied in a coarse-to-fine manner. We apply our algorithm to two different brain structures, the caudate nucleus and the hippocampus, of interest in the study of schizophrenia. We show: 1) a reconstruction task of a test set to validate the expressiveness of our multiscale prior and 2) a segmentation task. In the reconstruction task, our results show that for a given training set size, our algorithm significantly improves the approximation of shapes in a testing set over the Point Distribution Model, which tends to oversmooth data. In the segmentation task, our validation shows our algorithm is computationally efficient and outperforms the Active Shape Model algorithm, by capturing finer shape details. PMID:17427745

  3. The international Genome sample resource (IGSR): A worldwide collection of genome variation incorporating the 1000 Genomes Project data

    PubMed Central

    Clarke, Laura; Fairley, Susan; Zheng-Bradley, Xiangqun; Streeter, Ian; Perry, Emily; Lowy, Ernesto; Tassé, Anne-Marie; Flicek, Paul

    2017-01-01

    The International Genome Sample Resource (IGSR; http://www.internationalgenome.org) expands in data type and population diversity the resources from the 1000 Genomes Project. IGSR represents the largest open collection of human variation data and provides easy access to these resources. IGSR was established in 2015 to maintain and extend the 1000 Genomes Project data, which has been widely used as a reference set of human variation and by researchers developing analysis methods. IGSR has mapped all of the 1000 Genomes sequence to the newest human reference (GRCh38), and will release updated variant calls to ensure maximal usefulness of the existing data. IGSR is collecting new structural variation data on the 1000 Genomes samples from long read sequencing and other technologies, and will collect relevant functional data into a single comprehensive resource. IGSR is extending coverage with new populations sequenced by collaborating groups. Here, we present the new data and analysis that IGSR has made available. We have also introduced a new data portal that increases discoverability of our data—previously only browseable through our FTP site—by focusing on particular samples, populations or data sets of interest. PMID:27638885

  4. [Toward exploration of morphological diversity of measurable traits of mammalian skull. 2. Scalar and vector parameters of the forms of group variation].

    PubMed

    Lisovskiĭ, A A; Pavlinov, I Ia

    2008-01-01

    Any morphospace is partitioned by the forms of group variation, and its structure is described by a set of scalar (range, overlap) and vector (direction) characteristics. These are analyzed quantitatively for sex and age variation in a sample of 200 pine marten skulls described by 14 measurable traits. Standard dispersion and variance-components analyses are employed, accompanied by several resampling methods (randomization and bootstrap); effects of changes in the analysis design on the results of these methods are also considered. The maximum-likelihood algorithm of variance-components analysis is shown to give adequate estimates of the portions of particular forms of group variation within the overall disparity. It is quite stable with respect to changes in the analysis design and therefore can be used in explorations of real data with variously unbalanced designs. A new algorithm for estimating the co-directionality of particular forms of group variation within the overall disparity is elaborated, based on the angles between eigenvectors of the covariance matrices of the group-variation effects calculated by dispersion analysis. The null hypothesis that the portion of a given form of group variation is random can be tested by randomizing the respective grouping variable, and the null hypothesis that both the portions and the directions of different forms of group variation are equal can be tested with the bootstrap procedure.
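
    As a concrete illustration of the kind of calculation involved, the sketch below gives a method-of-moments one-way variance-component estimate for a single trait together with a randomization test of the grouping factor; it is not the maximum-likelihood, multi-factor procedure used in the paper, and the function names are illustrative.

        import numpy as np

        def variance_components(values, groups):
            # One-way random-effects decomposition of a trait into between-group
            # and within-group variance (ANOVA / method-of-moments estimator).
            values, groups = np.asarray(values, float), np.asarray(groups)
            levels = np.unique(groups)
            n_i = np.array([(groups == g).sum() for g in levels])
            means = np.array([values[groups == g].mean() for g in levels])
            ss_within = sum(((values[groups == g] - m) ** 2).sum()
                            for g, m in zip(levels, means))
            ms_within = ss_within / (len(values) - len(levels))
            ms_between = (n_i * (means - values.mean()) ** 2).sum() / (len(levels) - 1)
            n0 = (len(values) - (n_i ** 2).sum() / len(values)) / (len(levels) - 1)
            var_between = max((ms_between - ms_within) / n0, 0.0)
            return var_between, ms_within

        def randomization_p(values, groups, n_perm=999, seed=0):
            # Null hypothesis: the between-group variance portion is no larger than
            # expected when group labels are shuffled at random.
            rng = np.random.default_rng(seed)
            observed = variance_components(values, groups)[0]
            perm = [variance_components(values, rng.permutation(groups))[0]
                    for _ in range(n_perm)]
            return (1 + sum(p >= observed for p in perm)) / (n_perm + 1)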

  5. Individual Differences in Dynamic Functional Brain Connectivity across the Human Lifespan.

    PubMed

    Davison, Elizabeth N; Turner, Benjamin O; Schlesinger, Kimberly J; Miller, Michael B; Grafton, Scott T; Bassett, Danielle S; Carlson, Jean M

    2016-11-01

    Individual differences in brain functional networks may be related to complex personal identifiers, including health, age, and ability. Dynamic network theory has been used to identify properties of dynamic brain function from fMRI data, but the majority of analyses and findings remain at the level of the group. Here, we apply hypergraph analysis, a method from dynamic network theory, to quantify individual differences in brain functional dynamics. Using a summary metric derived from the hypergraph formalism, hypergraph cardinality, we investigate individual variations in two separate, complementary data sets. The first data set ("multi-task") consists of 77 individuals engaging in four consecutive cognitive tasks. We observe that hypergraph cardinality exhibits variation across individuals while remaining consistent within individuals between tasks; moreover, the analysis of one of the memory tasks revealed a marginally significant correspondence between hypergraph cardinality and age. This finding motivated a similar analysis of the second data set ("age-memory"), in which 95 individuals, aged 18-75, performed a memory task with a similar structure to the multi-task memory task. With the increased age range in the age-memory data set, the correlation between hypergraph cardinality and age correspondence becomes significant. We discuss these results in the context of the well-known finding linking age with network structure, and suggest that hypergraph analysis should serve as a useful tool in furthering our understanding of the dynamic network structure of the brain.

  6. GSCALite: A Web Server for Gene Set Cancer Analysis.

    PubMed

    Liu, Chun-Jie; Hu, Fei-Fei; Xia, Mengxuan; Han, Leng; Zhang, Qiong; Guo, An-Yuan

    2018-05-22

    The availability of cancer genomic data makes it possible to analyze genes related to cancer. Cancer is usually the result of a set of genes, and the signal of a single gene can be masked by background noise. Here, we present a web server named Gene Set Cancer Analysis (GSCALite) to analyze a set of genes in cancers with the following functional modules: (i) differential expression in tumor vs normal, and the associated survival analysis; (ii) genomic variations and their survival analysis; (iii) gene expression associated cancer pathway activity; (iv) miRNA regulatory network for genes; (v) drug sensitivity for genes; (vi) normal tissue expression and eQTL for genes. GSCALite is a user-friendly web server for dynamic analysis and visualization of gene sets in cancer and their correlation with drug sensitivity, which will be of broad utility to cancer researchers. GSCALite is available on http://bioinfo.life.hust.edu.cn/web/GSCALite/. guoay@hust.edu.cn or zhangqiong@hust.edu.cn. Supplementary data are available at Bioinformatics online.

  7. Identifying group discriminative and age regressive sub-networks from DTI-based connectivity via a unified framework of non-negative matrix factorization and graph embedding

    PubMed Central

    Ghanbari, Yasser; Smith, Alex R.; Schultz, Robert T.; Verma, Ragini

    2014-01-01

    Diffusion tensor imaging (DTI) offers rich insights into the physical characteristics of white matter (WM) fiber tracts and their development in the brain, facilitating a network representation of the brain’s traffic pathways. Such a network representation of brain connectivity has provided a novel means of investigating brain changes arising from pathology, development or aging. The high dimensionality of these connectivity networks necessitates the development of methods that identify the connectivity building blocks or sub-network components that characterize the underlying variation in the population. In addition, the projection of the subject networks onto the basis set provides a low-dimensional representation that teases apart different sources of variation in the sample, facilitating variation-specific statistical analysis. We propose a unified framework of non-negative matrix factorization and graph embedding for learning sub-network patterns of connectivity by their projective non-negative decomposition into a reconstructive basis set, as well as additional basis sets representing variational sources in the population such as age and pathology. The proposed framework is applied to a study of diffusion-based connectivity in subjects with autism and yields localized sparse sub-networks that mostly capture the changes related to pathology and developmental variations. PMID:25037933
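
    The non-negative decomposition at the core of the framework can be sketched with plain scikit-learn NMF, as below; the graph-embedding and projective constraints described in the paper are omitted, and the variable names and component count are illustrative assumptions.

        import numpy as np
        from sklearn.decomposition import NMF

        def subnetwork_basis(X, n_components=10, seed=0):
            # X: subjects x connections matrix, each row a subject's non-negative
            # connectivity matrix flattened to its upper triangle.
            model = NMF(n_components=n_components, init='nndsvd', max_iter=500,
                        random_state=seed)
            loadings = model.fit_transform(X)   # subject loadings on each sub-network
            basis = model.components_           # non-negative sub-network patterns
            return loadings, basis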

  8. Microfocus diffraction from different regions of a protein crystal: structural variations and unit-cell polymorphism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thompson, Michael C.; Cascio, Duilio; Yeates, Todd O.

    Real macromolecular crystals can be non-ideal in a myriad of ways. This often creates challenges for structure determination, while also offering opportunities for greater insight into the crystalline state and the dynamic behavior of macromolecules. To evaluate whether different parts of a single crystal of a dynamic protein, EutL, might be informative about crystal and protein polymorphism, a microfocus X-ray synchrotron beam was used to collect a series of 18 separate data sets from non-overlapping regions of the same crystal specimen. A principal component analysis (PCA) approach was employed to compare the structure factors and unit cells across the data sets, and it was found that the 18 data sets separated into two distinct groups, with large R values (in the 40% range) and significant unit-cell variations between the members of the two groups. This categorization mapped the different data-set types to distinct regions of the crystal specimen. Atomic models of EutL were then refined against two different data sets obtained by separately merging data from the two distinct groups. A comparison of the two resulting models revealed minor but discernable differences in certain segments of the protein structure, and regions of higher deviation were found to correlate with regions where larger dynamic motions were predicted to occur by normal-mode molecular-dynamics simulations. The findings emphasize that large spatially dependent variations may be present across individual macromolecular crystals. This information can be uncovered by simultaneous analysis of multiple partial data sets and can be exploited to reveal new insights about protein dynamics, while also improving the accuracy of the structure-factor data ultimately obtained in X-ray diffraction experiments.

  9. Phenotypic and genome-wide association analysis of spike ethylene in diverse wheat genotypes under heat stress.

    PubMed

    Valluru, Ravi; Reynolds, Matthew P; Davies, William J; Sukumaran, Sivakumar

    2017-04-01

    The gaseous phytohormone ethylene plays an important role in spike development in wheat (Triticum aestivum). However, the genotypic variation and the genomic regions governing spike ethylene (SET) production in wheat under long-term heat stress remain unexplored. We investigated genotypic variation in the production of SET and its relationship with spike dry weight (SDW) in 130 diverse wheat elite lines and landraces under heat-stressed field conditions. We employed an Illumina iSelect 90K single nucleotide polymorphism (SNP) genotyping array to identify the genetic loci for SET and SDW through a genome-wide association study (GWAS) in a subset of the Wheat Association Mapping Initiative (WAMI) panel. The SET and SDW exhibited appreciable genotypic variation among wheat genotypes at the anthesis stage. There was a strong negative correlation between SET and SDW. The GWAS uncovered five and 32 significant SNPs for SET, and 22 and 142 significant SNPs for SDW, in glasshouse and field conditions, respectively. Some of these SNPs closely localized to the SNPs for plant height, suggesting close associations between plant height and spike-related traits. The phenotypic and genetic elucidation of SET and its relationship with SDW supports future efforts toward gene discovery and breeding wheat cultivars with reduced ethylene effects on yield under heat stress. © 2016 The Authors. New Phytologist © 2016 New Phytologist Trust.

  10. [The principal components analysis--method to classify the statistical variables with applications in medicine].

    PubMed

    Dascălu, Cristina Gena; Antohe, Magda Ecaterina

    2009-01-01

    Based on eigenvalue and eigenvector analysis, principal component analysis identifies the subspace of principal components from a set of parameters that is sufficient to characterize the whole set of parameters. Interpreting the data as a cloud of points, we find through geometrical transformations the directions along which the cloud's dispersion is maximal: the lines that pass through the cloud's center of gravity and have a maximal density of points around them, found by defining an appropriate criterion function and minimizing it. This method can be used successfully to simplify the statistical analysis of questionnaires, because it helps us select from a set of items only the most relevant ones, those that cover the variation of the whole data set. For instance, in the presented sample we started from a questionnaire with 28 items and, applying principal component analysis, identified 7 principal components (main items), which significantly simplifies the subsequent statistical analysis.
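
    A minimal sketch of this item-reduction use of PCA is shown below, assuming a hypothetical respondents-by-items matrix X and a variance threshold chosen by the analyst; the 80% cut-off and the function name are illustrative, not taken from the paper.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.preprocessing import StandardScaler

        def retain_components(X, var_threshold=0.80):
            # Standardize the items so PCA works on the correlation structure,
            # then keep the smallest number of components reaching the threshold.
            Z = StandardScaler().fit_transform(X)
            pca = PCA().fit(Z)
            cum = np.cumsum(pca.explained_variance_ratio_)
            k = int(np.searchsorted(cum, var_threshold)) + 1
            loadings = pca.components_[:k]      # item loadings per retained component
            return k, loadings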

  11. RSAT 2015: Regulatory Sequence Analysis Tools

    PubMed Central

    Medina-Rivera, Alejandra; Defrance, Matthieu; Sand, Olivier; Herrmann, Carl; Castro-Mondragon, Jaime A.; Delerce, Jeremy; Jaeger, Sébastien; Blanchet, Christophe; Vincens, Pierre; Caron, Christophe; Staines, Daniel M.; Contreras-Moreira, Bruno; Artufel, Marie; Charbonnier-Khamvongsa, Lucie; Hernandez, Céline; Thieffry, Denis; Thomas-Chollier, Morgane; van Helden, Jacques

    2015-01-01

    RSAT (Regulatory Sequence Analysis Tools) is a modular software suite for the analysis of cis-regulatory elements in genome sequences. Its main applications are (i) motif discovery, appropriate to genome-wide data sets like ChIP-seq, (ii) transcription factor binding motif analysis (quality assessment, comparisons and clustering), (iii) comparative genomics and (iv) analysis of regulatory variations. Nine new programs have been added to the 43 described in the 2011 NAR Web Software Issue, including a tool to extract sequences from a list of coordinates (fetch-sequences from UCSC), novel programs dedicated to the analysis of regulatory variants from GWAS or population genomics (retrieve-variation-seq and variation-scan), a program to cluster motifs and visualize the similarities as trees (matrix-clustering). To deal with the drastic increase of sequenced genomes, RSAT public sites have been reorganized into taxon-specific servers. The suite is well-documented with tutorials and published protocols. The software suite is available through Web sites, SOAP/WSDL Web services, virtual machines and stand-alone programs at http://www.rsat.eu/. PMID:25904632

  12. Temporal and spatial variations in fly ash quality

    USGS Publications Warehouse

    Hower, J.C.; Trimble, A.S.; Eble, C.F.

    2001-01-01

    Fly ash quality, both as the amount of petrographically distinguishable carbons and in chemistry, varies in both time and space. Temporal variations are a function of a number of variables. Variables can include variations in the coal blend organic petrography, mineralogy, and chemistry; variations in the pulverization of the coal, both as a function of the coal's Hardgrove grindability index and as a function of the maintenance and settings of the pulverizers; and variations in the operating conditions of the boiler, including changes in the pollution control system. Spatial variation, as an instantaneous measure of fly ash characteristics, should not involve changes in the first two sets of variables listed above. Spatial variations are a function of the gas flow within the boiler and ducts, certain flow conditions leading to a tendency for segregation of the less-dense carbons in one portion of the gas stream. Caution must be applied in sampling fly ash. Samples from a single bin, or series of bins, may not be representative of the whole fly ash, providing a biased view of the nature of the material. Further, it is generally not possible to be certain about variation until the analysis of the ash is complete. © 2001 Elsevier Science B.V. All rights reserved.

  13. The relationship between observational scale and explained variance in benthic communities

    PubMed Central

    Flood, Roger D.; Frisk, Michael G.; Garza, Corey D.; Lopez, Glenn R.; Maher, Nicole P.

    2018-01-01

    This study addresses the impact of spatial scale on explaining variance in benthic communities. In particular, the analysis estimated the fraction of community variation that occurred at a spatial scale smaller than the sampling interval (i.e., the geographic distance between samples). This estimate is important because it sets a limit on the amount of community variation that can be explained based on the spatial configuration of a study area and sampling design. Six benthic data sets were examined that consisted of faunal abundances, common environmental variables (water depth, grain size, and surficial percent cover), and sonar backscatter treated as a habitat proxy (categorical acoustic provinces). Redundancy analysis was coupled with spatial variograms generated by multiscale ordination to quantify the explained and residual variance at different spatial scales and within and between acoustic provinces. The amount of community variation below the sampling interval of the surveys (< 100 m) was estimated to be 36–59% of the total. Once adjusted for this small-scale variation, > 71% of the remaining variance was explained by the environmental and province variables. Furthermore, these variables effectively explained the spatial structure present in the infaunal community. Overall, no scale problems remained to compromise inferences, and unexplained infaunal community variation had no apparent spatial structure within the observational scale of the surveys (> 100 m), although small-scale gradients (< 100 m) below the observational scale may be present. PMID:29324746

  14. Model-independent constraints on possible modifications of Newtonian gravity

    NASA Technical Reports Server (NTRS)

    Talmadge, C.; Berthias, J.-P.; Hellings, R. W.; Standish, E. M.

    1988-01-01

    New model-independent constraints on possible modifications of Newtonian gravity over solar-system distance scales are presented, and their implications discussed. The constraints arise from the analysis of various planetary astrometric data sets. The results of the model-independent analysis are then applied to set limits on a variation in the 1/r² behavior of gravity, on possible Yukawa-type interactions with ranges of the order of planetary distance scales, and on a deviation from Newtonian gravity of the type discussed by Milgrom (1983).
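
    For reference, the Yukawa-type modification usually tested in such analyses can be written as follows, where alpha is the dimensionless strength and lambda the range of the extra interaction; the notation is the conventional one and is not quoted from the paper. Planetary astrometry then constrains alpha as a function of lambda.

        V(r) = -\frac{G M m}{r}\left(1 + \alpha\, e^{-r/\lambda}\right),
        \qquad
        G_{\mathrm{eff}}(r) = G\left[1 + \alpha\left(1 + \frac{r}{\lambda}\right)e^{-r/\lambda}\right].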

  15. Closure and ratio correlation analysis of lunar chemical and grain size data

    NASA Technical Reports Server (NTRS)

    Butler, J. C.

    1976-01-01

    Major element and major element plus trace element analyses were selected from the lunar data base for Apollo 11, 12 and 15 basalt and regolith samples. Summary statistics for each of the six data sets were compiled, and the effects of closure on the Pearson product moment correlation coefficient were investigated using the Chayes and Kruskal approximation procedure. In general, there are two types of closure effects evident in these data sets: negative correlations of intermediate size which are solely the result of closure, and correlations of small absolute value which depart significantly from their expected closure correlations which are of intermediate size. It is shown that a positive closure correlation will arise only when the product of the coefficients of variation is very small (less than 0.01 for most data sets) and, in general, trace elements in the lunar data sets exhibit relatively large coefficients of variation.
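
    The closure effect itself is easy to reproduce numerically. The snippet below is a small illustrative simulation (not the Chayes and Kruskal approximation used in the paper): independent positive components are normalized to a constant sum, which induces negative correlations whose size grows with the components' coefficients of variation.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 500
        open_parts = np.column_stack([
            rng.normal(50, 5, n),     # larger coefficient of variation
            rng.normal(30, 4, n),
            rng.normal(20, 0.2, n),   # tiny coefficient of variation
        ])

        closed = open_parts / open_parts.sum(axis=1, keepdims=True) * 100.0  # percentages

        print(np.corrcoef(open_parts, rowvar=False).round(2))  # off-diagonals near zero
        print(np.corrcoef(closed, rowvar=False).round(2))      # high-CV pair strongly negative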

  16. Analysis of Duplicated Multiple-Samples Rank Data Using the Mack-Skillings Test.

    PubMed

    Carabante, Kennet Mariano; Alonso-Marenco, Jose Ramon; Chokumnoyporn, Napapan; Sriwattana, Sujinda; Prinyawiwatkul, Witoon

    2016-07-01

    Appropriate analysis for duplicated multiple-samples rank data is needed. This study compared analysis of duplicated rank preference data using the Friedman versus Mack-Skillings tests. Panelists (n = 125) ranked 2 orange juice sets twice: a different-samples set (100%, 70%, and 40% juice) and a similar-samples set (100%, 95%, and 90% juice). These 2 sample sets were designed to produce contrasting differences in preference. For each sample set, rank sum data were obtained from (1) averaged rank data of each panelist from the 2 replications (n = 125), (2) rank data of all panelists from each of the 2 separate replications (n = 125 each), (3) joined rank data of all panelists from the 2 replications (n = 125), and (4) rank data of all panelists pooled from the 2 replications (n = 250); rank data (1), (2), and (4) were separately analyzed by the Friedman test, while those from (3) were analyzed by the Mack-Skillings test. The effect of sample size (n = 10 to 125) was evaluated. For the similar-samples set, higher variations in rank data from the 2 replications were observed; therefore, results of the main effects were more inconsistent among methods and sample sizes. Regardless of analysis method, the larger the sample size, the higher the χ² value and the lower the P-value (testing H0: all samples are not different). Analyzing rank data (2) separately by replication yielded inconsistent conclusions across sample sizes, hence this method is not recommended. The Mack-Skillings test was more sensitive than the Friedman test. Furthermore, it takes into account within-panelist variations and is more appropriate for analyzing duplicated rank data. © 2016 Institute of Food Technologists®
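
    For orientation, the Friedman analysis of the pooled replications (approach 4 above) can be run directly in scipy, as sketched below with hypothetical scores; the Mack-Skillings test has no scipy implementation, so it is not shown here.

        import numpy as np
        from scipy.stats import friedmanchisquare

        rng = np.random.default_rng(0)
        # Hypothetical scores: one value per panelist-replication block (n = 250)
        # for each juice sample (100%, 70%, 40%).
        s100 = rng.normal(8, 1, 250)
        s70 = rng.normal(7, 1, 250)
        s40 = rng.normal(5, 1, 250)

        chi2, p = friedmanchisquare(s100, s70, s40)   # ranks within each block internally
        print(f"chi-square = {chi2:.2f}, p = {p:.4g}")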

  17. Evaluation of between-cow variation in milk urea and rumen ammonia nitrogen concentrations and the association with nitrogen utilization and diet digestibility in lactating cows.

    PubMed

    Huhtanen, P; Cabezas-Garcia, E H; Krizsan, S J; Shingfield, K J

    2015-05-01

    Concentrations of milk urea N (MUN) are influenced by dietary crude protein concentration and intake and could therefore be used as a biomarker of the efficiency of N utilization for milk production (milk N/N intake; MNE) in lactating cows. In the present investigation, data from milk-production trials (production data set; n=1,804 cow/period observations from 21 change-over studies) and metabolic studies involving measurements of nutrient flow at the omasum in lactating cows (flow data set; n=450 cow/period observations from 29 studies) were used to evaluate the influence of between-cow variation on the relationship of MUN with MNE, urinary N (UN) output, and diet digestibility. All measurements were made on cows fed diets based on grass silage supplemented with a range of protein supplements. Data were analyzed by mixed-model regression analysis with diet within experiment and period within experiment as random effects, allowing the effect of diet and period to be excluded. Between-cow coefficient of variation in MUN concentration and MNE was 0.13 and 0.07 in the production data set and 0.11 and 0.08 in the flow data set, respectively. Based on residual variance, the best model for predicting MNE developed from the production data set was MNE (g/kg) = 238 + 7.0 × milk yield (MY; kg/d) - 0.064 × MY² - 2.7 × MUN (mg/dL) - 0.10 × body weight (kg). For the flow data set, including both MUN and rumen ammonia N concentration with MY in the model accounted for more variation in MNE than when either term was used with MY alone. The best model for predicting UN excretion developed from the production data set (n=443) was UN (g/d) = -29 + 4.3 × dry matter intake (kg/d) + 4.3 × MUN + 0.14 × body weight. Between-cow variation had a smaller influence on the association of MUN with MNE and UN output than published estimates of these relationships based on treatment means, in which differences in MUN generally arise from variation in dietary crude protein concentration. For the flow data set, between-cow variation in MUN and rumen ammonia N concentrations was positively associated with total-tract organic matter digestibility. In conclusion, evaluation of phenotypic variation in MUN indicated that between-cow variation in MUN had a smaller effect on MNE compared with published responses of MUN to dietary crude protein concentration, suggesting that a closer control over diet composition relative to requirements has greater potential to improve MNE and lower UN on farm than genetic selection. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
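
    For convenience, the two prediction equations reported above can be written out directly (units as given in the abstract):

        def predict_mne(milk_yield_kg_d, mun_mg_dl, body_weight_kg):
            # Milk N efficiency (g/kg), production data set model.
            return (238 + 7.0 * milk_yield_kg_d - 0.064 * milk_yield_kg_d ** 2
                    - 2.7 * mun_mg_dl - 0.10 * body_weight_kg)

        def predict_un(dry_matter_intake_kg_d, mun_mg_dl, body_weight_kg):
            # Urinary N excretion (g/d), production data set model.
            return (-29 + 4.3 * dry_matter_intake_kg_d + 4.3 * mun_mg_dl
                    + 0.14 * body_weight_kg)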

  18. Bay of Fundy verification of a system for multidate Landsat measurement of suspended sediment

    NASA Technical Reports Server (NTRS)

    Munday, J. C., Jr.; Afoldi, T. T.; Amos, C. L.

    1981-01-01

    A system for automated multidate Landsat CCT MSS measurement of suspended sediment concentration (S) has been implemented and verified on nine sets (108 points) of data from the Bay of Fundy, Canada. The system employs 'chromaticity analysis' to provide automatic pixel-by-pixel adjustment of atmospheric variations, permitting reference calibration data from one or several dates to be spatially and temporally extrapolated to other regions and to other dates. For verification, each data set was used in turn as test data against the remainder as a calibration set: the average absolute error was 44 percent of S over the range 1-1000 mg/l. The system can be used to measure chlorophyll (in the absence of atmospheric variations), Secchi disk depth, and turbidity.

  19. BioVLAB-mCpG-SNP-EXPRESS: A system for multi-level and multi-perspective analysis and exploration of DNA methylation, sequence variation (SNPs), and gene expression from multi-omics data.

    PubMed

    Chae, Heejoon; Lee, Sangseon; Seo, Seokjun; Jung, Daekyoung; Chang, Hyeonsook; Nephew, Kenneth P; Kim, Sun

    2016-12-01

    Measuring gene expression, DNA sequence variation, and DNA methylation status is routinely done using high throughput sequencing technologies. To analyze such multi-omics data and explore relationships, reliable bioinformatics systems are much needed. Existing systems are either for exploring curated data or for processing omics data in the form of a library such as R. Thus, scientists have much difficulty in investigating relationships among gene expression, DNA sequence variation, and DNA methylation using multi-omics data. In this study, we report a system called BioVLAB-mCpG-SNP-EXPRESS for the integrated analysis of DNA methylation, sequence variation (SNPs), and gene expression for distinguishing cellular phenotypes at the pairwise and multiple phenotype levels. The system can be deployed on either the Amazon cloud or a publicly available high-performance computing node, and the data analysis and exploration of the analysis result can be conveniently done using a web-based interface. In order to alleviate analysis complexity, all processes are fully automated, and a graphical workflow system is integrated to show analysis progress in real time. The BioVLAB-mCpG-SNP-EXPRESS system works in three stages. First, it processes and analyzes multi-omics data as input in the form of raw data, i.e., FastQ files. Second, various integrated analyses such as methylation vs. gene expression and mutation vs. methylation are performed. Finally, the analysis result can be explored in a number of ways through a web interface for multi-level, multi-perspective exploration. Multi-level interpretation can be done at the gene, gene set, pathway, or network level, and multi-perspective exploration can proceed from the gene expression, DNA methylation, sequence variation, or relationship perspective. The utility of the system is demonstrated by performing an analysis of a data set of 30 phenotypically distinct breast cancer cell lines. BioVLAB-mCpG-SNP-EXPRESS is available at http://biohealth.snu.ac.kr/software/biovlab_mcpg_snp_express/. Copyright © 2016 Elsevier Inc. All rights reserved.

  20. How many rumbles are there? Acoustic variation and individual identity in the rumble vocalizations of African elephants (Loxodonta africana)

    NASA Astrophysics Data System (ADS)

    Soltis, Joseph M.; Savage, Anne; Leong, Kirsten M.

    2004-05-01

    The most commonly occurring elephant vocalization is the rumble, a frequency-modulated call with infrasonic components. Upwards of ten distinct rumble subtypes have been proposed, but little quantitative work on the acoustic properties of rumbles has been conducted. Rumble vocalizations (N=269) from six females housed at Disney's Animal Kingdom were analyzed. Vocalizations were recorded from microphones in collars around subject necks, and rumbles were digitized and measured using SIGNAL software. Sixteen acoustic variables were measured for each call, extracting both source and filter features. Multidimensional scaling analysis indicates that there are no acoustically distinct rumble subtypes, but that there is quantitative variation across rumbles. Discriminant function analysis showed that the acoustic characteristics of rumbles differ across females. A classification success rate of 65% was achieved when assigning unselected rumbles to one of the six females (test set =64 calls) according to the functions derived from the originally selected calls (training set =205 calls). The rumble is best viewed as a single call type with graded variation, but information regarding individual identity is encoded in female rumbles.

  1. Evaluation of SLAR and thematic mapper MSS data for forest cover mapping using computer-aided analysis techniques

    NASA Technical Reports Server (NTRS)

    Hoffer, R. M. (Principal Investigator)

    1980-01-01

    The column normalizing technique was used to adjust the data for variations in the amplitude of the signal due to look angle effects with respect to solar zenith angle along the scan lines (i.e., across columns). Evaluation of the data set containing the geometric and radiometric adjustments indicates that the data set should be satisfactory for further processing and analysis. Software was developed for degrading the spatial resolution of the aircraft data to produce a total of four data sets for further analysis. The quality of LANDSAT 2 CCT data for the test site is good for channels four, five, and six. Channel seven was not present on the tape. The data received were reformatted and analysis of the test site area was initiated.

  2. Estimation of austral summer net community production in the Amundsen Sea: Self-organizing map analysis approach

    NASA Astrophysics Data System (ADS)

    Park, K.; Hahm, D.; Lee, D. G.; Rhee, T. S.; Kim, H. C.

    2014-12-01

    The Amundsen Sea, Antarctica, is known as one of the regions most susceptible to current climate change, including sea ice melting and sea surface temperature change. In the Southern Ocean, a predominant share of primary production occurs in the continental shelf region, and phytoplankton blooms take place during the austral summer owing to the limited sunlight and sea ice cover. Thus, quantifying the variation of summer-season net community production (NCP) in the Amundsen Sea is essential for analyzing the influence of climate change on the biogeochemical cycle of the Southern Ocean. During the austral summers of 2011, 2012 and 2014, we conducted underway observations of ΔO2/Ar and derived NCP for the Amundsen Sea. Despite the importance of NCP for understanding the biological carbon cycle of the ocean, the observations are too sparse to resolve the spatio-temporal variation in the Amundsen Sea. Therefore, we applied self-organizing map (SOM) analysis to expand our observed data sets and estimate NCP during the summer season. SOM analysis, a type of artificial neural network, has proved to be a useful method for extracting and classifying features in geoscience. In oceanography, SOM has been applied to the analysis of various seawater properties such as sea surface temperature, chlorophyll concentration, pCO2, and NCP; it is especially useful for expanding the spatial coverage of direct measurements or for estimating properties whose satellite observations are technically or spatially limited. In this study, we estimate summer-season NCP and identify a set of variables that optimally delineates the NCP variation in the Amundsen Sea. Moreover, we attempt to analyze the interannual variation of Amundsen Sea NCP by taking climatological factors into account in the SOM analysis.
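
    A minimal sketch of how a trained SOM can be used to extend sparse underway NCP observations is given below, using the third-party minisom package; the predictor set, grid size and training length are illustrative assumptions rather than the configuration used in this study.

        import numpy as np
        from minisom import MiniSom

        def som_fill_ncp(X, ncp, known, grid=(10, 10), n_iter=5000, seed=0):
            # X: observations x predictors (e.g. SST, salinity, chlorophyll, ice fraction),
            # ideally normalized; ncp is known only where the boolean mask `known` is True.
            som = MiniSom(grid[0], grid[1], X.shape[1], sigma=1.0,
                          learning_rate=0.5, random_seed=seed)
            som.train_random(X, n_iter)

            # Average the observed NCP values that map to each SOM node...
            node_sum, node_cnt = np.zeros(grid), np.zeros(grid)
            for x, y in zip(X[known], ncp[known]):
                i, j = som.winner(x)
                node_sum[i, j] += y
                node_cnt[i, j] += 1
            node_mean = np.divide(node_sum, node_cnt,
                                  out=np.full(grid, np.nan), where=node_cnt > 0)

            # ...then look up the node mean for every observation to extend coverage.
            return np.array([node_mean[som.winner(x)] for x in X])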

  3. Residual stress analysis of welded joints by the variational eigenstrain approach

    NASA Astrophysics Data System (ADS)

    Korsunsky, Alexander M.; Regino, Gabriel; Nowell, David

    2005-04-01

    We present the formulation for finding the distribution of eigenstrains, i.e. the sources of residual stress, from a set of measurements of residual elastic strain (e.g. by diffraction), or residual stress, or stress redistribution, or distortion. The variational formulation employed seeks to achieve the best agreement between the model prediction and some measured parameters in the sense of a minimum of a functional given by a sum over the entire set of measurements. The advantage of this approach lies in its flexibility: different sets of measurements and information about different components of the stress-strain state can be incorporated. We demonstrate the power of the technique by analysing experimental data for welds in thin sheet of a nickel superalloy aerospace material. Very good agreement can be achieved between the prediction and the measurement results without the necessity of using iterative solution. In practice complete characterisation of residual stress states is often very difficult, due to limitations of facility access, measurement time or specimen dimensions. Implications of the new technique for experimental analysis are all the more significant, since it allows the reconstruction of the entire stress state from incomplete sets of data.
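
    In generic notation (not copied from the paper), the approach can be summarised as expanding the unknown eigenstrain in a basis and minimising a weighted misfit functional over the measurements:

        \varepsilon^{*}(\mathbf{x}) = \sum_{i=1}^{N} c_i\,\phi_i(\mathbf{x}),
        \qquad
        J(\mathbf{c}) = \sum_{k=1}^{M} w_k \left[ y_k^{\mathrm{model}}(\mathbf{c}) - y_k^{\mathrm{meas}} \right]^2 .

    Because the elastic response is linear in the eigenstrain, y^{model} = A c for some influence matrix A, and the minimiser follows from the normal equations (A^T W A) c = A^T W y^{meas}, consistent with the statement above that no iterative solution is required.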

  4. Demonstration of Systematic Improvements in Application of the Variational Method to Strongly Bound Potentials

    ERIC Educational Resources Information Center

    Ninemire, B.; Mei, W. N.

    2004-01-01

    In applying the variational method, six different sets of trial wave functions are used to calculate the ground state and first excited state energies of the strongly bound potentials, i.e. V(x) = x^{2m}, where m = 4, 5 and 6. It is shown that accurate results can be obtained from thorough analysis of the asymptotic behaviour of the solutions.…

  5. The international Genome sample resource (IGSR): A worldwide collection of genome variation incorporating the 1000 Genomes Project data.

    PubMed

    Clarke, Laura; Fairley, Susan; Zheng-Bradley, Xiangqun; Streeter, Ian; Perry, Emily; Lowy, Ernesto; Tassé, Anne-Marie; Flicek, Paul

    2017-01-04

    The International Genome Sample Resource (IGSR; http://www.internationalgenome.org) expands in data type and population diversity the resources from the 1000 Genomes Project. IGSR represents the largest open collection of human variation data and provides easy access to these resources. IGSR was established in 2015 to maintain and extend the 1000 Genomes Project data, which has been widely used as a reference set of human variation and by researchers developing analysis methods. IGSR has mapped all of the 1000 Genomes sequence to the newest human reference (GRCh38), and will release updated variant calls to ensure maximal usefulness of the existing data. IGSR is collecting new structural variation data on the 1000 Genomes samples from long read sequencing and other technologies, and will collect relevant functional data into a single comprehensive resource. IGSR is extending coverage with new populations sequenced by collaborating groups. Here, we present the new data and analysis that IGSR has made available. We have also introduced a new data portal that increases discoverability of our data (previously only browseable through our FTP site) by focusing on particular samples, populations or data sets of interest. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  6. Individual Differences in Dynamic Functional Brain Connectivity across the Human Lifespan

    PubMed Central

    Davison, Elizabeth N.; Turner, Benjamin O.; Miller, Michael B.; Carlson, Jean M.

    2016-01-01

    Individual differences in brain functional networks may be related to complex personal identifiers, including health, age, and ability. Dynamic network theory has been used to identify properties of dynamic brain function from fMRI data, but the majority of analyses and findings remain at the level of the group. Here, we apply hypergraph analysis, a method from dynamic network theory, to quantify individual differences in brain functional dynamics. Using a summary metric derived from the hypergraph formalism—hypergraph cardinality—we investigate individual variations in two separate, complementary data sets. The first data set (“multi-task”) consists of 77 individuals engaging in four consecutive cognitive tasks. We observe that hypergraph cardinality exhibits variation across individuals while remaining consistent within individuals between tasks; moreover, the analysis of one of the memory tasks revealed a marginally significant correspondence between hypergraph cardinality and age. This finding motivated a similar analysis of the second data set (“age-memory”), in which 95 individuals, aged 18–75, performed a memory task with a similar structure to the multi-task memory task. With the increased age range in the age-memory data set, the correlation between hypergraph cardinality and age correspondence becomes significant. We discuss these results in the context of the well-known finding linking age with network structure, and suggest that hypergraph analysis should serve as a useful tool in furthering our understanding of the dynamic network structure of the brain. PMID:27880785

  7. Combining deep learning and level set for the automated segmentation of the left ventricle of the heart from cardiac cine magnetic resonance.

    PubMed

    Ngo, Tuan Anh; Lu, Zhi; Carneiro, Gustavo

    2017-01-01

    We introduce a new methodology that combines deep learning and level set for the automated segmentation of the left ventricle of the heart from cardiac cine magnetic resonance (MR) data. This combination is relevant for segmentation problems, where the visual object of interest presents large shape and appearance variations, but the annotated training set is small, which is the case for various medical image analysis applications, including the one considered in this paper. In particular, level set methods are based on shape and appearance terms that use small training sets, but present limitations for modelling the visual object variations. Deep learning methods can model such variations using relatively small amounts of annotated training, but they often need to be regularised to produce good generalisation. Therefore, the combination of these methods brings together the advantages of both approaches, producing a methodology that needs small training sets and produces accurate segmentation results. We test our methodology on the MICCAI 2009 left ventricle segmentation challenge database (containing 15 sequences for training, 15 for validation and 15 for testing), where our approach achieves the most accurate results in the semi-automated problem and state-of-the-art results for the fully automated challenge. Crown Copyright © 2016. Published by Elsevier B.V. All rights reserved.

  8. Among-character rate variation distributions in phylogenetic analysis of discrete morphological characters.

    PubMed

    Harrison, Luke B; Larsson, Hans C E

    2015-03-01

    Likelihood-based methods are commonplace in phylogenetic systematics. Although much effort has been directed toward likelihood-based models for molecular data, comparatively less work has addressed models for discrete morphological character (DMC) data. Among-character rate variation (ACRV) may confound phylogenetic analysis, but there have been few analyses of the magnitude and distribution of rate heterogeneity among DMCs. Using 76 data sets covering a range of plants, invertebrate, and vertebrate animals, we used a modified version of MrBayes to test equal, gamma-distributed and lognormally distributed models of ACRV, integrating across phylogenetic uncertainty using Bayesian model selection. We found that in approximately 80% of data sets, unequal-rates models outperformed equal-rates models, especially among larger data sets. Moreover, although most data sets were equivocal, more data sets favored the lognormal rate distribution relative to the gamma rate distribution, lending some support for more complex character correlations than in molecular data. Parsimony estimation of the underlying rate distributions in several data sets suggests that the lognormal distribution is preferred when there are many slowly evolving characters and fewer quickly evolving characters. The commonly adopted four rate category discrete approximation used for molecular data was found to be sufficient to approximate a gamma rate distribution with discrete characters. However, among the two data sets tested that favored a lognormal rate distribution, the continuous distribution was better approximated with at least eight discrete rate categories. Although the effect of rate model on the estimation of topology was difficult to assess across all data sets, it appeared relatively minor between the unequal-rates models for the one data set examined carefully. As in molecular analyses, we argue that researchers should test and adopt the most appropriate model of rate variation for the data set in question. As discrete characters are increasingly used in more sophisticated likelihood-based phylogenetic analyses, it is important that these studies be built on the most appropriate and carefully selected underlying models of evolution. © The Author(s) 2014. Published by Oxford University Press, on behalf of the Society of Systematic Biologists. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
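
    The "four rate category discrete approximation" mentioned above is usually built from category medians of a mean-one gamma distribution; a hedged sketch (Yang-style median method, not the MrBayes internals) is shown below.

        from scipy.stats import gamma

        def discrete_gamma_rates(alpha, k=4):
            # K equal-probability categories of a gamma distribution with mean 1
            # (shape = alpha, scale = 1/alpha), each represented by its median,
            # rescaled so the category rates average exactly 1.
            medians = [gamma.ppf((2 * i + 1) / (2 * k), a=alpha, scale=1.0 / alpha)
                       for i in range(k)]
            mean = sum(medians) / k
            return [m / mean for m in medians]

        print(discrete_gamma_rates(alpha=0.5, k=4))
        print(discrete_gamma_rates(alpha=0.5, k=8))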

  9. Students' Appreciation of Expectation and Variation as a Foundation for Statistical Understanding

    ERIC Educational Resources Information Center

    Watson, Jane M.; Callingham, Rosemary A.; Kelly, Ben A.

    2007-01-01

    This study presents the results of a partial credit Rasch analysis of in-depth interview data exploring statistical understanding of 73 school students in 6 contextual settings. The use of Rasch analysis allowed the exploration of a single underlying variable across contexts, which included probability sampling, representation of temperature…

  10. Documentation of operational protocol for the use of MAMA software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schwartz, Daniel S.

    2016-01-21

    Image analysis of Scanning Electron Microscope (SEM) micrographs is a complex process that can vary significantly between analysts. The factors causing the variation are numerous, and the purpose of Task 2b is to develop and test a set of protocols designed to minimize variation in image analysis between different analysts and laboratories, specifically using the MAMA software package, Version 2.1. The protocols were designed to be “minimally invasive”, so that expert SEM operators will not be overly constrained in the way they analyze particle samples. The protocols will be tested using a round-robin approach where results from expert SEM users at Los Alamos National Laboratory, Lawrence Livermore National Laboratory, Pacific Northwest National Laboratory, Savannah River National Laboratory, and the National Institute of Standards and Technology will be compared. The variation of the results will be used to quantify uncertainty in the particle image analysis process. The round-robin exercise will proceed with 3 levels of rigor, each with its own set of protocols, as described below in Tasks 2b.1, 2b.2, and 2b.3. The uncertainty will be developed using NIST standard reference material SRM 1984 “Thermal Spray Powder – Particle Size Distribution, Tungsten Carbide/Cobalt (Acicular)” [Reference 1]. Full details are available in the Certificate of Analysis, posted on the NIST website (http://www.nist.gov/srm/).

  11. Derivation of a needs based capitation formula for allocating prescribing budgets to health authorities and primary care groups in England: regression analysis

    PubMed Central

    Rice, Nigel; Dixon, Paul; Lloyd, David C E F; Roberts, David

    2000-01-01

    Objective To develop a weighted capitation formula for setting target allocations for prescribing expenditures for health authorities and primary care groups in England. Design Regression analysis relating prescribing costs to the demographic, morbidity, and mortality composition of practice lists. Setting 8500 general practices in England. Subjects Data from the 1991 census were attributed to practice lists on the basis of the place of residence of the practice population. Main outcome measures Variation in age, sex, and temporary resident originated prescribing units (ASTRO(97)-PUs) adjusted net ingredient cost of general practices in England for 1997-8 modelled for the impact of health and social needs after controlling for differences in supply. Results A needs gradient based on the four variables: permanent sickness, percentage of dependants in no carer households, percentage of students, and percentage of births on practice lists. These, together with supply characteristics, explained 41% of variation in prescribing costs per ASTRO(97)-PU adjusted capita across practices. The latter alone explained about 35% of variation in total costs per head across practices. Conclusions The model has good statistical specification and contains intuitively plausible needs drivers of prescribing expenditure. Together with adjustments made for differences in ASTRO(97)-PUs the model is capable of explaining 62% (35% + 0.65 × 41%) of variation in prescribing expenditure at practice level. The results of the study have formed the basis for setting target budgets for 1999-2000 allocations for prescribing expenditure for health authorities and primary care groups. PMID:10650026

  12. Cladistic analyses of behavioural variation in wild Pan troglodytes: exploring the chimpanzee culture hypothesis.

    PubMed

    Lycett, Stephen J; Collard, Mark; McGrew, William C

    2009-10-01

    Long-term field studies have revealed considerable behavioural differences among groups of wild Pan troglodytes. Here, we report three sets of cladistic analyses that were designed to shed light on issues relating to this interpopulation variation that are of particular relevance to palaeoanthropology. In the first set of analyses, we focused on the proximate cause of the variation. Some researchers have argued that it is cultural, while others have suggested that it is the result of genetic differences. Because the eastern and western subspecies of P. troglodytes are well differentiated genetically while groups within the subspecies are not, we reasoned that if the genetic hypothesis is correct, the phylogenetic signal should be stronger when data from the eastern and western subspecies are analysed together compared to when data from only the eastern subspecies are analysed. Using randomisation procedures, we found that the phylogenetic signal was substantially stronger within a single subspecies than with two. The results of the first set of analyses, therefore, were inconsistent with the predictions of the genetic hypothesis. The other two sets of analyses built on the results of the first and assumed that the intergroup behavioural variation is cultural in nature. Recent work has shown that, contrary to what anthropologists and archaeologists have long believed, vertical intergroup transmission is often more important than horizontal intergroup transmission in human cultural evolution. In the second set of analyses, we sought to determine how important vertical transmission has been in the evolution of chimpanzee cultural diversity. The first analysis we carried out indicated that the intergroup similarities and differences in behaviour are consistent with the divergence of the western and eastern subspecies, which is what would be expected if vertical intergroup transmission has been the dominant process. In the second analysis, we found that the chimpanzee cultural data are not only comparable to a series of modern human cultural data sets in terms of how tree-like they are, but are also comparable to a series of genetic, anatomical, and behavioural data sets that can be assumed to have been produced by a branching process. Again, this is what would be expected if vertical intergroup transmission has been the dominant process in chimpanzee cultural evolution. Human culture has long been considered to be adaptive, but recent studies have suggested that this needs to be demonstrated rather than assumed. With this in mind, in the third set of analyses we investigated whether chimpanzee culture is adaptive. We found that the hypothesis that chimpanzee culture is adaptive was supported by an analysis of data from the eastern African subspecies, but not by an analysis of data from the eastern and western subspecies. The results of our analyses have implications for the number of subspecies in Pan troglodytes, the relationship between hominin taxa and Palaeolithic industries, and the evolution of hominin cognition and behaviour.

  13. Charm and beauty quark masses in the MMHT2014 global PDF analysis.

    PubMed

    Harland-Lang, L A; Martin, A D; Motylinski, P; Thorne, R S

    We investigate the variation in the MMHT2014 PDFs when we allow the heavy-quark masses m_c and m_b to vary away from their default values. We make PDF sets available in steps of [Formula: see text] and [Formula: see text], and present the variation in the PDFs and in the predictions. We examine the comparison to the HERA data on charm and beauty structure functions and note that in each case the heavy-quark data, and the inclusive data, have a slight preference for lower masses than our default values. We provide PDF sets with three and four active quark flavours, as well as the standard value of five flavours. We use the pole mass definition of the quark masses, as in the default MMHT2014 analysis, but briefly comment on the MS-bar definition.

  14. Loss model for off-design performance analysis of radial turbines with pivoting-vane, variable-area stators

    NASA Technical Reports Server (NTRS)

    Meitner, P. L.; Glassman, A. J.

    1980-01-01

    An off-design performance loss model for a radial turbine with pivoting, variable-area stators is developed through a combination of analytical modeling and experimental data analysis. A viscous loss model is used for the variation in stator loss with setting angle, and stator vane end-clearance leakage effects are predicted by a clearance flow model. The variation of rotor loss coefficient with stator setting angle is obtained by means of an analytical matching of experimental data for a rotor that was tested with six stators, having throat areas from 20 to 144% of the design area. An incidence loss model is selected to obtain best agreement with experimental data. The stator vane end-clearance leakage model predicts increasing mass flow and decreasing efficiency as a result of end-clearances, with changes becoming significantly larger with decreasing stator area.

  15. Multiset canonical correlations analysis and multispectral, truly multitemporal remote sensing data.

    PubMed

    Nielsen, Allan Aasbjerg

    2002-01-01

    This paper describes two- and multiset canonical correlations analysis (CCA) for data fusion, multisource, multiset, or multitemporal exploratory data analysis. These techniques transform multivariate multiset data into new orthogonal variables called canonical variates (CVs) which, when applied in remote sensing, exhibit ever-decreasing similarity (as expressed by correlation measures) over sets consisting of 1) spectral variables at fixed points in time (R-mode analysis), or 2) temporal variables with fixed wavelengths (T-mode analysis). The CVs are invariant to linear and affine transformations of the original variables within sets which means, for example, that the R-mode CVs are insensitive to changes over time in offset and gain in a measuring device. In a case study, CVs are calculated from Landsat Thematic Mapper (TM) data with six spectral bands over six consecutive years. Both R- and T-mode CVs clearly exhibit the desired characteristic: they show maximum similarity for the low-order canonical variates and minimum similarity for the high-order canonical variates. These characteristics are seen both visually and in objective measures. The results from the multiset CCA R- and T-mode analyses are very different. This difference is ascribed to the noise structure in the data. The CCA methods are related to partial least squares (PLS) methods. This paper very briefly describes multiset CCA-based multiset PLS. Also, the CCA methods can be applied as multivariate extensions to empirical orthogonal functions (EOF) techniques. Multiset CCA is well-suited for inclusion in geographical information systems (GIS).
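
    A minimal two-set sketch of the idea, assuming scikit-learn is available; the synthetic arrays X and Y stand in for six-band images from two acquisition dates, and the multiset extension and the Landsat data are not reproduced:

        import numpy as np
        from sklearn.cross_decomposition import CCA

        rng = np.random.default_rng(1)

        # Synthetic stand-ins for two acquisition dates: n pixels x 6 spectral bands each.
        n = 5000
        common = rng.normal(size=(n, 2))                  # shared (unchanged) structure
        X = common @ rng.normal(size=(2, 6)) + 0.3 * rng.normal(size=(n, 6))
        Y = common @ rng.normal(size=(2, 6)) + 0.3 * rng.normal(size=(n, 6))

        # Two-set CCA: canonical variate pairs are ordered by decreasing correlation,
        # so the low-order variates capture what is most similar between the dates.
        cca = CCA(n_components=6, scale=True)
        U, V = cca.fit_transform(X, Y)
        canon_corr = [np.corrcoef(U[:, k], V[:, k])[0, 1] for k in range(6)]
        print(np.round(canon_corr, 3))   # expected to decrease with component order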

  16. The Molecular Signatures Database (MSigDB) hallmark gene set collection.

    PubMed

    Liberzon, Arthur; Birger, Chet; Thorvaldsdóttir, Helga; Ghandi, Mahmoud; Mesirov, Jill P; Tamayo, Pablo

    2015-12-23

    The Molecular Signatures Database (MSigDB) is one of the most widely used and comprehensive databases of gene sets for performing gene set enrichment analysis. Since its creation, MSigDB has grown beyond its roots in metabolic disease and cancer to include >10,000 gene sets. These better represent a wider range of biological processes and diseases, but the utility of the database is reduced by increased redundancy across, and heterogeneity within, gene sets. To address this challenge, here we use a combination of automated approaches and expert curation to develop a collection of "hallmark" gene sets as part of MSigDB. Each hallmark in this collection consists of a "refined" gene set, derived from multiple "founder" sets, that conveys a specific biological state or process and displays coherent expression. The hallmarks effectively summarize most of the relevant information of the original founder sets and, by reducing both variation and redundancy, provide more refined and concise inputs for gene set enrichment analysis.

  17. Neural Activity Patterns in the Human Brain Reflect Tactile Stickiness Perception.

    PubMed

    Kim, Junsuk; Yeon, Jiwon; Ryu, Jaekyun; Park, Jang-Yeon; Chung, Soon-Cheol; Kim, Sung-Phil

    2017-01-01

    Our previous human fMRI study found brain activations correlated with tactile stickiness perception using the uni-variate general linear model (GLM) (Yeon et al., 2017). Here, we conducted an in-depth investigation on neural correlates of sticky sensations by employing a multivoxel pattern analysis (MVPA) on the same dataset. In particular, we statistically compared multi-variate neural activities in response to the three groups of sticky stimuli: A supra-threshold group including a set of sticky stimuli that evoked vivid sticky perception; an infra-threshold group including another set of sticky stimuli that barely evoked sticky perception; and a sham group including acrylic stimuli with no physically sticky property. Searchlight MVPAs were performed to search for local activity patterns carrying neural information of stickiness perception. Similar to the uni-variate GLM results, significant multi-variate neural activity patterns were identified in postcentral gyrus, subcortical (basal ganglia and thalamus), and insula areas (insula and adjacent areas). Moreover, MVPAs revealed that activity patterns in posterior parietal cortex discriminated the perceptual intensities of stickiness, which was not present in the uni-variate analysis. Next, we applied a principal component analysis (PCA) to the voxel response patterns within identified clusters so as to find low-dimensional neural representations of stickiness intensities. Follow-up clustering analyses clearly showed separate neural grouping configurations between the Supra- and Infra-threshold groups. Interestingly, this neural categorization was in line with the perceptual grouping pattern obtained from the psychophysical data. Our findings thus suggest that different stickiness intensities would elicit distinct neural activity patterns in the human brain and may provide a neural basis for the perception and categorization of tactile stickiness.
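
    A simplified, hypothetical stand-in for one step of such an analysis: linear decoding of stimulus group from multivoxel response patterns with cross-validation (a searchlight analysis repeats this for a small sphere centred on every voxel). The trial matrix and the injected effect are synthetic, and scikit-learn is assumed:

        import numpy as np
        from sklearn.svm import LinearSVC
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.model_selection import cross_val_score, StratifiedKFold

        rng = np.random.default_rng(2)

        # Hypothetical trial-by-voxel response patterns for one region (e.g. one
        # searchlight sphere): 60 trials x 100 voxels, three stimulus groups
        # (0 = supra-threshold, 1 = infra-threshold, 2 = sham).
        X = rng.normal(size=(60, 100))
        y = np.repeat([0, 1, 2], 20)
        X[y == 0, :10] += 0.8          # inject a weak multivoxel signal for illustration

        clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0, max_iter=10000))
        cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
        acc = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
        print(f"decoding accuracy: {acc.mean():.2f} (chance ~= 0.33)")
        # A searchlight analysis repeats this decoding for a sphere centred on every
        # voxel and maps the resulting accuracies back into brain space.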

  18. Neural Activity Patterns in the Human Brain Reflect Tactile Stickiness Perception

    PubMed Central

    Kim, Junsuk; Yeon, Jiwon; Ryu, Jaekyun; Park, Jang-Yeon; Chung, Soon-Cheol; Kim, Sung-Phil

    2017-01-01

    Our previous human fMRI study found brain activations correlated with tactile stickiness perception using the uni-variate general linear model (GLM) (Yeon et al., 2017). Here, we conducted an in-depth investigation on neural correlates of sticky sensations by employing a multivoxel pattern analysis (MVPA) on the same dataset. In particular, we statistically compared multi-variate neural activities in response to the three groups of sticky stimuli: A supra-threshold group including a set of sticky stimuli that evoked vivid sticky perception; an infra-threshold group including another set of sticky stimuli that barely evoked sticky perception; and a sham group including acrylic stimuli with no physically sticky property. Searchlight MVPAs were performed to search for local activity patterns carrying neural information of stickiness perception. Similar to the uni-variate GLM results, significant multi-variate neural activity patterns were identified in postcentral gyrus, subcortical (basal ganglia and thalamus), and insula areas (insula and adjacent areas). Moreover, MVPAs revealed that activity patterns in posterior parietal cortex discriminated the perceptual intensities of stickiness, which was not present in the uni-variate analysis. Next, we applied a principal component analysis (PCA) to the voxel response patterns within identified clusters so as to find low-dimensional neural representations of stickiness intensities. Follow-up clustering analyses clearly showed separate neural grouping configurations between the Supra- and Infra-threshold groups. Interestingly, this neural categorization was in line with the perceptual grouping pattern obtained from the psychophysical data. Our findings thus suggest that different stickiness intensities would elicit distinct neural activity patterns in the human brain and may provide a neural basis for the perception and categorization of tactile stickiness. PMID:28936171

  19. Discrimination of Bacillus anthracis from closely related microorganisms by analysis of 16S and 23S rRNA with oligonucleotide microchips

    DOEpatents

    Bavykin, Sergei G.; Mirzabekov, Andrei D.

    2007-10-30

    The present invention is directed to a novel method of discriminating a highly infectious bacterium Bacillus anthracis from a group of closely related microorganisms. Sequence variations in the 16S and 23S rRNA of the B. cereus subgroup including B. anthracis are utilized to construct an array that can detect these sequence variations through selective hybridizations. The identification and analysis of these sequence variations enables positive discrimination of isolates of the B. cereus group that includes B. anthracis. Discrimination of single base differences in rRNA was achieved with a microchip during analysis of B. cereus group isolates from both single and mixed probes, as well as identification of polymorphic sites. Successful use of a microchip to determine the appropriate subgroup classification, using eight reference microorganisms from the B. cereus group as a study set, was demonstrated.

  20. Mixed variational formulations of finite element analysis of elastoacoustic/slosh fluid-structure interaction

    NASA Technical Reports Server (NTRS)

    Felippa, Carlos A.; Ohayon, Roger

    1991-01-01

    A general three-field variational principle is obtained for the motion of an acoustic fluid enclosed in a rigid or flexible container by the method of canonical decomposition applied to a modified form of the wave equation in the displacement potential. The general principle is specialized to a mixed two-field principle that contains the fluid displacement potential and pressure as independent fields. This principle contains a free parameter alpha. Semidiscrete finite-element equations of motion based on this principle are displayed and applied to the transient response and free vibrations of the coupled fluid-structure problem. It is shown that a particular setting of alpha yields a rich set of formulations that can be customized to fit physical and computational requirements. The variational principle is then extended to handle slosh motions in a uniform gravity field, and used to derive semidiscrete equations of motion that account for such effects.

  1. Quantitative trait nucleotide analysis using Bayesian model selection.

    PubMed

    Blangero, John; Goring, Harald H H; Kent, Jack W; Williams, Jeff T; Peterson, Charles P; Almasy, Laura; Dyer, Thomas D

    2005-10-01

    Although much attention has been given to statistical genetic methods for the initial localization and fine mapping of quantitative trait loci (QTLs), little methodological work has been done to date on the problem of statistically identifying the most likely functional polymorphisms using sequence data. In this paper we provide a general statistical genetic framework, called Bayesian quantitative trait nucleotide (BQTN) analysis, for assessing the likely functional status of genetic variants. The approach requires the initial enumeration of all genetic variants in a set of resequenced individuals. These polymorphisms are then typed in a large number of individuals (potentially in families), and marker variation is related to quantitative phenotypic variation using Bayesian model selection and averaging. For each sequence variant a posterior probability of effect is obtained and can be used to prioritize additional molecular functional experiments. An example of this quantitative nucleotide analysis is provided using the GAW12 simulated data. The results show that the BQTN method may be useful for choosing the most likely functional variants within a gene (or set of genes). We also include instructions on how to use our computer program, SOLAR, for association analysis and BQTN analysis.
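
    The BQTN machinery itself is implemented in SOLAR; the following is only a simplified, non-family-based sketch of the underlying idea, assuming unrelated individuals and using BIC-based weights from single-variant regressions (statsmodels) as an approximation to posterior model probabilities:

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(3)

        # Synthetic data: 500 unrelated individuals, 8 variants (0/1/2 genotypes),
        # variant 3 carries the functional effect.
        n, m = 500, 8
        G = rng.integers(0, 3, size=(n, m)).astype(float)
        y = 0.4 * G[:, 3] + rng.normal(size=n)

        # Fit one regression model per variant and use BIC as a rough stand-in for the
        # log marginal likelihood; convert to posterior model probabilities assuming
        # equal prior weight for each single-variant model.
        bics = []
        for j in range(m):
            X = sm.add_constant(G[:, j])
            bics.append(sm.OLS(y, X).fit().bic)
        bics = np.array(bics)
        w = np.exp(-0.5 * (bics - bics.min()))
        post = w / w.sum()
        for j, p in enumerate(post):
            print(f"variant {j}: approximate posterior probability of effect {p:.3f}")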

  2. The Global Precipitation Climatology Project (GPCP): Results, Status and Future

    NASA Technical Reports Server (NTRS)

    Adler, Robert F.

    2007-01-01

    The Global Precipitation Climatology Project (GPCP) is one of a number of long-term, satellite-based, global analyses routinely produced under the auspices of the World Climate Research Program (WCRP) and its Global Energy and Water Cycle Experiment (GEWEX) program. The research quality analyses are produced a few months after real-time through the efforts of scientists at various national agencies and universities in the U.S., Europe and Japan. The primary product is a monthly analysis of surface precipitation that is globally complete and spans the period 1979-present. There are also pentad analyses for the same period and a daily analysis for the 1997-present period. Although generated with somewhat different data sets and analysis schemes, the pentad and daily data sets are forced to agree with the primary monthly analysis on a grid box by grid box basis. The primary input data sets are from low-orbit passive microwave observations, geostationary infrared observations and surface raingauge information. Examples of research with the data sets are discussed, focusing on tropical (25N-25S) rainfall variations and possible long-term changes in the 28-year (1979-2006) monthly dataset. Techniques are used to discriminate among the variations due to ENSO, volcanic events and possible long-term changes for rainfall over both land and ocean. The impact of the two major volcanic eruptions over the past 25 years is estimated to be about a 5% maximum reduction in tropical rainfall during each event. Although the global change of precipitation in the data set is near zero, a small upward linear change over tropical ocean (0.06 mm/day/10 yr) and a slight downward linear change over tropical land (-0.03 mm/day/10 yr) are examined to understand the impact of the inhomogeneity in the data record and the length of the data set. These positive changes correspond to about a 5% increase (ocean) and 3% increase (ocean plus land) during this time period. Relations between variations in surface temperature and precipitation are analyzed on seasonal to inter-decadal time scales. A new version 3 of GPCP is being planned to incorporate new satellite information (e.g., TRMM) and provide higher spatial and temporal resolution for at least part of the data record. The goals and plans for that GPCP re-processing will be outlined.

  3. RSAT 2015: Regulatory Sequence Analysis Tools.

    PubMed

    Medina-Rivera, Alejandra; Defrance, Matthieu; Sand, Olivier; Herrmann, Carl; Castro-Mondragon, Jaime A; Delerce, Jeremy; Jaeger, Sébastien; Blanchet, Christophe; Vincens, Pierre; Caron, Christophe; Staines, Daniel M; Contreras-Moreira, Bruno; Artufel, Marie; Charbonnier-Khamvongsa, Lucie; Hernandez, Céline; Thieffry, Denis; Thomas-Chollier, Morgane; van Helden, Jacques

    2015-07-01

    RSAT (Regulatory Sequence Analysis Tools) is a modular software suite for the analysis of cis-regulatory elements in genome sequences. Its main applications are (i) motif discovery, appropriate to genome-wide data sets like ChIP-seq, (ii) transcription factor binding motif analysis (quality assessment, comparisons and clustering), (iii) comparative genomics and (iv) analysis of regulatory variations. Nine new programs have been added to the 43 described in the 2011 NAR Web Software Issue, including a tool to extract sequences from a list of coordinates (fetch-sequences from UCSC), novel programs dedicated to the analysis of regulatory variants from GWAS or population genomics (retrieve-variation-seq and variation-scan), a program to cluster motifs and visualize the similarities as trees (matrix-clustering). To deal with the drastic increase of sequenced genomes, RSAT public sites have been reorganized into taxon-specific servers. The suite is well-documented with tutorials and published protocols. The software suite is available through Web sites, SOAP/WSDL Web services, virtual machines and stand-alone programs at http://www.rsat.eu/. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  4. A comprehensive custom panel design for routine hereditary cancer testing: preserving control, improving diagnostics and revealing a complex variation landscape.

    PubMed

    Castellanos, Elisabeth; Gel, Bernat; Rosas, Inma; Tornero, Eva; Santín, Sheila; Pluvinet, Raquel; Velasco, Juan; Sumoy, Lauro; Del Valle, Jesús; Perucho, Manuel; Blanco, Ignacio; Navarro, Matilde; Brunet, Joan; Pineda, Marta; Feliubadaló, Lidia; Capellá, Gabi; Lázaro, Conxi; Serra, Eduard

    2017-01-04

    We wanted to implement an NGS strategy to globally analyze hereditary cancer with diagnostic quality while retaining the same degree of understanding and control we had in pre-NGS strategies. To do this, we developed the I2HCP panel, a custom bait library covering 122 hereditary cancer genes. We improved bait design, tested different NGS platforms and created a clinically driven custom data analysis pipeline. The I2HCP panel was developed using a training set of hereditary colorectal cancer, hereditary breast and ovarian cancer and neurofibromatosis patients and reached an accuracy, analytical sensitivity and specificity greater than 99%, which was maintained in a validation set. I2HCP changed our diagnostic approach, involving clinicians and a genetic diagnostics team from panel design to reporting. The new strategy improved diagnostic sensitivity, solved uncertain clinical diagnoses and identified mutations in new genes. We assessed the genetic variation in the complete set of hereditary cancer genes, revealing a complex variation landscape that coexists with the disease-causing mutation. We developed, validated and implemented a custom NGS-based strategy for hereditary cancer diagnostics that improved our previous workflows. Additionally, the existence of a rich genetic variation in hereditary cancer genes favors the use of this panel to investigate their role in cancer risk.

  5. Global Precipitation Variations and Long-term Changes Derived from the GPCP Monthly Product

    NASA Technical Reports Server (NTRS)

    Adler, Robert F.; Gu, Guojun; Huffman, George; Curtis, Scott

    2005-01-01

    Global and large regional rainfall variations and possible long-term changes are examined using the 25-year (1979-2004) monthly dataset from the Global Precipitation Climatology Project (GPCP). The emphasis is to discriminate among the variations due to ENSO, volcanic events and possible long-term changes. Although the global change of precipitation in the data set is near zero, the data set does indicate an upward trend (0.13 mm/day/25yr) and a downward trend (-0.06 mm/day/25yr) over tropical oceans and lands (25S-25N), respectively. This corresponds to a 4% increase (ocean) and 2% decrease (land) during this time period. Techniques are applied to attempt to eliminate variations due to ENSO and major volcanic eruptions. The impact of the two major volcanic eruptions over the past 25 years is estimated to be about a 5% reduction in tropical rainfall. The modified data set (with ENSO and volcano effect removed) retains the same approximate change slopes, but with reduced variance leading to significance tests with results in the 90-95% range. Inter-comparisons between the GPCP, SSM/I (1988-2004), and TRMM (1998-2004) rainfall products are made to increase or decrease confidence in the changes seen in the GPCP analysis.

  6. Psychometric Inferences from a Meta-Analysis of Reliability and Internal Consistency Coefficients

    ERIC Educational Resources Information Center

    Botella, Juan; Suero, Manuel; Gambara, Hilda

    2010-01-01

    A meta-analysis of the reliability of the scores from a specific test, also called reliability generalization, allows the quantitative synthesis of its properties from a set of studies. It is usually assumed that part of the variation in the reliability coefficients is due to some unknown and implicit mechanism that restricts and biases the…

  7. Study of self-focusing of Non Gaussian laser beam in a plasma with density variation using moment theory approach

    NASA Astrophysics Data System (ADS)

    Pathak, Nidhi; Kaur, Sukhdeep; Singh, Sukhmander

    2018-05-01

    In this paper, self-focusing/defocusing effects have been studied by taking into account the combined effect of ponderomotive and relativistic nonlinearity during laser-plasma interaction with density variation. The formulation is based on the numerical analysis of a second-order nonlinear differential equation for an appropriate set of laser and plasma parameters by employing the moment theory approach. We found that self-focusing increases with increasing laser intensity and density variation. The results obtained are valuable for high harmonic generation, inertial confinement fusion and charged particle acceleration.

  8. Interest rate next-day variation prediction based on hybrid feedforward neural network, particle swarm optimization, and multiresolution techniques

    NASA Astrophysics Data System (ADS)

    Lahmiri, Salim

    2016-02-01

    Multiresolution analysis techniques including continuous wavelet transform, empirical mode decomposition, and variational mode decomposition are tested in the context of interest rate next-day variation prediction. In particular, multiresolution analysis techniques are used to decompose the actual interest rate variation, and a feedforward neural network is used for training and prediction. The particle swarm optimization technique is adopted to optimize its initial weights. For comparison purposes, an autoregressive moving average model, a random walk process and the naive model are used as the main reference models. In order to show the feasibility of the presented hybrid models that combine multiresolution analysis techniques and a feedforward neural network optimized by particle swarm optimization, we used a set of six illustrative interest rates, including Moody's seasoned Aaa corporate bond yield, Moody's seasoned Baa corporate bond yield, 3-Month, 6-Month and 1-Year treasury bills, and the effective federal funds rate. The forecasting results show that all multiresolution-based prediction systems outperform the conventional reference models on the criteria of mean absolute error, mean absolute deviation, and root mean-squared error. Therefore, it is advantageous to adopt hybrid multiresolution techniques and soft computing models to forecast interest rate daily variations as they provide good forecasting performance.
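
    A much-simplified sketch of such a hybrid forecaster, assuming PyWavelets and scikit-learn are available; it substitutes a stationary wavelet transform for the paper's CWT/EMD/VMD variants, omits the particle swarm optimization of the network weights, and uses a synthetic rate series:

        import numpy as np
        import pywt
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(4)

        # Synthetic daily interest-rate series (stand-in for, e.g., a treasury-bill rate).
        rate = np.cumsum(0.01 * rng.normal(size=513)) + 0.002 * np.arange(513)
        dr = np.diff(rate)                       # next-day variations (length 512, multiple of 4)

        # Multiresolution step: a stationary wavelet transform gives same-length
        # approximation/detail components of the variation series at each scale.
        comps = []
        for cA, cD in pywt.swt(dr, "db4", level=2):
            comps.extend([cA, cD])
        features = np.column_stack(comps)        # shape (512, 4)

        # A feedforward network maps today's multiresolution features to tomorrow's variation.
        # (In-sample illustration only: a real forecaster would compute the decomposition
        # causally on a rolling window and tune the network, e.g. with a particle swarm search.)
        X, y = features[:-1], dr[1:]
        model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0).fit(X, y)
        mae = np.mean(np.abs(model.predict(X) - y))
        naive = np.mean(np.abs(np.diff(dr)))     # random-walk (no-change) benchmark
        print(f"in-sample MAE: {mae:.4f} vs naive benchmark: {naive:.4f}")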

  9. Pre-analytical and analytical variation of drug determination in segmented hair using ultra-performance liquid chromatography-tandem mass spectrometry.

    PubMed

    Nielsen, Marie Katrine Klose; Johansen, Sys Stybe; Linnet, Kristian

    2014-01-01

    Assessment of total uncertainty of analytical methods for the measurements of drugs in human hair has mainly been derived from the analytical variation. However, in hair analysis several other sources of uncertainty will contribute to the total uncertainty. Particularly, in segmental hair analysis pre-analytical variations associated with the sampling and segmentation may be significant factors in the assessment of the total uncertainty budget. The aim of this study was to develop and validate a method for the analysis of 31 common drugs in hair using ultra-performance liquid chromatography-tandem mass spectrometry (UHPLC-MS/MS) with focus on the assessment of both the analytical and pre-analytical sampling variations. The validated method was specific, accurate (80-120%), and precise (CV≤20%) across a wide linear concentration range of 0.025-25 ng/mg for most compounds. The analytical variation was estimated to be less than 15% for almost all compounds. The method was successfully applied to 25 segmented hair specimens from deceased drug addicts showing a broad pattern of poly-drug use. The pre-analytical sampling variation was estimated from the genuine duplicate measurements of two bundles of hair collected from each subject after subtraction of the analytical component. For the most frequently detected analytes, the pre-analytical variation was estimated to be 26-69%. Thus, the pre-analytical variation was 3-7 fold larger than the analytical variation (7-13%) and hence the dominant component in the total variation (29-70%). The present study demonstrated the importance of including the pre-analytical variation in the assessment of the total uncertainty budget and in the setting of the 95%-uncertainty interval (±2CVT). Excluding the pre-analytical sampling variation could significantly affect the interpretation of results from segmental hair analysis. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
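
    A small worked sketch of this kind of uncertainty budget for a single duplicate pair, assuming the analytical and pre-analytical components add in quadrature (CVT^2 = CVA^2 + CVP^2); the concentrations and the 10% analytical CV are illustrative values, not the paper's:

        import math

        # Duplicate hair-bundle results for one subject (ng/mg); illustrative values.
        a, b = 0.80, 1.25

        # Total variation estimated from genuine duplicates: the SD estimate from a
        # pair is |a - b| / sqrt(2), so CV_T = |a - b| / (sqrt(2) * mean).
        cv_total = abs(a - b) / ((a + b) / 2) / math.sqrt(2)

        # Analytical component known from method validation (assumed ~10 % here).
        cv_analytical = 0.10

        # Pre-analytical (sampling/segmentation) component, assuming quadrature addition.
        cv_preanalytical = math.sqrt(max(cv_total**2 - cv_analytical**2, 0.0))

        # 95 % uncertainty interval around the reported concentration, using +/- 2*CV_T.
        conc = (a + b) / 2
        low, high = conc * (1 - 2 * cv_total), conc * (1 + 2 * cv_total)
        print(f"CV_total={cv_total:.2%}, CV_pre={cv_preanalytical:.2%}, "
              f"95% interval: {low:.2f}-{high:.2f} ng/mg")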

  10. Organization and variation analysis of 5S rDNA in different ploidy-level hybrids of red crucian carp × topmouth culter.

    PubMed

    He, Weiguo; Qin, Qinbo; Liu, Shaojun; Li, Tangluo; Wang, Jing; Xiao, Jun; Xie, Lihua; Zhang, Chun; Liu, Yun

    2012-01-01

    Through distant crossing, diploid, triploid and tetraploid hybrids of red crucian carp (Carassius auratus red var., RCC♀, Cyprininae, 2n = 100) × topmouth culter (Erythroculter ilishaeformis Bleeker, TC♂, Cultrinae, 2n = 48) were successfully produced. Diploid hybrids possessed 74 chromosomes with one set from RCC and one set from TC; triploid hybrids harbored 124 chromosomes with two sets from RCC and one set from TC; tetraploid hybrids had 148 chromosomes with two sets from RCC and two sets from TC. The 5S rDNA of the three different ploidy-level hybrids and their parents were sequenced and analyzed. There were three monomeric 5S rDNA classes (designated class I: 203 bp; class II: 340 bp; and class III: 477 bp) in RCC and two monomeric 5S rDNA classes (designated class IV: 188 bp, and class V: 286 bp) in TC. In the hybrid offspring, diploid hybrids inherited three 5S rDNA classes from their female parent (RCC) and only class IV from their male parent (TC). Triploid hybrids inherited class II and class III from their female parent (RCC) and class IV from their male parent (TC). Tetraploid hybrids gained class II and class III from their female parent (RCC), and generated a new 5S rDNA sequence (designated class I-N). The specific paternal 5S rDNA sequence of class V was not found in the hybrid offspring. Sequence analysis of 5S rDNA revealed the influence of hybridization and polyploidization on the organization and variation of 5S rDNA in fish. This is the first report on the coexistence in vertebrates of viable diploid, triploid and tetraploid hybrids produced by crossing parents with different chromosome numbers, and these new hybrids are novel specimens for studying the genomic variation in the first generation of interspecific hybrids, which has significance for evolution and fish genetics.

  11. Accuracy, repeatability, and reproducibility of Artemis very high-frequency digital ultrasound arc-scan lateral dimension measurements

    PubMed Central

    Reinstein, Dan Z.; Archer, Timothy J.; Silverman, Ronald H.; Coleman, D. Jackson

    2008-01-01

    Purpose To determine the accuracy, repeatability, and reproducibility of measurement of lateral dimensions using the Artemis (Ultralink LLC) very high-frequency (VHF) digital ultrasound (US) arc scanner. Setting London Vision Clinic, London, United Kingdom. Methods A test object was measured first with a micrometer and then with the Artemis arc scanner. Five sets of 10 consecutive B-scans of the test object were performed with the scanner. The test object was removed from the system between each scan set. One expert observer and one newly trained observer separately measured the lateral dimension of the test object. Two-factor analysis of variance was performed. The accuracy was calculated as the average bias of the scan set averages. The repeatability and reproducibility coefficients were calculated. The coefficient of variation (CV) was calculated for repeatability and reproducibility. Results The test object was measured to be 10.80 mm wide. The mean lateral dimension bias was 0.00 mm. The repeatability coefficient was 0.114 mm. The reproducibility coefficient was 0.026 mm. The repeatability CV was 0.38%, and the reproducibility CV was 0.09%. There was no statistically significant variation between observers (P = .0965). There was a statistically significant variation between scan sets (P = .0036) attributed to minor vertical changes in the alignment of the test object between consecutive scan sets. Conclusion The Artemis VHF digital US arc scanner obtained accurate, repeatable, and reproducible measurements of lateral dimensions of the size commonly found in the anterior segment. PMID:17081860
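
    A sketch of how repeatability and reproducibility figures of this kind are commonly derived from replicated scan sets (pooled within-set and between-set standard deviations, coefficient = 2.77 x SD); the measurements below are synthetic and the exact two-factor ANOVA model used by the authors may differ:

        import numpy as np

        rng = np.random.default_rng(5)

        # Synthetic data: 5 scan sets x 10 consecutive B-scans of a ~10.80 mm test object.
        true_width, n_sets, n_scans = 10.80, 5, 10
        set_offsets = rng.normal(0, 0.01, size=n_sets)          # set-to-set (alignment) variation
        scans = true_width + set_offsets[:, None] + rng.normal(0, 0.04, size=(n_sets, n_scans))

        grand_mean = scans.mean()
        sd_within = np.sqrt(np.mean(scans.var(axis=1, ddof=1)))  # pooled within-set SD
        sd_between = scans.mean(axis=1).std(ddof=1)              # SD of set means

        repeatability_coeff = 2.77 * sd_within     # expected span of 95% of repeat differences
        reproducibility_coeff = 2.77 * sd_between
        print(f"bias vs nominal: {grand_mean - true_width:+.3f} mm")
        print(f"repeatability coefficient: {repeatability_coeff:.3f} mm "
              f"(CV {sd_within / grand_mean:.2%})")
        print(f"reproducibility coefficient: {reproducibility_coeff:.3f} mm "
              f"(CV {sd_between / grand_mean:.2%})")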

  12. Managing in-hospital quality improvement: An importance-performance analysis to set priorities for ST-elevation myocardial infarction care.

    PubMed

    Aeyels, Daan; Seys, Deborah; Sinnaeve, Peter R; Claeys, Marc J; Gevaert, Sofie; Schoors, Danny; Sermeus, Walter; Panella, Massimiliano; Bruyneel, Luk; Vanhaecht, Kris

    2018-02-01

    A focus on specific priorities increases the success rate of quality improvement efforts for broad and complex-care processes. Importance-performance analysis presents a possible approach to set priorities around which to design and implement effective quality improvement initiatives. Persistent variation in hospital performance makes ST-elevation myocardial infarction care relevant to consider for importance-performance analysis. The purpose of this study was to identify quality improvement priorities in ST-elevation myocardial infarction care. Importance and performance levels of ST-elevation myocardial infarction key interventions were combined in an importance-performance analysis. Content validity indexes on 23 ST-elevation myocardial infarction key interventions of a multidisciplinary RAND Delphi Survey defined importance levels. Structured review of 300 patient records in 15 acute hospitals determined performance levels. The significance of between-hospital variation was determined by a Kruskal-Wallis test. A performance heat-map allowed for hospital-specific priority setting. Seven key interventions were each rated as an overall improvement priority. Priority key interventions related to risk assessment, timely reperfusion by percutaneous coronary intervention and secondary prevention. Between-hospital performance varied significantly for the majority of key interventions. The type and number of priorities varied strongly across hospitals. Guideline adherence in ST-elevation myocardial infarction care is low and improvement priorities vary between hospitals. Importance-performance analysis helps clinicians and management in demarcation of the nature, number and order of improvement priorities. By offering a tailored improvement focus, this methodology makes improvement efforts more specific and achievable.
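
    A minimal sketch of the quadrant logic behind an importance-performance analysis; the intervention names, importance (content validity) scores and performance (adherence) rates are invented, and median cut-offs stand in for whatever thresholds the authors used:

        import numpy as np

        # Hypothetical key interventions with importance (0-1 content validity index)
        # and performance (0-1 adherence measured from record review).
        interventions = {
            "risk assessment on admission":      (0.95, 0.55),
            "reperfusion by PCI within 90 min":  (0.98, 0.70),
            "aspirin at discharge":              (0.90, 0.96),
            "statin at discharge":               (0.85, 0.92),
            "cardiac rehabilitation referral":   (0.70, 0.40),
        }

        imp = np.array([v[0] for v in interventions.values()])
        perf = np.array([v[1] for v in interventions.values()])
        imp_cut, perf_cut = np.median(imp), np.median(perf)

        # Quadrant assignment: high importance + low performance marks an improvement priority.
        for name, (i, p) in interventions.items():
            if i >= imp_cut and p < perf_cut:
                label = "PRIORITY (concentrate here)"
            elif i >= imp_cut:
                label = "keep up the good work"
            elif p < perf_cut:
                label = "low priority"
            else:
                label = "possible overkill"
            print(f"{name:38s} importance={i:.2f} performance={p:.2f} -> {label}")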

  13. [Study on discrimination of varieties of fire resistive coating for steel structure based on near-infrared spectroscopy].

    PubMed

    Xue, Gang; Song, Wen-qi; Li, Shu-chao

    2015-01-01

    In order to achieve the rapid identification of fire resistive coatings for steel structures of different brands in circulation, a new method for the fast discrimination of varieties of fire resistive coating for steel structure by means of near infrared spectroscopy was proposed. A raster scanning near infrared spectroscopy instrument and near infrared diffuse reflectance spectroscopy were applied to collect the spectral curves of different brands of fire resistive coating for steel structure, and the spectral data were preprocessed with standard normal variate transformation (SNV) and the Norris second derivative. Principal component analysis (PCA) was applied to the near infrared spectra for cluster analysis. The analysis results showed that the cumulative reliability of PC1 to PC5 was 99.791%. A three-dimensional plot was drawn with the scores of PC1, PC2 and PC3 × 10, which appeared to provide the best clustering of the varieties of fire resistive coating for steel structure. A total of 150 fire resistive coating samples were divided randomly into a calibration set and a validation set; the calibration set had 125 samples with 25 samples of each variety, and the validation set had 25 samples with 5 samples of each variety. According to the principal component scores of unknown samples, Mahalanobis distance values between each variety and the unknown samples were calculated to realize the discrimination of different varieties. The qualitative analysis model achieved a 10% recognition ratio in the external verification of unknown samples. The results demonstrated that this identification method can be used as a rapid, accurate method to identify the classification of fire resistive coating for steel structure and provide a technical reference for market regulation.
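
    A sketch of the classification scheme described (principal component scores followed by Mahalanobis distance to each variety's calibration cluster), using synthetic spectra in place of the near infrared measurements and scikit-learn for the PCA step:

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(6)

        # Synthetic "spectra": 5 coating varieties x 25 calibration samples x 200 wavelengths.
        n_var, n_cal, n_wl = 5, 25, 200
        centers = rng.normal(size=(n_var, n_wl))
        X_cal = np.vstack([c + 0.2 * rng.normal(size=(n_cal, n_wl)) for c in centers])
        y_cal = np.repeat(np.arange(n_var), n_cal)

        # Reduce to a few principal components, as in the paper (PC1-PC5 here).
        pca = PCA(n_components=5).fit(X_cal)
        scores = pca.transform(X_cal)

        def classify(spectrum):
            """Mahalanobis distance of an unknown sample's scores to each variety's cluster."""
            s = pca.transform(spectrum.reshape(1, -1)).ravel()
            dists = []
            for k in range(n_var):
                grp = scores[y_cal == k]
                mu, cov = grp.mean(axis=0), np.cov(grp, rowvar=False)
                diff = s - mu
                dists.append(float(diff @ np.linalg.inv(cov) @ diff))
            return int(np.argmin(dists)), dists

        unknown = centers[2] + 0.2 * rng.normal(size=n_wl)
        pred, _ = classify(unknown)
        print(f"predicted variety: {pred} (true: 2)")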

  14. Eigenvalue-eigenvector decomposition (EED) analysis of dissimilarity and covariance matrix obtained from total synchronous fluorescence spectral (TSFS) data sets of herbal preparations: Optimizing the classification approach

    NASA Astrophysics Data System (ADS)

    Tarai, Madhumita; Kumar, Keshav; Divya, O.; Bairi, Partha; Mishra, Kishor Kumar; Mishra, Ashok Kumar

    2017-09-01

    The present work compares the dissimilarity and covariance based unsupervised chemometric classification approaches by taking the total synchronous fluorescence spectroscopy data sets acquired for the cumin and non-cumin based herbal preparations. The conventional decomposition method involves eigenvalue-eigenvector analysis of the covariance of the data set and finds the factors that can explain the overall major sources of variation present in the data set. The conventional approach does this irrespective of the fact that the samples belong to intrinsically different groups and hence leads to poor class separation. The present work shows that classification of such samples can be optimized by performing the eigenvalue-eigenvector decomposition on the pair-wise dissimilarity matrix.
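
    A compact sketch contrasting the two decompositions being compared: eigenanalysis of the covariance matrix (classical PCA) versus eigenanalysis of a double-centred pair-wise dissimilarity matrix (classical multidimensional scaling); two synthetic classes stand in for the cumin and non-cumin spectra:

        import numpy as np
        from scipy.spatial.distance import squareform, pdist

        rng = np.random.default_rng(7)

        # Two intrinsic classes (e.g. cumin vs non-cumin preparations), 30 samples x 50 variables.
        A = rng.normal(0.0, 1.0, size=(15, 50))
        B = rng.normal(0.6, 1.0, size=(15, 50))
        X = np.vstack([A, B])
        Xc = X - X.mean(axis=0)

        # (1) Covariance route: eigenvalue-eigenvector decomposition of the covariance matrix.
        cov = np.cov(Xc, rowvar=False)
        eval_c, evec_c = np.linalg.eigh(cov)
        pc_scores = Xc @ evec_c[:, ::-1][:, :2]          # scores on the top two eigenvectors

        # (2) Dissimilarity route: double-centre the squared pair-wise distance matrix
        #     (classical multidimensional scaling) and decompose it.
        D2 = squareform(pdist(X, metric="euclidean")) ** 2
        n = D2.shape[0]
        J = np.eye(n) - np.ones((n, n)) / n
        Bmat = -0.5 * J @ D2 @ J
        eval_d, evec_d = np.linalg.eigh(Bmat)
        mds_scores = evec_d[:, ::-1][:, :2] * np.sqrt(np.maximum(eval_d[::-1][:2], 0))

        labels = np.array([0] * 15 + [1] * 15)
        for name, S in [("covariance/PCA", pc_scores), ("dissimilarity/MDS", mds_scores)]:
            sep = abs(S[labels == 0, 0].mean() - S[labels == 1, 0].mean()) / S[:, 0].std()
            print(f"{name:18s} class separation on first axis: {sep:.2f}")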

  15. Automatic classification of spectral units in the Aristarchus plateau

    NASA Astrophysics Data System (ADS)

    Erard, S.; Le Mouelic, S.; Langevin, Y.

    1999-09-01

    A reduction scheme has been recently proposed for the NIR images of Clementine (Le Mouelic et al, JGR 1999). This reduction has been used to build an integrated UVvis-NIR image cube of the Aristarchus region, from which compositional and maturity variations can be studied (Pinet et al, LPSC 1999). We will present an analysis of this image cube, providing a classification into spectral types and spectral units. The image cube is processed with Gmode analysis using three different data sets: Normalized spectra provide a classification based mainly on spectral slope variations (i.e. maturity and volcanic glasses). This analysis discriminates between craters plus ejecta, mare basalts, and DMD. Olivine-rich areas and the Aristarchus central peak are also recognized. Continuum-removed spectra provide a classification more related to compositional variations, which correctly identifies olivine- and pyroxene-rich areas (in Aristarchus, Krieger, Schiaparelli, etc.). A third analysis uses spectral parameters related to maturity and Fe composition (reflectance, 1 µm band depth, and spectral slope) rather than intensities. It provides the most spatially consistent picture, but fails in detecting Vallis Schroeteri and DMDs. A supplementary unit, younger and rich in pyroxene, is found on the Aristarchus south rim. In conclusion, Gmode analysis can discriminate between different spectral types already identified with more classic methods (PCA, linear mixing, etc.). No previous assumption is made on the data structure, such as the number and nature of endmembers, or a linear relationship between input variables. The variability of the spectral types is intrinsically accounted for, so that the level of analysis is always restricted to meaningful limits. A complete classification should integrate several analyses based on different sets of parameters. Gmode is therefore a powerful, light tool to perform first-look analysis of spectral imaging data. This research has been partly funded by the French Programme National de Planetologie.

  16. Cost and cost-effectiveness of nationwide school-based helminth control in Uganda

    PubMed Central

    BROOKER, SIMON; KABATEREINE, NARCIS B; FLEMING, FIONA; DEVLIN, NANCY

    2009-01-01

    Estimates of cost and cost-effectiveness are typically based on a limited number of small-scale studies with no investigation of the existence of economies of scale or intra-country variation in cost and cost-effectiveness. This information gap hinders the efficient allocation of health care resources and the ability to generalize estimates to other settings. The current study investigates the intra-country variation in the cost and cost-effectiveness of nationwide school-based treatment of helminth (worm) infection in Uganda. Programme cost data were collected through semi-structured interviews with district officials and from accounting records in six of the 23 intervention districts. Both financial and economic costs were assessed. Costs were estimated on the basis of cost in US$ per schoolchild treated and an incremental cost effectiveness ratio (cost in US$ per case of anaemia averted) was used to evaluate programme cost-effectiveness. Sensitivity analysis was performed to assess the effect of discount rate and drug price. The overall economic cost per child treated in the six districts was US$ 0.54 and the cost-effectiveness was US$ 3.19 per case of anaemia averted. Analysis indicated that estimates of both cost and cost-effectiveness differ markedly with the total number of children who received treatment, indicating economies of scale. There was also substantial variation between districts in the cost per individual treated (US$ 0.41-0.91) and cost per anaemia case averted (US$ 1.70-9.51). Independent variables were shown to be statistically associated with both sets of estimates. This study highlights the potential bias in transferring data across settings without understanding the nature of observed variations. PMID:18024966

  17. A systematic evaluation of normalization methods in quantitative label-free proteomics.

    PubMed

    Välikangas, Tommi; Suomi, Tomi; Elo, Laura L

    2018-01-01

    To date, mass spectrometry (MS) data remain inherently biased as a result of reasons ranging from sample handling to differences caused by the instrumentation. Normalization is the process that aims to account for the bias and make samples more comparable. The selection of a proper normalization method is a pivotal task for the reliability of the downstream analysis and results. Many normalization methods commonly used in proteomics have been adapted from the DNA microarray techniques. Previous studies comparing normalization methods in proteomics have focused mainly on intragroup variation. In this study, several popular and widely used normalization methods representing different strategies in normalization are evaluated using three spike-in and one experimental mouse label-free proteomic data sets. The normalization methods are evaluated in terms of their ability to reduce variation between technical replicates, their effect on differential expression analysis and their effect on the estimation of logarithmic fold changes. Additionally, we examined whether normalizing the whole data globally or in segments for the differential expression analysis has an effect on the performance of the normalization methods. We found that variance stabilization normalization (Vsn) reduced variation the most between technical replicates in all examined data sets. Vsn also performed consistently well in the differential expression analysis. Linear regression normalization and local regression normalization performed also systematically well. Finally, we discuss the choice of a normalization method and some qualities of a suitable normalization method in the light of the results of our evaluation. © The Author 2016. Published by Oxford University Press.
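
    A sketch of the evaluation logic (apply a normalization, then compare variation between technical replicates before and after), using simple median alignment on synthetic log-intensities rather than the Vsn or regression methods assessed in the paper:

        import numpy as np

        rng = np.random.default_rng(8)

        # Synthetic log2 intensities: 1000 proteins x 6 technical replicates, each
        # replicate shifted by a sample-specific bias (e.g. loading differences).
        true = rng.normal(20, 2, size=(1000, 1))
        bias = rng.normal(0, 0.5, size=(1, 6))
        data = true + bias + rng.normal(0, 0.2, size=(1000, 6))

        def median_normalize(x):
            # Align each replicate's median to the overall median (one simple strategy).
            return x - np.median(x, axis=0, keepdims=True) + np.median(x)

        def replicate_variation(x):
            # Mean per-protein standard deviation across technical replicates.
            return np.mean(np.std(x, axis=1, ddof=1))

        for name, x in [("raw", data), ("median-normalized", median_normalize(data))]:
            print(f"{name:18s} mean between-replicate SD: {replicate_variation(x):.3f}")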

  18. The prevalence of stillbirths: a systematic review

    PubMed Central

    Say, Lale; Donner, Allan; Gülmezoglu, A Metin; Taljaard, Monica; Piaggio, Gilda

    2006-01-01

    Background Stillbirth rate is an important indicator of access to and quality of antenatal and delivery care. Obtaining overall estimates across various regions of the world is not straightforward due to variation in definitions, data collection methods and reporting. Methods We conducted a systematic review of a range of pregnancy-related conditions including stillbirths and performed meta-analysis of the subset of studies reporting stillbirth rates. We examined variation across rates and used meta-regression techniques to explain observed variation. Results We identified 389 articles on stillbirth prevalence among the 2580 included in the systematic review. We included 70 of these, providing 80 data sets from 50 countries, in the meta-analysis. Pooled prevalence rates show variation across various subgroup categories. Rates per 100 births are higher in studies conducted in less developed country settings as compared to more developed (1.17 versus 0.50), of inadequate quality as compared to adequate (1.12 versus 0.66), using a sub-national sample as compared to a national one (1.38 versus 0.68), reporting all stillbirths as compared to late stillbirths (0.95 versus 0.63), published in non-English as compared to English (0.91 versus 0.59) and as journal articles as compared to non-journal (1.37 versus 0.67). The results of the meta-regression show the significance of two predictor variables – development status of the setting and study quality – on stillbirth prevalence. Conclusion Stillbirth prevalence at the community level is typically less than 1% in more developed parts of the world and could exceed 3% in less developed regions. Regular reviews of stillbirth rates in appropriately designed and reported studies are useful in monitoring the adequacy of care. Systematic reviews of prevalence studies are helpful in explaining sources of variation across rates. Exploring these methodological issues will lead to improved standards for assessing the burden of reproductive ill-health. PMID:16401351
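
    A compact DerSimonian-Laird random-effects pooling sketch of the kind used to combine prevalence estimates; the study counts are invented and the meta-regression on study-level covariates is not reproduced:

        import numpy as np

        # Hypothetical study-level data: stillbirths and total births per study.
        events = np.array([45, 120, 18, 300, 60, 9])
        births = np.array([5000, 9000, 3500, 25000, 4000, 1800])

        p = events / births                      # study-level stillbirth proportions
        v = p * (1 - p) / births                 # approximate within-study variances

        # DerSimonian-Laird estimate of between-study variance (tau^2).
        w_fixed = 1 / v
        p_fixed = np.sum(w_fixed * p) / np.sum(w_fixed)
        Q = np.sum(w_fixed * (p - p_fixed) ** 2)
        df = len(p) - 1
        C = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
        tau2 = max(0.0, (Q - df) / C)

        # Random-effects pooled rate and 95% confidence interval.
        w_re = 1 / (v + tau2)
        p_re = np.sum(w_re * p) / np.sum(w_re)
        se = np.sqrt(1 / np.sum(w_re))
        print(f"pooled stillbirth rate: {100 * p_re:.2f} per 100 births "
              f"(95% CI {100 * (p_re - 1.96 * se):.2f}-{100 * (p_re + 1.96 * se):.2f})")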

  19. A decorated raven bone from the Zaskalnaya VI (Kolosovskaya) Neanderthal site, Crimea

    PubMed Central

    Evans, Sarah; Stepanchuk, Vadim; Tsvelykh, Alexander; d’Errico, Francesco

    2017-01-01

    We analyze a radius bone fragment of a raven (Corvus corax) from Zaskalnaya VI rock shelter, Crimea. The object bears seven notches and comes from an archaeological level attributed to a Micoquian industry dated to between 38 and 43 cal kyr BP. Our study aims to examine the degree of regularity and intentionality of this set of notches through their technological and morphometric analysis, complemented by comparative experimental work. Microscopic analysis of the notches indicate that they were produced by the to-and-fro movement of a lithic cutting edge and that two notches were added to fill in the gap left between previously cut notches, probably to increase the visual consistency of the pattern. Multivariate analysis of morphometric data recorded on the archaeological notches and sets of notches cut by nine modern experimenters on radii of domestic turkeys shows that the variations recorded on the Zaskalnaya set are comparable to experimental sets made with the aim of producing similar, parallel, equidistant notches. Identification of the Weber Fraction, the constant that accounts for error in human perception, for equidistant notches cut on bone rods and its application to the Zaskalnaya set of notches and thirty-six sets of notches incised on seventeen Upper Palaeolithic bone objects from seven sites indicate that the Zaskalnaya set falls within the range of variation of regularly spaced experimental and Upper Palaeolithic sets of notches. This suggests that even if the production of the notches may have had a utilitarian reason the notches were made with the goal of producing a visually consistent pattern. This object represents the first instance of a bird bone from a Neanderthal site bearing modifications that cannot be explained as the result of butchery activities and for which a symbolic argument can be built on direct rather than circumstantial evidence. PMID:28355292

  20. Sources of Variance in Baseline Gene Expression in the Rodent Liver

    EPA Science Inventory

    The use of gene expression profiling in both clinical and laboratory settings would be enhanced by better characterization of variation due to individual, environmental, and technical factors. Analysis of microarray data from untreated or vehicle-treated animals within the contro...

  1. A factor analysis of landscape pattern and structure metrics

    Treesearch

    Kurt H. Riitters; R.V. O' Neill; C.T. Hunsaker; James D. Wickham; D.H. Yankee; S.P. Timmins; K.B. Jones; B.L. Jackson

    1995-01-01

    Fifty-five metrics of landscape pattern and structure were calculated for 85 maps of land use and land cover. A multivariate factor analysis was used to identify the common axes (or dimensions) of pattern and structure which were measured by a reduced set of 26 metrics. The first six factors explained about 87% of the variation in the 26 landscape metrics. These...

  2. A non-linear dimension reduction methodology for generating data-driven stochastic input models

    NASA Astrophysics Data System (ADS)

    Ganapathysubramanian, Baskar; Zabaras, Nicholas

    2008-06-01

    Stochastic analysis of random heterogeneous media (polycrystalline materials, porous media, functionally graded materials) provides information of significance only if realistic input models of the topology and property variations are used. This paper proposes a framework to construct such input stochastic models for the topology and thermal diffusivity variations in heterogeneous media using a data-driven strategy. Given a set of microstructure realizations (input samples) generated from given statistical information about the medium topology, the framework constructs a reduced-order stochastic representation of the thermal diffusivity. This problem of constructing a low-dimensional stochastic representation of property variations is analogous to the problem of manifold learning and parametric fitting of hyper-surfaces encountered in image processing and psychology. Denote by M the set of microstructures that satisfy the given experimental statistics. A non-linear dimension reduction strategy is utilized to map M to a low-dimensional region, A. We first show that M is a compact manifold embedded in a high-dimensional input space R^n. An isometric mapping F from M to a low-dimensional, compact, connected set A ⊂ R^d (d ≪ n) is constructed. Given only a finite set of samples of the data, the methodology uses arguments from graph theory and differential geometry to construct the isometric transformation F: M → A. Asymptotic convergence of the representation of M by A is shown. This mapping F serves as an accurate, low-dimensional, data-driven representation of the property variations. The reduced-order model of the material topology and thermal diffusivity variations is subsequently used as an input in the solution of stochastic partial differential equations that describe the evolution of dependent variables. A sparse grid collocation strategy (Smolyak algorithm) is utilized to solve these stochastic equations efficiently. We showcase the methodology by constructing low-dimensional input stochastic models to represent thermal diffusivity in two-phase microstructures. This model is used in analyzing the effect of topological variations of two-phase microstructures on the evolution of temperature in heat conduction processes.
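
    A sketch of nonlinear dimension reduction in this spirit, assuming scikit-learn's Isomap (neighbourhood graph, geodesic distances, classical MDS) and a synthetic "Swiss roll" in place of the microstructure samples; it is not the authors' specific construction of F:

        import numpy as np
        from sklearn.datasets import make_swiss_roll
        from sklearn.manifold import Isomap

        # Synthetic high-dimensional-looking samples lying on a low-dimensional manifold;
        # in the paper's setting each sample would be a discretised microstructure/property field.
        X, t = make_swiss_roll(n_samples=1500, noise=0.05, random_state=0)

        # Graph-based isometric embedding into d = 2 dimensions (d << n).
        embedding = Isomap(n_neighbors=10, n_components=2)
        A = embedding.fit_transform(X)

        # The low-dimensional coordinates can then parameterise a stochastic input model,
        # e.g. by sampling points in A and mapping them back to property fields.
        corr = np.corrcoef(A[:, 0], t)[0, 1]
        print(f"embedding dim: {A.shape[1]}, correlation of first coordinate "
              f"with the roll parameter: {abs(corr):.2f}")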

  3. Relation of physical and chemical characteristics of streams to fish communities in the Red River of the North basin, Minnesota and North Dakota, 1993-95

    USGS Publications Warehouse

    Goldstein, R.M.; Stauffer, J.C.; Larson, P.R.; Lorenz, D.L.

    1996-01-01

    Within the instream habitat data set, measures of habitat volume (channel width and depth) and habitat diversity were most significant in explaining the variability of the fish communities. The amount of nonagricultural land and riparian zone integrity from the terrestrial habitat data set were also useful in explaining fish community composition. Variability of mean monthly discharge and the frequency of high and low discharge events during the three years prior to fish sampling were the most influential of the hydrologic variables.The first two axes of the canonical correspondence analysis accounted for 43.3 percent of the variation in the fish community and 52.5 percent of the variation in the environmental-species relation. Water-quality indicators such as the percent of fine material in suspended sediment, minimum dissolved oxygen concentrations, minimum concentrations of dissolved organic carbon, and the range of concentrations of major ions and nutrients were the variables that were most important in the canonical correspondence analysis of water-quality data with fish. No single environmental variable or data set appeared to be more important than another in explaining variation in the fish community. The environmental factors affecting the fish communities of the Red River of the North are interrelated. For the most part, instream environmental conditions (instream habitat, hydrology, and water chemistry) appear to be more important in explaining variability in fish community composition than factors related to the agricultural nature of the basin.

  4. Instantaneous phase estimation to measure weak velocity variations: application to noise correlation on seismic data at the exploration scale

    NASA Astrophysics Data System (ADS)

    Corciulo, M.; Roux, P.; Campillo, M.; Dubucq, D.

    2010-12-01

    Passive imaging from noise cross-correlation is a well-established analysis at continental and regional scales, whereas its use at the local scale for seismic exploration purposes is still uncertain. The development of passive imaging by cross-correlation analysis is based on the extraction of the Green’s function from seismic noise data. In a field that is completely random in time and space, cross-correlation permits retrieval of the complete Green’s function whatever the complexity of the medium. At the exploration scale and at frequencies above 2 Hz, the noise sources are not ideally distributed around the stations, which strongly affects the extraction of the direct arrivals from the noise cross-correlation process. To overcome this problem, the coda waves extracted from noise correlation can be useful. Coda waves describe long and scattered paths that sample the medium in different ways, such that they become sensitive to weak velocity variations without depending on the noise source distribution. Indeed, scatterers in the medium behave as a set of secondary noise sources that randomize the spatial distribution of noise sources contributing to the coda waves in the correlation process. We developed a new technique to measure weak velocity changes based on the computation of the local phase variations (instantaneous phase variation, or IPV) of the cross-correlated signals. This newly developed technique builds on the doublet and stretching techniques classically used to monitor weak velocity variations from coda waves. We apply IPV to data acquired in North America (Canada) on a 1-km-side square seismic network comprising 397 stations. The data used to study temporal variations are cross-correlated signals computed on 10 minutes of ambient noise in the frequency band 2-5 Hz. As the data set was acquired over five days, about 660 files are processed to perform a complete temporal analysis for each station pair. The IPV permits estimation of the phase shift over the whole signal length without any assumption on the medium velocity. The instantaneous phase is computed using the Hilbert transform of the signal. For each station pair, we measure the phase difference between successive correlation functions calculated for 10 minutes of ambient noise. We then fit the instantaneous phase shift with a first-order polynomial function. The velocity variation estimate corresponds to the slope of this fit. Compared to other techniques, the advantage of IPV is that it is a very fast procedure that efficiently provides velocity-variation measurements on large data sets. Both experimental results and numerical tests on synthetic signals will be presented to assess the reliability of the IPV technique, with comparison to the doublet and stretching methods.
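
    A sketch of the instantaneous-phase measurement described: take the Hilbert transform of a reference and a "current" correlation function, unwrap the phase difference, and read the relative velocity change dv/v from the slope of the phase shift versus lag time. The waveforms below are synthetic and the estimate is only approximate:

        import numpy as np
        from scipy.signal import hilbert

        rng = np.random.default_rng(9)

        # Synthetic "reference" correlation coda: band-limited (2-5 Hz) noise with a decaying envelope.
        fs, T = 100.0, 60.0
        t = np.arange(0, T, 1 / fs)
        freqs = rng.uniform(2.0, 5.0, size=40)
        phases = rng.uniform(0, 2 * np.pi, size=40)
        ref = np.exp(-t / 20.0) * np.sum(np.sin(2 * np.pi * freqs[:, None] * t + phases[:, None]), axis=0)

        # "Current" correlation: the same coda recorded after a small velocity increase dv/v,
        # which compresses the lag-time axis by the factor (1 + dv/v).
        dv_true = 5e-3
        cur = np.interp(t * (1 + dv_true), t, ref, right=0.0)

        # Instantaneous phase variation: phase of the analytic signals, unwrapped difference.
        a_ref, a_cur = hilbert(ref), hilbert(cur)
        dphi = np.unwrap(np.angle(a_cur * np.conj(a_ref)))

        # dv/v follows from the slope of the phase shift vs lag time: dphi ~ 2*pi*f0*(dv/v)*t.
        f0 = np.mean(np.diff(np.unwrap(np.angle(a_ref)))) * fs / (2 * np.pi)   # dominant frequency
        sel = t < 40.0                                                          # usable coda window
        slope = np.polyfit(t[sel], dphi[sel], 1)[0]
        print(f"estimated dv/v = {slope / (2 * np.pi * f0):.4f} (true {dv_true})")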

  5. ADM Analysis of gravity models within the framework of bimetric variational formalism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Golovnev, Alexey; Karčiauskas, Mindaugas; Nyrhinen, Hannu J., E-mail: agolovnev@yandex.ru, E-mail: mindaugas.karciauskas@helsinki.fi, E-mail: hannu.nyrhinen@helsinki.fi

    2015-05-01

    Bimetric variational formalism was recently employed to construct novel bimetric gravity models. In these models an affine connection is generated by an additional tensor field which is independent of the physical metric. In this work we demonstrate how the ADM decomposition can be applied to study such models and provide some technical intermediate details. Using ADM decomposition we are able to prove that a linear model is unstable as has previously been indicated by perturbative analysis. Moreover, we show that it is also very difficult if not impossible to construct a non-linear model which is ghost-free within the framework of bimetric variational formalism. However, we demonstrate that viable models are possible along similar lines of thought. To this end, we consider a setup in which the affine connection is a variation of the Levi-Civita one. As a proof of principle we construct a gravity model with a massless scalar field obtained this way.

  6. Representation of vegetation by continental data sets derived from NOAA-AVHRR data

    NASA Technical Reports Server (NTRS)

    Justice, C. O.; Townshend, J. R. G.; Kalb, V. L.

    1991-01-01

    Images of the normalized difference vegetation index (NDVI) are examined with specific attention given to the effect of spatial scales on the understanding of surface phenomena. A scale variance analysis is conducted on NDVI annual and seasonal images of Africa taken from 1987 NOAA-AVHRR data at spatial scales ranging from 8-512 km. The scales at which spatial variation takes place are determined and the relative magnitude of the variations are considered. Substantial differences are demonstrated, notably an increase in spatial variation with coarsening spatial resolution. Different responses in scale variance as a function of spatial resolution are noted in an analysis of maximum value composites for February and September; the difference is most marked in areas with very seasonal vegetation. The spatial variation at different scales is attributed to different factors, and methods involving the averaging of areas of transition and surface heterogeneity can oversimplify surface conditions. The spatial characteristics and the temporal variability of areas should be considered to accurately apply satellite data to global models.
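
    A sketch of a scale-variance computation of this general kind: aggregate an NDVI-like field to successively coarser resolutions by block averaging and track how much variance is lost at each step; the field below is synthetic:

        import numpy as np

        rng = np.random.default_rng(10)

        # Synthetic NDVI-like field at the finest resolution (e.g. 8 km): a broad regional
        # gradient plus fine-scale heterogeneity, on a 512 x 512 grid.
        n = 512
        yy, xx = np.mgrid[0:n, 0:n] / n
        field = 0.4 * np.sin(2 * np.pi * yy) + 0.1 * rng.normal(size=(n, n))

        def block_average(img, k):
            """Aggregate an image by non-overlapping k x k block averaging."""
            m = img.shape[0] // k * k
            return img[:m, :m].reshape(m // k, k, m // k, k).mean(axis=(1, 3))

        # Variance remaining at each successively coarser resolution; the drop between
        # two levels indicates how much spatial variation lives at that scale.
        prev_var = field.var()
        for k in [2, 4, 8, 16, 32, 64]:
            coarse_var = block_average(field, k).var()
            print(f"aggregation x{k:<3d} variance retained: {coarse_var:.4f} "
                  f"(lost at this step: {prev_var - coarse_var:.4f})")
            prev_var = coarse_var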

  7. Analysis of the optimal laminated target made up of discrete set of materials

    NASA Technical Reports Server (NTRS)

    Aptukov, Valery N.; Belousov, Valentin L.

    1991-01-01

    A new class of problems was analyzed to estimate an optimal structure of laminated targets fabricated from the specified set of homogeneous materials. An approximate description of the perforation process is based on the model of radial hole extension. The problem is solved by using the needle-type variation technique. The desired optimization conditions and quantitative/qualitative estimations of optimal targets were obtained and are discussed using specific examples.

  8. Annual variations of water vapor in the stratosphere and upper troposphere observed by the Stratospheric Aerosol and Gas Experiment II

    NASA Technical Reports Server (NTRS)

    Mccormick, M. P.; Chiou, E. W.; Mcmaster, L. R.; Chu, W. P.; Larsen, J. C.; Rind, D.; Oltmans, S.

    1993-01-01

    Data collected by the Stratospheric Aerosol and Gas Experiment II are presented, showing annual variations of water vapor in the stratosphere and the upper troposphere. The altitude-time cross sections of water vapor were found to exhibit annually repeatable patterns in both hemispheres, with a yearly minimum in water vapor appearing in both hemispheres at about the same time, supporting the concept of a common source for stratospheric dry air. A linear regression analysis was applied to the three-year data set to elucidate global values and variations of water vapor ratio.

  9. Variation in sulfur dioxide emissions related to earth tides, Halemaumau crater, Kilauea volcano, Hawaii

    NASA Technical Reports Server (NTRS)

    Connor, Charles B.; Stoiber, Richard E.; Malinconico, Lawrence L., Jr.

    1988-01-01

    Variation in SO2 emissions from Halemaumau crater, Kilauea volcano, Hawaii is analyzed using a set of techniques known as exploratory data analysis. SO2 flux was monitored using a correlation spectrometer. A total of 302 measurements were made on 73 days over a 90-day period. The mean flux was 171 t/d with a standard deviation of 52 t/d. A significant increase in flux occurs during increased seismic activity beneath the caldera. SO2 flux prior to this change varies in a systematic way and may be related to variation in the tidal modulation envelope.

  10. General error analysis in the relationship between free thyroxine and thyrotropin and its clinical relevance.

    PubMed

    Goede, Simon L; Leow, Melvin Khee-Shing

    2013-01-01

    This treatise investigates error sources in measurements applicable to the hypothalamus-pituitary-thyroid (HPT) system of analysis for homeostatic set point computation. The hypothalamus-pituitary transfer characteristic (HP curve) describes the relationship between plasma free thyroxine [FT4] and thyrotropin [TSH]. We define the origin, types, causes, and effects of errors that are commonly encountered in TFT measurements and examine how we can interpret these to construct a reliable HP function for set point establishment. The error sources in the clinical measurement procedures are identified and analyzed in relation to the constructed HP model. The main sources of measurement and interpretation uncertainties are (1) diurnal variations in [TSH], (2) TFT measurement variations influenced by timing of thyroid medications, (3) error sensitivity in ranges of [TSH] and [FT4] (laboratory assay dependent), (4) rounding/truncation of decimals in [FT4] which in turn amplify curve fitting errors in the [TSH] domain in the lower [FT4] range, (5) memory effects (rate-independent hysteresis effect). When the main uncertainties in thyroid function tests (TFT) are identified and analyzed, we can find the most acceptable model space with which we can construct the best HP function and the related set point area.

  11. Sources of variation in baseline gene expression levels from toxicogenomics study control animals

    EPA Science Inventory

    The use of gene expression profiling in both clinical and laboratory settings would be enhanced by better characterization of variance due to individual, environmental, and technical factors. Meta-analysis of microarray data from untreated or vehicle-treated animals within the con...

  12. Relativistic effects of spacecraft with circumnavigating observer

    NASA Astrophysics Data System (ADS)

    Shanklin, Nathaniel; West, Joseph

    A variation of the recently introduced Trolley Paradox, itself a variation of the Ehrenfest Paradox, is presented. In the Trolley Paradox, a ``stationary'' set of observers tracking a wheel rolling with a constant velocity finds that the wheel travels further than its rest-length circumference during one revolution of the wheel, despite the fact that the Lorentz-contracted circumference is less than its rest value. In the variation presented, a rectangular spacecraft with onboard observers moves with constant velocity and is circumnavigated by several small ``sloops'' forming teams of inertial observers. This whole procession moves relative to a set of ``stationary'' Earth observers. Two cases are presented, one in which the sloops are evenly spaced according to the spacecraft observers, and one in which the sloops are evenly spaced according to the Earth observers. These two cases, combined with the rectangular geometry and an emphasis on what is seen by, and what is measured by, each set of observers, are very helpful in sorting out the apparent contradictions. To aid in the visualizations, stationary representations in Excel along with animations in Visual Python and Unity are presented. The analysis presented is suitable for undergraduate physics majors.

  13. Synergies in the space of control variables within the equilibrium-point hypothesis.

    PubMed

    Ambike, S; Mattos, D; Zatsiorsky, V M; Latash, M L

    2016-02-19

    We use an approach rooted in the recent theory of synergies to analyze possible co-variation between two hypothetical control variables involved in finger force production based on the equilibrium-point (EP) hypothesis. These control variables are the referent coordinate (R) and apparent stiffness (C) of the finger. We tested a hypothesis that inter-trial co-variation in the {R; C} space during repeated, accurate force production trials stabilizes the fingertip force. This was expected to correspond to a relatively low amount of inter-trial variability affecting force and a high amount of variability keeping the force unchanged. We used the "inverse piano" apparatus to apply small and smooth positional perturbations to fingers during force production tasks. Across trials, R and C showed strong co-variation with the data points lying close to a hyperbolic curve. Hyperbolic regressions accounted for over 99% of the variance in the {R; C} space. Another analysis was conducted by randomizing the original {R; C} data sets and creating surrogate data sets that were then used to compute predicted force values. The surrogate sets always showed much higher force variance compared to the actual data, thus reinforcing the conclusion that finger force control was organized in the {R; C} space, as predicted by the EP hypothesis, and involved co-variation in that space stabilizing total force. Copyright © 2015 IBRO. Published by Elsevier Ltd. All rights reserved.
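    The surrogate-data test described above can be illustrated with a short numerical sketch. The force model below (F = C·(X − R), with X a fixed fingertip coordinate) and all numbers are simplifying assumptions chosen only to show why shuffling R across trials inflates the force variance; this is not the study's data or analysis code.

        # Surrogate-data sketch under an assumed simplified elastic force model.
        import numpy as np

        rng = np.random.default_rng(1)
        X = 0.0                                   # fingertip coordinate (arbitrary units)
        F_target = 10.0
        C = rng.uniform(2.0, 8.0, size=30)        # apparent stiffness across trials
        R = X - F_target / C                      # referent coordinates co-varying with C
        R += rng.normal(0, 0.02, size=R.shape)    # small variability along the hyperbola

        force_actual = C * (X - R)
        force_surrogate = C * (X - rng.permutation(R))   # shuffle R across trials

        print(force_actual.var(), force_surrogate.var())  # surrogate variance is much larger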

  14. Synergies in the space of control variables within the equilibrium-point hypothesis

    PubMed Central

    Ambike, Satyajit; Mattos, Daniela; Zatsiorsky, Vladimir M.; Latash, Mark L.

    2015-01-01

    We use an approach rooted in the recent theory of synergies to analyze possible co-variation between two hypothetical control variables involved in finger force production based on the equilibrium-point hypothesis. These control variables are the referent coordinate (R) and apparent stiffness (C) of the finger. We tested a hypothesis that inter-trial co-variation in the {R; C} space during repeated, accurate force production trials stabilizes the fingertip force. This was expected to correspond to a relatively low amount of inter-trial variability affecting force and a high amount of variability keeping the force unchanged. We used the “inverse piano” apparatus to apply small and smooth positional perturbations to fingers during force production tasks. Across trials, R and C showed strong co-variation with the data points lying close to a hyperbolic curve. Hyperbolic regressions accounted for over 99% of the variance in the {R; C} space. Another analysis was conducted by randomizing the original {R; C} data sets and creating surrogate data sets that were then used to compute predicted force values. The surrogate sets always showed much higher force variance compared to the actual data, thus reinforcing the conclusion that finger force control was organized in the {R; C} space, as predicted by the equilibrium-point hypothesis, and involved co-variation in that space stabilizing total force. PMID:26701299

  15. Variation in Care of Inflammatory Bowel Diseases Patients in Crohn's and Colitis Foundation of America Partners: Role of Gastroenterologist Practice Setting in Disease Outcomes and Quality Process Measures.

    PubMed

    Weaver, Kimberly N; Kappelman, Michael D; Sandler, Robert S; Martin, Christopher F; Chen, Wenli; Anton, Kristen; Long, Millie D

    2016-11-01

    As variation in care has previously been linked to quality, we aimed to describe variations in inflammatory bowel diseases care by gastroenterology (GI) practice setting. We performed a cross-sectional study within the Crohn's and Colitis Foundation of America Partners and used bivariate analyses to compare patient characteristics by GI practice setting (GI-academic [GIA], GI-private, or GI-other). Regression models were used to describe the effects of provider type on steroid use, disease activity, and quality of life. The study included 12,083 patients with inflammatory bowel diseases (7576 with Crohn's disease [CD] and 4507 with ulcerative colitis [UC]). Nearly 95% reported visiting a GI provider annually. CD patients seen by GIA were younger, better educated, used fewer 5-aminosalicylate agents, and had higher biologic and immunomodulator use (P < 0.001 for all). On multivariate analysis of CD patients, GIA used steroids less than GI-private (odds ratio, 0.84; 95% confidence interval, 0.67-1.06) or GI-other (odds ratio, 0.66; 95% confidence interval, 0.49-0.89). GIA patients were more likely to be in remission, to have received the flu vaccine, and to have better quality of life. UC patients seen by GIA were younger and had more hospitalizations and previous surgery (P < 0.001 for all). No differences existed for steroid use, remission, flu vaccine, or quality of life for UC care on bivariate or multivariate analyses. Significant variations in care patterns and quality measures exist for CD across GI provider types, without similar variation in UC care. Interventions to reduce variations in care could improve the quality of care in CD.

  16. Dynamical basis sets for algebraic variational calculations in quantum-mechanical scattering theory

    NASA Technical Reports Server (NTRS)

    Sun, Yan; Kouri, Donald J.; Truhlar, Donald G.; Schwenke, David W.

    1990-01-01

    New basis sets are proposed for linear algebraic variational calculations of transition amplitudes in quantum-mechanical scattering problems. These basis sets are hybrids of those that yield the Kohn variational principle (KVP) and those that yield the generalized Newton variational principle (GNVP) when substituted in Schlessinger's stationary expression for the T operator. Trial calculations show that efficiencies almost as great as that of the GNVP and much greater than the KVP can be obtained, even for basis sets with the majority of the members independent of energy.

  17. Eigenvalue-eigenvector decomposition (EED) analysis of dissimilarity and covariance matrix obtained from total synchronous fluorescence spectral (TSFS) data sets of herbal preparations: Optimizing the classification approach.

    PubMed

    Tarai, Madhumita; Kumar, Keshav; Divya, O; Bairi, Partha; Mishra, Kishor Kumar; Mishra, Ashok Kumar

    2017-09-05

    The present work compares dissimilarity- and covariance-based unsupervised chemometric classification approaches using total synchronous fluorescence spectroscopy data sets acquired for cumin and non-cumin based herbal preparations. The conventional decomposition method involves eigenvalue-eigenvector analysis of the covariance of the data set and finds the factors that explain the overall major sources of variation present in the data set. The conventional approach does this irrespective of the fact that the samples belong to intrinsically different groups and hence leads to poor class separation. The present work shows that classification of such samples can be optimized by performing the eigenvalue-eigenvector decomposition on the pair-wise dissimilarity matrix. Copyright © 2017 Elsevier B.V. All rights reserved.
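    A minimal numerical sketch of this comparison is given below: eigenvalue-eigenvector decomposition (EED) applied to the covariance matrix of mean-centred data versus EED applied directly to a pair-wise dissimilarity matrix. The dissimilarity metric (Euclidean distance between synthetic spectra) and the sample sizes are assumptions for illustration, not the paper's settings.

        # EED of covariance versus EED of a pair-wise dissimilarity matrix.
        import numpy as np

        def eed_covariance_scores(X, n_components=2):
            """Classical PCA-style scores: EED of the covariance of mean-centred data."""
            Xc = X - X.mean(axis=0)
            vals, vecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
            order = np.argsort(vals)[::-1]
            return Xc @ vecs[:, order[:n_components]]

        def eed_dissimilarity_scores(X, n_components=2):
            """EED applied directly to the symmetric pair-wise dissimilarity matrix."""
            D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
            vals, vecs = np.linalg.eigh(D)
            order = np.argsort(np.abs(vals))[::-1]
            return vecs[:, order[:n_components]] * np.sqrt(np.abs(vals[order[:n_components]]))

        rng = np.random.default_rng(2)
        spectra = np.vstack([rng.normal(0, 1, (20, 50)), rng.normal(0.5, 1, (20, 50))])
        print(eed_covariance_scores(spectra).shape, eed_dissimilarity_scores(spectra).shape)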

  18. On-line prediction of yield grade, longissimus muscle area, preliminary yield grade, adjusted preliminary yield grade, and marbling score using the MARC beef carcass image analysis system.

    PubMed

    Shackelford, S D; Wheeler, T L; Koohmaraie, M

    2003-01-01

    The present experiment was conducted to evaluate the ability of the U.S. Meat Animal Research Center's beef carcass image analysis system to predict calculated yield grade, longissimus muscle area, preliminary yield grade, adjusted preliminary yield grade, and marbling score under commercial beef processing conditions. In two commercial beef-processing facilities, image analysis was conducted on 800 carcasses on the beef-grading chain immediately after the conventional USDA beef quality and yield grades were applied. Carcasses were blocked by plant and observed calculated yield grade. The carcasses were then separated, with 400 carcasses assigned to a calibration data set that was used to develop regression equations, and the remaining 400 carcasses assigned to a prediction data set used to validate the regression equations. Prediction equations, which included image analysis variables and hot carcass weight, accounted for 90, 88, 90, 88, and 76% of the variation in calculated yield grade, longissimus muscle area, preliminary yield grade, adjusted preliminary yield grade, and marbling score, respectively, in the prediction data set. In comparison, the official USDA yield grade as applied by online graders accounted for 73% of the variation in calculated yield grade. The technology described herein could be used by the beef industry to more accurately determine beef yield grades; however, this system does not provide an accurate enough prediction of marbling score to be used without USDA grader interaction for USDA quality grading.
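    The calibration/validation workflow described above can be mimicked in a few lines: fit a linear prediction equation on one half of the carcasses and report the fraction of variation accounted for on the other half. The feature names, effect sizes and the 400/400 split below are placeholders, not the actual MARC variables or data.

        # Illustrative calibration/validation split with a linear prediction equation.
        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.metrics import r2_score

        rng = np.random.default_rng(3)
        n = 800
        image_features = rng.normal(size=(n, 5))             # stand-in image analysis variables
        hot_carcass_weight = rng.normal(350, 40, size=(n, 1))
        X = np.hstack([image_features, hot_carcass_weight])
        yield_grade = X @ rng.normal(size=6) + rng.normal(0, 0.5, size=n)

        calib, pred = slice(0, 400), slice(400, 800)          # blocked 400/400 split
        model = LinearRegression().fit(X[calib], yield_grade[calib])
        print(r2_score(yield_grade[pred], model.predict(X[pred])))  # variation accounted for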

  19. Normalized Polarization Ratios for the Analysis of Cell Polarity

    PubMed Central

    Shimoni, Raz; Pham, Kim; Yassin, Mohammed; Ludford-Menting, Mandy J.; Gu, Min; Russell, Sarah M.

    2014-01-01

    The quantification and analysis of molecular localization in living cells is increasingly important for elucidating biological pathways, and new methods are rapidly emerging. The quantification of cell polarity has generated much interest recently, and ratiometric analysis of fluorescence microscopy images provides one means to quantify cell polarity. However, detection of fluorescence, and the ratiometric measurement, is likely to be sensitive to acquisition settings and image processing parameters. Using imaging of EGFP-expressing cells and computer simulations of variations in fluorescence ratios, we characterized the dependence of ratiometric measurements on processing parameters. This analysis showed that image settings alter polarization measurements; and that clustered localization is more susceptible to artifacts than homogeneous localization. To correct for such inconsistencies, we developed and validated a method for choosing the most appropriate analysis settings, and for incorporating internal controls to ensure fidelity of polarity measurements. This approach is applicable to testing polarity in all cells where the axis of polarity is known. PMID:24963926

  20. Space-based surface wind vectors to aid understanding of air-sea interactions

    NASA Technical Reports Server (NTRS)

    Atlas, R.; Bloom, S. C.; Hoffman, R. N.; Ardizzone, J. V.; Brin, G.

    1991-01-01

    A novel and unique ocean-surface wind data set has been derived by combining the Defense Meteorological Satellite Program Special Sensor Microwave Imager data with additional conventional data. The variational analysis used generates a gridded surface wind analysis that minimizes an objective function measuring the misfit of the analysis to the background, the data, and certain a priori constraints. In the present case, the European Center for Medium-Range Weather Forecasts surface-wind analysis is used as the background.
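    The objective function minimized in this kind of variational analysis typically takes the following general form (a standard formulation given here as background; the exact terms of this particular analysis are not spelled out in the abstract):

        J(x) = (x - x_b)^T B^{-1} (x - x_b) + (y - H x)^T R^{-1} (y - H x) + J_c(x)

    where x is the analyzed surface wind field, x_b the ECMWF background field, y the SSM/I and conventional observations, H the observation operator, B and R the background- and observation-error covariance matrices, and J_c the a priori constraint term.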

  1. Status and Plans for the WCRP/GEWEX Global Precipitation Climatology Project (GPCP)

    NASA Technical Reports Server (NTRS)

    Adler, Robert F.

    2006-01-01

    Status and plans for GPCP are presented along with scientific findings from the current data set. Global and large regional rainfall variations and possible long-term changes are examined using the 26-year (1979-2004) monthly dataset from the Global Precipitation Climatology Project (GPCP). One emphasis is to discriminate among the variations due to ENSO, volcanic events and possible long-term changes. Although the global change of precipitation in the data set is near zero, the data set does indicate an upward trend (0.13 mm/day/25yr) and a downward trend (-0.06 mm/day/25yr) over tropical oceans and lands (25S-25N), respectively. This corresponds to a 4% increase (ocean) and 2% decrease (land) during this time period. Simple techniques are derived to attempt to eliminate variations due to ENSO and major volcanic eruptions in the Tropics. Using only annual values, two "volcano years" are determined by examining ocean-land coupled variations in precipitation related to ENSO and other phenomena. The outlier years coincide with the Pinatubo and El Chichón eruptions. The ENSO signal is reduced by deriving mean ocean and land values for El Niño, La Niña and neutral conditions based on Niño 3.4 SST and normalizing the annual ocean and land precipitation to the neutral set of cases. The impact of the two major volcanic eruptions over the past 25 years is estimated to be about a 5% reduction in tropical rainfall. The modified data set (with ENSO and volcano effect at least partially removed) retains the same approximate linear change slopes over the data set period, but with reduced variance leading to significance tests with results in the 90-95% range. Intercomparisons between the GPCP, SSM/I (1988-2004), and TRMM (1998-2004) satellite rainfall products and alternate gauge analyses over land are made to attempt to increase or decrease confidence in the changes seen in the GPCP analysis.

  2. Integrating single-cell transcriptomic data across different conditions, technologies, and species.

    PubMed

    Butler, Andrew; Hoffman, Paul; Smibert, Peter; Papalexi, Efthymia; Satija, Rahul

    2018-06-01

    Computational single-cell RNA-seq (scRNA-seq) methods have been successfully applied to experiments representing a single condition, technology, or species to discover and define cellular phenotypes. However, identifying subpopulations of cells that are present across multiple data sets remains challenging. Here, we introduce an analytical strategy for integrating scRNA-seq data sets based on common sources of variation, enabling the identification of shared populations across data sets and downstream comparative analysis. We apply this approach, implemented in our R toolkit Seurat (http://satijalab.org/seurat/), to align scRNA-seq data sets of peripheral blood mononuclear cells under resting and stimulated conditions, hematopoietic progenitors sequenced using two profiling technologies, and pancreatic cell 'atlases' generated from human and mouse islets. In each case, we learn distinct or transitional cell states jointly across data sets, while boosting statistical power through integrated analysis. Our approach facilitates general comparisons of scRNA-seq data sets, potentially deepening our understanding of how distinct cell states respond to perturbation, disease, and evolution.
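    One common way to implement this kind of integration on shared sources of variation is canonical correlation analysis with genes as the common dimension; the sketch below illustrates that general idea on synthetic data and is not the Seurat implementation or its API.

        # Conceptual sketch: shared low-dimensional structure across two cell sets via CCA.
        import numpy as np
        from sklearn.cross_decomposition import CCA

        rng = np.random.default_rng(4)
        n_genes = 200
        programs = rng.normal(size=(n_genes, 2))                   # shared gene programs
        cells_a = programs @ rng.normal(size=(2, 80)) + rng.normal(0, 0.5, (n_genes, 80))
        cells_b = programs @ rng.normal(size=(2, 60)) + rng.normal(0, 0.5, (n_genes, 60))

        cca = CCA(n_components=2, max_iter=2000)
        cca.fit(cells_a, cells_b)                                  # samples = genes, features = cells
        emb_a, emb_b = cca.x_weights_, cca.y_weights_              # per-cell embeddings in the shared space
        print(emb_a.shape, emb_b.shape)                            # (80, 2) (60, 2)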

  3. Development of global model for atmospheric backscatter at CO2 wavelengths

    NASA Technical Reports Server (NTRS)

    Kent, G. S.; Wang, P. H.; Farrukh, U.; Deepak, A.; Patterson, E. M.

    1985-01-01

    This work undertook to improve understanding of the variation of aerosol backscattering at 10.6 microns within the free troposphere and to develop a model describing it. The analysis combines theoretical modeling with the results contained within three independent data sets. The data sets are obtained by the SAGE I/SAM II satellite experiments, the GAMETAG flight series and by direct backscatter measurements. The theoretical work includes use of a bimodal, two component aerosol model, and the study of the microphysical and associated optical changes occurring within an aerosol plume. A consistent picture is obtained, which describes the variation of the aerosol backscattering function in the free troposphere with altitude, latitude, and season. Most data are available, and the greatest consistency is found, in the Northern Hemisphere.

  4. Discrimination of Bacillus anthracis from closely related microorganisms by analysis of 16S and 23S rRNA with oligonucleotide microchips

    DOEpatents

    Bavykin, Sergei G.; Mirzabekova, legal representative, Natalia V.; Mirzabekov, deceased, Andrei D.

    2007-12-04

    The present invention relates to methods and compositions for using nucleotide sequence variations of 16S and 23S rRNA within the B. cereus group to discriminate the highly infectious bacterium B. anthracis from closely related microorganisms. Sequence variations in the 16S and 23S rRNA of the B. cereus subgroup including B. anthracis are utilized to construct an array that can detect these sequence variations through selective hybridizations and discriminate members of the B. cereus group, which includes B. anthracis. Discrimination of single-base differences in rRNA was achieved with a microchip during analysis of B. cereus group isolates from both single and mixed samples, as well as identification of polymorphic sites. Successful use of a microchip to determine the appropriate subgroup classification, using eight reference microorganisms from the B. cereus group as a study set, was demonstrated.

  5. Analysis of regolith electromagnetic scattering as constrained by high resolution Earth-based measurements of the lunar microwave emission

    NASA Technical Reports Server (NTRS)

    Keihm, S. J.

    1983-01-01

    When high resolution measurements of the phase variation of the lunar disk center brightness temperature revealed that in situ regolith electrical losses were larger than those measured on returned samples by a factor of 1.5 to 2.0 at centimeter wavelengths, the need for a refinement of the regolith model to include realistic treatment of scattering effects was identified. Two distinct scattering regimes are considered: vertical variations in dielectric constant and volume scattering due to subsurface rock fragments. Models of lunar regolith energy transport processes are now at a state where a maximum scientific return could be realized from a lunar orbiter microwave mapping experiment. A detailed analysis, including the effects of scattering, produced a set of nominal brightness temperature spectra for lunar equatorial regions, which can be used as a calibration reference for mapping variations in mineralogy and heat flow.

  6. Characterization of Lake Michigan coastal lakes using zooplankton assemblages

    USGS Publications Warehouse

    Whitman, Richard L.; Nevers, Meredith B.; Goodrich, Maria L.; Murphy, Paul C.; Davis, Bruce M.

    2004-01-01

    Zooplankton assemblages and water quality were examined bi-weekly from 17 April to 19 October 1998 in 11 northeastern Lake Michigan coastal lakes of similar origin but varied in trophic status and limnological condition. All lakes were within or adjacent to Sleeping Bear Dunes National Lakeshore, Michigan. Zooplankton (principally microcrustaceans and rotifers) from triplicate Wisconsin net (80 µm) vertical tows taken at each lake's deepest location were analyzed. Oxygen-temperature-pH-specific conductivity profiles and surface water quality were concurrently measured. Bray-Curtis similarity analysis showed small variations among sample replicates but large temporal differences. The potential use of zooplankton communities for environmental lake comparisons was evaluated by means of BIOENV (Primer 5.1) and principal component analyses. Zooplankton analyzed at the lowest identified taxonomic level yielded greatest sensitivity to limnological variation. Taxonomic and ecological aggregations of zooplankton data performed comparably, but less well than the finest taxonomic analysis. Secchi depth, chlorophyll a, and sulfate concentrations combined to give the best correlation with patterns of variation in the zooplankton data set. Principal component analysis of these variables revealed trophic status as the most influential major limnological gradient among the study lakes. Overall, zooplankton abundance was an excellent indicator of variation in trophic status.

  7. Temporal patterns and source apportionment of nitrate-nitrogen leaching in a paddy field at Kelantan, Malaysia.

    PubMed

    Hussain, Hazilia; Yusoff, Mohd Kamil; Ramli, Mohd Firuz; Abd Latif, Puziah; Juahir, Hafizan; Zawawi, Mohamed Azwan Mohammed

    2013-11-15

    Nitrate-nitrogen leaching from agricultural areas is a major cause of groundwater pollution. Groundwater polluted with high levels of nitrate is hazardous and causes adverse health effects. Human consumption of water with elevated levels of NO3-N has been linked to the infant disorder methemoglobinemia and also to non-Hodgkin's lymphoma in adults. This research aims to study the temporal patterns and source apportionment of nitrate-nitrogen leaching in a paddy soil at Ladang Merdeka Ismail Mulong in Kelantan, Malaysia. The complex data matrix (128 x 16) of nitrate-nitrogen parameters was subjected to multivariate analysis, mainly Principal Component Analysis (PCA) and Discriminant Analysis (DA). PCA extracted four principal components from this data set, which explained 86.4% of the total variance. The most important contributors were soil physical properties, confirmed using Alyuda Forecaster software (R2 = 0.98). Discriminant analysis was used to evaluate the temporal variation of soil nitrate-nitrogen in the leaching process. Discriminant analysis gave four parameters (hydraulic head, evapotranspiration, rainfall and temperature) contributing more than 98% correct assignments in the temporal analysis. DA allowed a reduction in the dimensionality of the large data set, identifying the four operating parameters that are most efficient and economical to monitor for temporal variations. This knowledge is important for protecting the precious groundwater from contamination with nitrate.
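    The PCA-plus-discriminant-analysis workflow can be sketched on a synthetic matrix of the same 128 x 16 shape; the grouping variable and injected signal below are illustrative assumptions, not the field data.

        # PCA for variance explained, then discriminant analysis for temporal assignment.
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        rng = np.random.default_rng(5)
        X = rng.normal(size=(128, 16))                 # 128 observations x 16 parameters
        season = np.repeat([0, 1, 2, 3], 32)           # temporal grouping (e.g. four periods)
        X[season == 1, :4] += 1.5                      # inject a seasonal signal in 4 parameters

        pca = PCA(n_components=4).fit(X)
        print(pca.explained_variance_ratio_.sum())     # fraction of total variance explained

        lda = LinearDiscriminantAnalysis().fit(X, season)
        print(lda.score(X, season))                    # correct assignment rate (resubstitution)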

  8. Learn Locally, Act Globally: Learning Language from Variation Set Cues

    PubMed Central

    Onnis, Luca; Waterfall, Heidi R.; Edelman, Shimon

    2011-01-01

    Variation set structure — partial overlap of successive utterances in child-directed speech — has been shown to correlate with progress in children’s acquisition of syntax. We demonstrate the benefits of variation set structure directly: in miniature artificial languages, arranging a certain proportion of utterances in a training corpus in variation sets facilitated word and phrase constituent learning in adults. Our findings have implications for understanding the mechanisms of L1 acquisition by children, and for the development of more efficient algorithms for automatic language acquisition, as well as better methods for L2 instruction. PMID:19019350

  9. Solar wind variations in the 60-100 year period range: A review

    NASA Technical Reports Server (NTRS)

    Feynman, J.

    1983-01-01

    The evidence for and against the reality of a solar wind variation in the period range of 60-100 years is reexamined. Six data sets are reviewed: sunspot numbers, geomagnetic variations, two auroral data sets and two ¹⁴C data sets. These data are proxies for several different aspects of the solar wind and the presence or absence of 60-100 year cyclic behavior in a particular data set does not necessarily imply the presence or absence of this variation in other sets. It was concluded that two different analyses of proxy data for a particular characteristic of the heliospheric solar wind yielded conflicting results. This conflict can be resolved only by future research. It is also definitely confirmed that proxy data for the solar wind in the ecliptic at 1 A.U. undergo a periodic variation with a period of approximately 87 years. The average amplitude and phase of this variation as seen in eleven cycles of proxy data are presented.

  10. Methods of determining complete sensor requirements for autonomous mobility

    NASA Technical Reports Server (NTRS)

    Curtis, Steven A. (Inventor)

    2012-01-01

    A method of determining complete sensor requirements for autonomous mobility of an autonomous system includes computing a time variation of each behavior of a set of behaviors of the autonomous system, determining mobility sensitivity to each behavior of the autonomous system, and computing a change in mobility based upon the mobility sensitivity to each behavior and the time variation of each behavior. The method further includes determining the complete sensor requirements of the autonomous system through analysis of the relative magnitude of the change in mobility, the mobility sensitivity to each behavior, and the time variation of each behavior, wherein the relative magnitude of the change in mobility, the mobility sensitivity to each behavior, and the time variation of each behavior are characteristic of the stability of the autonomous system.
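    In other words, the change in mobility is obtained by combining the sensitivity of mobility to each behavior with that behavior's time variation; to first order this is the standard expansion

        ΔM ≈ Σ_i (∂M/∂b_i) (db_i/dt) Δt

    where M denotes mobility and b_i the i-th behavior of the set. This formula is an interpretation of the computation described in the claim, not a quotation of the patent.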

  11. Quasi-Global Precipitation as Depicted in the GPCPV2.2 and TMPA V7

    NASA Technical Reports Server (NTRS)

    Huffman, George J.; Bolvin, David T.; Nelkin, Eric J.; Adler, Robert F.

    2012-01-01

    After a lengthy incubation period, the year 2012 saw the release of the Global Precipitation Climatology Project (GPCP) Version 2.2 monthly dataset and the TRMM Multi-satellite Precipitation Analysis (TMPA) Version 7. One primary feature of the new data sets is that DMSP SSMIS data are now used, which entailed a great deal of development work to overcome calibration issues. In addition, the GPCP V2.2 included a slight upgrade to the gauge analysis input datasets, particularly over China, while the TMPA V7 saw more-substantial upgrades: 1) The gauge analysis record in Version 6 used the (older) GPCP monitoring product through April 2005 and the CAMS analysis thereafter, which introduced an inhomogeneity. Version 7 uses the Version 6 GPCC Full analysis, switching to the Version 4 Monitoring analysis thereafter. 2) The inhomogeneously processed AMSU record in Version 6 is uniformly processed in Version 7. 3) The TMI and SSMI input data have been upgraded to the GPROF2010 algorithm. The global-change, water cycle, and other user communities are acutely interested in how these data sets compare, as consistency between differently processed, long-term, quasi-global data sets provides some assurance that the statistics computed from them provide a good representation of the atmosphere's behavior. Within resolution differences, the two data sets agree well over land as the gauge data (which tend to dominate the land results) are the same in both. Over ocean the results differ more because the satellite products used for calibration are based on very different algorithms and the dominant input data sets are different. The time series of tropical (30 N-S) ocean average precipitation shows that the TMPA V7 follows the TMI-PR Combined Product calibrator, although running approximately 5% higher on average. The GPCP and TMPA time series are fairly consistent, although the GPCP runs approximately 10% lower than the TMPA, and has a somewhat larger interannual variation. As well, the GPCP and TMPA interannual variations have an apparent phase shift, with GPCP running a few months later. Additional diagnostics will include mean maps and selected scatter plots.

  12. Lateral variations in geologic structure and tectonic setting from remote sensing data

    NASA Astrophysics Data System (ADS)

    Alexander, S. S.

    1983-05-01

    The principal objective of this study was: (1) to assess the usefulness of remote sensing digital imagery, principally LANDSAT multispectral scanning (MSS) data, for inferring lateral variations in geologic structure and tectonic setting; and (2) to determine the extent to which these inferred variations correlate with observed variations in seismic excitation from underground nuclear explosion test sites in the Soviet Union. Soviet, French and U.S. test sites have been investigated to compare their geologic and tectonic responses as seen by LANDSAT. The characteristics of 'granite' intrusive bodies exposed at Semipalatinsk (Degelen), North Africa (Hoggar), NTS (Climax stock), and an analog site in Maine (Mt. Katahdin), have been studied in detail. The tectonic stress field inferred from the tectonic release portion of seismic signatures of explosions in these three areas is compared with local and regional fracture patterns discernable from imagery. The usefulness of satellite synthetic aperture radar (SAR) to determine geologic conditions and delineate fault (fracture) patterns is demonstrated by the analysis of SEASAT data for an area in the eastern United States. Algorithms to enhance structural boundaries and to use textures to identify rock types were developed and applied to several test sites.

  13. Metabolite profiling and quantitative genetics of natural variation for flavonoids in Arabidopsis

    PubMed Central

    Routaboul, Jean-Marc; Dubos, Christian; Beck, Gilles; Marquis, Catherine; Bidzinski, Przemyslaw; Loudet, Olivier; Lepiniec, Loïc

    2012-01-01

    Little is known about the range and the genetic bases of naturally occurring variation for flavonoids. Using Arabidopsis thaliana seed as a model, the flavonoid content of 41 accessions and two recombinant inbred line (RIL) sets derived from divergent accessions (Cvi-0×Col-0 and Bay-0×Shahdara) were analysed. These accessions and RILs showed mainly quantitative rather than qualitative changes. To dissect the genetic architecture underlying these differences, a quantitative trait locus (QTL) analysis was performed on the two segregating populations. Twenty-two flavonoid QTLs were detected that accounted for 11–64% of the observed trait variations, only one QTL being common to both RIL sets. Sixteen of these QTLs were confirmed and coarsely mapped using heterogeneous inbred families (HIFs). Three genes, namely TRANSPARENT TESTA (TT)7, TT15, and MYB12, were proposed to underlie their variations since the corresponding mutants and QTLs displayed similar specific flavonoid changes. Interestingly, most loci did not co-localize with any gene known to be involved in flavonoid metabolism. This latter result shows that novel functions have yet to be characterized and paves the way for their isolation. PMID:22442426

  14. Atlas warping for brain morphometry

    NASA Astrophysics Data System (ADS)

    Machado, Alexei M. C.; Gee, James C.

    1998-06-01

    In this work, we describe an automated approach to morphometry based on spatial normalizations of the data, and demonstrate its application to the analysis of gender differences in the human corpus callosum. The purpose is to describe a population by a reduced and representative set of variables, from which a prior model can be constructed. Our approach is rooted in the assumption that individual anatomies can be considered as quantitative variations on a common underlying qualitative plane. We can therefore imagine that a given individual's anatomy is a warped version of some referential anatomy, also known as an atlas. The spatial warps which transform a labeled atlas into anatomic alignment with a population yield immediate knowledge about organ size and shape in the group. Furthermore, variation within the set of spatial warps is directly related to the anatomic variation among the subjects. Specifically, the shape statistics--mean and variance of the mappings--for the population can be calculated in a special basis, and an eigendecomposition of the variance performed to identify the most significant modes of shape variation. The results obtained with the corpus callosum study confirm the existence of substantial anatomical differences between males and females, as reported in previous experimental work.
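    The shape-statistics step described above (mean warp plus an eigendecomposition of the warp covariance to extract the dominant modes of variation) can be sketched as follows; the displacement fields here are random stand-ins, and the SVD route is one standard way to perform the eigendecomposition.

        # Mean and principal modes of variation of a set of spatial warps.
        import numpy as np

        rng = np.random.default_rng(6)
        n_subjects, grid = 40, (32, 32)
        warps = rng.normal(size=(n_subjects, grid[0] * grid[1] * 2))   # flattened 2-D displacement fields

        mean_warp = warps.mean(axis=0)
        centered = warps - mean_warp
        # Eigendecomposition of the warp covariance via SVD of the centred data matrix.
        U, s, Vt = np.linalg.svd(centered, full_matrices=False)
        modes = Vt                              # rows: principal modes of shape variation
        variance = s**2 / (n_subjects - 1)      # variance captured by each mode
        print(variance[:3] / variance.sum())    # fraction explained by the leading modes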

  15. Using Rasch Analysis to Explore What Students Learn about Probability Concepts

    ERIC Educational Resources Information Center

    Mahmud, Zamalia; Porter, Anne

    2015-01-01

    Students' understanding of probability concepts has been investigated from various different perspectives. This study set out to investigate the perceived understanding of probability concepts of forty-four students from the STAT131 Understanding Uncertainty and Variation course at the University of Wollongong, NSW. Rasch measurement which is…

  16. Home advantage in high-level volleyball varies according to set number.

    PubMed

    Marcelino, Rui; Mesquita, Isabel; Palao Andrés, José Manuel; Sampaio, Jaime

    2009-01-01

    The aim of the present study was to identify the probability of winning each volleyball set according to game location (home, away). Archival data were obtained from 275 sets in the 2005 Men's Senior World League, and 65,949 actions were analysed. Set result (win, loss), game location (home, away), set number (first, second, third, fourth and fifth) and performance indicators (serve, reception, set, attack, dig and block) were the variables considered in this study. First, performance indicators were used in a logistic model of set result, by binary logistic regression analysis. After finding the adjusted logistic model, the log-odds of winning the set were analysed according to game location and set number. The results showed that winning a set is significantly related to performance indicators (Chi-square(18) = 660.97, p < 0.01). Analyses of the log-odds of winning a set demonstrate that home teams always have a higher probability of winning than away teams, regardless of the set number. Home teams have a greater advantage at the beginning of the game (first set) and in the last two sets of the game (fourth and fifth sets), probably due to facility familiarity and crowd effects. Different game actions explain these advantages and show that winning the first set depends more on taking risk, through better performance in the attack and block, whereas winning the final set depends on managing risk through better performance on the reception. These results suggest intra-game variation in home advantage and can be most useful to better prepare for and direct the competition. Key points: Home teams always have a higher probability of winning the game than away teams. Home teams have higher performance in reception, set and attack over the total of the sets. The advantage of home teams is more pronounced at the beginning of the game (first set) and in the last two sets of the game (fourth and fifth sets), suggesting intra-game variation in home advantage. Analysis by sets showed that home teams have better performance in the attack and block in the first set and in the reception in the third and fifth sets.
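    A sketch of the binary logistic model of set result is given below. The performance-indicator columns, the simulated effect sizes and the home-advantage coefficient are invented for illustration; only the modelling step (logistic regression followed by reading off the log-odds shift for game location) mirrors the analysis described.

        # Binary logistic regression of set result on performance indicators and location.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(7)
        n = 275 * 2                                            # one row per team per set
        perf = rng.normal(size=(n, 6))                         # serve, reception, set, attack, dig, block
        home = rng.integers(0, 2, size=n)
        logit_p = 0.8 * perf[:, 3] + 0.5 * perf[:, 5] + 0.3 * home
        win = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

        X = sm.add_constant(np.column_stack([perf, home]))
        model = sm.Logit(win, X).fit(disp=0)
        print(model.params[-1])                                # log-odds shift for playing at home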

  17. The role of climate on inter-annual variation in stream nitrate fluxes and concentrations.

    PubMed

    Gascuel-Odoux, Chantal; Aurousseau, Pierre; Durand, Patrick; Ruiz, Laurent; Molenat, Jérôme

    2010-11-01

    In recent decades, temporal variations in nitrate fluxes and concentrations in temperate rivers have resulted from the interaction of anthropogenic and climatic factors. The effect of climatic drivers remains unclear, while the relative importance of the drivers seems to be highly site dependent. This paper focuses on 2-6 year variations called meso-scale variations, and analyses the climatic drivers of these variations in a study site characterized by high N inputs from intensive animal farming systems and shallow aquifers with impervious bedrock in a temperate climate. Three approaches are developed: 1) an analysis of long-term records of nitrate fluxes and nitrate concentrations in 30 coastal rivers of Western France, which were well-marked by meso-scale cycles in the fluxes and concentration with a slight hysteresis; 2) a test of the climatic control using a lumped two-box model, which demonstrates that hydrological assumptions are sufficient to explain these meso-scale cycles; and 3) a model of nitrate fluxes and concentrations in two contrasted catchments subjected to recent mitigation measures, which analyses nitrate fluxes and concentrations in relation to N stored in groundwater. In coastal rivers, hydrological drivers (i.e., effective rainfall), and particularly the dynamics of the water table and rather stable nitrate concentration, explain the meso-scale cyclic patterns. In the headwater catchment, agricultural and hydrological drivers can interact according to their settings. The requirements to better distinguish the effect of climate and human changes in integrated water management are addressed: long-term monitoring, coupling the analysis and the modelling of large sets of catchments incorporating different sizes, land uses and environmental factors. Copyright © 2009 Elsevier B.V. All rights reserved.

  18. Gene integrated set profile analysis: a context-based approach for inferring biological endpoints

    PubMed Central

    Kowalski, Jeanne; Dwivedi, Bhakti; Newman, Scott; Switchenko, Jeffery M.; Pauly, Rini; Gutman, David A.; Arora, Jyoti; Gandhi, Khanjan; Ainslie, Kylie; Doho, Gregory; Qin, Zhaohui; Moreno, Carlos S.; Rossi, Michael R.; Vertino, Paula M.; Lonial, Sagar; Bernal-Mizrachi, Leon; Boise, Lawrence H.

    2016-01-01

    The identification of genes with specific patterns of change (e.g. down-regulated and methylated) as phenotype drivers or samples with similar profiles for a given gene set as drivers of clinical outcome, requires the integration of several genomic data types for which an ‘integrate by intersection’ (IBI) approach is often applied. In this approach, results from separate analyses of each data type are intersected, which has the limitation of a smaller intersection with more data types. We introduce a new method, GISPA (Gene Integrated Set Profile Analysis) for integrated genomic analysis and its variation, SISPA (Sample Integrated Set Profile Analysis) for defining respective genes and samples with the context of similar, a priori specified molecular profiles. With GISPA, the user defines a molecular profile that is compared among several classes and obtains ranked gene sets that satisfy the profile as drivers of each class. With SISPA, the user defines a gene set that satisfies a profile and obtains sample groups of profile activity. Our results from applying GISPA to human multiple myeloma (MM) cell lines contained genes of known profiles and importance, along with several novel targets, and their further SISPA application to MM coMMpass trial data showed clinical relevance. PMID:26826710

  19. Temperature-Dependent Second Shell Interference in the First Shell Analysis of Crystalline InP X-ray Absorption Spectroscopy Data

    NASA Astrophysics Data System (ADS)

    Schnohr, Claudia S.; Araujo, Leandro L.; Ridgway, Mark C.

    2014-09-01

    Analysing only the first nearest neighbour (NN) scattering signal is a commonly used and often successful way to treat extended X-ray absorption fine structure data. However, using temperature-dependent measurements of InP as an example, we demonstrate how this approach can lead to erroneous first NN structural parameters in systems with a weak first but strong second NN scatterer. In such cases, particularly low temperature data may suffer from an overlap of first and second NN scattering signals caused by the Fourier transformation (FT) even if the dominant peaks appear to be well separated. The first NN structural parameters then vary as a function of the FT settings if only the first NN scattering contribution is considered in the analysis. Although this variation is small, it can also lead to significant differences in other calculated properties such as the Einstein temperature. We demonstrate that these variations can be avoided either by choosing an appropriate FT window or by including the scattering contributions of higher shells in the analysis. The latter is achieved by a path fitting approach and yields structural parameters independent of the FT settings used.

  20. Field data analysis of boar semen quality.

    PubMed

    Broekhuijse, M L W J; Feitsma, H; Gadella, B M

    2011-09-01

    This contribution provides an overview of approaches to correlate sow fertility data with boar semen quality characteristics. Large data sets of fertility data and ejaculate data are better suited to analysing the effects of semen quality characteristics on field fertility. Variation in fertility in sows is large. The effect of semen factors is relatively small and therefore impossible to find in smaller data sets. Large data sets allow for statistical corrections on both sow- and boar-related parameters. Remaining sow fertility variation can then be assigned to semen quality parameters, which is of huge interest to AI (artificial insemination) companies. Previous studies of Varkens KI Nederland to find the contribution to field fertility of (i) the number of sperm cells in an insemination dose, (ii) the sperm motility and morphological defects and (iii) the age of semen at the moment of insemination are discussed in the context of the possibility of applying such knowledge to select boars for AI purposes on the basis of their sperm parameters. © 2011 Blackwell Verlag GmbH.

  1. Time-Course Gene Set Analysis for Longitudinal Gene Expression Data

    PubMed Central

    Hejblum, Boris P.; Skinner, Jason; Thiébaut, Rodolphe

    2015-01-01

    Gene set analysis methods, which consider predefined groups of genes in the analysis of genomic data, have been successfully applied for analyzing gene expression data in cross-sectional studies. The time-course gene set analysis (TcGSA) introduced here is an extension of gene set analysis to longitudinal data. The proposed method relies on random effects modeling with maximum likelihood estimates. It allows all available repeated measurements to be used while dealing with unbalanced data due to missing at random (MAR) measurements. TcGSA is a hypothesis-driven method that identifies a priori defined gene sets with significant expression variations over time, taking into account the potential heterogeneity of expression within gene sets. When biological conditions are compared, the method indicates if the time patterns of gene sets significantly differ according to these conditions. The interest of the method is illustrated by its application to two real life datasets: an HIV therapeutic vaccine trial (DALIA-1 trial), and data from a recent study on influenza and pneumococcal vaccines. In the DALIA-1 trial TcGSA revealed a significant change in gene expression over time within 69 gene sets during vaccination, while a standard univariate individual gene analysis corrected for multiple testing as well as a standard Gene Set Enrichment Analysis (GSEA) for time series both failed to detect any significant pattern change over time. When applied to the second illustrative data set, TcGSA allowed the identification of 4 gene sets that were ultimately found to be linked to the influenza vaccine as well, although previous analyses had associated them only with the pneumococcal vaccine. In our simulation study TcGSA exhibits good statistical properties, and an increased power compared to other approaches for analyzing time-course expression patterns of gene sets. The method is made available for the community through an R package. PMID:26111374

  2. Diurnal global variability of the Earth's magnetic field during geomagnetically quiet conditions

    NASA Astrophysics Data System (ADS)

    Klausner, V.

    2012-12-01

    This work proposes a methodology (or treatment) to establish a representative signal of the global magnetic diurnal variation. It is based on a spatial distribution in both longitude and latitude of a set of magnetic stations as well as their magnetic behavior on a time basis. We apply the Principal Component Analysis (PCA) technique using the gapped wavelet transform and wavelet correlation. This new approach was used to describe the characteristics of the magnetic variations at Vassouras (Brazil) and 12 other magnetic stations spread around the terrestrial globe. Using magnetograms from 2007, we have investigated the global dominant pattern of the Sq variation as a function of low solar activity. This year was divided into two seasons for seasonal variation analysis: solstices (June and December) and equinoxes (March and September). We aim to reconstruct the original geomagnetic data series of the H component taking into account only the diurnal variations with periods of 24 hours on geomagnetically quiet days. We advance a proposal to reconstruct the Sq baseline using only the first PCA mode. A first interpretation of the results suggests that the PCA/wavelet method could be used for the reconstruction of the Sq baseline.
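    The final reconstruction step (keeping only the first PCA mode of the multi-station H-component matrix as the Sq baseline) can be sketched with a plain SVD, as below. The synthetic records and the omission of the gapped wavelet pre-filtering are simplifying assumptions.

        # Rank-1 (first PCA mode) reconstruction of a multi-station diurnal signal.
        import numpy as np

        rng = np.random.default_rng(8)
        minutes = np.arange(24 * 60)
        sq = 20 * np.sin(2 * np.pi * minutes / (24 * 60))            # common 24-h variation
        H = np.array([a * sq for a in rng.uniform(0.5, 1.5, 13)])    # 13 stations x time
        H += rng.normal(0, 2, H.shape)                               # station-specific noise

        Hc = H - H.mean(axis=1, keepdims=True)
        U, s, Vt = np.linalg.svd(Hc, full_matrices=False)
        first_mode = np.outer(U[:, 0] * s[0], Vt[0])                 # rank-1 reconstruction
        baseline = first_mode + H.mean(axis=1, keepdims=True)
        print(np.corrcoef(baseline[0], H[0])[0, 1])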

  3. Prioritization for conservation of northern European cattle breeds based on analysis of microsatellite data.

    PubMed

    Tapio, I; Värv, S; Bennewitz, J; Maleviciute, J; Fimland, E; Grislis, Z; Meuwissen, T H E; Miceikiene, I; Olsaker, I; Viinalass, H; Vilkki, J; Kantanen, J

    2006-12-01

    Northern European indigenous cattle breeds are currently endangered and at a risk of becoming extinct. We analyzed variation at 20 microsatellite loci in 23 indigenous, 3 old imported, and 9 modern commercial cattle breeds that are presently distributed in northern Europe. We measured the breeds' allelic richness and heterozygosity, and studied their genetic relationships with a neighbor-joining tree based on the Chord genetic distance matrix. We used the Weitzman approach and the core set diversity measure of Eding et al. (2002) to quantify the contribution of each breed to the maximum amount of genetic diversity and to identify breeds important for the conservation of genetic diversity. We defined 11 breeds as a "safe set" of breeds (not endangered) and estimated a reduction in genetic diversity if all nonsafe (endangered) breeds were lost. We then calculated the increase in genetic diversity by adding one by one each of the nonsafe breeds to the safe set (the safe-set-plus-one approach). The neighbor-joining tree grouped the northern European cattle breeds into Black-and-White type, Baltic Red, and Nordic cattle groups. Väne cattle, Bohus Poll, and Danish Jersey had the highest relative contribution to the maximum amount of genetic diversity when the diversity was quantified by the Weitzman diversity measure. These breeds not only showed phylogenetic distinctiveness but also low within-population variation. When the Eding et al. method was applied, Eastern Finncattle and Lithuanian White Backed cattle contributed most of the genetic variation. If the loss of the nonsafe set of breeds happens, the reduction in genetic diversity would be substantial (72%) based on the Weitzman approach, but relatively small (1.81%) based on the Eding et al. method. The safe set contained only 66% of the observed microsatellite alleles. The safe-set-plus-one approach indicated that Bohus Poll and Väne cattle contributed most to the Weitzman diversity, whereas the Eastern Finncattle contribution was the highest according to the Eding et al. method. Our results indicate that both methods of Weitzman and Eding et al. recognize the importance of local populations as a valuable resource of genetic variation.

  4. A level-set procedure for the design of electromagnetic metamaterials.

    PubMed

    Zhou, Shiwei; Li, Wei; Sun, Guangyong; Li, Qing

    2010-03-29

    Achieving negative permittivity and negative permeability signifies a key topic of research in the design of metamaterials. This paper introduces a level-set based topology optimization method, in which the interface between the vacuum and metal phases is implicitly expressed by the zero-level contour of a higher dimensional level-set function. Following a sensitivity analysis, the optimization maximizes the objective based on the normal direction of the level-set function and induced current flow, thereby generating the desirable patterns of current flow on metal surface. As a benchmark example, the U-shaped structure and its variations are obtained from the level-set topology optimization. Numerical examples demonstrate that both negative permittivity and negative permeability can be attained.
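    For background, in such a level-set formulation the vacuum/metal interface is represented implicitly as the zero contour Γ = {x : φ(x) = 0} of the level-set function φ, and the design is updated by advecting φ along its normal direction with a speed derived from the sensitivity analysis. A standard form of this update (general level-set theory, not necessarily the exact scheme of this paper) is the Hamilton–Jacobi equation ∂φ/∂t + V_n |∇φ| = 0, where V_n is the normal velocity.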

  5. Additive genetic variation and evolvability of a multivariate trait can be increased by epistatic gene action.

    PubMed

    Griswold, Cortland K

    2015-12-21

    Epistatic gene action occurs when mutations or alleles interact to produce a phenotype. Theoretically and empirically it is of interest to know whether gene interactions can facilitate the evolution of diversity. In this paper, we explore how epistatic gene action affects the additive genetic component or heritable component of multivariate trait variation, as well as how epistatic gene action affects the evolvability of multivariate traits. The analysis involves a sexually reproducing and recombining population. Our results indicate that under stabilizing selection conditions a population with a mixed additive and epistatic genetic architecture can have greater multivariate additive genetic variation and evolvability than a population with a purely additive genetic architecture. That greater multivariate additive genetic variation can occur with epistasis is in contrast to previous theory that indicated univariate additive genetic variation is decreased with epistasis under stabilizing selection conditions. In a multivariate setting, epistasis leads to less relative covariance among individuals in their genotypic, as well as their breeding values, which facilitates the maintenance of additive genetic variation and increases a population's evolvability. Our analysis involves linking the combinatorial nature of epistatic genetic effects to the ancestral graph structure of a population to provide insight into the consequences of epistasis on multivariate trait variation and evolution. Copyright © 2015 Elsevier Ltd. All rights reserved.

  6. Preliminary shape analysis of the outline of the baropodometric foot: patterns of covariation, allometry, sex and age differences, and loading variations.

    PubMed

    Bruner, E; Mantini, S; Guerrini, V; Ciccarelli, A; Giombini, A; Borrione, P; Pigozzi, F; Ripani, M

    2009-09-01

    Baropodometrical digital techniques map the pressures exerted on the foot plant during both static and dynamic loadings. The study of the distribution of such pressures makes it possible to evaluate the postural and locomotory biomechanics together with their pathological variations. This paper is aimed at evaluating the integration between baropodometric analysis (pressure distribution) and geometrical models (shape of the footprints), investigating the pattern of variation associated with normal plantar morphology. The sample includes 91 individuals (47 males, 44 females), ranging from 5 to 85 years of age (mean and standard deviation = 40 ± 24). The first component of variation is largely associated with the breadth of the isthmus, along a continuous gradient of increasing/decreasing flattening of the foot plant. Because this character dominates the whole set of morphological components even in a non-pathological sample, such a multivariate computation may represent a good diagnostic tool to quantify its degree of expression in individual subjects or group samples. Sexual differences are not significant, and allometric variations associated with increasing plantar surface or stature are not quantitatively relevant. There are some differences between adult and young individuals, associated in the latter with a widening of the medial and posterior areas. These results provide a geometrical framework of baropodometrical analysis, suggesting possible future applications in diagnosis and basic research.

  7. An Analysis of Department of Defense Instruction 8500.2 'Information Assurance (IA) Implementation.'

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campbell, Philip LaRoche

    2012-01-01

    The Department of Defense (DoD) provides its standard for information assurance in its Instruction 8500.2, dated February 6, 2003. This Instruction lists 157 'IA Controls' for nine 'baseline IA levels.' Aside from distinguishing IA Controls that call for elevated levels of 'robustness' and grouping the IA Controls into eight 'subject areas', 8500.2 does not examine the nature of this set of controls, determining, for example, which controls do not vary in robustness, how this set of controls compares with other such sets, or even which controls are required for all nine baseline IA levels. This report analyzes (1) the IA Controls, (2) the subject areas, and (3) the Baseline IA levels. For example, this report notes that there are only 109 core IA Controls (which this report refers to as 'ICGs'), that 43 of these core IA Controls apply without variation to all nine baseline IA levels and that an additional 31 apply with variations. This report maps the IA Controls of 8500.2 to the controls in NIST 800-53 and ITGI's CoBIT. The result of this analysis and mapping, as shown in this report, serves as a companion to 8500.2. (An electronic spreadsheet accompanies this report.)

  8. Combinations of NIR, Raman spectroscopy and physicochemical measurements for improved monitoring of solvent extraction processes using hierarchical multivariate analysis models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nee, K.; Bryan, S.; Levitskaia, T.

    The reliability of chemical processes can be greatly improved by implementing inline monitoring systems. Combining multivariate analysis with non-destructive sensors can enhance the process without interfering with the operation. Here, we present hierarchical models using both principal component analysis and partial least squares analysis developed for different chemical components representative of solvent extraction process streams. A training set of 380 samples and an external validation set of 95 samples were prepared, and near-infrared (NIR) and Raman spectral data as well as conductivity under variable temperature conditions were collected. The results from the models indicate that careful selection of the spectral range is important. By compressing the data through principal component analysis (PCA), we lower the rank of the data set to its most dominant features while maintaining the key principal components to be used in the regression analysis. Within the studied data set, the concentrations of five chemical components were modeled: total nitrate (NO3-), total acid (H+), neodymium (Nd3+), sodium (Na+), and ionic strength (I.S.). The best overall model prediction for each of the species studied used a combined data set comprised of complementary techniques including NIR, Raman, and conductivity. Finally, our study shows that chemometric models are powerful but require a significant amount of carefully analyzed data to capture variations in the chemistry.
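
    As a rough illustration of the two-stage approach described above (PCA compression followed by PLS regression on the retained scores), the sketch below uses scikit-learn on synthetic arrays; the array shapes, component counts and random data are assumptions for illustration, not the study's models or data.

```python
# Minimal sketch, assuming synthetic data: compress combined NIR/Raman/
# conductivity measurements with PCA, then regress five concentrations
# with partial least squares on the retained scores.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X_train = rng.normal(size=(380, 1200))   # stand-in for 380 combined spectra
y_train = rng.normal(size=(380, 5))      # stand-in for 5 target concentrations
X_test = rng.normal(size=(95, 1200))     # stand-in for the 95-sample validation set
y_test = rng.normal(size=(95, 5))

# Step 1: reduce the spectral data to its dominant features.
pca = PCA(n_components=20).fit(X_train)
T_train, T_test = pca.transform(X_train), pca.transform(X_test)

# Step 2: regress the five concentrations on the compressed scores.
pls = PLSRegression(n_components=8).fit(T_train, y_train)
rmse = np.sqrt(np.mean((pls.predict(T_test) - y_test) ** 2))
print(f"validation RMSE on synthetic data: {rmse:.3f}")
```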

  9. Combinations of NIR, Raman spectroscopy and physicochemical measurements for improved monitoring of solvent extraction processes using hierarchical multivariate analysis models

    DOE PAGES

    Nee, K.; Bryan, S.; Levitskaia, T.; ...

    2017-12-28

    The reliability of chemical processes can be greatly improved by implementing inline monitoring systems. Combining multivariate analysis with non-destructive sensors can enhance the process without interfering with the operation. Here, we present hierarchical models using both principal component analysis and partial least squares analysis developed for different chemical components representative of solvent extraction process streams. A training set of 380 samples and an external validation set of 95 samples were prepared, and near-infrared (NIR) and Raman spectral data as well as conductivity under variable temperature conditions were collected. The results from the models indicate that careful selection of the spectral range is important. By compressing the data through principal component analysis (PCA), we lower the rank of the data set to its most dominant features while maintaining the key principal components to be used in the regression analysis. Within the studied data set, the concentrations of five chemical components were modeled: total nitrate (NO3-), total acid (H+), neodymium (Nd3+), sodium (Na+), and ionic strength (I.S.). The best overall model prediction for each of the species studied used a combined data set comprised of complementary techniques including NIR, Raman, and conductivity. Finally, our study shows that chemometric models are powerful but require a significant amount of carefully analyzed data to capture variations in the chemistry.

  10. Face recognition system using multiple face model of hybrid Fourier feature under uncontrolled illumination variation.

    PubMed

    Hwang, Wonjun; Wang, Haitao; Kim, Hyunwoo; Kee, Seok-Cheol; Kim, Junmo

    2011-04-01

    The authors present a robust face recognition system for large-scale data sets taken under uncontrolled illumination variations. The proposed face recognition system consists of a novel illumination-insensitive preprocessing method, a hybrid Fourier-based facial feature extraction, and a score fusion scheme. First, in the preprocessing stage, a face image is transformed into an illumination-insensitive image, called an "integral normalized gradient image," by normalizing and integrating the smoothed gradients of a facial image. Then, for feature extraction of complementary classifiers, multiple face models based upon hybrid Fourier features are applied. The hybrid Fourier features are extracted from different Fourier domains in different frequency bandwidths, and then each feature is individually classified by linear discriminant analysis. In addition, multiple face models are generated by plural normalized face images that have different eye distances. Finally, to combine scores from multiple complementary classifiers, a log likelihood ratio-based score fusion scheme is applied. The proposed system is evaluated using the face recognition grand challenge (FRGC) experimental protocols; FRGC is a large available data set. Experimental results on the FRGC version 2.0 data sets show that the proposed method achieves an average verification rate of 81.49% on 2-D face images under various environmental variations such as illumination changes, expression changes, and time elapses.
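
    The sketch below illustrates one branch of such a pipeline under stated assumptions: Fourier-magnitude features from a single low-frequency band of each (random, stand-in) face image, classified with linear discriminant analysis. The band radius, image size and labels are invented, and the real system additionally preprocesses images, uses several bands and face models, and fuses scores.

```python
# A single hybrid-Fourier + LDA branch, sketched on random stand-in images.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(8)
faces = rng.random(size=(100, 32, 32))    # toy "face" images
labels = rng.integers(0, 10, size=100)    # 10 identities

def fourier_band_features(img, radius=4):
    # Magnitude spectrum, centred, restricted to a low-frequency band.
    spectrum = np.fft.fftshift(np.abs(np.fft.fft2(img)))
    c = img.shape[0] // 2
    return spectrum[c - radius:c + radius, c - radius:c + radius].ravel()

X = np.array([fourier_band_features(f) for f in faces])
clf = LinearDiscriminantAnalysis().fit(X, labels)
print(f"training accuracy on random data: {clf.score(X, labels):.2f}")
```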

  11. Bayesian Analysis of Hmi Images and Comparison to Tsi Variations and MWO Image Observables

    NASA Astrophysics Data System (ADS)

    Parker, D. G.; Ulrich, R. K.; Beck, J.; Tran, T. V.

    2015-12-01

    We have previously applied the Bayesian automatic classification system AutoClass to solar magnetogram and intensity images from the 150 Foot Solar Tower at Mount Wilson to identify classes of solar surface features associated with variations in total solar irradiance (TSI) and, using those identifications, modeled TSI time series with improved accuracy (r > 0.96) (Ulrich et al., 2010). AutoClass identifies classes by a two-step process in which it: (1) finds, without human supervision, a set of class definitions based on specified attributes of a sample of the image data pixels, such as magnetic field and intensity in the case of MWO images, and (2) applies the class definitions thus found to new data sets to identify automatically in them the classes found in the sample set. HMI high resolution images capture four observables (magnetic field, continuum intensity, line depth and line width), in contrast to MWO's two observables (magnetic field and intensity). In this study, we apply AutoClass to the HMI observables for images from June, 2010 to December, 2014 to identify solar surface feature classes. We use contemporaneous TSI measurements to determine whether and how variations in the HMI classes are related to TSI variations and compare the characteristic statistics of the HMI classes to those found from MWO images. We also attempt to derive scale factors between the HMI and MWO magnetic and intensity observables. The ability to categorize automatically surface features in the HMI images holds out the promise of consistent, relatively quick and manageable analysis of the large quantity of data available in these images. Given that the classes found in MWO images using AutoClass have been found to improve modeling of TSI, application of AutoClass to the more complex HMI images should enhance understanding of the physical processes at work in solar surface features and their implications for the solar-terrestrial environment. Reference: Ulrich, R.K., Parker, D., Bertello, L. and Boyden, J. 2010, Solar Phys., 261, 11.

  12. A tri-modality image fusion method for target delineation of brain tumors in radiotherapy.

    PubMed

    Guo, Lu; Shen, Shuming; Harris, Eleanor; Wang, Zheng; Jiang, Wei; Guo, Yu; Feng, Yuanming

    2014-01-01

    To develop a tri-modality image fusion method for better target delineation in image-guided radiotherapy for patients with brain tumors. A new method of tri-modality image fusion was developed, which can fuse and display all image sets in one panel and one operation. A feasibility study of gross tumor volume (GTV) delineation was then conducted using data from three patients with brain tumors, which included images of simulation CT, MRI, and 18F-fluorodeoxyglucose positron emission tomography (18F-FDG PET) examinations before radiotherapy. Tri-modality image fusion was implemented after image registrations of CT+PET and CT+MRI, and the transparency weight of each modality could be adjusted and set by users. Three radiation oncologists delineated GTVs for all patients using dual-modality (MRI/CT) and tri-modality (MRI/CT/PET) image fusion, respectively. Inter-observer variation was assessed by the coefficient of variation (COV), the average distance between surface and centroid (ADSC), and the local standard deviation (SDlocal). Analysis of COV was also performed to evaluate intra-observer volume variation. The inter-observer variation analysis showed that the mean COV was 0.14 (±0.09) and 0.07 (±0.01) for dual-modality and tri-modality, respectively; the standard deviation of ADSC was significantly reduced (p<0.05) with tri-modality; SDlocal averaged over the median GTV surface was reduced in patient 2 (from 0.57 cm to 0.39 cm) and patient 3 (from 0.42 cm to 0.36 cm) with the new method. The intra-observer volume variation was also significantly reduced (p = 0.00) with the tri-modality method as compared with the dual-modality method. With the new tri-modality image fusion method, smaller inter- and intra-observer variation in GTV definition for brain tumors can be achieved, which improves the consistency and accuracy of target delineation in individualized radiotherapy.
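
    For readers unfamiliar with the coefficient of variation (COV) used above, the following sketch computes it as the ratio of the standard deviation to the mean of the GTV volumes drawn by the observers; the volumes are made-up numbers, not data from the study.

```python
# COV = standard deviation / mean of observer-delineated GTV volumes.
import numpy as np

def cov(volumes_cc):
    v = np.asarray(volumes_cc, dtype=float)
    return v.std(ddof=1) / v.mean()

dual_modality = [24.1, 31.5, 27.8]   # hypothetical GTVs (cc) from 3 observers
tri_modality = [26.0, 27.9, 27.1]    # hypothetical GTVs with tri-modality fusion

print(f"COV dual-modality: {cov(dual_modality):.2f}")
print(f"COV tri-modality:  {cov(tri_modality):.2f}")
```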

  13. A non-linear dimension reduction methodology for generating data-driven stochastic input models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ganapathysubramanian, Baskar; Zabaras, Nicholas

    Stochastic analysis of random heterogeneous media (polycrystalline materials, porous media, functionally graded materials) provides information of significance only if realistic input models of the topology and property variations are used. This paper proposes a framework to construct such input stochastic models for the topology and thermal diffusivity variations in heterogeneous media using a data-driven strategy. Given a set of microstructure realizations (input samples) generated from given statistical information about the medium topology, the framework constructs a reduced-order stochastic representation of the thermal diffusivity. This problem of constructing a low-dimensional stochastic representation of property variations is analogous to the problem of manifold learning and parametric fitting of hyper-surfaces encountered in image processing and psychology. Denote by M the set of microstructures that satisfy the given experimental statistics. A non-linear dimension reduction strategy is utilized to map M to a low-dimensional region, A. We first show that M is a compact manifold embedded in a high-dimensional input space R^n. An isometric mapping F from M to a low-dimensional, compact, connected set A is contained in R^d (d <
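
    As a hedged sketch of this kind of non-linear, isometric dimension reduction, the snippet below embeds a set of (random, stand-in) microstructure realizations into a three-dimensional region with scikit-learn's Isomap; the specific embedding method, neighbourhood size and dimensions are illustrative assumptions, not the paper's implementation.

```python
# Non-linear dimension reduction of microstructure samples (Isomap sketch).
import numpy as np
from sklearn.manifold import Isomap

rng = np.random.default_rng(1)
microstructures = rng.random(size=(200, 32 * 32))  # 200 flattened 32x32 property maps

embedding = Isomap(n_neighbors=10, n_components=3)
A = embedding.fit_transform(microstructures)   # low-dimensional coordinates of each sample
print(A.shape)                                 # (200, 3): the reduced region "A"
```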

  14. [Value(s)].

    PubMed

    Vanbelle, G

    2006-01-01

    After a short explanation of the word value, the (cultural) value of teeth, the economic evaluation of dentistry and the payment criteria are presented. The specific situation of health care as a service that deviates in quite a few aspects from the standard demand-supply model is pointed out. Attention is drawn to key characteristics of the liberal professions, such as the obligation to perform to the best of one's ability rather than to guarantee a specific result. Function classification appears to offer possibilities for cataloguing the wide variation in practice settings. The well-known wage calculation of Professor De Lembre is reviewed. Subsequently, the analysis of cost variation and of demand induced by extra services and profile shaping is elaborated. A cost-benefit analysis is the concluding item.

  15. 21 year timing of the black-widow pulsar J2051-0827

    NASA Astrophysics Data System (ADS)

    Shaifullah, G.; Verbiest, J. P. W.; Freire, P. C. C.; Tauris, T. M.; Wex, N.; Osłowski, S.; Stappers, B. W.; Bassa, C. G.; Caballero, R. N.; Champion, D. J.; Cognard, I.; Desvignes, G.; Graikou, E.; Guillemot, L.; Janssen, G. H.; Jessner, A.; Jordan, C.; Karuppusamy, R.; Kramer, M.; Lazaridis, K.; Lazarus, P.; Lyne, A. G.; McKee, J. W.; Perrodin, D.; Possenti, A.; Tiburzi, C.

    2016-10-01

    Timing results for the black-widow pulsar J2051-0827 are presented, using a 21 year data set from four European Pulsar Timing Array telescopes and the Parkes radio telescope. This data set, which is the longest published to date for a black-widow system, allows for an improved analysis that addresses previously unknown biases. While secular variations, as identified in previous analyses, are recovered, short-term variations are detected for the first time. Concurrently, a significant decrease of ~2.5 × 10^-3 cm^-3 pc in the dispersion measure associated with PSR J2051-0827 is measured for the first time and improvements are also made to estimates of the proper motion. Finally, PSR J2051-0827 is shown to have entered a relatively stable state suggesting the possibility of its eventual inclusion in pulsar timing arrays.

  16. PCANet: A Simple Deep Learning Baseline for Image Classification?

    PubMed

    Chan, Tsung-Han; Jia, Kui; Gao, Shenghua; Lu, Jiwen; Zeng, Zinan; Ma, Yi

    2015-12-01

    In this paper, we propose a very simple deep learning network for image classification that is based on very basic data processing components: 1) cascaded principal component analysis (PCA); 2) binary hashing; and 3) blockwise histograms. In the proposed architecture, the PCA is employed to learn multistage filter banks. This is followed by simple binary hashing and block histograms for indexing and pooling. This architecture is thus called the PCA network (PCANet) and can be extremely easily and efficiently designed and learned. For comparison and to provide a better understanding, we also introduce and study two simple variations of PCANet: 1) RandNet and 2) LDANet. They share the same topology as PCANet, but their cascaded filters are either randomly selected or learned from linear discriminant analysis. We have extensively tested these basic networks on many benchmark visual data sets for different tasks, including Labeled Faces in the Wild (LFW) for face verification; the MultiPIE, Extended Yale B, AR, Facial Recognition Technology (FERET) data sets for face recognition; and MNIST for hand-written digit recognition. Surprisingly, for all tasks, such a seemingly naive PCANet model is on par with the state-of-the-art features either prefixed, highly hand-crafted, or carefully learned [by deep neural networks (DNNs)]. Even more surprisingly, the model sets new records for many classification tasks on the Extended Yale B, AR, and FERET data sets and on MNIST variations. Additional experiments on other public data sets also demonstrate the potential of PCANet to serve as a simple but highly competitive baseline for texture classification and object recognition.
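
    A heavily simplified, single-stage sketch of the PCANet idea follows (learn a PCA filter bank from image patches, filter, binarize, pool into a block histogram); the toy images, patch size, stride and single pooling block are assumptions, and the published network cascades two PCA stages and uses overlapping blocks.

```python
# Simplified one-stage PCANet sketch: PCA filter bank from patches,
# convolution, binary hashing, blockwise histogram. Sizes are illustrative.
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(2)
images = rng.random(size=(20, 32, 32))     # toy training images
k = 5                                      # patch (filter) size
n_filters = 4

# Collect zero-mean patches from all images.
patches = []
for img in images:
    for i in range(0, 32 - k + 1, 4):
        for j in range(0, 32 - k + 1, 4):
            p = img[i:i + k, j:j + k].ravel()
            patches.append(p - p.mean())
patches = np.array(patches)

# PCA filters = leading eigenvectors of the patch covariance matrix.
cov = patches.T @ patches / len(patches)
eigvals, eigvecs = np.linalg.eigh(cov)
filters = eigvecs[:, ::-1][:, :n_filters].T.reshape(n_filters, k, k)

# Filter one image, binarize the responses, and form a blockwise histogram.
img = images[0]
responses = np.array([convolve2d(img, f, mode="same") for f in filters])
bits = (responses > 0).astype(int)
codes = sum(bits[m] * 2 ** m for m in range(n_filters))       # per-pixel integer code
hist, _ = np.histogram(codes, bins=2 ** n_filters, range=(0, 2 ** n_filters))
print(hist)   # pooled feature vector for this (single) block
```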

  17. In Silico Detection of Sequence Variations Modifying Transcriptional Regulation

    PubMed Central

    Andersen, Malin C; Engström, Pär G; Lithwick, Stuart; Arenillas, David; Eriksson, Per; Lenhard, Boris; Wasserman, Wyeth W; Odeberg, Jacob

    2008-01-01

    Identification of functional genetic variation associated with increased susceptibility to complex diseases can elucidate genes and underlying biochemical mechanisms linked to disease onset and progression. For genes linked to genetic diseases, most identified causal mutations alter an encoded protein sequence. Technological advances for measuring RNA abundance suggest that a significant number of undiscovered causal mutations may alter the regulation of gene transcription. However, it remains a challenge to separate causal genetic variations from linked neutral variations. Here we present an in silico driven approach to identify possible genetic variation in regulatory sequences. The approach combines phylogenetic footprinting and transcription factor binding site prediction to identify variation in candidate cis-regulatory elements. The bioinformatics approach has been tested on a set of SNPs that are reported to have a regulatory function, as well as background SNPs. In the absence of additional information about an analyzed gene, the poor specificity of binding site prediction is prohibitive to its application. However, when additional data is available that can give guidance on which transcription factor is involved in the regulation of the gene, the in silico binding site prediction improves the selection of candidate regulatory polymorphisms for further analyses. The bioinformatics software generated for the analysis has been implemented as a Web-based application system entitled RAVEN (regulatory analysis of variation in enhancers). The RAVEN system is available at http://www.cisreg.ca for all researchers interested in the detection and characterization of regulatory sequence variation. PMID:18208319

  18. The Challenge of Teacher Retention in Urban Schools: Evidence of Variation from a Cross-Site Analysis

    ERIC Educational Resources Information Center

    Papay, John P.; Bacher-Hicks, Andrew; Page, Lindsay C.; Marinell, William H.

    2017-01-01

    Substantial teacher turnover poses a challenge to staffing public schools with effective teachers. The scope of the teacher retention challenge across school districts, however, remains poorly defined. Applying consistent data practices and analytical techniques to administrative data sets from 16 urban districts, we document substantial…

  19. A Rapid PCR-RFLP Method for Monitoring Genetic Variation among Commercial Mushroom Species

    ERIC Educational Resources Information Center

    Martin, Presley; Muruke, Masoud; Hosea, Kenneth; Kivaisi, Amelia; Zerwas, Nick; Bauerle, Cynthia

    2004-01-01

    We report the development of a simplified procedure for restriction fragment length polymorphism (RFLP) analysis of mushrooms. We have adapted standard molecular techniques to be amenable to an undergraduate laboratory setting in order to allow students to explore basic questions about fungal diversity and relatedness among mushroom species. The…

  20. Understanding LiP Promoters from Phanerochaete chrysosporium: A Bioinformatic Analysis

    Treesearch

    Sergio Lobos; Rubén Polanco; Mario Tello; Dan Cullen; Daniela Seelenfreund; Rafael Vicuña

    2011-01-01

    DNA contains the coding information for the entire set of proteins produced by an organism. The specific combination of proteins synthesized varies with developmental, metabolic and environmental circumstances. This variation is generated by regulatory mechanisms that direct the production of messenger ribonucleic acid (mRNA) and subsequent translation of the...

  1. Using Meta-Analysis to Explain Variation in Head Start Research Results: The Role of Research Design

    ERIC Educational Resources Information Center

    Shager, Hilary M.; Schindler, Holly S.; Hart, Cassandra M.D.; Duncan, Greg J.; Magnuson, Katherine A.; Yoshikawa, Hirokazu

    2010-01-01

    Head Start was designed as a holistic intervention to improve economically disadvantaged, preschool-aged children's cognitive and social development by providing a comprehensive set of educational, health, nutritional, and social services, as well as opportunities for parent involvement (Zigler & Valentine, 1979). Given the current interest in ECE…

  2. Social Interaction and Stereotypic Responses to Homosexuals.

    ERIC Educational Resources Information Center

    Farrell, Ronald A.; Morrione, Thomas J.

    This work focuses on the variations in societal responses perceived by male homosexuals in various group settings of interaction and on the relationship of these responses to their social status and related behavioral characteristics. Conclusions were based on the analysis of data collected from a sampling of 148 male homosexuals in and around a…

  3. Sporulation genes associated with sporulation efficiency in natural isolates of yeast.

    PubMed

    Tomar, Parul; Bhatia, Aatish; Ramdas, Shweta; Diao, Liyang; Bhanot, Gyan; Sinha, Himanshu

    2013-01-01

    Yeast sporulation efficiency is a quantitative trait and is known to vary among experimental populations and natural isolates. Some studies have uncovered the genetic basis of this variation and have identified the role of sporulation genes (IME1, RME1) and sporulation-associated genes (FKH2, PMS1, RAS2, RSF1, SWS2), as well as non-sporulation pathway genes (MKT1, TAO3) in maintaining this variation. However, these studies have been done mostly in experimental populations. Sporulation is a response to nutrient deprivation. Unlike laboratory strains, natural isolates have likely undergone multiple selections for quick adaptation to varying nutrient conditions. As a result, sporulation efficiency in natural isolates may have different genetic factors contributing to phenotypic variation. Using Saccharomyces cerevisiae strains in the genetically and environmentally diverse SGRP collection, we have identified genetic loci associated with sporulation efficiency variation in a set of sporulation and sporulation-associated genes. Using two independent methods for association mapping and correcting for population structure biases, our analysis identified two linked clusters containing 4 non-synonymous mutations in genes - HOS4, MCK1, SET3, and SPO74. Five regulatory polymorphisms in five genes such as MLS1 and CDC10 were also identified as putative candidates. Our results provide candidate genes contributing to phenotypic variation in the sporulation efficiency of natural isolates of yeast.

  4. Sporulation Genes Associated with Sporulation Efficiency in Natural Isolates of Yeast

    PubMed Central

    Ramdas, Shweta; Diao, Liyang; Bhanot, Gyan; Sinha, Himanshu

    2013-01-01

    Yeast sporulation efficiency is a quantitative trait and is known to vary among experimental populations and natural isolates. Some studies have uncovered the genetic basis of this variation and have identified the role of sporulation genes (IME1, RME1) and sporulation-associated genes (FKH2, PMS1, RAS2, RSF1, SWS2), as well as non-sporulation pathway genes (MKT1, TAO3) in maintaining this variation. However, these studies have been done mostly in experimental populations. Sporulation is a response to nutrient deprivation. Unlike laboratory strains, natural isolates have likely undergone multiple selections for quick adaptation to varying nutrient conditions. As a result, sporulation efficiency in natural isolates may have different genetic factors contributing to phenotypic variation. Using Saccharomyces cerevisiae strains in the genetically and environmentally diverse SGRP collection, we have identified genetic loci associated with sporulation efficiency variation in a set of sporulation and sporulation-associated genes. Using two independent methods for association mapping and correcting for population structure biases, our analysis identified two linked clusters containing 4 non-synonymous mutations in genes – HOS4, MCK1, SET3, and SPO74. Five regulatory polymorphisms in five genes such as MLS1 and CDC10 were also identified as putative candidates. Our results provide candidate genes contributing to phenotypic variation in the sporulation efficiency of natural isolates of yeast. PMID:23874994

  5. Applicability of Cone Beam Computed Tomography to the Assessment of the Vocal Tract before and after Vocal Exercises in Normal Subjects.

    PubMed

    Garcia, Elisângela Zacanti; Yamashita, Hélio Kiitiro; Garcia, Davi Sousa; Padovani, Marina Martins Pereira; Azevedo, Renata Rangel; Chiari, Brasília Maria

    2016-01-01

    Cone beam computed tomography (CBCT), which represents an alternative to traditional computed tomography and magnetic resonance imaging, may be a useful instrument to study vocal tract physiology related to vocal exercises. This study aims to evaluate the applicability of CBCT to the assessment of variations in the vocal tract of healthy individuals before and after vocal exercises. Voice recordings and CBCT images before and after vocal exercises performed by 3 speech-language pathologists without vocal complaints were collected and compared. Each participant performed 1 type of exercise, i.e., Finnish resonance tube technique, prolonged consonant "b" technique, or chewing technique. The analysis consisted of an acoustic analysis and tomographic imaging. Modifications of the vocal tract settings following vocal exercises were properly detected by CBCT, and changes in the acoustic parameters were, for the most part, compatible with the variations detected in image measurements. CBCT was shown to be capable of properly assessing the changes in vocal tract settings promoted by vocal exercises. © 2017 S. Karger AG, Basel.

  6. Development of new SNP derived cleaved amplified polymorphic sequence marker set and its successful utilization in the genetic analysis of seed color variation in barley.

    PubMed

    Bungartz, Annemarie; Klaus, Marius; Mathew, Boby; Léon, Jens; Naz, Ali Ahmad

    2016-03-01

    The aim of the present study was to develop a new cost-effective PCR-based CAPS marker set using the advantages of high-throughput SNP genotyping. Initially, a SNP survey was made using 20 diverse barley genotypes via 9k iSelect array genotyping, which resulted in 6334 polymorphic SNP markers. Principal component analysis using this marker data showed fine differentiation of the diverse barley gene pool. To this end, we developed 200 SNP-derived CAPS markers distributed across the genome, covering around 991 cM with an average marker density of 5.09 cM. Further, we genotyped 68 CAPS markers in an F2 population (Cheri × ICB181160) segregating for seed color variation in barley. Genetic mapping of seed color revealed putative linkage of a single nuclear gene on chromosome 1H. These findings provide a proof of concept for the development and utility of a new cost-effective genomic tool kit to analyze broader genetic resources of barley worldwide. Copyright © 2016 Elsevier Inc. All rights reserved.

  7. Evidence of seasonal variation in longitudinal growth of height in a sample of boys from Stuttgart Carlsschule, 1771-1793, using combined principal component analysis and maximum likelihood principle.

    PubMed

    Lehmann, A; Scheffler, Ch; Hermanussen, M

    2010-02-01

    Recent progress in modelling individual growth has been achieved by combining the principal component analysis and the maximum likelihood principle. This combination models growth even in incomplete sets of data and in data obtained at irregular intervals. We re-analysed late 18th century longitudinal growth of German boys from the boarding school Carlsschule in Stuttgart. The boys, aged 6-23 years, were measured at irregular 3-12 monthly intervals during the period 1771-1793. At the age of 18 years, mean height was 1652 mm, but height variation was large. The shortest boy reached 1474 mm, the tallest 1826 mm. Measured height closely paralleled modelled height, with a mean difference of 4 mm (SD 7 mm). Seasonal height variation was found. Low growth rates occurred in spring and high growth rates in summer and autumn. The present study demonstrates that combining the principal component analysis and the maximum likelihood principle also enables growth modelling in historic height data. Copyright © 2009 Elsevier GmbH. All rights reserved.

  8. Standing Waves in an Elastic Spring: A Systematic Study by Video Analysis

    NASA Astrophysics Data System (ADS)

    Ventura, Daniel Rodrigues; de Carvalho, Paulo Simeão; Dias, Marco Adriano

    2017-04-01

    The word "wave" is part of the daily language of every student. However, the physical understanding of the concept demands a high level of abstract thought. In physics, waves are oscillating variations of a physical quantity that involve the transfer of energy from one point to another, without displacement of matter. A wave can be formed by an elastic deformation, a variation of pressure, changes in the intensity of electric or magnetic fields, a propagation of a temperature variation, or other disturbances. Moreover, a wave can be categorized as pulsed or periodic. Most importantly, conditions can be set such that waves interfere with one another, resulting in standing waves. These have many applications in technology, although they are not always readily identified and/or understood by all students. In this work, we use a simple setup including a low-cost constant spring, such as a Slinky, and the free software Tracker for video analysis. We show they can be very useful for the teaching of mechanical wave propagation and the analysis of harmonics in standing waves.

  9. Heritability of body size in the polar bears of Western Hudson Bay.

    PubMed

    Malenfant, René M; Davis, Corey S; Richardson, Evan S; Lunn, Nicholas J; Coltman, David W

    2018-04-18

    Among polar bears (Ursus maritimus), fitness is dependent on body size through males' abilities to win mates, females' abilities to provide for their young and all bears' abilities to survive increasingly longer fasting periods caused by climate change. In the Western Hudson Bay subpopulation (near Churchill, Manitoba, Canada), polar bears have declined in body size and condition, but nothing is known about the genetic underpinnings of body size variation, which may be subject to natural selection. Here, we combine a 4449-individual pedigree and an array of 5,433 single nucleotide polymorphisms (SNPs) to provide the first quantitative genetic study of polar bears. We used animal models to estimate heritability (h²) among polar bears handled between 1966 and 2011, obtaining h² estimates of 0.34-0.48 for strictly skeletal traits and 0.18 for axillary girth (which is also dependent on fatness). We genotyped 859 individuals with the SNP array to test for marker-trait association and combined p-values over genetic pathways using gene-set analysis. Variation in all traits appeared to be polygenic, but we detected one region of moderately large effect size in body length near a putative noncoding RNA in an unannotated region of the genome. Gene-set analysis suggested that variation in body length was associated with genes in the regulatory cascade of cyclin expression, which has previously been associated with body size in mice. A greater understanding of the genetic architecture of body size variation will be valuable in understanding the potential for adaptation in polar bear populations challenged by climate change. © 2018 John Wiley & Sons Ltd.

  10. Genetic spectrum of low density lipoprotein receptor gene variations in South Indian population.

    PubMed

    ArulJothi, K N; Suruthi Abirami, B; Devi, Arikketh

    2018-03-01

    Low density lipoprotein receptor (LDLR) is a membrane-bound receptor maintaining cholesterol homeostasis along with Apolipoprotein B (APOB), Proprotein Convertase Subtilisin/Kexin type 9 (PCSK9) and other genes of lipid metabolism. Any pathogenic variation in these genes alters the function of the receptor and leads to Familial Hypercholesterolemia (FH) and other cardiovascular diseases. This study was aimed at screening the LDLR, APOB and PCSK9 genes in hypercholesterolemic patients to define the genetic spectrum of FH in the Indian population. Familial Hypercholesterolemia patients (n=78) of the South Indian Tamil population, with LDL cholesterol and total cholesterol levels above 4.9 mmol/l and 7.5 mmol/l and a family history of myocardial infarction, were involved. DNA was isolated from blood samples by an organic extraction method, and LDLR, APOB and PCSK9 gene exons were amplified using primers that cover exon-intron boundaries. The amplicons were screened using High Resolution Melt (HRM) analysis and the screened samples were sequenced after purification. This study reports 20 variations in a South Indian population for the first time; 9 of these are novel variations reported here for the first time, and 11 have also been reported in other studies. In silico analysis of all the variations detected in this study was performed to predict their probable effect on the pathogenicity of FH. This study adds 9 novel variations and 11 recurrent variations to the spectrum of LDLR gene mutations in the Indian population. All these variations are reported for the first time in the Indian population. This spectrum of variations differs from the variations in previous Indian reports. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. Interpretation of clinical relevance of X-chromosome copy number variations identified in a large cohort of individuals with cognitive disorders and/or congenital anomalies.

    PubMed

    Willemsen, Marjolein H; de Leeuw, Nicole; de Brouwer, Arjan P M; Pfundt, Rolph; Hehir-Kwa, Jayne Y; Yntema, Helger G; Nillesen, Willy M; de Vries, Bert B A; van Bokhoven, Hans; Kleefstra, Tjitske

    2012-11-01

    Genome-wide array studies are now routinely being used in the evaluation of patients with cognitive disorders (CD) and/or congenital anomalies (CA). Therefore, inevitably each clinician is confronted with the challenging task of the interpretation of copy number variations detected by genome-wide array platforms in a diagnostic setting. Clinical interpretation of autosomal copy number variations is already challenging, but assessment of the clinical relevance of copy number variations of the X-chromosome is even more complex. This study provides an overview of the X-Chromosome copy number variations that we have identified by genome-wide array analysis in a large cohort of 4407 male and female patients. We have made an interpretation of the clinical relevance of each of these copy number variations based on well-defined criteria and previous reports in literature and databases. The prevalence of X-chromosome copy number variations in this cohort was 57/4407 (∼1.3%), of which 15 (0.3%) were interpreted as (likely) pathogenic. Copyright © 2012 Elsevier Masson SAS. All rights reserved.

  12. Standardized method to quantify the variation in voxel value distribution in patient-simulated CBCT data sets.

    PubMed

    Spin-Neto, R; Gotfredsen, E; Wenzel, A

    2015-01-01

    To suggest a standardized method to assess the variation in voxel value distribution in patient-simulated CBCT data sets and the effect of time between exposures (TBE). Additionally, a measurement of reproducibility, the Aarhus measurement of reproducibility (AMORe), is introduced, which could be used for quality assurance purposes. Six CBCT units were tested [Cranex® 3D/CRAN (Soredex Oy, Tuusula, Finland); Scanora® 3D/SCAN (Soredex Oy); NewTom™ 5G/NEW5 (QR srl, Verona, Italy); i-CAT/ICAT (Imaging Sciences International, Hatfield, PA); 3D Accuitomo FPD80/ACCU (Morita, Kyoto, Japan); and NewTom VG/NEWV (QR srl)]. Two sets of volumetric data of a wax-embedded dry human skull (containing a titanium implant) were acquired with each CBCT unit at two sessions on separate days. Each session consisted of 21 exposures: 1 "initial" followed by a 30-min interval (initial data set), 10 acquired with 30-min TBE (data sets 1-10) and 10 acquired with 15-min TBE (data sets 11-20). CBCT data were exported as digital imaging and communications in medicine (DICOM) files and converted to text files containing the x, y and z positions and grey shade for each voxel. Subtractions were performed voxel-by-voxel in two set-ups: (1) between two consecutive data sets and (2) between any subsequent data set and data set 1. The mean grey shade variation for each voxel was calculated for each unit/session. The largest mean grey shade variation was found in subtraction set-up 2 (27-447 shades of grey, depending on the unit). Considering subtraction set-up 1, the highest variation was seen for NEW5, between data set 1 and the initial data set. Discrepancies in voxel value distribution were found by comparing the initial examination of the day with the subsequent examinations. TBE had no predictable effect on the variation of CBCT-derived voxel values. AMORe ranged between 0 and 64.
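
    The voxel-by-voxel subtraction itself is simple to reproduce; the sketch below subtracts two stand-in volumes and reports the mean grey-shade variation per voxel. The array sizes and grey-shade ranges are assumptions, not the exported data of the study.

```python
# Voxel-wise subtraction of two acquisitions of the same phantom.
import numpy as np

rng = np.random.default_rng(3)
data_set_1 = rng.integers(0, 4096, size=(64, 64, 64))                 # grey shades, acquisition 1
data_set_2 = data_set_1 + rng.integers(-30, 31, size=(64, 64, 64))    # acquisition 2 with drift

diff = np.abs(data_set_2 - data_set_1)
print(f"mean grey-shade variation per voxel: {diff.mean():.1f}")
print(f"fraction of voxels that changed: {(diff > 0).mean():.1%}")
```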

  13. A Piecewise Local Partial Least Squares (PLS) Method for the Quantitative Analysis of Plutonium Nitrate Solutions

    DOE PAGES

    Lascola, Robert; O'Rourke, Patrick E.; Kyser, Edward A.

    2017-10-05

    Here, we have developed a piecewise local (PL) partial least squares (PLS) analysis method for total plutonium measurements by absorption spectroscopy in nitric acid-based nuclear material processing streams. Instead of using a single PLS model that covers all expected solution conditions, the method selects one of several local models based on an assessment of solution absorbance, acidity, and Pu oxidation state distribution. The local models match the global model for accuracy against the calibration set, but were observed in several instances to be more robust to variations associated with measurements in the process. The improvements are attributed to the relative parsimony of the local models. Not all of the sources of spectral variation are uniformly present at each part of the calibration range. Thus, the global model is locally overfitting and susceptible to increased variance when presented with new samples. A second set of models quantifies the relative concentrations of Pu(III), (IV), and (VI). Standards containing a mixture of these species were not at equilibrium due to a disproportionation reaction. Therefore, a separate principal component analysis is used to estimate the concentrations of the individual oxidation states in these standards in the absence of independent confirmatory analysis. The PL analysis approach is generalizable to other systems where the analysis of chemically complicated systems can be aided by rational division of the overall range of solution conditions into simpler sub-regions.
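
    A minimal sketch of the piecewise-local idea, assuming synthetic data and a single routing variable: one of several pre-fitted local PLS models is chosen from a coarse assessment of the sample (here an "acidity" value and an invented threshold), and that model makes the prediction.

```python
# Piecewise-local PLS sketch: route each sample to a local calibration model.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(4)
# Two local calibration sets, e.g. low-acid and high-acid solution conditions.
X_low, y_low = rng.normal(size=(60, 300)), rng.normal(size=60)
X_high, y_high = rng.normal(size=(60, 300)), rng.normal(size=60)

local_models = {
    "low_acid": PLSRegression(n_components=5).fit(X_low, y_low),
    "high_acid": PLSRegression(n_components=5).fit(X_high, y_high),
}

def predict_total_pu(spectrum, acidity):
    # Hypothetical helper: the 4.0 threshold is illustrative, not from the paper.
    key = "low_acid" if acidity < 4.0 else "high_acid"
    return float(np.ravel(local_models[key].predict(spectrum.reshape(1, -1)))[0])

print(predict_total_pu(rng.normal(size=300), acidity=2.5))
```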

  14. A Piecewise Local Partial Least Squares (PLS) Method for the Quantitative Analysis of Plutonium Nitrate Solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lascola, Robert; O'Rourke, Patrick E.; Kyser, Edward A.

    Here, we have developed a piecewise local (PL) partial least squares (PLS) analysis method for total plutonium measurements by absorption spectroscopy in nitric acid-based nuclear material processing streams. Instead of using a single PLS model that covers all expected solution conditions, the method selects one of several local models based on an assessment of solution absorbance, acidity, and Pu oxidation state distribution. The local models match the global model for accuracy against the calibration set, but were observed in several instances to be more robust to variations associated with measurements in the process. The improvements are attributed to the relative parsimony of the local models. Not all of the sources of spectral variation are uniformly present at each part of the calibration range. Thus, the global model is locally overfitting and susceptible to increased variance when presented with new samples. A second set of models quantifies the relative concentrations of Pu(III), (IV), and (VI). Standards containing a mixture of these species were not at equilibrium due to a disproportionation reaction. Therefore, a separate principal component analysis is used to estimate the concentrations of the individual oxidation states in these standards in the absence of independent confirmatory analysis. The PL analysis approach is generalizable to other systems where the analysis of chemically complicated systems can be aided by rational division of the overall range of solution conditions into simpler sub-regions.

  15. A unified classifier for robust face recognition based on combining multiple subspace algorithms

    NASA Astrophysics Data System (ADS)

    Ijaz Bajwa, Usama; Ahmad Taj, Imtiaz; Waqas Anwar, Muhammad

    2012-10-01

    Face recognition, being the fastest growing biometric technology, has expanded manifold in the last few years. Various new algorithms and commercial systems have been proposed and developed. However, none of the proposed or developed algorithms is a complete solution, because an algorithm may work very well on one set of images with, say, illumination changes, but may not work properly on another set of image variations, such as expression variations. This study is motivated by the fact that no single classifier can claim to show generally better performance against all facial image variations. To overcome this shortcoming and achieve generality, combining several classifiers using various strategies has been studied extensively, also incorporating the question of the suitability of any classifier for this task. The study is based on the outcome of a comprehensive comparative analysis conducted on a combination of six subspace extraction algorithms and four distance metrics on three facial databases. The analysis leads to the selection of the most suitable classifiers, each of which performs better on one task or the other. These classifiers are then combined into an ensemble classifier by two different strategies of weighted sum and re-ranking. The results of the ensemble classifier show that these strategies can be effectively used to construct a single classifier that can successfully handle varying facial image conditions of illumination, aging and facial expressions.
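
    The weighted-sum fusion strategy mentioned above can be sketched as follows; the distance scores, the min-max normalisation and the weights are invented for illustration and are not the values used in the study.

```python
# Weighted-sum fusion of distance scores from several subspace classifiers.
import numpy as np

def min_max(scores):
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min())

# Distances from one probe image to 4 gallery subjects, per classifier.
scores_pca = [0.80, 0.35, 0.90, 0.60]
scores_lda = [0.70, 0.20, 0.95, 0.75]
scores_ica = [0.65, 0.40, 0.85, 0.55]

weights = [0.3, 0.5, 0.2]                        # assumed classifier weights
fused = sum(w * min_max(s) for w, s in zip(weights, (scores_pca, scores_lda, scores_ica)))
print("best match (smallest fused distance):", int(np.argmin(fused)))
```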

  16. Facial patterns in a tropical social wasp correlate with colony membership

    NASA Astrophysics Data System (ADS)

    Baracchi, David; Turillazzi, Stefano; Chittka, Lars

    2016-10-01

    Social insects excel in discriminating nestmates from intruders, typically relying on colony odours. Remarkably, some wasp species achieve such discrimination using visual information. However, while it is universally accepted that odours mediate a group level recognition, the ability to recognise colony members visually has been considered possible only via individual recognition by which wasps discriminate 'friends' and 'foes'. Using geometric morphometric analysis, which is a technique based on a rigorous statistical theory of shape allowing quantitative multivariate analyses on structure shapes, we first quantified facial marking variation of Liostenogaster flavolineata wasps. We then compared this facial variation with that of chemical profiles (generated by cuticular hydrocarbons) within and between colonies. Principal component analysis and discriminant analysis applied to sets of variables containing pure shape information showed that despite appreciable intra-colony variation, the faces of females belonging to the same colony resemble one another more than those of outsiders. This colony-specific variation in facial patterns was on a par with that observed for odours. While the occurrence of face discrimination at the colony level remains to be tested by behavioural experiments, overall our results suggest that, in this species, wasp faces display adequate information that might be potentially perceived and used by wasps for colony level recognition.
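
    The shape-analysis pipeline can be sketched, under stated assumptions, as Procrustes alignment of landmark configurations followed by PCA and a linear discriminant analysis by colony; the random landmarks, colony labels and the use of a single reference shape (rather than a full generalized Procrustes analysis) are simplifications for illustration.

```python
# Landmark shape analysis sketch: Procrustes alignment, PCA, LDA by colony.
import numpy as np
from scipy.spatial import procrustes
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(5)
n_wasps, n_landmarks = 60, 12
faces = rng.normal(size=(n_wasps, n_landmarks, 2))      # 2-D facial landmarks
colony = rng.integers(0, 3, size=n_wasps)               # 3 colonies

reference = faces[0]
aligned = []
for shape in faces:
    _, fitted, _ = procrustes(reference, shape)          # remove position/scale/rotation
    aligned.append(fitted.ravel())
aligned = np.array(aligned)

shape_pcs = PCA(n_components=5).fit_transform(aligned)   # pure shape variables
lda = LinearDiscriminantAnalysis().fit(shape_pcs, colony)
print(f"resubstitution accuracy on synthetic data: {lda.score(shape_pcs, colony):.2f}")
```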

  17. Organization and variation analysis of 5S rDNA in gynogenetic offspring of Carassius auratus red var. (♀) × Megalobrama amblycephala (♂).

    PubMed

    Qin, QinBo; Wang, Juan; Wang, YuDe; Liu, Yun; Liu, ShaoJun

    2015-03-13

    Offspring with 100 chromosomes (abbreviated as GRCC) have been obtained in the first generation of Carassius auratus red var. (abbreviated as RCC, 2n = 100) (♀) × Megalobrama amblycephala (abbreviated as BSB, 2n = 48) (♂), in which both females and unexpected males are found. Chromosomal and karyotypic analyses of GRCC have been reported and a gynogenetic origin has been suggested, but direct genetic evidence has been lacking. Fluorescence in situ hybridization with species-specific centromere probes directly proves that GRCC possess two sets of RCC-derived chromosomes. Sequence analysis of the coding region (5S) and the adjacent nontranscribed spacer (abbreviated as NTS) reveals that the three types of 5S rDNA class (class I, class II and class III) in GRCC are completely inherited from their female parent (RCC) and show obvious base variations and insertions-deletions. Fluorescence in situ hybridization with the entire 5S rDNA probe reveals obvious variation in chromosomal loci (class I and class II) in GRCC. This paper provides direct genetic evidence that GRCC is of gynogenetic origin. In addition, our results also reveal that distant hybridization inducing gynogenesis can lead to obvious variation in the sequence and in some chromosomal loci of the 5S rDNA gene.

  18. A combination of PhP typing and β-d-glucuronidase gene sequence variation analysis for differentiation of Escherichia coli from humans and animals.

    PubMed

    Masters, N; Christie, M; Katouli, M; Stratton, H

    2015-06-01

    We investigated the usefulness of β-d-glucuronidase gene variation in Escherichia coli as a microbial source tracking tool, using a novel algorithm for comparison of sequences from a prescreened set of host-specific isolates identified with a high-resolution PhP typing method. A total of 65 common biochemical phenotypes belonging to 318 E. coli strains isolated from humans and domestic and wild animals were analysed for nucleotide variations at 10 loci along a 518 bp fragment of the 1812 bp β-d-glucuronidase gene. Neighbour-joining analysis of the loci variations revealed that 86 (76.8%) human isolates and 91.2% of animal isolates were correctly identified. Pairwise hierarchical clustering improved assignment: 92 (82.1%) human and 204 (99%) animal strains were assigned to their respective clusters. Our data show that initial typing of isolates and selection of common types from different hosts prior to analysis of the β-d-glucuronidase gene sequence improves source identification. We also concluded that numerical profiling of the nucleotide variations can be used as a valuable approach to differentiate human from animal E. coli. This study signifies the usefulness of the β-d-glucuronidase gene as a marker for differentiating human faecal pollution from animal sources.

  19. Benchmark Data Set for Wheat Growth Models: Field Experiments and AgMIP Multi-Model Simulations.

    NASA Technical Reports Server (NTRS)

    Asseng, S.; Ewert, F.; Martre, P.; Rosenzweig, C.; Jones, J. W.; Hatfield, J. L.; Ruane, A. C.; Boote, K. J.; Thorburn, P.J.; Rotter, R. P.

    2015-01-01

    The data set includes a current representative management treatment from detailed, quality-tested sentinel field experiments with wheat from four contrasting environments including Australia, The Netherlands, India and Argentina. Measurements include local daily climate data (solar radiation, maximum and minimum temperature, precipitation, surface wind, dew point temperature, relative humidity, and vapor pressure), soil characteristics, frequent growth, nitrogen in crop and soil, crop and soil water and yield components. Simulations include results from 27 wheat models and a sensitivity analysis with 26 models and 30 years (1981-2010) for each location, for elevated atmospheric CO2 and temperature changes, a heat stress sensitivity analysis at anthesis, and a sensitivity analysis with soil and crop management variations and a Global Climate Model end-century scenario.

  20. Probabilistic Common Spatial Patterns for Multichannel EEG Analysis

    PubMed Central

    Chen, Zhe; Gao, Xiaorong; Li, Yuanqing; Brown, Emery N.; Gao, Shangkai

    2015-01-01

    Common spatial patterns (CSP) is a well-known spatial filtering algorithm for multichannel electroencephalogram (EEG) analysis. In this paper, we cast the CSP algorithm in a probabilistic modeling setting. Specifically, probabilistic CSP (P-CSP) is proposed as a generic EEG spatio-temporal modeling framework that subsumes the CSP and regularized CSP algorithms. The proposed framework enables us to resolve the overfitting issue of CSP in a principled manner. We derive statistical inference algorithms that can alleviate the issue of local optima. In particular, an efficient algorithm based on eigendecomposition is developed for maximum a posteriori (MAP) estimation in the case of isotropic noise. For more general cases, a variational algorithm is developed for group-wise sparse Bayesian learning for the P-CSP model and for automatically determining the model size. The two proposed algorithms are validated on a simulated data set. Their practical efficacy is also demonstrated by successful applications to single-trial classifications of three motor imagery EEG data sets and by the spatio-temporal pattern analysis of one EEG data set recorded in a Stroop color naming task. PMID:26005228
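
    For context, the classical (non-probabilistic) CSP computation that P-CSP generalises can be sketched as a generalised eigendecomposition of the two class covariance matrices; the simulated epochs, channel count and the choice of two filters per class below are illustrative assumptions.

```python
# Classical CSP sketch: spatial filters from a generalised eigendecomposition.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(6)
n_channels, n_samples = 8, 500
epochs_a = rng.normal(size=(30, n_channels, n_samples))   # class A trials
epochs_b = rng.normal(size=(30, n_channels, n_samples))   # class B trials

def mean_cov(epochs):
    # Trace-normalised spatial covariance, averaged over trials.
    covs = [e @ e.T / np.trace(e @ e.T) for e in epochs]
    return np.mean(covs, axis=0)

Sa, Sb = mean_cov(epochs_a), mean_cov(epochs_b)
# Solve Sa w = lambda (Sa + Sb) w; extreme eigenvalues give the CSP filters.
eigvals, eigvecs = eigh(Sa, Sa + Sb)
csp_filters = np.hstack([eigvecs[:, :2], eigvecs[:, -2:]])  # 2 filters per class
print(csp_filters.shape)   # (8, 4): one spatial filter per column
```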

  1. Automatic alignment of individual peaks in large high-resolution spectral data sets

    NASA Astrophysics Data System (ADS)

    Stoyanova, Radka; Nicholls, Andrew W.; Nicholson, Jeremy K.; Lindon, John C.; Brown, Truman R.

    2004-10-01

    Pattern recognition techniques are effective tools for reducing the information contained in large spectral data sets to a much smaller number of significant features which can then be used to make interpretations about the chemical or biochemical system under study. Often the effectiveness of such approaches is impeded by experimental and instrument induced variations in the position, phase, and line width of the spectral peaks. Although characterizing the cause and magnitude of these fluctuations could be important in its own right (pH-induced NMR chemical shift changes, for example) in general they obscure the process of pattern discovery. One major area of application is the use of large databases of 1H NMR spectra of biofluids such as urine for investigating perturbations in metabolic profiles caused by drugs or disease, a process now termed metabonomics. Frequency shifts of individual peaks are the dominant source of such unwanted variations in this type of data. In this paper, an automatic procedure for aligning the individual peaks in the data set is described and evaluated. The proposed method will be vital for the efficient and automatic analysis of large metabonomic data sets and should also be applicable to other types of data.
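
    One simple way to realise such an alignment, sketched below under stated assumptions, is to shift each peak segment within a small window so that its cross-correlation with a reference spectrum is maximised; the synthetic Gaussian peaks, the circular shift and the window size are illustrative simplifications, not the paper's algorithm.

```python
# Align one peak segment to a reference by maximising cross-correlation.
import numpy as np

def align_segment(segment, reference, max_shift=30):
    """Return the segment shifted (circularly) to best match the reference."""
    best_shift, best_corr = 0, -np.inf
    for shift in range(-max_shift, max_shift + 1):
        corr = float(np.dot(np.roll(segment, shift), reference))
        if corr > best_corr:
            best_shift, best_corr = shift, corr
    return np.roll(segment, best_shift), best_shift

x = np.linspace(0, 10, 1000)
reference = np.exp(-(x - 5.0) ** 2 / 0.01)        # reference peak at 5.0
shifted_peak = np.exp(-(x - 5.2) ** 2 / 0.01)     # same peak, drifted to 5.2
aligned, shift = align_segment(shifted_peak, reference)
print(f"applied shift of {shift} points")
```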

  2. The use of biomarkers to describe plasma-, red cell-, and blood volume from a simple blood test.

    PubMed

    Lobigs, Louisa Margit; Sottas, Pierre-Edouard; Bourdon, Pitre Collier; Nikolovski, Zoran; El-Gingo, Mohamed; Varamenti, Evdokia; Peeling, Peter; Dawson, Brian; Schumacher, Yorck Olaf

    2017-01-01

    Plasma volume and red cell mass are key health markers used to monitor numerous disease states, such as heart failure, kidney disease, or sepsis. Nevertheless, there is currently no practically applicable method to easily measure absolute plasma or red cell volumes in a clinical setting. Here, a novel marker for plasma volume and red cell mass was developed through analysis of the observed variability caused by plasma volume shifts in common biochemical measures, selected based on their propensity to present with low variations over time. Once a month for 6 months, serum and whole blood samples were collected from 33 active males. Concurrently, the CO-rebreathing method was applied to determine target levels of hemoglobin mass (HbM) and blood volumes. The variability of 18 common chemistry markers and 27 Full Blood Count variables was investigated and matched to the observed plasma volume variation. After the removal of between-subject variations using a Bayesian model, multivariate analysis identified two sets of 8 and 15 biomarkers explaining 68% and 69% of plasma volume variance, respectively. The final multiparametric model contains a weighting function to allow for isolated abnormalities in single biomarkers. This proof-of-concept investigation describes a novel approach to estimate absolute vascular volumes with a simple blood test. Despite the physiological instability of critically ill patients, it is hypothesized that the model, with its multiparametric approach and weighting function, maintains the capacity to describe vascular volumes. This model has potential to transform volume management in clinical settings. Am. J. Hematol. 92:62-67, 2017. © 2016 Wiley Periodicals, Inc.

  3. The Costs of Delivering Integrated HIV and Sexual Reproductive Health Services in Limited Resource Settings.

    PubMed

    Obure, Carol Dayo; Sweeney, Sedona; Darsamo, Vanessa; Michaels-Igbokwe, Christine; Guinness, Lorna; Terris-Prestholt, Fern; Muketo, Esther; Nhlabatsi, Zelda; Warren, Charlotte E; Mayhew, Susannah; Watts, Charlotte; Vassall, Anna

    2015-01-01

    To present evidence on the total costs and unit costs of delivering six integrated sexual reproductive health and HIV services in a high and a medium HIV prevalence setting, in order to support policy makers and planners scaling up these essential services. A retrospective facility-based costing study conducted in 40 non-government organization and public health facilities in Kenya and Swaziland. Economic and financial costs were collected retrospectively for the year 2010/11 from each study site, with the aim of estimating the cost per visit of six integrated HIV and SRH services. A full cost analysis using a combination of bottom-up and step-down costing methods was conducted from the health provider's perspective. The main unit of analysis is the economic unit cost per visit for each service. Costs are converted to 2013 International dollars. The mean cost per visit for the HIV/SRH services ranged from $Int 14.23 (PNC visit) to $Int 74.21 (HIV treatment visit). We found considerable variation in the unit costs per visit across settings, with family planning services exhibiting the least variation ($Int 6.71-52.24) and STI treatment and HIV treatment visits exhibiting the highest variation in unit cost, ranging from $Int 5.44 to 281.85 and from $Int 0.83 to 314.95, respectively. Unit costs of visits were driven by fixed costs, while variability in visit costs across facilities was explained mainly by the technology used and service maturity. For all services, the variability in unit costs and cost components suggests that potential exists to reduce costs through better use of both human and capital resources, despite the high proportion of expenditure on drugs and medical supplies. Further work is required to explore the key drivers of efficiency and interventions that may facilitate efficiency improvements.

  4. The central star candidate of the planetary nebula Sh2-71: photometric and spectroscopic variability

    NASA Astrophysics Data System (ADS)

    Močnik, T.; Lloyd, M.; Pollacco, D.; Street, R. A.

    2015-07-01

    We present the analysis of several newly obtained and archived photometric and spectroscopic data sets of the intriguing and yet poorly understood 13.5 mag central star candidate of the bipolar planetary nebula Sh2-71. Photometric observations confirmed the previously determined quasi-sinusoidal light curve with a period of 68 d and also indicated periodic sharp brightness dips, possibly eclipses, with a period of 17.2 d. In addition, the comparison between U and V light curves revealed that the 68 d brightness variations are accompanied by a variable reddening effect of ΔE(U - V) = 0.38. Spectroscopic data sets demonstrated pronounced variations in the spectral profiles of Balmer, helium and singly ionized metal lines and indicated that these variations occur on a time-scale of a few days. The most accurate verification to date revealed that the spectral variability is not correlated with the 68 d brightness variations. The mean radial velocity of the observed star was measured to be ~26 km s^-1 with an amplitude of ±40 km s^-1. The spectral type was determined to be B8V through spectral comparison with synthetic and standard spectra. The newly proposed model for the central star candidate is a Be binary with a misaligned precessing disc.

  5. Visualisation of variable binding pockets on protein surfaces by probabilistic analysis of related structure sets.

    PubMed

    Ashford, Paul; Moss, David S; Alex, Alexander; Yeap, Siew K; Povia, Alice; Nobeli, Irene; Williams, Mark A

    2012-03-14

    Protein structures provide a valuable resource for rational drug design. For a protein with no known ligand, computational tools can predict surface pockets that are of suitable size and shape to accommodate a complementary small-molecule drug. However, pocket prediction against single static structures may miss features of pockets that arise from proteins' dynamic behaviour. In particular, ligand-binding conformations can be observed as transiently populated states of the apo protein, so it is possible to gain insight into ligand-bound forms by considering conformational variation in apo proteins. This variation can be explored by considering sets of related structures: computationally generated conformers, solution NMR ensembles, multiple crystal structures, homologues or homology models. It is non-trivial to compare pockets, either from different programs or across sets of structures. For a single structure, difficulties arise in defining particular pocket's boundaries. For a set of conformationally distinct structures the challenge is how to make reasonable comparisons between them given that a perfect structural alignment is not possible. We have developed a computational method, Provar, that provides a consistent representation of predicted binding pockets across sets of related protein structures. The outputs are probabilities that each atom or residue of the protein borders a predicted pocket. These probabilities can be readily visualised on a protein using existing molecular graphics software. We show how Provar simplifies comparison of the outputs of different pocket prediction algorithms, of pockets across multiple simulated conformations and between homologous structures. We demonstrate the benefits of use of multiple structures for protein-ligand and protein-protein interface analysis on a set of complexes and consider three case studies in detail: i) analysis of a kinase superfamily highlights the conserved occurrence of surface pockets at the active and regulatory sites; ii) a simulated ensemble of unliganded Bcl2 structures reveals extensions of a known ligand-binding pocket not apparent in the apo crystal structure; iii) visualisations of interleukin-2 and its homologues highlight conserved pockets at the known receptor interfaces and regions whose conformation is known to change on inhibitor binding. Through post-processing of the output of a variety of pocket prediction software, Provar provides a flexible approach to the analysis and visualization of the persistence or variability of pockets in sets of related protein structures.
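
    The per-atom or per-residue probability that Provar reports can be sketched very simply once pocket predictions are available for every structure in the set: it is the fraction of structures in which that residue borders a predicted pocket. The boolean data below are random placeholders; running a pocket predictor is assumed to have happened upstream.

```python
# Per-residue pocket probability across a set of related structures.
import numpy as np

rng = np.random.default_rng(7)
n_structures, n_residues = 50, 120
# borders_pocket[s, r] is True if residue r lines a predicted pocket in structure s.
borders_pocket = rng.random(size=(n_structures, n_residues)) < 0.2

pocket_probability = borders_pocket.mean(axis=0)        # per-residue probability
persistent = np.where(pocket_probability > 0.8)[0]      # pocket-lining in >80% of conformers
print("residues with persistent pockets:", persistent)
```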

  6. A multivariate variational objective analysis-assimilation method. Part 1: Development of the basic model

    NASA Technical Reports Server (NTRS)

    Achtemeier, Gary L.; Ochs, Harry T., III

    1988-01-01

    The variational method of undetermined multipliers is used to derive a multivariate model for objective analysis. The model is intended for the assimilation of 3-D fields of rawinsonde height, temperature and wind, and mean level temperature observed by satellite into a dynamically consistent data set. Relative measurement errors are taken into account. The dynamic equations are the two nonlinear horizontal momentum equations, the hydrostatic equation, and an integrated continuity equation. The model Euler-Lagrange equations are eleven linear and/or nonlinear partial differential and/or algebraic equations. A cyclical solution sequence is described. Other model features include a nonlinear terrain-following vertical coordinate that eliminates truncation error in the pressure gradient terms of the horizontal momentum equations and easily accommodates satellite observed mean layer temperatures in the middle and upper troposphere. A projection of the pressure gradient onto equivalent pressure surfaces removes most of the adverse impacts of the lower coordinate surface on the variational adjustment.
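
    A schematic, in LaTeX, of the kind of constrained weighted least-squares functional such a variational objective analysis minimizes; the choice of analysis variables, weights and constraint set below is an illustrative assumption, not the authors' exact formulation.

```latex
% Hedged sketch: weighted squared departures of the analysis (u, v, T, \Phi)
% from the observations (tildes), with the dynamic constraints G_k = 0
% (momentum, hydrostatic, integrated continuity) enforced through
% undetermined (Lagrange) multipliers \lambda_k.
\begin{equation}
J[u,v,T,\Phi,\lambda] = \int_V \Big[ \alpha_u (u-\tilde{u})^2
   + \alpha_v (v-\tilde{v})^2 + \alpha_T (T-\tilde{T})^2
   + \alpha_\Phi (\Phi-\tilde{\Phi})^2
   + \sum_k \lambda_k\, G_k(u,v,T,\Phi) \Big]\, dV .
\end{equation}
% The weights \alpha are inverse measurement-error variances, so relative
% measurement errors are taken into account; setting the first variation
% \delta J = 0 yields the coupled Euler--Lagrange equations solved cyclically.
```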

  7. Sensitivity Analysis of earth and environmental models: a systematic review to guide scientific advancement

    NASA Astrophysics Data System (ADS)

    Wagener, Thorsten; Pianosi, Francesca

    2016-04-01

    Sensitivity Analysis (SA) investigates how the variation in the output of a numerical model can be attributed to variations of its input factors. SA is increasingly being used in earth and environmental modelling for a variety of purposes, including uncertainty assessment, model calibration and diagnostic evaluation, dominant control analysis and robust decision-making. Here we provide some practical advice regarding best practice in SA and discuss important open questions based on a detailed recent review of the existing body of work in SA. Open questions relate to the consideration of input factor interactions, methods for factor mapping and the formal inclusion of discrete factors in SA (for example for model structure comparison). We will analyse these questions using relevant examples and discuss possible ways forward. We aim to stimulate the discussion within the community of SA developers and users regarding the setting of good practices and the definition of priorities for future research.
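
    A generic, package-free sketch of the variance-based idea behind first-order sensitivity indices (not tied to the review's specific examples): each input factor is binned and the variance of the conditional mean of the output is compared with the total output variance. The toy model and uniform input ranges are assumptions for illustration.

```python
# First-order sensitivity indices S_i ~ Var(E[Y | X_i]) / Var(Y), estimated by
# binning each factor -- a simple stand-in for more formal Sobol' estimators.
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # toy model: x1 dominates, x2 is nearly uninfluential
    return x[:, 0] + 2.0 * x[:, 1] ** 2 + 0.1 * x[:, 2]

n, d, n_bins = 20000, 3, 20
x = rng.uniform(0.0, 1.0, size=(n, d))      # sampled input factors
y = model(x)
var_y = y.var()

for i in range(d):
    edges = np.quantile(x[:, i], np.linspace(0, 1, n_bins + 1))
    idx = np.clip(np.digitize(x[:, i], edges) - 1, 0, n_bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(n_bins)])
    s_i = cond_means.var() / var_y          # variance of the conditional mean
    print(f"factor {i}: first-order index ~ {s_i:.2f}")
```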

  8. Assessment of Uncertainties Related to Seismic Hazard Using Fuzzy Analysis

    NASA Astrophysics Data System (ADS)

    Jorjiashvili, N.; Yokoi, T.; Javakhishvili, Z.

    2013-05-01

    Seismic hazard analysis has become an increasingly important issue over the last few decades. New technologies and improved data availability have helped many scientists understand where and why earthquakes happen, the physics of earthquakes, and the role of uncertainty in seismic hazard analysis. However, how to handle this uncertainty remains a significant problem, and the same lack of information makes it difficult to quantify uncertainty accurately. Attenuation curves are usually obtained statistically, by regression analysis. Statistical and probabilistic analyses show overlapping results for the site coefficients; this overlap occurs not only at the border between two neighboring site classes but also among three or more classes. Although the analysis starts by classifying sites in geological terms, the resulting site coefficients are not cleanly separated by class. In the present study, this problem is addressed using fuzzy set theory: membership functions allow the ambiguity at the border between neighboring classes to be handled explicitly. The approach is applied to southern California and compared with the conventional analysis. Standard deviations describing the variation within each site class, obtained by fuzzy set theory and by the classical approach, are compared. The results show that, when data for hazard assessment are scarce, site classification based on fuzzy set theory yields smaller standard deviations than the classical approach, indicating reduced uncertainty.
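
    A small illustrative sketch of fuzzy site classification with triangular membership functions; the Vs30 breakpoints and class names are hypothetical placeholders, not the values used in the study.

```python
# Instead of assigning a site crisply to one class, triangular membership
# functions give partial membership in neighbouring classes, so coefficients
# near a class boundary can be weighted across classes.
import numpy as np

def triangular(x, left, peak, right):
    """Triangular membership function on [left, right] with apex at `peak`."""
    x = np.asarray(x, dtype=float)
    rising = np.clip((x - left) / (peak - left), 0.0, 1.0)
    falling = np.clip((right - x) / (right - peak), 0.0, 1.0)
    return np.minimum(rising, falling)

# Hypothetical site classes defined on Vs30 (m/s).
classes = {
    "soft soil":  (0.0, 180.0, 360.0),
    "stiff soil": (180.0, 360.0, 760.0),
    "rock":       (360.0, 760.0, 1500.0),
}

vs30 = 340.0  # a site near the soft/stiff boundary
memberships = {name: float(triangular(vs30, *p)) for name, p in classes.items()}
print(memberships)
# A fuzzy-weighted site coefficient would then be the membership-weighted
# average of the per-class coefficients rather than a single crisp value.
```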

  9. Institutional considerations in priority setting: transactions cost perspective on PBMA.

    PubMed

    Jan, S

    2000-10-01

    Programme budgeting and marginal analysis (PBMA) is increasingly being used as a method of priority setting in the health care sector. Despite this, PBMA has, on occasion, been subject to problems in its application which can be seen as 'institutional' in nature. This paper examines the extent to which the institutional setting of PBMA affects the way in which it can be conducted. In particular, a transactions costs perspective is taken to analyse the extent to which variation in such costs can alter the incentives of the individual participants. A number of recommendations for improving the sustainability of such projects are then provided. Following this, the implications which this 'institutional' approach has for the evaluation of PBMA are set out.

  10. Diurnal Variation of Hormonal and Lipid Biomarkers in a Molecular Epidemiology-Like Setting.

    PubMed

    van Kerkhof, Linda W M; Van Dycke, Kirsten C G; Jansen, Eugene H J M; Beekhof, Piet K; van Oostrom, Conny T M; Ruskovska, Tatjana; Velickova, Nevenka; Kamcev, Nikola; Pennings, Jeroen L A; van Steeg, Harry; Rodenburg, Wendy

    2015-01-01

    Many molecular epidemiology studies focusing on high prevalent diseases, such as metabolic disorders and cancer, investigate metabolic and hormonal markers. In general, sampling for these markers can occur at any time-point during the day or after an overnight fast. However, environmental factors, such as light exposure and food intake might affect the levels of these markers, since they provide input for the internal time-keeping system. When diurnal variation is larger than the inter-individual variation, time of day should be taken into account. Importantly, heterogeneity in diurnal variation and disturbance of circadian rhythms among a study population might increasingly occur as a result of our increasing 24/7 economy and related variation in exposure to environmental factors (such as light and food). The aim of the present study was to determine whether a set of often used biomarkers shows diurnal variation in a setting resembling large molecular epidemiology studies, i.e., non-fasted and limited control possibilities for other environmental influences. We show that markers for which diurnal variation is not an issue are adrenocorticotropic hormone, follicle stimulating hormone, estradiol and high-density lipoprotein. For all other tested markers diurnal variation was observed in at least one gender (cholesterol, cortisol, dehydroepiandrosterone sulfate, free fatty acids, low-density lipoprotein, luteinizing hormone, prolactin, progesterone, testosterone, triglycerides, total triiodothyronine and thyroid-stimulating hormone) or could not reliably be detected (human growth hormone). Thus, studies investigating these markers should take diurnal variation into account, for which we provide some options. Furthermore, our study indicates the need for investigating diurnal variation (in literature or experimentally) before setting up studies measuring markers in routine and controlled settings, especially since time-of-day likely matters for many more markers than the ones investigated in the present study.

  11. Multi-scale clustering of functional data with application to hydraulic gradients in wetlands

    USGS Publications Warehouse

    Greenwood, Mark C.; Sojda, Richard S.; Sharp, Julia L.; Peck, Rory G.; Rosenberry, Donald O.

    2011-01-01

    A new set of methods is developed to perform cluster analysis of functions, motivated by a data set consisting of hydraulic gradients at several locations distributed across a wetland complex. The methods build on previous work on clustering of functions, such as Tarpey and Kinateder (2003) and Hitchcock et al. (2007), but explore functions generated from an additive model decomposition (Wood, 2006) of the original time series. Our decomposition targets two aspects of the series, using an adaptive smoother for the trend and a circular spline for the diurnal variation in the series. Different measures for comparing locations are discussed, including a method for efficiently clustering time series that are of different lengths using a functional data approach. The complicated nature of these wetlands is highlighted by the shifting group memberships depending on which scale of variation and year of the study are considered.
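
    A simplified sketch of the decompose-then-cluster idea, with a centred moving average standing in for the paper's adaptive smoother and hour-of-day means standing in for the circular spline; the synthetic hourly series and the choice of two clusters are assumptions for illustration.

```python
# Split each site's series into a slow trend and a diurnal component, then
# cluster the sites on the diurnal shapes.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def decompose(y, hours, window=24 * 7):
    """Return (trend, diurnal) for an hourly series y with hour-of-day labels."""
    trend = np.convolve(y, np.ones(window) / window, mode="same")
    resid = y - trend
    diurnal = np.array([resid[hours == h].mean() for h in range(24)])
    return trend, diurnal

rng = np.random.default_rng(1)
t = np.arange(24 * 60)                       # 60 days of hourly data
hours = t % 24
series = {f"site{i}": np.sin(2 * np.pi * hours / 24) * (1 + 0.3 * i)
          + 0.01 * i * t / 24 + 0.1 * rng.standard_normal(t.size)
          for i in range(6)}

diurnal_features = np.vstack([decompose(y, hours)[1] for y in series.values()])
labels = fcluster(linkage(diurnal_features, method="ward"), t=2, criterion="maxclust")
print(dict(zip(series.keys(), labels)))      # group membership by diurnal shape
```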

  12. [Chromosomal variation in Chironomus plumosus L. (Diptera, Chironomidae) from populations of Bryansk region, Saratov region (Russia), and Gomel region (Belarus)].

    PubMed

    Belyanina, S I

    2015-02-01

    Cytogenetic analysis was performed on samples of Chironomus plumosus L. (Diptera, Chironomidae) taken from waterbodies of various types in Bryansk region (Russia) and Gomel region (Belarus). Karyotypes of specimens taken from stream pools of the Volga were used as reference samples. The populations of Bryansk and Gomel regions (except for a population of Lake Strativa in Starodubskii district, Bryansk region) exhibit broad structural variation, including somatic mosaicism for morphotypes of the salivary gland chromosome set, decondensation of telomeric sites, and the presence of small structural changes, as opposed to populations of Saratov region. As compared with Saratov and Bryansk regions, the Balbiani ring in the B-arm of chromosome I is repressed in populations of Gomel region. It is concluded that the chromosome set of Ch. plumosus in a range of waterbodies of Bryansk and Gomel regions is unstable.

  13. [Technical efficiency in primary care for patients with diabetes].

    PubMed

    Salinas-Martínez, Ana María; Amaya-Alemán, María Agustina; Arteaga-García, Julio César; Núñez-Rocha, Georgina Mayela; Garza-Elizondo, María Eugenia

    2009-01-01

    The aims were to quantify the technical efficiency of diabetes care in family practice settings, characterize the provision of services and health results, and identify potential sources of variation. We used data envelopment analysis with inputs and outputs for diabetes care from 47 family units within a social security agency in Nuevo Leon. Tobit regression models were also used. Seven units were technically efficient in providing services and nine in achieving health goals; only two were efficient in both. The metropolitan location and the total number of consultations favored efficiency in the provision of services regardless of patient attributes, while the age of the doctor favored efficiency in health results. Performance varied within and among family units; some were efficient at providing services while others at accomplishing health goals. Sources of variation also differed. It is necessary to include both outputs in the study of efficiency of diabetes care in family practice settings.
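
    A minimal sketch of an input-oriented, constant-returns (CCR) data envelopment analysis solved as a linear program, of the general kind used in such efficiency studies; the toy input/output matrix and the number of units are hypothetical, not the study's data.

```python
# For each decision-making unit (DMU) o, solve
#   min theta  s.t.  sum_j lam_j x_j <= theta * x_o,  sum_j lam_j y_j >= y_o,  lam_j >= 0;
# a unit is technically efficient when theta = 1.
import numpy as np
from scipy.optimize import linprog

X = np.array([[5.0, 3.0], [8.0, 1.0], [7.0, 4.0], [4.0, 2.0]])  # inputs  (n_dmu, m)
Y = np.array([[2.0],      [3.0],      [1.0],      [2.0]])       # outputs (n_dmu, s)
n, m = X.shape
s = Y.shape[1]

for o in range(n):
    c = np.r_[1.0, np.zeros(n)]                     # variables: [theta, lam_1..lam_n]
    A_in = np.hstack([-X[o][:, None], X.T])         # sum_j lam_j x_ij - theta x_io <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])     # -sum_j lam_j y_rj <= -y_ro
    res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=np.r_[np.zeros(m), -Y[o]],
                  bounds=[(None, None)] + [(0, None)] * n, method="highs")
    print(f"DMU {o}: efficiency = {res.x[0]:.3f}")
```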

  14. Vertical structure of medium-scale traveling ionospheric disturbances

    NASA Astrophysics Data System (ADS)

    Ssessanga, Nicholas; Kim, Yong Ha; Kim, Eunsol

    2015-11-01

    We develop an algorithm of computerized ionospheric tomography (CIT) to infer information on the vertical and horizontal structuring of electron density during nighttime medium-scale traveling ionospheric disturbances (MSTIDs). To facilitate digital CIT we have adopted total electron contents (TEC) from a dense Global Positioning System (GPS) receiver network, GEONET, which contains more than 1000 receivers. A multiplicative algebraic reconstruction technique was utilized with a calibrated IRI-2012 model as an initial solution. The reconstructed F2 peak layer varied in altitude with average peak-to-peak amplitude of ~52 km. In addition, the F2 peak layer anticorrelated with TEC variations. This feature supports a theory in which nighttime MSTID is composed of oscillating electric fields due to conductivity variations. Moreover, reconstructed TEC variations over two stations were reasonably close to variations directly derived from the measured TEC data set. Our tomographic analysis may thus help understand three-dimensional structure of MSTIDs in a quantitative way.
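
    A minimal sketch of a damped multiplicative algebraic reconstruction technique (MART) update of the sort used in GPS-based computerized ionospheric tomography; the toy geometry matrix, relaxation scheme and initial guess are illustrative assumptions (in practice the initial guess would come from the calibrated IRI-2012 run and the geometry from GEONET ray paths).

```python
# Electron density is iteratively rescaled so that modelled slant TEC along
# each ray matches the measured TEC.
import numpy as np

def mart(A, tec, x0, n_iter=20, relax=0.2):
    """A: (n_rays, n_voxels) path-length matrix; tec: measured slant TEC;
    x0: positive initial electron-density guess."""
    x = x0.astype(float).copy()
    for _ in range(n_iter):
        for i in range(A.shape[0]):
            proj = A[i] @ x
            if proj <= 0:
                continue
            # multiplicative update, damped by the relaxation parameter
            x *= (tec[i] / proj) ** (relax * A[i] / A[i].max())
    return x

rng = np.random.default_rng(2)
A = rng.uniform(0, 1, size=(30, 10))        # toy geometry
x_true = rng.uniform(1, 5, size=10)
tec = A @ x_true                            # synthetic, noise-free observations
x_rec = mart(A, tec, x0=np.ones(10))
print(np.round(x_rec, 2), np.round(x_true, 2))
```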

  15. HOW WELL ARE HYDRAULIC CONDUCTIVITY VARIATIONS APPROXIMATED BY ADDITIVE STABLE PROCESSES? (R826171)

    EPA Science Inventory

    Abstract

    Analysis of the higher statistical moments of a hydraulic conductivity (K) and an intrinsic permeability (k) data set leads to the conclusion that the increments of the data and the logs of the data are not governed by Levy-stable or Gaussian dis...

  16. Predicting Body Fat Using Data on the BMI

    ERIC Educational Resources Information Center

    Mills, Terence C.

    2005-01-01

    A data set contained in the "Journal of Statistical Education's" data archive provides a way of exploring regression analysis at a variety of teaching levels. An appropriate functional form for the relationship between percentage body fat and the BMI is shown to be the semi-logarithmic, with variation in the BMI accounting for a little over half…

  17. Using dynamic mode decomposition to extract cyclic behavior in the stock market

    NASA Astrophysics Data System (ADS)

    Hua, Jia-Chen; Roy, Sukesh; McCauley, Joseph L.; Gunaratne, Gemunu H.

    2016-04-01

    The presence of cyclic expansions and contractions in the economy has been known for over a century. The work reported here searches for similar cyclic behavior in stock valuations. The variations are subtle and can only be extracted through analysis of price variations of a large number of stocks. Koopman mode analysis is a natural approach to establish such collective oscillatory behavior. The difficulty is that even non-cyclic and stochastic constituents of a finite data set may be interpreted as a sum of periodic motions. However, deconvolution of these irregular dynamical facets may be expected to be non-robust, i.e., to depend on the specific data set. We propose an approach to differentiate robust and non-robust features in a time series; it is based on identifying robust features with reproducible Koopman modes, i.e., those that persist between distinct sub-groupings of the data. Our analysis of stock data discovered four reproducible modes, one of which has a period close to the number of trading days/year. To the best of our knowledge these cycles were not reported previously. It is particularly interesting that the cyclic behaviors persisted through the great recession even though phase relationships between stocks within the modes evolved in the intervening period.
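
    A compact exact-DMD sketch of the kind of Koopman-mode extraction described above; the synthetic "stock" panel simply embeds a known 250-step cycle so that the leading eigenvalue pair recovers it, and the rank and data sizes are arbitrary illustrative choices.

```python
# Dynamic mode decomposition: eigenvalues of the reduced operator give growth
# rates and oscillation periods of collective modes in the snapshot matrix.
import numpy as np

def dmd(X, r=6):
    """Exact DMD of snapshot matrix X (n_features, n_times), truncated to rank r."""
    X1, X2 = X[:, :-1], X[:, 1:]
    U, S, Vh = np.linalg.svd(X1, full_matrices=False)
    U, S, Vh = U[:, :r], S[:r], Vh[:r]
    A_tilde = U.conj().T @ X2 @ Vh.conj().T @ np.diag(1.0 / S)
    eigvals, W = np.linalg.eig(A_tilde)
    modes = X2 @ Vh.conj().T @ np.diag(1.0 / S) @ W
    return eigvals, modes

rng = np.random.default_rng(3)
t = np.arange(2000)                                    # "trading days"
phase = 2 * np.pi * t / 250.0                          # a ~250-day cycle
X = (np.outer(rng.standard_normal(50), np.sin(phase))
     + np.outer(rng.standard_normal(50), np.cos(phase))
     + 0.1 * rng.standard_normal((50, t.size)))

eigvals, _ = dmd(X, r=6)
lead = eigvals[np.argsort(-np.abs(eigvals))[:2]]       # most persistent eigenvalue pair
print(np.round(2 * np.pi / np.abs(np.angle(lead)), 1))  # periods ~ [250., 250.]
```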

  18. Static aeroelastic analysis and tailoring of missile control fins

    NASA Technical Reports Server (NTRS)

    Mcintosh, S. C., Jr.; Dillenius, M. F. E.

    1989-01-01

    A concept for enhancing the design of control fins for supersonic tactical missiles is described. The concept makes use of aeroelastic tailoring to create fin designs (for given planforms) that limit the variations in hinge moments that can occur during maneuvers involving high load factors and high angles of attack. It combines supersonic nonlinear aerodynamic load calculations with finite-element structural modeling, static and dynamic structural analysis, and optimization. The problem definition is illustrated. The fin is at least partly made up of a composite material. The layup is fixed, and the orientations of the material principal axes are allowed to vary; these are the design variables. The objective is the magnitude of the difference between the chordwise location of the center of pressure and its desired location, calculated for a given flight condition. Three types of constraints can be imposed: upper bounds on static displacements for a given set of load conditions, lower bounds on specified natural frequencies, and upper bounds on the critical flutter damping parameter at a given set of flight speeds and altitudes. The idea is to seek designs that reduce variations in hinge moments that would otherwise occur. The block diagram describes the operation of the computer program that accomplishes these tasks. There is an option for a single analysis in addition to the optimization.

  19. Accuracy of stroke volume variation in predicting fluid responsiveness: a systematic review and meta-analysis.

    PubMed

    Zhang, Zhongheng; Lu, Baolong; Sheng, Xiaoyan; Jin, Ni

    2011-12-01

    Stroke volume variation (SVV) appears to be a good predictor of fluid responsiveness in critically ill patients. However, a wide range of its predictive values has been reported in recent years. We therefore undertook a systematic review and meta-analysis of clinical trials that investigated the diagnostic value of SVV in predicting fluid responsiveness. Clinical investigations were identified from several sources, including MEDLINE, EMBASE, WANFANG, and CENTRAL. Original articles investigating the diagnostic value of SVV in predicting fluid responsiveness were considered to be eligible. Participants included critically ill patients in the intensive care unit (ICU) or operating room (OR) who require hemodynamic monitoring. A total of 568 patients from 23 studies were included in our final analysis. Baseline SVV was correlated to fluid responsiveness with a pooled correlation coefficient of 0.718. Across all settings, we found a diagnostic odds ratio of 18.4 for SVV to predict fluid responsiveness at a sensitivity of 0.81 and specificity of 0.80. The SVV was of diagnostic value for fluid responsiveness in OR or ICU patients monitored with the PiCCO or the FloTrac/Vigileo system, and in patients ventilated with tidal volume greater than 8 ml/kg. SVV is of diagnostic value in predicting fluid responsiveness in various settings.

  20. Methods for Genome-Wide Analysis of Gene Expression Changes in Polyploids

    PubMed Central

    Wang, Jianlin; Lee, Jinsuk J.; Tian, Lu; Lee, Hyeon-Se; Chen, Meng; Rao, Sheetal; Wei, Edward N.; Doerge, R. W.; Comai, Luca; Jeffrey Chen, Z.

    2007-01-01

    Polyploidy is an evolutionary innovation, providing extra sets of genetic material for phenotypic variation and adaptation. It is predicted that changes of gene expression by genetic and epigenetic mechanisms are responsible for novel variation in nascent and established polyploids (Liu and Wendel, 2002; Osborn et al., 2003; Pikaard, 2001). Studying gene expression changes in allopolyploids is more complicated than in autopolyploids, because allopolyploids contain more than two sets of genomes originating from divergent, but related, species. Here we describe two methods that are applicable to the genome-wide analysis of gene expression differences resulting from genome duplication in autopolyploids or interactions between homoeologous genomes in allopolyploids. First, we describe an amplified fragment length polymorphism (AFLP)–complementary DNA (cDNA) display method that allows the discrimination of homoeologous loci based on restriction polymorphisms between the progenitors. Second, we describe microarray analyses that can be used to compare gene expression differences between the allopolyploids and respective progenitors using appropriate experimental design and statistical analysis. We demonstrate the utility of these two complementary methods and discuss the pros and cons of using the methods to analyze gene expression changes in autopolyploids and allopolyploids. Furthermore, we describe these methods in general terms to be of wider applicability for comparative gene expression in a variety of evolutionary, genetic, biological, and physiological contexts. PMID:15865985

  1. Dissecting the space-time structure of tree-ring datasets using the partial triadic analysis.

    PubMed

    Rossi, Jean-Pierre; Nardin, Maxime; Godefroid, Martin; Ruiz-Diaz, Manuela; Sergent, Anne-Sophie; Martinez-Meier, Alejandro; Pâques, Luc; Rozenberg, Philippe

    2014-01-01

    Tree-ring datasets are used in a variety of circumstances, including archeology, climatology, forest ecology, and wood technology. These data are based on microdensity profiles and consist of a set of tree-ring descriptors, such as ring width or early/latewood density, measured for a set of individual trees. Because successive rings correspond to successive years, the resulting dataset is a ring variables × trees × time datacube. Multivariate statistical analyses, such as principal component analysis, have been widely used for extracting worthwhile information from ring datasets, but they typically address two-way matrices, such as ring variables × trees or ring variables × time. Here, we explore the potential of the partial triadic analysis (PTA), a multivariate method dedicated to the analysis of three-way datasets, to apprehend the space-time structure of tree-ring datasets. We analyzed a set of 11 tree-ring descriptors measured in 149 georeferenced individuals of European larch (Larix decidua Miller) during the period of 1967-2007. The processing of densitometry profiles led to a set of ring descriptors for each tree and for each year from 1967-2007. The resulting three-way data table was subjected to two distinct analyses in order to explore i) the temporal evolution of spatial structures and ii) the spatial structure of temporal dynamics. We report the presence of a spatial structure common to the different years, highlighting the inter-individual variability of the ring descriptors at the stand scale. We found a temporal trajectory common to the trees that could be separated into a high and low frequency signal, corresponding to inter-annual variations possibly related to defoliation events and a long-term trend possibly related to climate change. We conclude that PTA is a powerful tool to unravel and hierarchize the different sources of variation within tree-ring datasets.
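
    A compact sketch of the partial triadic analysis workflow (interstructure, compromise, then PCA of the compromise), in the spirit of STATIS-type methods; the random years x trees x descriptors datacube and the cosine-based interstructure are stand-ins for the larch data and for the exact PTA conventions.

```python
import numpy as np

rng = np.random.default_rng(4)
n_years, n_trees, n_desc = 10, 30, 5
common = rng.standard_normal((n_trees, n_desc))          # structure shared across years
cube = np.array([common + 0.5 * rng.standard_normal((n_trees, n_desc))
                 for _ in range(n_years)])

# Centre/scale each yearly table, then flatten for the interstructure step.
Z = np.array([(x - x.mean(0)) / x.std(0) for x in cube])
flat = Z.reshape(n_years, -1)
norms = np.linalg.norm(flat, axis=1)
inter = (flat @ flat.T) / np.outer(norms, norms)         # cosine between yearly tables

w, V = np.linalg.eigh(inter)
weights = np.abs(V[:, -1]) / np.abs(V[:, -1]).sum()      # first eigenvector -> weights
compromise = np.tensordot(weights, Z, axes=1)            # weighted mean table

# PCA of the compromise: tree scores on the structure common to the years.
U, S, Vt = np.linalg.svd(compromise - compromise.mean(0), full_matrices=False)
scores = U[:, :2] * S[:2]
print(scores.shape, np.round(weights, 2))
```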

  2. Using T-Z plots as a graphical method to infer lithological variations from growth strata

    NASA Astrophysics Data System (ADS)

    Castelltort, Sébastien; Pochat, Stéphane; Van Den Driessche, Jean

    2004-08-01

    The 'T-Z plot' method consists of plotting the throw of sedimentary horizons across a growth fault versus their depth in the hanging wall. This method has been initially developed for the analysis of growth fault kinematics from seismic data. A brief analytical examination of such plots shows that they can also provide valuable information about the evolution of fault topography. When growth is a continuous process, stages of topography creation (fault scarp) and filling (of the space available in the hanging-wall) are related to non-dynamic (draping, mud-prone pelagic settling) and dynamic (sand-prone, dynamically deposited) sedimentation, respectively. In this case, the T-Z plot analysis becomes a powerful tool to predict major lithological variations on seismic profiles in faulted settings.

  3. Variation in urinary spot sample, 24 h samples, and longer-term average urinary concentrations of short-lived environmental chemicals: implications for exposure assessment and reverse dosimetry

    PubMed Central

    Aylward, Lesa L; Hays, Sean M; Zidek, Angelika

    2017-01-01

    Population biomonitoring data sets such as the Canadian Health Measures Survey (CHMS) and the United States National Health and Nutrition Examination Survey (NHANES) collect and analyze spot urine samples for analysis for biomarkers of exposure to non-persistent chemicals. Estimation of population intakes using such data sets in a risk-assessment context requires consideration of intra- and inter-individual variability to understand the relationship between variation in the biomarker concentrations and variation in the underlying daily and longer-term intakes. Two intensive data sets with a total of 16 individuals with collection and measurement of serial urine voids over multiple days were used to examine these relationships using methyl paraben, triclosan, bisphenol A (BPA), monoethyl phthalate (MEP), and mono-2-ethylhexyl hydroxyl phthalate (MEHHP) as example compounds. Composited 24 h voids were constructed mathematically from the individual collected voids, and concentrations for each 24 h period and average multiday concentrations were calculated for each individual in the data sets. Geometric mean and 95th percentiles were compared to assess the relationship between distributions in spot sample concentrations and the 24 h and multiday collection averages. In these data sets, spot sample concentrations at the 95th percentile were similar to or slightly higher than the 95th percentile of the distribution of all 24 h composite void concentrations, but tended to overestimate the maximum of the multiday concentration averages for most analytes (usually by less than a factor of 2). These observations can assist in the interpretation of population distributions of spot samples for frequently detected analytes with relatively short elimination half-lives. PMID:27703149
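
    A minimal sketch of how 24 h composite concentrations can be constructed mathematically from serial spot voids (a volume-weighted average) and compared with the spot-sample distribution; the data layout, sample sizes and log-normal concentrations are hypothetical stand-ins for the intensive data sets described above.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(5)
rows = []
for subject in range(16):
    for day in range(4):
        for _ in range(int(rng.integers(5, 9))):         # several voids per day
            rows.append({"subject": subject, "day": day,
                         "volume_ml": rng.uniform(80, 400),
                         "conc_ng_ml": rng.lognormal(mean=3.0, sigma=0.8)})
df = pd.DataFrame(rows)

# Volume-weighted 24 h composite concentration per subject-day.
sums = (df.assign(mass=df.volume_ml * df.conc_ng_ml)
          .groupby(["subject", "day"])[["mass", "volume_ml"]].sum())
composite = sums["mass"] / sums["volume_ml"]
multiday = composite.groupby(level="subject").mean()      # multiday average per subject

print("spot P95:    ", round(np.percentile(df.conc_ng_ml, 95), 1))
print("24 h P95:    ", round(np.percentile(composite, 95), 1))
print("multiday max:", round(multiday.max(), 1))
```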

  4. Precessional control of Sr ratios in marginal basins during the Messinian Salinity Crisis?

    NASA Astrophysics Data System (ADS)

    Topper, R. P. M.; Lugli, S.; Manzi, V.; Roveri, M.; Meijer, P. Th.

    2014-05-01

    Based on 87Sr/86Sr data of the Primary Lower Gypsum (PLG) deposits in the Vena del Gesso basin—a marginal basin of the Mediterranean during the Messinian Salinity Crisis—a correlation between 87Sr/86Sr values and precessional forcing has recently been proposed but not yet confirmed. In this study, a box model is set up to represent the Miocene Mediterranean deep basin and a connected marginal basin. Measurements of 87Sr/86Sr in the Vena del Gesso and estimated salinity extrema are used to constrain model results. In an extensive analysis with this model, we assess whether coeval 87Sr/86Sr and salinity fluctuations could have been forced by precession-driven changes in the fresh water budget. A comprehensive set of the controlling parameters is examined to assess the conditions under which precession-driven 87Sr/86Sr variations occur and to determine the most likely setting for PLG formation. Model results show that precession-driven 87Sr/86Sr and salinity fluctuations in marginal basins are produced in settings within a large range of marginal basin sizes, riverine strontium characteristics, amplitudes of precessional fresh water budget variation, and average fresh water budgets of both the marginal and deep basin. PLG deposition most likely occurred when the Atlantic-Mediterranean connection was restricted, and the average fresh water budget in the Mediterranean was significantly less negative than at present day. Considering the large range of settings in which salinities and 87Sr/86Sr fluctuate on a precessional timescale, 87Sr/86Sr variations are expected to be a common feature in PLG deposits in marginal basins of the Mediterranean.

  5. Do key dimensions of seed and seedling functional trait variation capture variation in recruitment probability?

    PubMed

    Larson, Julie E; Sheley, Roger L; Hardegree, Stuart P; Doescher, Paul S; James, Jeremy J

    2016-05-01

    Seedling recruitment is a critical driver of population dynamics and community assembly, yet we know little about functional traits that define different recruitment strategies. For the first time, we examined whether trait relatedness across germination and seedling stages allows the identification of general recruitment strategies which share core functional attributes and also correspond to recruitment outcomes in applied settings. We measured six seed and eight seedling traits (lab- and field-collected, respectively) for 47 varieties of dryland grasses and used principal component analysis (PCA) and cluster analysis to identify major dimensions of trait variation and to isolate trait-based recruitment groups, respectively. PCA highlighted some links between seed and seedling traits, suggesting that relative growth rate and root elongation rate are simultaneously but independently associated with seed mass and initial root mass (first axis), and with leaf dry matter content, specific leaf area, coleoptile tissue density and germination rate (second axis). Third and fourth axes captured separate tradeoffs between hydrothermal time and base water potential for germination, and between specific root length and root mass ratio, respectively. Cluster analysis separated six recruitment types along dimensions of germination and growth rates, but classifications did not correspond to patterns of germination, emergence or recruitment in the field under either of two watering treatments. Thus, while we have begun to identify major threads of functional variation across seed and seedling stages, our understanding of how this variation influences demographic processes-particularly germination and emergence-remains a key gap in functional ecology.
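
    A short sketch of the trait-analysis workflow (standardize, PCA for major axes of trait variation, hierarchical clustering cut into candidate recruitment groups); the random 47 x 14 trait matrix and the choice of six groups mirror the numbers quoted above but are otherwise placeholders.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(6)
traits = rng.standard_normal((47, 14))        # stand-in varieties x traits matrix

Z = StandardScaler().fit_transform(traits)    # standardize traits
pca = PCA(n_components=4).fit(Z)              # major dimensions of trait variation
scores = pca.transform(Z)
print("variance explained:", np.round(pca.explained_variance_ratio_, 2))

# Cut a Ward hierarchical clustering into six candidate recruitment groups.
groups = fcluster(linkage(Z, method="ward"), t=6, criterion="maxclust")
print("group sizes:", np.bincount(groups)[1:])
```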

  6. Global Precipitation Analyses at Time Scales of Monthly to 3-Hourly

    NASA Technical Reports Server (NTRS)

    Adler, Robert F.; Huffman, George; Curtis, Scott; Bolvin, David; Nelkin, Eric; Einaudi, Franco (Technical Monitor)

    2002-01-01

    Global precipitation analysis covering the last few decades and the impact of the new TRMM precipitation observations are discussed. The 20+ year, monthly, globally complete precipitation analysis of the World Climate Research Program's (WCRP/GEWEX) Global Precipitation Climatology Project (GPCP) is used to explore global and regional variations and trends and is compared to the much shorter TRMM (Tropical Rainfall Measuring Mission) tropical data set. The GPCP data set shows no significant trend in precipitation over the twenty years, unlike the positive trend in global surface temperatures over the past century. Regional trends are also analyzed. A trend pattern that is a combination of both El Nino and La Nina precipitation features is evident in the 20-year data set. This pattern is related to an increase with time in the number of combined months of El Nino and La Nina during the 20-year period. Monthly anomalies of precipitation are related to ENSO variations, with clear signals extending into middle and high latitudes of both hemispheres. The GPCP daily, 1 degree latitude-longitude analysis, which is available from January 1997 to the present, is described, along with the evolution of precipitation patterns on this time scale related to El Nino and La Nina. Finally, a TRMM-based merged analysis is described that uses TRMM to calibrate polar-orbit microwave observations from SSM/I and geosynchronous IR observations and merges the various calibrated observations into a final, high-resolution map. This TRMM standard product will be available for the entire TRMM period (January 1998-present). A real-time version of this merged product is being produced and is available at 0.25 degree latitude-longitude resolution over the latitude range from 50 deg. N-50 deg. S. Examples will be shown, including its use in monitoring flood conditions.

  7. Setting up a proper power spectral density (PSD) and autocorrelation analysis for material and process characterization

    NASA Astrophysics Data System (ADS)

    Rutigliani, Vito; Lorusso, Gian Francesco; De Simone, Danilo; Lazzarino, Frederic; Rispens, Gijsbert; Papavieros, George; Gogolides, Evangelos; Constantoudis, Vassilios; Mack, Chris A.

    2018-03-01

    Power spectral density (PSD) analysis is playing an increasingly critical role in the understanding of line-edge roughness (LER) and linewidth roughness (LWR) in a variety of applications across the industry. It is an essential step to obtain an unbiased LWR estimate, as well as an extremely useful tool for process and material characterization. However, the PSD estimate can be affected by both random and systematic artifacts caused by image acquisition and measurement settings, which could irremediably alter its information content. In this paper, we report on the impact of various setting parameters (smoothing image processing filters, pixel size, and SEM noise levels) on the PSD estimate. We also discuss the use of the PSD analysis tool in a variety of cases. Looking beyond the basic roughness estimate, we use PSD and autocorrelation analysis to characterize resist blur [1], as well as low- and high-frequency roughness content, and we apply this technique to guide the EUV material stack selection. Our results clearly indicate that, if properly used, the PSD methodology is a very sensitive tool for investigating material and process variations.
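
    A minimal sketch of PSD and autocorrelation estimation for a measured line edge using Welch's method; the synthetic exponentially correlated edge, pixel size and correlation length are assumptions chosen only to illustrate the processing chain.

```python
import numpy as np
from scipy.signal import welch, detrend

rng = np.random.default_rng(7)
pixel_nm, n = 1.0, 4096

# Synthetic rough edge: exponentially correlated (AR(1)) noise, ~1.2 nm (1 sigma),
# 10 nm correlation length -- a stand-in for a measured SEM edge profile.
xi = 10.0
rho = np.exp(-pixel_nm / xi)
e = np.empty(n)
e[0] = rng.standard_normal()
for i in range(1, n):
    e[i] = rho * e[i - 1] + np.sqrt(1.0 - rho**2) * rng.standard_normal()
edge = detrend(1.2 * e)                               # remove mean/linear trend first

freqs, psd = welch(edge, fs=1.0 / pixel_nm, nperseg=1024)
sigma_from_psd = np.sqrt(np.sum(psd) * (freqs[1] - freqs[0]))   # integrated PSD
acf = np.correlate(edge, edge, mode="full")[n - 1:] / (edge.var() * n)
corr_len = pixel_nm * np.argmax(acf < np.exp(-1.0))   # first 1/e crossing of the ACF

print(f"LWR ~ {3 * sigma_from_psd:.2f} nm (3 sigma), correlation length ~ {corr_len:.0f} nm")
```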

  8. Tree-stem diameter fluctuates with the lunar tides and perhaps with geomagnetic activity.

    PubMed

    Barlow, Peter W; Mikulecký, Miroslav; Střeštík, Jaroslav

    2010-11-01

    Our initial objective has been to examine the suggestion of Zürcher et al. (Nature 392:665–666, 1998) that the naturally occurring variations in stem diameter of two experimental trees of Picea alba were related to near simultaneous variations in the lunisolar tidal acceleration. The relationship was positive: Lunar peaks were roughly synchronous with stem diameter peaks. To extend the investigation of this putative relationship, additional data on stem diameter variations from six other tree species were gathered from published literature. Sixteen sets of data were analysed retrospectively using graphical representations as well as cosinor analysis, statistical cross-correlation and cross-spectral analysis, together with estimated values of the lunisolar tidal acceleration corresponding to the sites, dates and times of collection of the biological data. Positive relationships were revealed between the daily variations of stem diameter and the variations of the lunisolar tidal acceleration. Although this relationship could be mediated by a 24.8-h lunar rhythm, the presence of a solar rhythm of 24.0 h could not be ruled out. Studies of transpiration in two of the observed trees indicated that although this variable was not linked to stem diameter variation, it might also be subject to lunisolar gravitational regulation. In three cases, the geomagnetic Thule index showed a weak but reciprocal relationship with stem diameter variation, as well as a positive relationship with the lunisolar tidal force. In conclusion, it seems that lunar gravity alone could influence stem diameter variation and that, under certain circumstances, additional regulation may come from the geomagnetic flux.

  9. Assessment of interpatient heterogeneity in tumor radiosensitivity for nonsmall cell lung cancer using tumor-volume variation data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chvetsov, Alexei V., E-mail: chvetsov2@gmail.com; Schwartz, Jeffrey L.; Mayr, Nina

    2014-06-15

    Purpose: In our previous work, the authors showed that a distribution of cell surviving fractions S2 in a heterogeneous group of patients could be derived from tumor-volume variation curves during radiotherapy for head and neck cancer. In this research study, the authors show that this algorithm can be applied to other tumors, specifically in nonsmall cell lung cancer. This new application includes larger patient volumes and includes comparison of data sets obtained at independent institutions. Methods: Our analysis was based on two data sets of tumor-volume variation curves for heterogeneous groups of 17 patients treated for nonsmall cell lung cancer with conventional dose fractionation. The data sets were obtained previously at two independent institutions by using megavoltage computed tomography. Statistical distributions of cell surviving fractions S2 and clearance half-lives of lethally damaged cells T(1/2) have been reconstructed in each patient group by using a version of the two-level cell population model of tumor response and a simulated annealing algorithm. The reconstructed statistical distributions of the cell surviving fractions have been compared to the distributions measured using predictive assays in vitro. Results: Nonsmall cell lung cancer presents certain difficulties for modeling surviving fractions using tumor-volume variation curves because of relatively large fractional hypoxic volume, low gradient of tumor-volume response, and possible uncertainties due to breathing motion. Despite these difficulties, cell surviving fractions S2 for nonsmall cell lung cancer derived from tumor-volume variation measured at different institutions have similar probability density functions (PDFs) with mean values of 0.30 and 0.43 and standard deviations of 0.13 and 0.18, respectively. The PDFs for cell surviving fractions S2 reconstructed from tumor volume variation agree with the PDF measured in vitro. Conclusions: The data obtained in this work, when taken together with the data obtained previously for head and neck cancer, suggests that the cell surviving fractions S2 can be reconstructed from the tumor volume variation curves measured during radiotherapy with conventional fractionation. The proposed method can be used for treatment evaluation and adaptation.

  10. Assessment of interpatient heterogeneity in tumor radiosensitivity for nonsmall cell lung cancer using tumor-volume variation data.

    PubMed

    Chvetsov, Alexei V; Yartsev, Slav; Schwartz, Jeffrey L; Mayr, Nina

    2014-06-01

    In our previous work, the authors showed that a distribution of cell surviving fractions S2 in a heterogeneous group of patients could be derived from tumor-volume variation curves during radiotherapy for head and neck cancer. In this research study, the authors show that this algorithm can be applied to other tumors, specifically in nonsmall cell lung cancer. This new application includes larger patient volumes and includes comparison of data sets obtained at independent institutions. Our analysis was based on two data sets of tumor-volume variation curves for heterogeneous groups of 17 patients treated for nonsmall cell lung cancer with conventional dose fractionation. The data sets were obtained previously at two independent institutions by using megavoltage computed tomography. Statistical distributions of cell surviving fractions S2 and clearance half-lives of lethally damaged cells T(1/2) have been reconstructed in each patient group by using a version of the two-level cell population model of tumor response and a simulated annealing algorithm. The reconstructed statistical distributions of the cell surviving fractions have been compared to the distributions measured using predictive assays in vitro. Nonsmall cell lung cancer presents certain difficulties for modeling surviving fractions using tumor-volume variation curves because of relatively large fractional hypoxic volume, low gradient of tumor-volume response, and possible uncertainties due to breathing motion. Despite these difficulties, cell surviving fractions S2 for nonsmall cell lung cancer derived from tumor-volume variation measured at different institutions have similar probability density functions (PDFs) with mean values of 0.30 and 0.43 and standard deviations of 0.13 and 0.18, respectively. The PDFs for cell surviving fractions S2 reconstructed from tumor volume variation agree with the PDF measured in vitro. The data obtained in this work, when taken together with the data obtained previously for head and neck cancer, suggests that the cell surviving fractions S2 can be reconstructed from the tumor volume variation curves measured during radiotherapy with conventional fractionation. The proposed method can be used for treatment evaluation and adaptation.

  11. Robustness of near-infrared calibration models for the prediction of milk constituents during the milking process.

    PubMed

    Melfsen, Andreas; Hartung, Eberhard; Haeussermann, Angelika

    2013-02-01

    The robustness of in-line raw milk analysis with near-infrared spectroscopy (NIRS) was tested with respect to the prediction of the raw milk contents fat, protein and lactose. Near-infrared (NIR) spectra of raw milk (n = 3119) were acquired on three different farms during the milking process of 354 milkings over a period of six months. Calibration models were calculated for: a random data set of each farm (fully random internal calibration); first two thirds of the visits per farm (internal calibration); whole datasets of two of the three farms (external calibration), and combinations of external and internal datasets. Validation was done either on the remaining data set per farm (internal validation) or on data of the remaining farms (external validation). Excellent calibration results were obtained when fully randomised internal calibration sets were used for milk analysis. In this case, RPD values of around ten, five and three for the prediction of fat, protein and lactose content, respectively, were achieved. Farm internal calibrations achieved much poorer prediction results especially for the prediction of protein and lactose with RPD values of around two and one respectively. The prediction accuracy improved when validation was done on spectra of an external farm, mainly due to the higher sample variation in external calibration sets in terms of feeding diets and individual cow effects. The results showed that further improvements were achieved when additional farm information was added to the calibration set. One of the main requirements towards a robust calibration model is the ability to predict milk constituents in unknown future milk samples. The robustness and quality of prediction increases with increasing variation of, e.g., feeding and cow individual milk composition in the calibration model.
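
    A compact sketch of the calibration/validation logic described above: a PLS model is trained on pooled "external" data and judged on a held-out "farm" via RMSEP and RPD (standard deviation of the reference values divided by RMSEP); the synthetic spectra, the bias term and the number of PLS components are illustrative assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(8)

def fake_farm(n, bias=0.0):
    """Hypothetical farm: reference fat (%) plus spectra depending linearly on it."""
    fat = rng.uniform(3.0, 5.5, n) + bias
    spectra = np.outer(fat, np.linspace(0.2, 1.0, 200))
    spectra += 0.05 * rng.standard_normal((n, 200))      # measurement noise
    return spectra, fat

X_cal, y_cal = fake_farm(400)            # pooled "calibration farms"
X_val, y_val = fake_farm(150, bias=0.2)  # held-out "validation farm"

pls = PLSRegression(n_components=8).fit(X_cal, y_cal)
y_hat = pls.predict(X_val).ravel()

rmsep = np.sqrt(np.mean((y_val - y_hat) ** 2))
rpd = y_val.std() / rmsep                # RPD = SD(reference) / RMSEP
print(f"RMSEP = {rmsep:.3f} % fat, RPD = {rpd:.1f}")
```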

  12. Fluorescent signatures for variable DNA sequences

    PubMed Central

    Rice, John E.; Reis, Arthur H.; Rice, Lisa M.; Carver-Brown, Rachel K.; Wangh, Lawrence J.

    2012-01-01

    Life abounds with genetic variations writ in sequences that are often only a few hundred nucleotides long. Rapid detection of these variations for identification of genetic diseases, pathogens and organisms has become the mainstay of molecular science and medicine. This report describes a new, highly informative closed-tube polymerase chain reaction (PCR) strategy for analysis of both known and unknown sequence variations. It combines efficient quantitative amplification of single-stranded DNA targets through LATE-PCR with sets of Lights-On/Lights-Off probes that hybridize to their target sequences over a broad temperature range. Contiguous pairs of Lights-On/Lights-Off probes of the same fluorescent color are used to scan hundreds of nucleotides for the presence of mutations. Sets of probes in different colors can be combined in the same tube to analyze even longer single-stranded targets. Each set of hybridized Lights-On/Lights-Off probes generates a composite fluorescent contour, which is mathematically converted to a sequence-specific fluorescent signature. The versatility and broad utility of this new technology is illustrated in this report by characterization of variant sequences in three different DNA targets: the rpoB gene of Mycobacterium tuberculosis, a sequence in the mitochondrial cytochrome c oxidase subunit 1 gene of nematodes and the V3 hypervariable region of the bacterial 16S ribosomal RNA gene. We anticipate widespread use of these technologies for diagnostics, species identification and basic research. PMID:22879378

  13. To Identify the Important Soil Properties Affecting Dinoseb Adsorption with Statistical Analysis

    PubMed Central

    Guan, Yiqing; Wei, Jianhui; Zhang, Danrong; Zu, Mingjuan; Zhang, Liru

    2013-01-01

    Investigating the influence of soil characteristics on the dinoseb adsorption parameter with different statistical methods is valuable for explicitly determining the extent of these influences. The correlation coefficients and the direct and indirect effects of soil characteristics on the dinoseb adsorption parameter were analyzed through bivariate correlation analysis and path analysis. With stepwise regression analysis, the factors which had little influence on the adsorption parameter were excluded. Results indicate that pH and CEC had a moderate relationship with, and a lower direct effect on, the dinoseb adsorption parameter due to multicollinearity with other soil factors, and that organic carbon and clay contents were the most significant soil factors affecting the dinoseb adsorption process. A regression was thereby set up to explore the relationship between the dinoseb adsorption parameter and these two soil factors: the soil organic carbon and clay contents. About 92% of the variation in the dinoseb sorption coefficient could be attributed to variation in the soil organic carbon and clay contents. PMID:23737715
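
    A small sketch of the final regression step, with the dinoseb sorption coefficient regressed on organic carbon and clay contents and the explained variance reported as R^2; the coefficients and soil ranges are synthetic placeholders, not the study's values.

```python
import numpy as np

rng = np.random.default_rng(9)
n = 40
oc = rng.uniform(0.5, 4.0, n)            # organic carbon (%)
clay = rng.uniform(5.0, 40.0, n)         # clay content (%)
kd = 1.5 * oc + 0.08 * clay + 0.3 * rng.standard_normal(n)   # sorption coefficient

X = np.column_stack([np.ones(n), oc, clay])        # intercept + two predictors
beta, *_ = np.linalg.lstsq(X, kd, rcond=None)
resid = kd - X @ beta
r2 = 1.0 - resid.var() / kd.var()                  # fraction of variance explained
print("coefficients:", np.round(beta, 3), " R^2 =", round(r2, 3))
```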

  14. Rapid analysis of glucose, fructose, sucrose, and maltose in honeys from different geographic regions using fourier transform infrared spectroscopy and multivariate analysis.

    PubMed

    Wang, Jun; Kliks, Michael M; Jun, Soojin; Jackson, Mel; Li, Qing X

    2010-03-01

    Quantitative analysis of glucose, fructose, sucrose, and maltose in honey samples of different geographic origins worldwide using Fourier transform infrared (FTIR) spectroscopy and chemometric methods such as partial least squares (PLS) and principal component regression was studied. The calibration series consisted of 45 standard mixtures, which were made up of glucose, fructose, sucrose, and maltose. There were distinct peak variations of all sugar mixtures in the spectral "fingerprint" region between 1500 and 800 cm(-1). The calibration model was successfully validated using 7 synthetic blend sets of sugars. The PLS 2nd-derivative model showed the highest degree of prediction accuracy, with an R(2) value of 0.999. Along with canonical variate analysis, the calibration model, further validated by high-performance liquid chromatography measurements of commercial honey samples, demonstrates that FTIR can qualitatively and quantitatively determine the presence of glucose, fructose, sucrose, and maltose in honey samples from multiple regions.

  15. Suppression of vapor cell temperature error for spin-exchange-relaxation-free magnetometer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Jixi, E-mail: lujixi@buaa.edu.cn; Qian, Zheng; Fang, Jiancheng

    2015-08-15

    This paper presents a method to reduce the vapor cell temperature error of the spin-exchange-relaxation-free (SERF) magnetometer. The fluctuation of cell temperature can induce variations of the optical rotation angle, resulting in a scale factor error of the SERF magnetometer. In order to suppress this error, we employ the variation of the probe beam absorption to offset the variation of the optical rotation angle. The theoretical discussion of our method indicates that the scale factor error introduced by the fluctuation of the cell temperature could be suppressed by setting the optical depth close to one. In our experiment, we adjust the probe frequency to obtain various optical depths and then measure the variation of scale factor with respect to the corresponding cell temperature changes. Our experimental results show a good agreement with our theoretical analysis. Under our experimental condition, the error has been reduced significantly compared with those when the probe wavelength is adjusted to maximize the probe signal. The cost of this method is the reduction of the scale factor of the magnetometer. However, according to our analysis, it only has minor effect on the sensitivity under proper operating parameters.
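
    A hedged back-of-the-envelope argument, in LaTeX, for why an optical depth near one suppresses the temperature-induced scale-factor error; the proportionalities are simplifying assumptions consistent with, but not taken from, the abstract.

```latex
% The detected rotation signal scales roughly as the product of the optical
% rotation (proportional to the alkali density n, i.e. to the optical depth
% OD \propto n) and the transmitted probe power (\propto e^{-OD}):
\begin{equation}
S \;\propto\; \mathrm{OD}\, e^{-\mathrm{OD}}, \qquad
\frac{dS}{d\,\mathrm{OD}} = (1-\mathrm{OD})\, e^{-\mathrm{OD}} = 0
\;\;\Longrightarrow\;\; \mathrm{OD} = 1 .
\end{equation}
% To first order the scale factor is therefore insensitive to density
% (temperature) fluctuations near OD = 1, at the cost of a smaller absolute
% scale factor, consistent with the trade-off noted in the abstract.
```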

  16. Refining Collective Coordinates and Improving Free Energy Representation in Variational Enhanced Sampling.

    PubMed

    Yang, Yi Isaac; Parrinello, Michele

    2018-06-12

    Collective variables are often used in many enhanced sampling methods, and their choice is a crucial factor in determining sampling efficiency. However, at times, searching for good collective variables can be challenging. In a recent paper, we combined time-lagged independent component analysis with well-tempered metadynamics in order to obtain improved collective variables from metadynamics runs that use lower quality collective variables [McCarty, J.; Parrinello, M. J. Chem. Phys. 2017, 147, 204109]. In this work, we extend these ideas to variationally enhanced sampling. This leads to an efficient scheme that is able to make use of the many advantages of the variational approach. We apply the method to alanine-3 in water. From an alanine-3 variationally enhanced sampling trajectory in which all six dihedral angles are biased, we extract much better collective variables able to describe in exquisite detail the complex free energy surface in a low-dimensional representation. The success of this investigation is helped by a more accurate way of calculating the correlation functions needed in the time-lagged independent component analysis and by the introduction of a new basis set to describe the dihedral angle arrangement.
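
    A minimal sketch of the time-lagged independent component analysis (TICA) step used to refine collective variables: solve the generalized eigenvalue problem C(tau) v = lambda C(0) v for mean-free input coordinates collected along the biased trajectory; the synthetic trajectory with one slow and several fast coordinates, and the lag time, are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import eigh

def tica(traj, lag):
    """traj: (n_frames, n_features). Returns eigenvalues and TICA vectors."""
    x = traj - traj.mean(axis=0)
    x0, xt = x[:-lag], x[lag:]
    c0 = (x0.T @ x0 + xt.T @ xt) / (2 * len(x0))        # instantaneous covariance
    ctau = (x0.T @ xt + xt.T @ x0) / (2 * len(x0))      # symmetrized lagged covariance
    evals, evecs = eigh(ctau, c0)                       # C(tau) v = lambda C(0) v
    order = np.argsort(evals)[::-1]
    return evals[order], evecs[:, order]

rng = np.random.default_rng(10)
slow = np.cumsum(rng.standard_normal(5000)) * 0.01      # one slow degree of freedom
fast = rng.standard_normal((5000, 5))                   # five fast, noisy coordinates
traj = np.column_stack([slow + 0.1 * rng.standard_normal(5000), fast])

evals, evecs = tica(traj, lag=50)
print(np.round(evals, 2))    # the eigenvalue closest to 1 flags the slow collective variable
```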

  17. Analysis of sea ice dynamics

    NASA Technical Reports Server (NTRS)

    Zwally, J.

    1988-01-01

    The ongoing work has established the basis for using multiyear sea ice concentrations from SMMR passive microwave for studies of largescale advection and convergence/divergence of the Arctic sea ice pack. Comparisons were made with numerical model simulations and buoy data showing qualitative agreement on daily to interannual time scales. Analysis of the 7-year SMMR data set shows significant interannual variations in the total area of multiyear ice. The scientific objective is to investigate the dynamics, mass balance, and interannual variability of the Arctic sea ice pack. The research emphasizes the direct application of sea ice parameters derived from passive microwave data (SMMR and SSMI) and collaborative studies using a sea ice dynamics model. The possible causes of observed interannual variations in the multiyear ice area are being examined. The relative effects of variations in the large scale advection and convergence/divergence within the ice pack on a regional and seasonal basis are investigated. The effects of anomalous atmospheric forcings are being examined, including the long-lived effects of synoptic events and monthly variations in the mean geostrophic winds. Estimates to be made will include the amount of new ice production within the ice pack during winter and the amount of ice exported from the pack.

  18. Analysis of the lettuce data from the variable pressure growth chamber at NASA Johnson Space Center: A three-stage nested design model

    NASA Technical Reports Server (NTRS)

    Lee, Tze-San

    1992-01-01

    A model of a three-stage nested experimental design was applied to analyze the lettuce data obtained from the variable pressure growth chamber test bed at NASA-Johnson Space Center. From the results of the analysis of variance and covariance on the data set, it was noted that all of the (uncontrollable) factors, Side, Zone, and Height, and the (controllable) factor PAR (photosynthetically active radiation) had nonhomogeneous effects on the dry weight of the edible biomass of lettuce per pot. Incidentally, the variation attributable to the (uncontrollable) factorial heterogeneities is merely 9 percent and 17 percent of the total variation for the first and second crop tests, respectively. After adjusting for PAR as a covariate in the no-intercept model, the variation attributable to all four factors is 94 percent and 92 percent for the first and the second crop test, respectively. With a no-intercept simple linear regression model, the variation attributable to the factor PAR alone is 92 percent and 90 percent for the first and the second crop test, respectively. Evidently, the (controllable) factor PAR is the dominant one.
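
    A tiny sketch of the no-intercept regression of dry weight on PAR, with the explained share of variation computed as an uncentred R^2 (the appropriate statistic for a model without an intercept); the PAR range, slope and noise level are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(11)
par = rng.uniform(200.0, 600.0, 64)                   # PAR (umol m^-2 s^-1)
dry_wt = 0.012 * par + 0.5 * rng.standard_normal(64)  # edible dry weight per pot (g)

slope = np.sum(par * dry_wt) / np.sum(par ** 2)       # least squares through the origin
resid = dry_wt - slope * par
r2_uncentred = 1.0 - np.sum(resid ** 2) / np.sum(dry_wt ** 2)
print(f"slope = {slope:.4f} g per PAR unit, uncentred R^2 = {r2_uncentred:.2f}")
```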

  19. Accounting for medical variation: the case of prescribing activity in a New Zealand general practice sample.

    PubMed

    Davis, P B; Yee, R L; Millar, J

    1994-08-01

    Medical practice variation is extensive and well documented, particularly for surgical interventions, and raises important questions for health policy. To date, however, little work has been carried out on interpractitioner variation in prescribing activity in the primary care setting. An analytical model of medical variation is derived from the literature and relevant indicators are identified from a study of New Zealand general practice. The data are based on nearly 9,500 completed patient encounter records drawn from over a hundred practitioners in the Waikato region of the North Island, New Zealand. The data set represents a 1% sample of all weekday general practice office encounters in the Hamilton Health District recorded over a 12-month period. Overall levels of prescribing, and the distribution of drug mentions across diagnostic groupings, are broadly comparable to results drawn from international benchmark data. A multivariate analysis is carried out on seven measures of activity in the areas of prescribing volume, script detail, and therapeutic choice. The analysis indicates that patient, practitioner and practice attributes exert little systematic influence on the prescribing task. The principal influences are diagnosis, followed by practitioner identity. The pattern of findings suggests also that the prescribing task cannot be viewed as an undifferentiated activity. It is more usefully considered as a process of decision-making in which 'core' judgements--such as the decision to prescribe and the choice of drug--are highly predictable and strongly influenced by diagnosis, while 'peripheral' features of the task--such as choosing a combination drug or prescribing generically--are less determinate and more subject to the exercise of clinical discretion.(ABSTRACT TRUNCATED AT 250 WORDS)

  20. Common Genetic Variation in Circadian Rhythm Genes and Risk of Epithelial Ovarian Cancer (EOC)

    PubMed Central

    Jim, Heather S.L.; Lin, Hui-Yi; Tyrer, Jonathan P.; Lawrenson, Kate; Dennis, Joe; Chornokur, Ganna; Chen, Zhihua; Chen, Ann Y.; Permuth-Wey, Jennifer; Aben, Katja KH.; Anton-Culver, Hoda; Antonenkova, Natalia; Bruinsma, Fiona; Bandera, Elisa V.; Bean, Yukie T.; Beckmann, Matthias W.; Bisogna, Maria; Bjorge, Line; Bogdanova, Natalia; Brinton, Louise A.; Brooks-Wilson, Angela; Bunker, Clareann H.; Butzow, Ralf; Campbell, Ian G.; Carty, Karen; Chang-Claude, Jenny; Cook, Linda S.; Cramer, Daniel W.; Cunningham, Julie M.; Cybulski, Cezary; Dansonka-Mieszkowska, Agnieszka; du Bois, Andreas; Despierre, Evelyn; Sieh, Weiva; Doherty, Jennifer A.; Dörk, Thilo; Dürst, Matthias; Easton, Douglas F.; Eccles, Diana M.; Edwards, Robert P.; Ekici, Arif B.; Fasching, Peter A.; Fridley, Brooke L.; Gao, Yu-Tang; Gentry-Maharaj, Aleksandra; Giles, Graham G.; Glasspool, Rosalind; Goodman, Marc T.; Gronwald, Jacek; Harter, Philipp; Hasmad, Hanis N.; Hein, Alexander; Heitz, Florian; Hildebrandt, Michelle A.T.; Hillemanns, Peter; Hogdall, Claus K.; Hogdall, Estrid; Hosono, Satoyo; Iversen, Edwin S.; Jakubowska, Anna; Jensen, Allan; Ji, Bu-Tian; Karlan, Beth Y.; Kellar, Melissa; Kiemeney, Lambertus A.; Krakstad, Camilla; Kjaer, Susanne K.; Kupryjanczyk, Jolanta; Vierkant, Robert A.; Lambrechts, Diether; Lambrechts, Sandrina; Le, Nhu D.; Lee, Alice W.; Lele, Shashi; Leminen, Arto; Lester, Jenny; Levine, Douglas A.; Liang, Dong; Lim, Boon Kiong; Lissowska, Jolanta; Lu, Karen; Lubinski, Jan; Lundvall, Lene; Massuger, Leon F.A.G.; Matsuo, Keitaro; McGuire, Valerie; McLaughlin, John R.; McNeish, Ian; Menon, Usha; Milne, Roger L.; Modugno, Francesmary; Thomsen, Lotte; Moysich, Kirsten B.; Ness, Roberta B.; Nevanlinna, Heli; Eilber, Ursula; Odunsi, Kunle; Olson, Sara H.; Orlow, Irene; Orsulic, Sandra; Palmieri Weber, Rachel; Paul, James; Pearce, Celeste L.; Pejovic, Tanja; Pelttari, Liisa M.; Pike, Malcolm C.; Poole, Elizabeth M.; Schernhammer, Eva; Risch, Harvey A.; Rosen, Barry; Rossing, Mary Anne; Rothstein, Joseph H.; Rudolph, Anja; Runnebaum, Ingo B.; Rzepecka, Iwona K.; Salvesen, Helga B.; Schwaab, Ira; Shu, Xiao-Ou; Shvetsov, Yurii B.; Siddiqui, Nadeem; Song, Honglin; Southey, Melissa C.; Spiewankiewicz, Beata; Sucheston-Campbell, Lara; Teo, Soo-Hwang; Terry, Kathryn L.; Thompson, Pamela J.; Tangen, Ingvild L.; Tworoger, Shelley S.; van Altena, Anne M.; Vergote, Ignace; Walsh, Christine S.; Wang-Gohrke, Shan; Wentzensen, Nicolas; Whittemore, Alice S.; Wicklund, Kristine G.; Wilkens, Lynne R.; Wu, Anna H.; Wu, Xifeng; Woo, Yin-Ling; Yang, Hannah; Zheng, Wei; Ziogas, Argyrios; Amankwah, Ernest; Berchuck, Andrew; Schildkraut, Joellen M.; Kelemen, Linda E.; Ramus, Susan J.; Monteiro, Alvaro N.A.; Goode, Ellen L.; Narod, Steven A.; Gayther, Simon A.; Pharoah, Paul D. P.; Sellers, Thomas A.; Phelan, Catherine M.

    2016-01-01

    Disruption in circadian gene expression, whether due to genetic variation or environmental factors (e.g., light at night, shiftwork), is associated with increased incidence of breast, prostate, gastrointestinal and hematologic cancers and gliomas. Circadian genes are highly expressed in the ovaries where they regulate ovulation; circadian disruption is associated with several ovarian cancer risk factors (e.g., endometriosis). However, no studies have examined variation in germline circadian genes as predictors of ovarian cancer risk and invasiveness. The goal of the current study was to examine single nucleotide polymorphisms (SNPs) in circadian genes BMAL1, CRY2, CSNK1E, NPAS2, PER3, REV1 and TIMELESS and downstream transcription factors KLF10 and SENP3 as predictors of risk of epithelial ovarian cancer (EOC) and histopathologic subtypes. The study included a test set of 3,761 EOC cases and 2,722 controls and a validation set of 44,308 samples including 18,174 (10,316 serous) cases and 26,134 controls from 43 studies participating in the Ovarian Cancer Association Consortium (OCAC). Analysis of genotype data from 36 genotyped SNPs and 4600 imputed SNPs indicated that the most significant association was rs117104877 in BMAL1 (OR = 0.79, 95% CI = 0.68–0.90, p = 5.59 × 10−4). Functional analysis revealed a significant downregulation of BMAL1 expression following cMYC overexpression and increasing transformation in ovarian surface epithelial (OSE) cells as well as alternative splicing of BMAL1 exons in ovarian and granulosa cells. These results suggest that variation in circadian genes, and specifically BMAL1, may be associated with risk of ovarian cancer, likely through disruption of hormonal pathways. PMID:26807442

  1. Common Genetic Variation in Circadian Rhythm Genes and Risk of Epithelial Ovarian Cancer (EOC).

    PubMed

    Jim, Heather S L; Lin, Hui-Yi; Tyrer, Jonathan P; Lawrenson, Kate; Dennis, Joe; Chornokur, Ganna; Chen, Zhihua; Chen, Ann Y; Permuth-Wey, Jennifer; Aben, Katja Kh; Anton-Culver, Hoda; Antonenkova, Natalia; Bruinsma, Fiona; Bandera, Elisa V; Bean, Yukie T; Beckmann, Matthias W; Bisogna, Maria; Bjorge, Line; Bogdanova, Natalia; Brinton, Louise A; Brooks-Wilson, Angela; Bunker, Clareann H; Butzow, Ralf; Campbell, Ian G; Carty, Karen; Chang-Claude, Jenny; Cook, Linda S; Cramer, Daniel W; Cunningham, Julie M; Cybulski, Cezary; Dansonka-Mieszkowska, Agnieszka; du Bois, Andreas; Despierre, Evelyn; Sieh, Weiva; Doherty, Jennifer A; Dörk, Thilo; Dürst, Matthias; Easton, Douglas F; Eccles, Diana M; Edwards, Robert P; Ekici, Arif B; Fasching, Peter A; Fridley, Brooke L; Gao, Yu-Tang; Gentry-Maharaj, Aleksandra; Giles, Graham G; Glasspool, Rosalind; Goodman, Marc T; Gronwald, Jacek; Harter, Philipp; Hasmad, Hanis N; Hein, Alexander; Heitz, Florian; Hildebrandt, Michelle A T; Hillemanns, Peter; Hogdall, Claus K; Hogdall, Estrid; Hosono, Satoyo; Iversen, Edwin S; Jakubowska, Anna; Jensen, Allan; Ji, Bu-Tian; Karlan, Beth Y; Kellar, Melissa; Kiemeney, Lambertus A; Krakstad, Camilla; Kjaer, Susanne K; Kupryjanczyk, Jolanta; Vierkant, Robert A; Lambrechts, Diether; Lambrechts, Sandrina; Le, Nhu D; Lee, Alice W; Lele, Shashi; Leminen, Arto; Lester, Jenny; Levine, Douglas A; Liang, Dong; Lim, Boon Kiong; Lissowska, Jolanta; Lu, Karen; Lubinski, Jan; Lundvall, Lene; Massuger, Leon F A G; Matsuo, Keitaro; McGuire, Valerie; McLaughlin, John R; McNeish, Ian; Menon, Usha; Milne, Roger L; Modugno, Francesmary; Thomsen, Lotte; Moysich, Kirsten B; Ness, Roberta B; Nevanlinna, Heli; Eilber, Ursula; Odunsi, Kunle; Olson, Sara H; Orlow, Irene; Orsulic, Sandra; Palmieri Weber, Rachel; Paul, James; Pearce, Celeste L; Pejovic, Tanja; Pelttari, Liisa M; Pike, Malcolm C; Poole, Elizabeth M; Schernhammer, Eva; Risch, Harvey A; Rosen, Barry; Rossing, Mary Anne; Rothstein, Joseph H; Rudolph, Anja; Runnebaum, Ingo B; Rzepecka, Iwona K; Salvesen, Helga B; Schwaab, Ira; Shu, Xiao-Ou; Shvetsov, Yurii B; Siddiqui, Nadeem; Song, Honglin; Southey, Melissa C; Spiewankiewicz, Beata; Sucheston-Campbell, Lara; Teo, Soo-Hwang; Terry, Kathryn L; Thompson, Pamela J; Tangen, Ingvild L; Tworoger, Shelley S; van Altena, Anne M; Vergote, Ignace; Walsh, Christine S; Wang-Gohrke, Shan; Wentzensen, Nicolas; Whittemore, Alice S; Wicklund, Kristine G; Wilkens, Lynne R; Wu, Anna H; Wu, Xifeng; Woo, Yin-Ling; Yang, Hannah; Zheng, Wei; Ziogas, Argyrios; Amankwah, Ernest; Berchuck, Andrew; Schildkraut, Joellen M; Kelemen, Linda E; Ramus, Susan J; Monteiro, Alvaro N A; Goode, Ellen L; Narod, Steven A; Gayther, Simon A; Pharoah, Paul D P; Sellers, Thomas A; Phelan, Catherine M

    Disruption in circadian gene expression, whether due to genetic variation or environmental factors (e.g., light at night, shiftwork), is associated with increased incidence of breast, prostate, gastrointestinal and hematologic cancers and gliomas. Circadian genes are highly expressed in the ovaries where they regulate ovulation; circadian disruption is associated with several ovarian cancer risk factors (e.g., endometriosis). However, no studies have examined variation in germline circadian genes as predictors of ovarian cancer risk and invasiveness. The goal of the current study was to examine single nucleotide polymorphisms (SNPs) in circadian genes BMAL1, CRY2, CSNK1E, NPAS2, PER3, REV1 and TIMELESS and downstream transcription factors KLF10 and SENP3 as predictors of risk of epithelial ovarian cancer (EOC) and histopathologic subtypes. The study included a test set of 3,761 EOC cases and 2,722 controls and a validation set of 44,308 samples including 18,174 (10,316 serous) cases and 26,134 controls from 43 studies participating in the Ovarian Cancer Association Consortium (OCAC). Analysis of genotype data from 36 genotyped SNPs and 4600 imputed SNPs indicated that the most significant association was rs117104877 in BMAL1 (OR = 0.79, 95% CI = 0.68-0.90, p = 5.59 × 10−4). Functional analysis revealed a significant downregulation of BMAL1 expression following cMYC overexpression and increasing transformation in ovarian surface epithelial (OSE) cells as well as alternative splicing of BMAL1 exons in ovarian and granulosa cells. These results suggest that variation in circadian genes, and specifically BMAL1, may be associated with risk of ovarian cancer, likely through disruption of hormonal pathways.
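    As an illustration of the kind of per-SNP case-control test summarized above (an odds ratio with a 95% confidence interval per minor allele), the sketch below fits a logistic regression to simulated genotypes; the sample size, allele frequency, and effect size are hypothetical stand-ins, not OCAC data.

    ```python
    # Illustrative per-SNP case-control association test via logistic regression.
    # All data are simulated; this only mirrors the form of the reported OR/CI/p.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 2000
    genotype = rng.binomial(2, 0.2, size=n)          # 0/1/2 minor-allele counts
    logit_p = -1.0 - 0.24 * genotype                 # per-allele log(OR) ~ log(0.79)
    status = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

    X = sm.add_constant(genotype.astype(float))
    fit = sm.Logit(status, X).fit(disp=False)

    beta, se = fit.params[1], fit.bse[1]
    print(f"OR = {np.exp(beta):.2f} "
          f"(95% CI {np.exp(beta - 1.96 * se):.2f}-{np.exp(beta + 1.96 * se):.2f}), "
          f"p = {fit.pvalues[1]:.2e}")
    ```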

  2. Genetic dissection of ethanol tolerance in the budding yeast Saccharomyces cerevisiae.

    PubMed

    Hu, X H; Wang, M H; Tan, T; Li, J R; Yang, H; Leach, L; Zhang, R M; Luo, Z W

    2007-03-01

    Uncovering genetic control of variation in ethanol tolerance in natural populations of the yeast Saccharomyces cerevisiae is essential for understanding the evolution of fermentation, the dominant lifestyle of the species, and for improving the efficiency of selection for strains with high ethanol tolerance, a character of great economic value for the brewing and biofuel industries. To date, as many as 251 genes have been predicted to be involved in influencing this character. Candidacy of these genes was determined from a tested phenotypic effect following gene knockout, from an induced change in gene function under an ethanol stress condition, or by mutagenesis. This article presents the first genomics approach for dissecting genetic variation in ethanol tolerance between two yeast strains with highly divergent trait phenotypes. We developed a simple but reliable experimental protocol for scoring the phenotype and a set of STR/SNP markers evenly covering the whole genome. We created a mapping population comprising 319 segregants from crossing the parental strains. On the basis of these data sets, we find that the tolerance trait has a high heritability and that additive genetic variance dominates the genetic variation of the trait. Segregation at the five QTL detected explains approximately 50% of the phenotypic variation; in particular, the major QTL mapped on yeast chromosome 9 accounts for a quarter of the phenotypic variation. We integrated the QTL analysis with the predicted candidacy of ethanol resistance genes and found that only a few of these candidates fall in the QTL regions.
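    A minimal sketch of a single-marker QTL scan of the sort described, assuming haploid segregants scored 0/1 for the two parental alleles; the marker count, QTL position, effect size, and phenotype below are simulated and purely illustrative.

    ```python
    # Single-marker QTL scan across simulated haploid segregants: for each marker,
    # convert the marker-phenotype correlation into a LOD score.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n_segregants, n_markers = 319, 200
    genotypes = rng.integers(0, 2, size=(n_segregants, n_markers))  # 0/1 parental alleles

    # Simulate a trait with one true QTL at marker 42 plus noise.
    phenotype = 0.8 * genotypes[:, 42] + rng.normal(0, 1, n_segregants)

    lod = np.empty(n_markers)
    for m in range(n_markers):
        r, _ = stats.pearsonr(genotypes[:, m], phenotype)
        lod[m] = -(n_segregants / 2) * np.log10(1 - r ** 2)  # single-marker LOD score

    peak = int(lod.argmax())
    r_peak, _ = stats.pearsonr(genotypes[:, peak], phenotype)
    print(f"peak marker {peak}: LOD = {lod.max():.1f}, "
          f"fraction of phenotypic variance explained = {r_peak ** 2:.2f}")
    ```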

  3. Web-TCGA: an online platform for integrated analysis of molecular cancer data sets.

    PubMed

    Deng, Mario; Brägelmann, Johannes; Schultze, Joachim L; Perner, Sven

    2016-02-06

    The Cancer Genome Atlas (TCGA) is a pool of molecular data sets publicly accessible and freely available to cancer researchers anywhere around the world. However, widespread use is limited, since advanced knowledge of statistics and statistical software is required. In order to improve accessibility, we created Web-TCGA, a web-based, freely accessible online tool for integrated analysis of molecular cancer data sets provided by TCGA, which can also be run in a private instance. In contrast to already available tools, Web-TCGA utilizes different methods for analysis and visualization of TCGA data, allowing users to generate global molecular profiles across different cancer entities simultaneously. In addition to global molecular profiles, Web-TCGA offers highly detailed gene and tumor entity centric analysis by providing interactive tables and views. As a supplement to other already available tools, such as cBioPortal (Sci Signal 6:pl1, 2013, Cancer Discov 2:401-4, 2012), Web-TCGA offers an analysis service for molecular data sets available at the TCGA that does not require any installation or configuration. Individual processing requests (queries) are generated by the user for mutation, methylation, expression and copy number variation (CNV) analyses. The user can focus analyses on results from single genes and cancer entities or perform a global analysis (multiple cancer entities and genes simultaneously).

  4. Gold nanoparticles for high-throughput genotyping of long-range haplotypes

    NASA Astrophysics Data System (ADS)

    Chen, Peng; Pan, Dun; Fan, Chunhai; Chen, Jianhua; Huang, Ke; Wang, Dongfang; Zhang, Honglu; Li, You; Feng, Guoyin; Liang, Peiji; He, Lin; Shi, Yongyong

    2011-10-01

    Completion of the Human Genome Project and the HapMap Project has led to increasing demands for mapping complex traits in humans to understand the aetiology of diseases. Identifying variations in the DNA sequence, which affect how we develop disease and respond to pathogens and drugs, is important for this purpose, but it is difficult to identify these variations in large sample sets. Here we show that through a combination of capillary sequencing and polymerase chain reaction assisted by gold nanoparticles, it is possible to identify several DNA variations that are associated with age-related macular degeneration and psoriasis on significant regions of human genomic DNA. Our method is accurate and promising for large-scale and high-throughput genetic analysis of susceptibility towards disease and drug resistance.

  5. Extending simulation modeling to activity-based costing for clinical procedures.

    PubMed

    Glick, N D; Blackmore, C C; Zelman, W N

    2000-04-01

    A simulation model was developed to measure costs in an Emergency Department setting for patients presenting with possible cervical-spine injury who needed radiological imaging. Simulation, a tool widely used to account for process variability but typically focused on utilization and throughput analysis, is introduced here as a realistic means to perform an activity-based costing (ABC) analysis, because traditional ABC methods have difficulty coping with process variation in healthcare. Though the study model has a very specific application, it can be generalized to other settings simply by changing the input parameters. In essence, simulation was found to be an accurate and viable means to conduct an ABC analysis; in fact, the output provides more complete information than could be achieved through other conventional analyses, which gives management more leverage with which to negotiate contractual reimbursements.
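    To illustrate the general idea of combining process variability with activity cost rates, the sketch below runs a toy Monte Carlo activity-based costing calculation; the activities, time distributions, and cost rates are hypothetical and are not taken from the study's Emergency Department model.

    ```python
    # Toy Monte Carlo activity-based costing: per-patient activity durations are
    # sampled from distributions and multiplied by hypothetical cost rates, so the
    # cost estimate reflects process variability rather than a single average path.
    import numpy as np

    rng = np.random.default_rng(2)
    cost_per_min = {"triage": 1.5, "physician_exam": 4.0, "imaging": 6.0}  # illustrative rates
    n_patients = 10_000

    triage = rng.lognormal(mean=np.log(8), sigma=0.3, size=n_patients)
    exam = rng.lognormal(mean=np.log(15), sigma=0.4, size=n_patients)
    # Only a fraction of patients actually need imaging, a typical source of variation.
    needs_imaging = rng.random(n_patients) < 0.6
    imaging = np.where(needs_imaging, rng.lognormal(np.log(25), 0.5, n_patients), 0.0)

    total_cost = (triage * cost_per_min["triage"]
                  + exam * cost_per_min["physician_exam"]
                  + imaging * cost_per_min["imaging"])
    print(f"mean cost: {total_cost.mean():.2f}, "
          f"90th percentile: {np.percentile(total_cost, 90):.2f}")
    ```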

  6. Tools for quantifying isotopic niche space and dietary variation at the individual and population level.

    USGS Publications Warehouse

    Newsome, Seth D.; Yeakel, Justin D.; Wheatley, Patrick V.; Tinker, M. Tim

    2012-01-01

    Ecologists are increasingly using stable isotope analysis to inform questions about variation in resource and habitat use from the individual to community level. In this study we investigate data sets from 2 California sea otter (Enhydra lutris nereis) populations to illustrate the advantages and potential pitfalls of applying various statistical and quantitative approaches to isotopic data. We have subdivided these tools, or metrics, into 3 categories: IsoSpace metrics, stable isotope mixing models, and DietSpace metrics. IsoSpace metrics are used to quantify the spatial attributes of isotopic data that are typically presented in bivariate (e.g., δ13C versus δ15N) 2-dimensional space. We review IsoSpace metrics currently in use and present a technique by which uncertainty can be included to calculate the convex hull area of consumers or prey, or both. We then apply a Bayesian-based mixing model to quantify the proportion of potential dietary sources to the diet of each sea otter population and compare this to observational foraging data. Finally, we assess individual dietary specialization by comparing a previously published technique, variance components analysis, to 2 novel DietSpace metrics that are based on mixing model output. As the use of stable isotope analysis in ecology continues to grow, the field will need a set of quantitative tools for assessing isotopic variance at the individual to community level. Along with recent advances in Bayesian-based mixing models, we hope that the IsoSpace and DietSpace metrics described here will provide another set of interpretive tools for ecologists.
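    One of the simpler IsoSpace-style metrics mentioned above is the convex hull area of consumers in bivariate isotope space; the sketch below computes it with SciPy on simulated δ13C/δ15N values rather than the sea otter data.

    ```python
    # Convex hull area of consumer isotope values in bivariate (d13C, d15N) space,
    # a basic IsoSpace-style metric; the data here are simulated.
    import numpy as np
    from scipy.spatial import ConvexHull

    rng = np.random.default_rng(3)
    d13c = rng.normal(-14.0, 1.2, size=60)   # per-mil values for 60 individuals
    d15n = rng.normal(12.0, 0.8, size=60)
    points = np.column_stack([d13c, d15n])

    hull = ConvexHull(points)
    # For 2-D input, ConvexHull.volume is the enclosed area and .area is the perimeter.
    print(f"hull area: {hull.volume:.2f} per-mil^2, perimeter: {hull.area:.2f}")
    ```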

  7. Mesh-free based variational level set evolution for breast region segmentation and abnormality detection using mammograms.

    PubMed

    Kashyap, Kanchan L; Bajpai, Manish K; Khanna, Pritee; Giakos, George

    2018-01-01

    Automatic segmentation of abnormal regions is a crucial task in computer-aided detection systems using mammograms. In this work, an automatic abnormality detection algorithm using mammographic images is proposed. In the preprocessing step, a partial differential equation-based variational level set method is used for breast region extraction. The evolution of the level set method is done by applying a mesh-free radial basis function (RBF). The limitation of the mesh-based approach is removed by using the mesh-free RBF method. The evolution of the variational level set function is also done by a mesh-based finite difference method for comparison purposes. Unsharp masking and median filtering are used for mammogram enhancement. Suspicious abnormal regions are segmented by applying fuzzy c-means clustering. Texture features are extracted from the segmented suspicious regions by computing the local binary pattern and the dominant rotated local binary pattern (DRLBP). Finally, suspicious regions are classified as normal or abnormal by means of a support vector machine with linear, multilayer perceptron, radial basis, and polynomial kernel functions. The algorithm is validated on 322 sample mammograms from the Mammographic Image Analysis Society (MIAS) dataset and 500 mammograms from the Digital Database for Screening Mammography (DDSM) dataset. Proficiency of the algorithm is quantified using sensitivity, specificity, and accuracy. The highest sensitivity, specificity, and accuracy of 93.96%, 95.01%, and 94.48%, respectively, are obtained on the MIAS dataset using the DRLBP feature with the RBF kernel function. On the DDSM dataset, the highest sensitivity of 92.31%, specificity of 98.45%, and accuracy of 96.21% are achieved using the DRLBP feature with the RBF kernel function. Copyright © 2017 John Wiley & Sons, Ltd.
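    A rough sketch of the texture-classification stage only, assuming plain uniform LBP histograms and an RBF-kernel SVM on synthetic patches; the level set segmentation, the DRLBP variant, and the MIAS/DDSM data are not reproduced here.

    ```python
    # LBP histogram features fed to an RBF-kernel SVM, using synthetic patches as
    # stand-ins for segmented suspicious regions (not the full published pipeline).
    import numpy as np
    from skimage.feature import local_binary_pattern
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(4)

    def lbp_histogram(patch, p=8, r=1):
        img = (np.clip(patch, 0, 1) * 255).astype(np.uint8)
        codes = local_binary_pattern(img, P=p, R=r, method="uniform")
        hist, _ = np.histogram(codes, bins=p + 2, range=(0, p + 2), density=True)
        return hist

    # Two synthetic texture classes: smooth patches vs. noisier "abnormal" patches.
    smooth = [rng.normal(0.5, 0.02, (64, 64)) for _ in range(40)]
    rough = [rng.normal(0.5, 0.15, (64, 64)) for _ in range(40)]
    X = np.array([lbp_histogram(p) for p in smooth + rough])
    y = np.array([0] * 40 + [1] * 40)

    clf = SVC(kernel="rbf", gamma="scale")
    print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
    ```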

  8. Gene set analysis of purine and pyrimidine antimetabolites cancer therapies.

    PubMed

    Fridley, Brooke L; Batzler, Anthony; Li, Liang; Li, Fang; Matimba, Alice; Jenkins, Gregory D; Ji, Yuan; Wang, Liewei; Weinshilboum, Richard M

    2011-11-01

    Responses to therapies, either with regard to toxicities or efficacy, are expected to involve complex relationships of gene products within the same molecular pathway or functional gene set. Therefore, pathways or gene sets, as opposed to single genes, may better reflect the true underlying biology and may be more appropriate units for analysis of pharmacogenomic studies. Application of such methods to pharmacogenomic studies may enable the detection of more subtle effects of multiple genes in the same pathway that may be missed by assessing each gene individually. A gene set analysis of 3821 gene sets is presented assessing the association between basal messenger RNA expression and drug cytotoxicity using ethnically defined human lymphoblastoid cell lines for two classes of drugs: pyrimidines [gemcitabine (dFdC) and arabinoside] and purines [6-thioguanine and 6-mercaptopurine]. The gene set nucleoside-diphosphatase activity was found to be significantly associated with both dFdC and arabinoside, whereas gene set γ-aminobutyric acid catabolic process was associated with dFdC and 6-thioguanine. These gene sets were significantly associated with the phenotype even after adjusting for multiple testing. In addition, five associated gene sets were found in common between the pyrimidines and two gene sets for the purines (3',5'-cyclic-AMP phosphodiesterase activity and γ-aminobutyric acid catabolic process) with a P value of less than 0.0001. Functional validation was attempted with four genes each in gene sets for thiopurine and pyrimidine antimetabolites. All four genes selected from the pyrimidine gene sets (PSME3, CANT1, ENTPD6, ADRM1) were validated, but only one (PDE4D) was validated for the thiopurine gene sets. In summary, results from the gene set analysis of pyrimidine and purine therapies, used often in the treatment of various cancers, provide novel insight into the relationship between genomic variation and drug response.

  9. Numeric stratigraphic modeling: Testing sequence stratigraphic concepts using high resolution geologic examples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Armentrout, J.M.; Smith-Rouch, L.S.; Bowman, S.A.

    1996-08-01

    Numeric simulations based on integrated data sets enhance our understanding of depositional geometry and facilitate quantification of depositional processes. Numeric values tested against well-constrained geologic data sets can then be used in iterations testing each variable, and in predicting lithofacies distributions under various depositional scenarios using the principles of sequence stratigraphic analysis. The stratigraphic modeling software provides a broad spectrum of techniques for modeling and testing elements of the petroleum system. Using well-constrained geologic examples, variations in depositional geometry and lithofacies distributions between different tectonic settings (passive vs. active margin) and climate regimes (hothouse vs. icehouse) can provide insight to potential source rock and reservoir rock distribution, maturation timing, migration pathways, and trap formation. Two data sets are used to illustrate such variations: both include a seismic reflection profile calibrated by multiple wells. The first is a Pennsylvanian mixed carbonate-siliciclastic system in the Paradox basin, and the second a Pliocene-Pleistocene siliciclastic system in the Gulf of Mexico. Numeric simulations result in geometry and facies distributions consistent with those interpreted using the integrated stratigraphic analysis of the calibrated seismic profiles. An exception occurs in the Gulf of Mexico study where the simulated sediment thickness from 3.8 to 1.6 Ma within an upper slope minibasin was less than that mapped using a regional seismic grid. Regional depositional patterns demonstrate that this extra thickness was probably sourced from out of the plane of the modeled transect, illustrating the necessity for three-dimensional constraints on two-dimensional modeling.

  10. The influence of the microscope lamp filament colour temperature on the process of digital images of histological slides acquisition standardization.

    PubMed

    Korzynska, Anna; Roszkowiak, Lukasz; Pijanowska, Dorota; Kozlowski, Wojciech; Markiewicz, Tomasz

    2014-01-01

    The aim of this study is to compare digital images of tissue biopsies captured with an optical microscope using the bright-field technique under various light conditions. The range of colour variation in tissue samples immunohistochemically stained with 3,3'-diaminobenzidine and haematoxylin is immense and arises from various sources. One of them is an inadequate setting of the camera's white balance relative to the microscope's light colour temperature. Although this type of error can easily be handled at the image acquisition stage, it can also be eliminated afterwards with colour adjustment algorithms. The dependence of colour variation on the microscope's light temperature and the camera settings is examined as introductory research for the process of automatic colour standardization. Six fields of view with empty space among the tissue samples were selected for analysis. Each field of view was acquired 225 times with various microscope light temperatures and camera white balance settings. Fourteen randomly chosen images were corrected and compared with the reference image using the following methods: Mean Square Error, Structural SIMilarity, and visual assessment by a viewer. For two types of background and two types of object, the statistical image descriptors (range, median, mean and its standard deviation of chromaticity on the a and b channels of the CIELab colour space, and luminance L) and the local colour variability for an object-specific area were calculated. The results were averaged over the 6 images acquired under the same light conditions and camera settings for each sample. The analysis of the results leads to the following conclusions: (1) images collected with the white balance setting adjusted to the light colour temperature cluster in a certain area of chromatic space; (2) white balance correction of images collected with camera settings not matched to the light temperature moves the image descriptors into the proper chromatic space but simultaneously changes the luminance. The problem of image unification in the sense of colour fidelity can therefore be solved in a separate introductory stage before automatic image analysis.
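    A small sketch of the kinds of comparisons described, assuming a recent scikit-image: chromaticity and luminance statistics in CIELab, plus MSE and SSIM against a reference image. The images and the white-balance mismatch below are synthetic, not the microscope acquisitions.

    ```python
    # Convert an RGB field of view to CIELab, summarise chromaticity (a, b) and
    # luminance (L), and score a test image against a reference with MSE and SSIM.
    import numpy as np
    from skimage.color import rgb2lab
    from skimage.metrics import mean_squared_error, structural_similarity

    rng = np.random.default_rng(5)
    reference = rng.random((128, 128, 3))
    # Simulate a white-balance mismatch as a simple per-channel gain.
    test = np.clip(reference * np.array([1.08, 1.0, 0.92]), 0, 1)

    lab = rgb2lab(test)
    for name, chan in zip("Lab", (lab[..., 0], lab[..., 1], lab[..., 2])):
        print(f"{name}: median={np.median(chan):.2f}, "
              f"mean={chan.mean():.2f}, sd={chan.std():.2f}")

    print("MSE :", mean_squared_error(reference, test))
    print("SSIM:", structural_similarity(reference, test, channel_axis=-1, data_range=1.0))
    ```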

  11. Peak tree: a new tool for multiscale hierarchical representation and peak detection of mass spectrometry data.

    PubMed

    Zhang, Peng; Li, Houqiang; Wang, Honghui; Wong, Stephen T C; Zhou, Xiaobo

    2011-01-01

    Peak detection is one of the most important steps in mass spectrometry (MS) analysis. However, the detection result is greatly affected by severe spectrum variations. Unfortunately, most current peak detection methods are neither flexible enough to revise false detection results nor robust enough to resist spectrum variations. To improve flexibility, we introduce the peak tree to represent the peak information in MS spectra. Each tree node is a peak judgment on a range of scales, and each tree decomposition, as a set of nodes, is a candidate peak detection result. To improve robustness, we combine peak detection and common peak alignment into a closed-loop framework, which finds the optimal decomposition via both peak intensity and common peak information. The common peak information is derived from density clustering of the latest peak detection result and refined iteratively within the loop. Finally, we present an improved ant colony optimization biomarker selection method to build a whole MS analysis system. Experiments show that our peak detection method can better resist spectrum variations and provide higher sensitivity and lower false detection rates than conventional methods. The benefits of our peak-tree-based system for MS disease analysis are also demonstrated on real SELDI data.

  12. Joint genetic analysis of hippocampal size in mouse and human identifies a novel gene linked to neurodegenerative disease.

    PubMed

    Ashbrook, David G; Williams, Robert W; Lu, Lu; Stein, Jason L; Hibar, Derrek P; Nichols, Thomas E; Medland, Sarah E; Thompson, Paul M; Hager, Reinmar

    2014-10-03

    Variation in hippocampal volume has been linked to significant differences in memory, behavior, and cognition among individuals. To identify genetic variants underlying such differences and associated disease phenotypes, multinational consortia such as ENIGMA have used large magnetic resonance imaging (MRI) data sets in human GWAS studies. In addition, mapping studies in mouse model systems have identified genetic variants for brain structure variation with great power. A key challenge is to understand how genetically based differences in brain structure lead to the propensity to develop specific neurological disorders. We combine the largest human GWAS of brain structure with the largest mammalian model system, the BXD recombinant inbred mouse population, to identify novel genetic targets influencing brain structure variation that are linked to increased risk for neurological disorders. We first use a novel cross-species, comparative analysis using mouse and human genetic data to identify a candidate gene, MGST3, associated with adult hippocampus size in both systems. We then establish the coregulation and function of this gene in a comprehensive systems-analysis. We find that MGST3 is associated with hippocampus size and is linked to a group of neurodegenerative disorders, such as Alzheimer's.

  13. The effects of target characteristics on fresh crater morphology - Preliminary results for the moon and Mercury

    NASA Technical Reports Server (NTRS)

    Cintala, M. J.; Wood, C. A.; Head, J. W.

    1977-01-01

    The results are reported of an analysis of the characteristics of fresh crater samples occurring on the two major geologic units on the moon (maria and highlands) and on Mercury (smooth plains and cratered terrain). In particular, the onset diameters and abundances of central peaks and terraces are examined and compared for both geologic units on each planet in order to detect any variations that might be due to geologic unit characteristics. The analysis of lunar crater characteristics is based on information provided in the LPL Catalog of Lunar Craters of Wood and Andersson (1977). The Mercurian data set utilized is related to a program involving the cataloguing of Mercurian craters visible in Mariner 10 photography. It is concluded that the characteristics of the substrate have exerted a measurable influence on the occurrence of central peaks, terraces, and scallops in fresh crater samples. Therefore, in order to compare the morphologic characteristics of fresh crater populations between planets, an analysis of possible substrate-related differences must first be undertaken for each planet under consideration. It is suggested that large variations in gravity do not produce major variations in crater wall failure.

  14. Systematic pharmacogenomics analysis of a Malay whole genome: proof of concept for personalized medicine.

    PubMed

    Salleh, Mohd Zaki; Teh, Lay Kek; Lee, Lian Shien; Ismet, Rose Iszati; Patowary, Ashok; Joshi, Kandarp; Pasha, Ayesha; Ahmed, Azni Zain; Janor, Roziah Mohd; Hamzah, Ahmad Sazali; Adam, Aishah; Yusoff, Khalid; Hoh, Boon Peng; Hatta, Fazleen Haslinda Mohd; Ismail, Mohamad Izwan; Scaria, Vinod; Sivasubbu, Sridhar

    2013-01-01

    With higher throughput and lower cost in sequencing, second-generation sequencing technology has immense potential for translation into clinical practice and for the realization of pharmacogenomics-based patient care. The systematic analysis of whole genome sequences to assess patient-to-patient variability in pharmacokinetic and pharmacodynamic responses towards drugs would be the next step in future medicine, in line with the vision of personalized medicine. Genomic DNA obtained from a 55-year-old, self-declared healthy, anonymous male of Malay descent was sequenced. The subject's mother died of lung cancer, and the father had a history of schizophrenia and died at the age of 65. A systematic, intuitive computational workflow/pipeline integrating a custom algorithm in tandem with large datasets of variant annotations and gene functions for genetic variations with pharmacogenomic impact was developed. A comprehensive pathway map of drug transport, metabolism and action was used as a template to map non-synonymous variations with potential functional consequences. Over 3 million known variations and 100,898 novel variations in the Malay genome were identified. Further in-depth pharmacogenetic analysis revealed a total of 607 unique variants in 563 proteins, with the eventual identification of 4 drug transport genes, 2 drug metabolizing enzyme genes and 33 target genes harboring deleterious SNVs involved in pharmacological pathways, which could have a potential role in clinical settings. The current study successfully unravels the potential of personal genome sequencing in understanding functionally relevant variations with potential influence on drug transport, metabolism and differential therapeutic outcomes. These will be essential for realizing personalized medicine through the use of a comprehensive computational pipeline for systematic data mining and analysis.

  15. Variations in Global Precipitation: Climate-scale to Floods

    NASA Technical Reports Server (NTRS)

    Adler, Robert

    2006-01-01

    Variations in global precipitation from climate-scale to small scale are examined using satellite-based analyses of the Global Precipitation Climatology Project (GPCP) and information from the Tropical Rainfall Measuring Mission (TRMM). Global and large regional rainfall variations and possible long-term changes are examined using the 27-year (1979-2005) monthly dataset from the GPCP. In addition to global patterns associated with phenomena such as ENSO, the data set is explored for evidence of long-term change. Although the global change of precipitation in the data set is near zero, the data set does indicate a small upward trend in the Tropics (25S-25N), especially over ocean. Techniques are derived to isolate and eliminate variations due to ENSO and major volcanic eruptions, and the significance of the trend is examined. The status of TRMM estimates is examined in terms of evaluating and improving the long-term global data set. To look at rainfall variations on a much smaller scale, TRMM data are used in combination with observations from other satellites to produce a 3-hr-resolution, eight-year data set for the examination of weather events and for practical applications such as detecting floods. Characteristics of the data set are presented and examples of recent flood events are examined.

  16. Exploration of the Genetic Organization of Morphological Modularity on the Mouse Mandible Using a Set of Interspecific Recombinant Congenic Strains Between C57BL/6 and Mice of the Mus spretus Species

    PubMed Central

    Burgio, Gaëtan; Baylac, Michel; Heyer, Evelyne; Montagutelli, Xavier

    2012-01-01

    Morphological integration and modularity within semi-autonomous modules are essential mechanisms for the evolution of morphological traits. However, the genetic makeup responsible for the control of variational modularity is still relatively unknown. In our study, we tested the hypothesis that the genetic variation for mandible shape clustered into two morphogenetic components: the alveolar group and the ascending ramus. We used the mouse as a model system to investigate genetic determinants of mandible shape. To do this, we used a combination of geometric morphometric tools and a set of 18 interspecific recombinant congenic strains (IRCS) derived from the distantly related species, Mus spretus SEG/Pas and Mus musculus C57BL/6. Quantitative trait loci (QTL) analysis comparing mandible morphometry between the C57BL/6 and the IRCSs identified 42 putative SEG/Pas segments responsible for the genetic variation. The magnitude of the QTL effects was dependent on the proportion of SEG/Pas genome inherited. Using a multivariate correlation coefficient adapted for modularity assessment and a two-block partial least squares analysis to explore the morphological integration, we found that these QTL clustered into two well-integrated morphogenetic groups, corresponding to the ascending ramus and the alveolar region. Together, these results provide evidence that the mouse mandible is subject to genetic coordination in a modular manner. PMID:23050236

  17. Gender and Ethnic Variation in Arranged Marriages in a Chinese City

    ERIC Educational Resources Information Center

    Zang, Xiaowei

    2008-01-01

    Using a data set (N = 1,600) collected in the city of Urumchi in 2005, this article examines ethnic differences in arranged marriages in urban China. Data analysis shows a rapid decline in parental arrangement for both Uyghur Muslims and Han Chinese in Urumchi. Han Chinese are less likely than Uyghur Muslims to report arranged marriages, with main…

  18. Influences of Normalization Method on Biomarker Discovery in Gas Chromatography-Mass Spectrometry-Based Untargeted Metabolomics: What Should Be Considered?

    PubMed

    Chen, Jiaqing; Zhang, Pei; Lv, Mengying; Guo, Huimin; Huang, Yin; Zhang, Zunjian; Xu, Fengguo

    2017-05-16

    Data reduction techniques in gas chromatography-mass spectrometry-based untargeted metabolomics have made the subsequent data analysis workflow more lucid. However, the normalization process still perplexes researchers, and its effects are often ignored. In order to reveal the influences of the normalization method, five representative normalization methods (mass spectrometry total useful signal, median, probabilistic quotient normalization, remove unwanted variation-random, and systematic ratio normalization) were compared in three real data sets of different types. First, data reduction techniques were used to refine the original data. Then, quality control samples and relative log abundance plots were utilized to evaluate the unwanted variations and the efficiency of the normalization process. Furthermore, the potential biomarkers which were screened out by the Mann-Whitney U test, receiver operating characteristic curve analysis, random forest, and the feature selection algorithm Boruta in the differently normalized data sets were compared. The results indicated that the choice of normalization method is difficult because the commonly accepted rules are easy to fulfill, yet different normalization methods have unforeseen influences on both the kind and number of potential biomarkers. Lastly, an integrated strategy for normalization method selection is recommended.
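    As one concrete example of the methods compared, the sketch below applies probabilistic quotient normalization (PQN) to a simulated feature table; it is a minimal illustration under simplifying assumptions, not the authors' implementation.

    ```python
    # Minimal probabilistic quotient normalization (PQN): each sample is divided by
    # the median ratio of its features to a reference spectrum (here the median
    # across samples). The feature table is simulated.
    import numpy as np

    rng = np.random.default_rng(6)
    true = rng.lognormal(2.0, 0.5, size=(20, 300))           # 20 samples x 300 features
    dilution = rng.uniform(0.5, 2.0, size=(20, 1))           # unwanted sample-wise variation
    observed = true * dilution

    reference = np.median(observed, axis=0)                  # reference spectrum
    quotients = observed / reference                         # feature-wise ratios per sample
    factors = np.median(quotients, axis=1, keepdims=True)    # most probable dilution factor
    normalized = observed / factors

    print("sample-total spread before:", round(observed.sum(axis=1).std(), 1))
    print("sample-total spread after :", round(normalized.sum(axis=1).std(), 1))
    ```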

  19. An International Ki67 Reproducibility Study in Adrenal Cortical Carcinoma.

    PubMed

    Papathomas, Thomas G; Pucci, Eugenio; Giordano, Thomas J; Lu, Hao; Duregon, Eleonora; Volante, Marco; Papotti, Mauro; Lloyd, Ricardo V; Tischler, Arthur S; van Nederveen, Francien H; Nose, Vania; Erickson, Lori; Mete, Ozgur; Asa, Sylvia L; Turchini, John; Gill, Anthony J; Matias-Guiu, Xavier; Skordilis, Kassiani; Stephenson, Timothy J; Tissier, Frédérique; Feelders, Richard A; Smid, Marcel; Nigg, Alex; Korpershoek, Esther; van der Spek, Peter J; Dinjens, Winand N M; Stubbs, Andrew P; de Krijger, Ronald R

    2016-04-01

    Despite the established role of Ki67 labeling index in prognostic stratification of adrenocortical carcinomas and its recent integration into treatment flow charts, the reproducibility of the assessment method has not been determined. The aim of this study was to investigate interobserver variability among endocrine pathologists using a web-based virtual microscopy approach. Ki67-stained slides of 76 adrenocortical carcinomas were analyzed independently by 14 observers, each according to their method of preference including eyeballing, formal manual counting, and digital image analysis. The interobserver variation was statistically significant (P<0.001) in the absence of any correlation between the various methods. Subsequently, 61 static images were distributed among 15 observers who were instructed to follow a category-based scoring approach. Low levels of interobserver (F=6.99; Fcrit=1.70; P<0.001) as well as intraobserver concordance (n=11; Cohen κ ranging from -0.057 to 0.361) were detected. To improve harmonization of Ki67 analysis, we tested the utility of an open-source Galaxy virtual machine application, namely Automated Selection of Hotspots, in 61 virtual slides. The software-provided Ki67 values were validated by digital image analysis in identical images, displaying a strong correlation of 0.96 (P<0.0001) and dividing the cases into 3 classes (cutoffs of 0%-15%-30% and/or 0%-10%-20%) with significantly different overall survivals (P<0.05). We conclude that current practices in Ki67 scoring assessment vary greatly, and interobserver variation sets particular limitations to its clinical utility, especially around clinically relevant cutoff values. Novel digital microscopy-enabled methods could provide critical aid in reducing variation, increasing reproducibility, and improving reliability in the clinical setting.
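    For readers unfamiliar with the agreement statistic quoted above, the toy example below computes Cohen's κ between two observers assigning category-based Ki67 classes; the labels are invented for illustration.

    ```python
    # Toy interobserver agreement: Cohen's kappa between two observers assigning
    # category-based Ki67 classes (e.g. low / mid / high).
    from sklearn.metrics import cohen_kappa_score

    observer_a = ["low", "low", "mid", "high", "mid", "low", "high", "mid", "low", "mid"]
    observer_b = ["low", "mid", "mid", "high", "low", "low", "high", "high", "low", "mid"]

    print("Cohen's kappa:", round(cohen_kappa_score(observer_a, observer_b), 3))
    ```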

  20. Statistical total correlation spectroscopy scaling for enhancement of metabolic information recovery in biological NMR spectra.

    PubMed

    Maher, Anthony D; Fonville, Judith M; Coen, Muireann; Lindon, John C; Rae, Caroline D; Nicholson, Jeremy K

    2012-01-17

    The high level of complexity in nuclear magnetic resonance (NMR) metabolic spectroscopic data sets has fueled the development of experimental and mathematical techniques that enhance latent biomarker recovery and improve model interpretability. We previously showed that statistical total correlation spectroscopy (STOCSY) can be used to edit NMR spectra to remove drug metabolite signatures that obscure metabolic variation of diagnostic interest. Here, we extend this "STOCSY editing" concept to a generalized scaling procedure for NMR data that enhances recovery of latent biochemical information and improves biological classification and interpretation. We call this new procedure STOCSY-scaling (STOCSY(S)). STOCSY(S) exploits the fixed proportionality in a set of NMR spectra between resonances from the same molecule to suppress or enhance features correlated with a resonance of interest. We demonstrate this new approach using two exemplar data sets: (a) a streptozotocin rat model (n = 30) of type 1 diabetes and (b) a human epidemiological study utilizing plasma NMR spectra of patients with metabolic syndrome (n = 67). In both cases significant biomarker discovery improvement was observed by using STOCSY(S): the approach successfully suppressed interfering NMR signals from glucose and lactate that otherwise dominate the variation in the streptozotocin study, which then allowed recovery of biomarkers such as glycine, which were otherwise obscured. In the metabolic syndrome study, we used STOCSY(S) to enhance variation from the high-density lipoprotein cholesterol peak, improving the prediction of individuals with metabolic syndrome from controls in orthogonal projections to latent structures discriminant analysis models and facilitating the biological interpretation of the results. Thus, STOCSY(S) is a versatile technique that is applicable in any situation in which variation, either biological or otherwise, dominates a data set at the expense of more interesting or important features. This approach is generally appropriate for many types of NMR-based complex mixture analyses and hence for wider applications in bioanalytical science.
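    A minimal sketch of the core idea behind STOCSY-based scaling, assuming a driver resonance of the interfering molecule is known: correlate every spectral variable with the driver and down-weight the correlated variables. The spectra are simulated, and the simple weighting rule shown is a simplification of the published method.

    ```python
    # Correlate each spectral variable with a chosen driver resonance and suppress
    # variables that co-vary with it, so an interfering signal no longer dominates.
    import numpy as np

    rng = np.random.default_rng(7)
    n_spectra, n_vars = 60, 500
    spectra = rng.normal(0, 0.05, size=(n_spectra, n_vars))

    # An interfering metabolite contributes proportionally to variables 100-120.
    interferer = rng.lognormal(0, 0.6, size=n_spectra)
    spectra[:, 100:120] += np.outer(interferer, np.linspace(1.0, 0.3, 20))

    driver = 110  # index of a resonance known to belong to the interfering molecule
    r = np.array([np.corrcoef(spectra[:, driver], spectra[:, j])[0, 1]
                  for j in range(n_vars)])
    suppressed = spectra * (1.0 - r ** 2)   # down-weight variables correlated with the driver

    before = spectra[:, 100:120].var(axis=0).sum() / spectra.var(axis=0).sum()
    after = suppressed[:, 100:120].var(axis=0).sum() / suppressed.var(axis=0).sum()
    print(f"interferer share of total variance: {before:.2f} -> {after:.2f}")
    ```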

  1. Travel times of P and S from the global digital seismic networks: Implications for the relative variation of P and S velocity in the mantle

    USGS Publications Warehouse

    Bolton, H.; Masters, G.

    2001-01-01

    We present new data sets of P and S arrival times which have been handpicked from long-period vertical and transverse component recordings of the various global seismic networks. Using events which occurred from 1976 to 1994 results in ~38,000 globally well-distributed measurements of teleseismic P and ~41,000 measurements of S. These data are particularly useful for looking at the relative variation of S and P velocities in the lower mantle. We describe both the measurement techniques and the gross characteristics of the data sets. The size of our data sets allows us to exploit the internal consistency of the data to identify outliers using a summary ray analysis. Since the polarity of each arrival is also known, we can construct fault plane solutions and/or compare with polarities predicted by the Harvard centroid moment tensor solutions to further diagnose phase misidentification. This analysis results in ~5% of the data being identified as outliers. An analysis of variance indicates that the S residual travel times are dominated by the effects of three-dimensional structure, but the P data have comparable contributions from noise and source mislocation effects. The summary ray analysis reveals the basic character of lower mantle structure, and there are large-scale patterns in both the S and P data sets that correlate quite well with each other. This analysis suggests that, on average, d ln vS/d ln vP is an increasing function of depth in the mantle, going from a value of ~1.7 at the top of the lower mantle to an apparent value of 4 near the base of the mantle. This latter extreme value of R seems to result mainly from data which sample one region in the lowermost mantle under the central Pacific, where large positive S residuals are associated with very small P residuals. Such an anomaly cannot be thermal in origin. Copyright 2001 by the American Geophysical Union.

  2. Variational analysis of the coupling between a geometrically exact Cosserat rod and an elastic continuum

    NASA Astrophysics Data System (ADS)

    Sander, Oliver; Schiela, Anton

    2014-12-01

    We formulate the static mechanical coupling of a geometrically exact Cosserat rod to a nonlinearly elastic continuum. In this setting, appropriate coupling conditions have to connect a one-dimensional model with director variables to a three-dimensional model without directors. Two alternative coupling conditions are proposed, which correspond to two different configuration trace spaces. For both, we show existence of solutions of the coupled problems, using the direct method of the calculus of variations. From the first-order optimality conditions, we also derive the corresponding conditions for the dual variables. These are then interpreted in mechanical terms.

  3. Reevaluation of Stratospheric Ozone Trends From SAGE II Data Using a Simultaneous Temporal and Spatial Analysis

    NASA Technical Reports Server (NTRS)

    Damadeo, R. P.; Zawodny, J. M.; Thomason, L. W.

    2014-01-01

    This paper details a new method of regression for sparsely sampled data sets for use with time-series analysis, in particular the Stratospheric Aerosol and Gas Experiment (SAGE) II ozone data set. Non-uniform spatial, temporal, and diurnal sampling present in the data set results in biased values for the long-term trend if not accounted for. The new method is performed close to the native resolution of the measurements and is a simultaneous temporal and spatial analysis that accounts for potential diurnal ozone variation. Results show that biases introduced by the way data are prepared for use with traditional methods can be as high as 10%. Derived long-term changes show declines in ozone similar to other studies but very different trends in the presumed recovery period, with differences up to 2% per decade. The regression model allows for a variable turnaround time and reveals a hemispheric asymmetry in derived trends in the middle to upper stratosphere. Similar methodology is also applied to SAGE II aerosol optical depth data to create a new volcanic proxy that covers the SAGE II mission period. Ultimately this technique may be extensible towards the inclusion of multiple data sets without the need for homogenization.
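    A simplified analogue of the regression described above: a linear trend plus annual harmonics fitted by least squares to irregularly sampled data. The real analysis also handles spatial and diurnal sampling; this sketch covers only the temporal part, with simulated data.

    ```python
    # Fit trend + annual harmonic terms to irregularly sampled data by least squares.
    import numpy as np

    rng = np.random.default_rng(8)
    t = np.sort(rng.uniform(0, 20, size=400))                 # irregular sample times (years)
    y = -0.08 * t + 2.0 * np.sin(2 * np.pi * t) + rng.normal(0, 0.5, t.size)

    X = np.column_stack([np.ones_like(t), t,
                         np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(f"estimated trend: {coef[1] * 10:.2f} per decade (true -0.80)")
    ```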

  4. Variation of heavy metals in recent sediments from Piratininga Lagoon (Brazil): interpretation of geochemical data with the aid of multivariate analysis

    NASA Astrophysics Data System (ADS)

    Huang, W.; Campredon, R.; Abrao, J. J.; Bernat, M.; Latouche, C.

    1994-06-01

    In the last decade, the Atlantic coast of south-eastern Brazil has been affected by increasing deforestation and anthropogenic effluents. Sediments in the coastal lagoons have recorded the process of such environmental change. Thirty-seven sediment samples from three cores in Piratininga Lagoon, Rio de Janeiro, were analyzed for their major components and minor element concentrations in order to examine geochemical characteristics and the depositional environment and to investigate the variation of heavy metals of environmental concern. Two multivariate analysis methods, principal component analysis and cluster analysis, were performed on the analytical data set to help visualize the sample clusters and the element associations. On the whole, the sediment samples from each core are similar and the sample clusters corresponding to the three cores are clearly separated, as a result of the different conditions of sedimentation. Some changes in the depositional environment are recognized using the results of multivariate analysis. The enrichment of Pb, Cu, and Zn in the upper parts of cores is in agreement with increasing anthropogenic influx (pollution).
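    A brief sketch of the two multivariate methods named above, applied to a standardized table of element concentrations; the cores, element columns, and values are synthetic stand-ins, not the Piratininga measurements.

    ```python
    # PCA of standardized element concentrations followed by hierarchical clustering
    # of the sample scores, mirroring the PCA + cluster analysis workflow described.
    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from scipy.cluster.hierarchy import linkage, fcluster

    rng = np.random.default_rng(9)
    # Columns (hypothetical order): Pb, Cu, Zn, Fe, Al, Ca.
    core_a = rng.normal([40, 30, 120, 3, 6, 2], 5, size=(12, 6))
    core_b = rng.normal([15, 10, 60, 4, 7, 5], 5, size=(13, 6))
    core_c = rng.normal([25, 20, 90, 2, 5, 1], 5, size=(12, 6))
    data = np.vstack([core_a, core_b, core_c])

    scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(data))
    clusters = fcluster(linkage(scores, method="ward"), t=3, criterion="maxclust")
    print("cluster labels per sample:", clusters)
    ```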

  5. Bayesian Analysis Of HMI Solar Image Observables And Comparison To TSI Variations And MWO Image Observables

    NASA Astrophysics Data System (ADS)

    Parker, D. G.; Ulrich, R. K.; Beck, J.

    2014-12-01

    We have previously applied the Bayesian automatic classification system AutoClass to solar magnetogram and intensity images from the 150 Foot Solar Tower at Mount Wilson to identify classes of solar surface features associated with variations in total solar irradiance (TSI) and, using those identifications, modeled TSI time series with improved accuracy (r > 0.96) (Ulrich et al., 2010). AutoClass identifies classes by a two-step process in which it: (1) finds, without human supervision, a set of class definitions based on specified attributes of a sample of the image data pixels, such as magnetic field and intensity in the case of MWO images, and (2) applies the class definitions thus found to new data sets to identify automatically in them the classes found in the sample set. HMI high-resolution images capture four observables (magnetic field, continuum intensity, line depth and line width), in contrast to MWO's two observables (magnetic field and intensity). In this study, we apply AutoClass to the HMI observables for images from May 2010 to June 2014 to identify solar surface feature classes. We use contemporaneous TSI measurements to determine whether and how variations in the HMI classes are related to TSI variations and compare the characteristic statistics of the HMI classes to those found from MWO images. We also attempt to derive scale factors between the HMI and MWO magnetic and intensity observables. The ability to categorize automatically surface features in the HMI images holds out the promise of consistent, relatively quick and manageable analysis of the large quantity of data available in these images. Given that the classes found in MWO images using AutoClass have been found to improve modeling of TSI, application of AutoClass to the more complex HMI images should enhance understanding of the physical processes at work in solar surface features and their implications for the solar-terrestrial environment. Reference: Ulrich, R.K., Parker, D., Bertello, L. and Boyden, J. 2010, Solar Phys., 261, 11.

  6. Random forests-based differential analysis of gene sets for gene expression data.

    PubMed

    Hsueh, Huey-Miin; Zhou, Da-Wei; Tsai, Chen-An

    2013-04-10

    In DNA microarray studies, gene-set analysis (GSA) has become the focus of gene expression data analysis. GSA utilizes the gene expression profiles of functionally related gene sets in Gene Ontology (GO) categories or a priori-defined biological classes to assess the significance of gene sets associated with clinical outcomes or phenotypes. Many statistical approaches have been proposed to determine whether such functionally related gene sets express differentially (enrichment and/or deletion) in variations of phenotypes. However, little attention has been given to the discriminatory power of gene sets and classification of patients. In this study, we propose a method of gene set analysis, in which gene sets are used to develop classifications of patients based on the Random Forest (RF) algorithm. The corresponding empirical p-value of an observed out-of-bag (OOB) error rate of the classifier is introduced to identify differentially expressed gene sets using an adequate resampling method. In addition, we discuss the impacts and correlations of genes within each gene set based on the measures of variable importance in the RF algorithm. Significant classifications are reported and visualized together with the underlying gene sets and their contribution to the phenotypes of interest. Numerical studies using both synthesized data and a series of publicly available gene expression data sets are conducted to evaluate the performance of the proposed methods. Compared with other hypothesis testing approaches, our proposed methods are reliable and successful in identifying enriched gene sets and in discovering the contributions of genes within a gene set. The classification results of identified gene sets can provide a valuable alternative to gene set testing to reveal the unknown, biologically relevant classes of samples or patients. In summary, our proposed method allows one to simultaneously assess the discriminatory ability of gene sets and the importance of genes for interpretation of data in complex biological systems. The classifications of biologically defined gene sets can reveal the underlying interactions of gene sets associated with the phenotypes, and provide an insightful complement to conventional gene set analyses. Copyright © 2012 Elsevier B.V. All rights reserved.
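    A minimal sketch of the proposed gene-set test under simplifying assumptions: fit a random forest on the genes of one set, record its out-of-bag (OOB) error, and compare it with errors obtained under permuted phenotype labels to get an empirical p-value. The expression data below are simulated.

    ```python
    # Gene-set discrimination test: RF OOB error for one gene set, with an
    # empirical p-value from label permutations. Simulated expression matrix.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(10)
    n_samples, n_genes = 80, 25
    y = np.repeat([0, 1], n_samples // 2)
    X = rng.normal(size=(n_samples, n_genes))
    X[y == 1, :5] += 0.8          # five genes in the set are differentially expressed

    def oob_error(X, y):
        rf = RandomForestClassifier(n_estimators=300, oob_score=True, random_state=0)
        rf.fit(X, y)
        return 1.0 - rf.oob_score_

    observed = oob_error(X, y)
    perm_errors = [oob_error(X, rng.permutation(y)) for _ in range(50)]
    p_value = (1 + sum(e <= observed for e in perm_errors)) / (1 + len(perm_errors))
    print(f"OOB error = {observed:.3f}, empirical p = {p_value:.3f}")
    ```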

  7. Experimental characterization of seasonal variations in infrasonic traveltimes on the Korean Peninsula with implications for infrasound event location

    NASA Astrophysics Data System (ADS)

    Che, Il-Young; Stump, Brian W.; Lee, Hee-Il

    2011-04-01

    The dependence of infrasound propagation on the season and path environment was quantified by the analysis of more than 1000 repetitive infrasonic ground-truth events at an active, open-pit mine over two years. Blast-associated infrasonic signals were analysed from two infrasound arrays (CHNAR and ULDAR) located at similar distances of 181 and 169 km, respectively, from the source but in different azimuthal directions and with different path environments. The CHNAR array is located to the NW of the source area with primarily a continental path, whereas ULDAR is located east of the source with a path dominated by open ocean. As a result, CHNAR observations were dominated by stratospheric phases with characteristic celerities of 260-289 m s−1 and large seasonal variations in the traveltime, whereas data from ULDAR consisted primarily of tropospheric phases with larger celerities from 322 to 361 m s−1 and larger daily than seasonal variation in the traveltime. The interpretation of these observations is verified by ray tracing using atmospheric models incorporating daily weather balloon data that characterizes the shallow atmosphere for the two years of the study. Finally, experimental celerity models that included seasonal path effects were constructed from the long-term data set. These experimental celerity models were used to constrain traveltime variations in infrasonic location algorithms providing improved location estimates as illustrated with the empirical data set.

  8. Status and Plans for the WCRP/GEWEX Global Precipitation Climatology Project (GPCP)

    NASA Technical Reports Server (NTRS)

    Adler, Robert F.

    2007-01-01

    The Global Precipitation Climatology Project (GPCP) is an international project under the auspices of the World Climate Research Program (WCRP) and GEWEX (Global Water and Energy Experiment). The GPCP group consists of scientists from agencies and universities in various countries that work together to produce a set of global precipitation analyses at monthly, pentad, and daily time scales. The status of the current products will be briefly summarized, focusing on the monthly analysis. Global and large regional rainfall variations and possible long-term changes are examined using the 27-year (1979-2005) monthly dataset. In addition to global patterns associated with phenomena such as ENSO, the data set is explored for evidence of long-term change. Although the global change of precipitation in the data set is near zero, the data set does indicate a small upward change in the Tropics (25S-25N) during the period, especially over ocean. Techniques are derived to isolate and eliminate variations due to ENSO and major volcanic eruptions, and the significance of the linear change is examined. Plans for a GPCP reprocessing for a Version 3 of products, potentially including a fine-time-resolution product, will be discussed. Current and future links to IPWG will also be addressed.

  9. Large Variation in the Ratio of Mitochondrial to Nuclear Mutation Rate across Animals: Implications for Genetic Diversity and the Use of Mitochondrial DNA as a Molecular Marker.

    PubMed

    Allio, Remi; Donega, Stefano; Galtier, Nicolas; Nabholz, Benoit

    2017-11-01

    It is commonly assumed that mitochondrial DNA (mtDNA) evolves at a faster rate than nuclear DNA (nuDNA) in animals. This has contributed to the popularity of mtDNA as a molecular marker in evolutionary studies. Analyzing 121 multilocus data sets and four phylogenomic data sets encompassing 4,676 species of animals, we demonstrate that the ratio of mitochondrial over nuclear mutation rate is highly variable among animal taxa. In nonvertebrates, such as insects and arachnids, the ratio of mtDNA over nuDNA mutation rate varies between 2 and 6, whereas it is above 20, on average, in vertebrates such as scaled reptiles and birds. Interestingly, this variation is sufficient to explain the previous report of a similar level of mitochondrial polymorphism, on average, between vertebrates and nonvertebrates, which was originally interpreted as reflecting the effect of pervasive positive selection. Our analysis rather indicates that the among-phyla homogeneity in within-species mtDNA diversity is due to a negative correlation between mtDNA per-generation mutation rate and effective population size, irrespective of the action of natural selection. Finally, we explore the variation in the absolute per-year mutation rate of both mtDNA and nuDNA using a reduced data set for which fossil calibration is available, and discuss the potential determinants of mutation rate variation across genomes and taxa. This study has important implications regarding DNA-based identification methods in predicting that mtDNA barcoding should be less reliable in nonvertebrates than in vertebrates. © The Author 2017. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  10. A system of nonlinear set valued variational inclusions.

    PubMed

    Tang, Yong-Kun; Chang, Shih-Sen; Salahuddin, Salahuddin

    2014-01-01

    In this paper, we study existence theorems and techniques for finding the solutions of a system of nonlinear set valued variational inclusions in Hilbert spaces. To overcome the difficulties due to the presence of a proper convex lower semicontinuous function ϕ and a mapping g which appear in the considered problems, we use the resolvent operator technique to suggest an iterative algorithm to compute approximate solutions of the system of nonlinear set valued variational inclusions. The convergence of the iterative sequences generated by the algorithm is also proved. MSC: 49J40; 47H06.

  11. Metabolic Model-Based Integration of Microbiome Taxonomic and Metabolomic Profiles Elucidates Mechanistic Links between Ecological and Metabolic Variation.

    PubMed

    Noecker, Cecilia; Eng, Alexander; Srinivasan, Sujatha; Theriot, Casey M; Young, Vincent B; Jansson, Janet K; Fredricks, David N; Borenstein, Elhanan

    2016-01-01

    Multiple molecular assays now enable high-throughput profiling of the ecology, metabolic capacity, and activity of the human microbiome. However, to date, analyses of such multi-omic data typically focus on statistical associations, often ignoring extensive prior knowledge of the mechanisms linking these various facets of the microbiome. Here, we introduce a comprehensive framework to systematically link variation in metabolomic data with community composition by utilizing taxonomic, genomic, and metabolic information. Specifically, we integrate available and inferred genomic data, metabolic network modeling, and a method for predicting community-wide metabolite turnover to estimate the biosynthetic and degradation potential of a given community. Our framework then compares variation in predicted metabolic potential with variation in measured metabolites' abundances to evaluate whether community composition can explain observed shifts in the community metabolome, and to identify key taxa and genes contributing to the shifts. Focusing on two independent vaginal microbiome data sets, each pairing 16S community profiling with large-scale metabolomics, we demonstrate that our framework successfully recapitulates observed variation in 37% of metabolites. Well-predicted metabolite variation tends to result from disease-associated metabolism. We further identify several disease-enriched species that contribute significantly to these predictions. Interestingly, our analysis also detects metabolites for which the predicted variation negatively correlates with the measured variation, suggesting environmental control points of community metabolism. Applying this framework to gut microbiome data sets reveals similar trends, including prediction of bile acid metabolite shifts. This framework is an important first step toward a system-level multi-omic integration and an improved mechanistic understanding of the microbiome activity and dynamics in health and disease. Studies characterizing both the taxonomic composition and metabolic profile of various microbial communities are becoming increasingly common, yet new computational methods are needed to integrate and interpret these data in terms of known biological mechanisms. Here, we introduce an analytical framework to link species composition and metabolite measurements, using a simple model to predict the effects of community ecology on metabolite concentrations and evaluating whether these predictions agree with measured metabolomic profiles. We find that a surprisingly large proportion of metabolite variation in the vaginal microbiome can be predicted based on species composition (including dramatic shifts associated with disease), identify putative mechanisms underlying these predictions, and evaluate the roles of individual bacterial species and genes. Analysis of gut microbiome data using this framework recovers similar community metabolic trends. This framework lays the foundation for model-based multi-omic integrative studies, ultimately improving our understanding of microbial community metabolism.

  12. Metabolic Model-Based Integration of Microbiome Taxonomic and Metabolomic Profiles Elucidates Mechanistic Links between Ecological and Metabolic Variation

    PubMed Central

    Noecker, Cecilia; Eng, Alexander; Srinivasan, Sujatha; Theriot, Casey M.; Young, Vincent B.; Jansson, Janet K.; Fredricks, David N.

    2016-01-01

    ABSTRACT Multiple molecular assays now enable high-throughput profiling of the ecology, metabolic capacity, and activity of the human microbiome. However, to date, analyses of such multi-omic data typically focus on statistical associations, often ignoring extensive prior knowledge of the mechanisms linking these various facets of the microbiome. Here, we introduce a comprehensive framework to systematically link variation in metabolomic data with community composition by utilizing taxonomic, genomic, and metabolic information. Specifically, we integrate available and inferred genomic data, metabolic network modeling, and a method for predicting community-wide metabolite turnover to estimate the biosynthetic and degradation potential of a given community. Our framework then compares variation in predicted metabolic potential with variation in measured metabolites’ abundances to evaluate whether community composition can explain observed shifts in the community metabolome, and to identify key taxa and genes contributing to the shifts. Focusing on two independent vaginal microbiome data sets, each pairing 16S community profiling with large-scale metabolomics, we demonstrate that our framework successfully recapitulates observed variation in 37% of metabolites. Well-predicted metabolite variation tends to result from disease-associated metabolism. We further identify several disease-enriched species that contribute significantly to these predictions. Interestingly, our analysis also detects metabolites for which the predicted variation negatively correlates with the measured variation, suggesting environmental control points of community metabolism. Applying this framework to gut microbiome data sets reveals similar trends, including prediction of bile acid metabolite shifts. This framework is an important first step toward a system-level multi-omic integration and an improved mechanistic understanding of the microbiome activity and dynamics in health and disease. IMPORTANCE Studies characterizing both the taxonomic composition and metabolic profile of various microbial communities are becoming increasingly common, yet new computational methods are needed to integrate and interpret these data in terms of known biological mechanisms. Here, we introduce an analytical framework to link species composition and metabolite measurements, using a simple model to predict the effects of community ecology on metabolite concentrations and evaluating whether these predictions agree with measured metabolomic profiles. We find that a surprisingly large proportion of metabolite variation in the vaginal microbiome can be predicted based on species composition (including dramatic shifts associated with disease), identify putative mechanisms underlying these predictions, and evaluate the roles of individual bacterial species and genes. Analysis of gut microbiome data using this framework recovers similar community metabolic trends. This framework lays the foundation for model-based multi-omic integrative studies, ultimately improving our understanding of microbial community metabolism. PMID:27239563
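
    The framework above couples taxon abundances with genomic and metabolic information to predict community-level metabolite potential and then compares it with the measured metabolome. The sketch below is a hedged, simplified illustration of that idea (not the authors' published pipeline): it assumes a synthetic species-abundance matrix and a made-up per-taxon net production/consumption matrix, computes an abundance-weighted community metabolic potential score, and checks its Spearman correlation with mock metabolite measurements.

```python
# Hedged sketch (not the authors' implementation): estimate community-wide
# metabolic potential from taxon abundances and a per-taxon net
# production/consumption matrix, then compare with measured metabolites.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

n_samples, n_taxa, n_metabolites = 30, 12, 5
abundance = rng.random((n_samples, n_taxa))               # 16S relative abundances
abundance /= abundance.sum(axis=1, keepdims=True)
net_potential = rng.normal(size=(n_taxa, n_metabolites))  # >0 synthesis, <0 degradation
measured = rng.normal(size=(n_samples, n_metabolites))    # metabolomic measurements

# Community metabolic potential score: abundance-weighted sum of taxon potentials.
cmp_scores = abundance @ net_potential

# For each metabolite, ask whether predicted potential tracks measured variation.
for j in range(n_metabolites):
    rho, p = spearmanr(cmp_scores[:, j], measured[:, j])
    print(f"metabolite {j}: rho={rho:+.2f}, p={p:.3f}")
```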

  13. Multiple breath washout analysis in infants: quality assessment and recommendations for improvement.

    PubMed

    Anagnostopoulou, Pinelopi; Egger, Barbara; Lurà, Marco; Usemann, Jakob; Schmidt, Anne; Gorlanova, Olga; Korten, Insa; Roos, Markus; Frey, Urs; Latzin, Philipp

    2016-03-01

    Infant multiple breath washout (MBW) testing serves as a primary outcome in clinical studies. However, it is still unknown whether current software algorithms allow between-centre comparisons. In this study of healthy infants, we quantified MBW measurement errors and tried to improve data quality by simply changing software settings. We analyzed best-quality MBW measurements performed with an ultrasonic flowmeter in 24 infants from two centres in Switzerland with the current software settings. To challenge the robustness of these settings, we also used alternative analysis approaches. Using the current analysis software, the coefficient of variation (CV) for functional residual capacity (FRC) differed significantly between centres (mean ± SD (%): 9.8 ± 5.6 and 5.8 ± 2.9, respectively; p = 0.039). In addition, FRC values calculated during the washout differed by between -25% and +30% from those of the washin of the same tracing. Results were mainly influenced by analysis settings and temperature recordings. Changing a few algorithm settings resulted in a significantly more robust analysis. Non-systematic inter-centre differences can be reduced by using correctly recorded environmental data and simple changes to the software algorithms. We provide recommendations that greatly improve the quality of infant MBW outcomes and that can be applied when multicentre trials are conducted.
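
    As a point of reference for the between-centre comparison reported above, the following minimal snippet (with made-up FRC values) shows how a between-run coefficient of variation is typically computed; it is an illustration, not the study's analysis software.

```python
# Minimal illustration of the coefficient of variation (CV) used to compare
# FRC repeatability between centres; the values below are invented.
import numpy as np

def cv_percent(values):
    values = np.asarray(values, dtype=float)
    return 100.0 * values.std(ddof=1) / values.mean()

frc_centre_a = [21.5, 24.0, 19.8]   # mL, three technically acceptable runs
frc_centre_b = [22.1, 22.8, 21.9]
print(f"CV centre A: {cv_percent(frc_centre_a):.1f}%, "
      f"CV centre B: {cv_percent(frc_centre_b):.1f}%")
```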

  14. A Quantitative Comparison of the Similarity between Genes and Geography in Worldwide Human Populations

    PubMed Central

    Wang, Chaolong; Zöllner, Sebastian; Rosenberg, Noah A.

    2012-01-01

    Multivariate statistical techniques such as principal components analysis (PCA) and multidimensional scaling (MDS) have been widely used to summarize the structure of human genetic variation, often in easily visualized two-dimensional maps. Many recent studies have reported similarity between geographic maps of population locations and MDS or PCA maps of genetic variation inferred from single-nucleotide polymorphisms (SNPs). However, this similarity has been evident primarily in a qualitative sense; and, because different multivariate techniques and marker sets have been used in different studies, it has not been possible to formally compare genetic variation datasets in terms of their levels of similarity with geography. In this study, using genome-wide SNP data from 128 populations worldwide, we perform a systematic analysis to quantitatively evaluate the similarity of genes and geography in different geographic regions. For each of a series of regions, we apply a Procrustes analysis approach to find an optimal transformation that maximizes the similarity between PCA maps of genetic variation and geographic maps of population locations. We consider examples in Europe, Sub-Saharan Africa, Asia, East Asia, and Central/South Asia, as well as in a worldwide sample, finding that significant similarity between genes and geography exists in general at different geographic levels. The similarity is highest in our examples for Asia and, once highly distinctive populations have been removed, Sub-Saharan Africa. Our results provide a quantitative assessment of the geographic structure of human genetic variation worldwide, supporting the view that geography plays a strong role in giving rise to human population structure. PMID:22927824

  15. A quantitative comparison of the similarity between genes and geography in worldwide human populations.

    PubMed

    Wang, Chaolong; Zöllner, Sebastian; Rosenberg, Noah A

    2012-08-01

    Multivariate statistical techniques such as principal components analysis (PCA) and multidimensional scaling (MDS) have been widely used to summarize the structure of human genetic variation, often in easily visualized two-dimensional maps. Many recent studies have reported similarity between geographic maps of population locations and MDS or PCA maps of genetic variation inferred from single-nucleotide polymorphisms (SNPs). However, this similarity has been evident primarily in a qualitative sense; and, because different multivariate techniques and marker sets have been used in different studies, it has not been possible to formally compare genetic variation datasets in terms of their levels of similarity with geography. In this study, using genome-wide SNP data from 128 populations worldwide, we perform a systematic analysis to quantitatively evaluate the similarity of genes and geography in different geographic regions. For each of a series of regions, we apply a Procrustes analysis approach to find an optimal transformation that maximizes the similarity between PCA maps of genetic variation and geographic maps of population locations. We consider examples in Europe, Sub-Saharan Africa, Asia, East Asia, and Central/South Asia, as well as in a worldwide sample, finding that significant similarity between genes and geography exists in general at different geographic levels. The similarity is highest in our examples for Asia and, once highly distinctive populations have been removed, Sub-Saharan Africa. Our results provide a quantitative assessment of the geographic structure of human genetic variation worldwide, supporting the view that geography plays a strong role in giving rise to human population structure.
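
    A minimal sketch of the Procrustes comparison described in this abstract, using SciPy's procrustes routine on synthetic data: a 2-D PCA map of genotype-like data is optimally rotated, reflected, and scaled to match population coordinates, and a similarity statistic is derived from the residual disparity. The synthetic data and the sqrt(1 - disparity) similarity convention are illustrative assumptions, not the study's SNP panel or exact code.

```python
# Hedged sketch of a Procrustes comparison between a PCA map of genotypes and
# population geographic coordinates, on synthetic placeholder data.
import numpy as np
from scipy.spatial import procrustes
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)

n_pops = 50
geography = rng.uniform(0, 10, size=(n_pops, 2))          # (longitude, latitude)
# Fake genotype matrix whose structure partly reflects geography plus noise.
genotypes = np.hstack([geography + rng.normal(scale=1.0, size=(n_pops, 2))
                       for _ in range(100)])

pca_map = PCA(n_components=2).fit_transform(genotypes)

# procrustes() standardises both configurations and finds the optimal
# rotation/reflection/scaling; disparity is the residual sum of squares.
geo_std, pca_std, disparity = procrustes(geography, pca_map)
similarity = np.sqrt(1.0 - disparity)   # one common Procrustes similarity statistic
print(f"Procrustes similarity between genes and geography: {similarity:.3f}")
```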

  16. Association mapping unveils favorable alleles for grain iron and zinc concentrations in lentil (Lens culinaris subsp. culinaris)

    PubMed Central

    Singh, Akanksha; Sharma, Vinay; Dikshit, Harsh Kumar; Aski, Muraleedhar; Kumar, Harish; Thirunavukkarasu, Nepolean; Patil, Basavanagouda S.; Kumar, Shiv; Sarker, Ashutosh

    2017-01-01

    Lentil is a major cool-season grain legume grown in South Asia, West Asia, and North Africa. Populations in developing countries of these regions have micronutrient deficiencies; therefore, breeding programs should focus more on improving the micronutrient content of food. In the present study, a set of 96 diverse germplasm lines were evaluated at three different locations in India to examine the variation in iron (Fe) and zinc (Zn) concentration and identify simple sequence repeat (SSR) markers that associate with the genetic variation. The genetic variation among genotypes of the association mapping (AM) panel was characterized using a genetic distance-based and a general model-based clustering method. The model-based analysis identified six subpopulations, which satisfactorily explained the genetic structure of the AM panel. AM analysis identified three SSRs (PBALC 13, PBALC 206, and GLLC 563) associated with grain Fe concentration, explaining 9% to 11% of phenotypic variation, and four SSRs (PBALC 353, SSR 317–1, PLC 62, and PBALC 217) associated with grain Zn concentration, explaining 14% to 21% of phenotypic variation. These identified SSRs exhibited consistent performance across locations. These candidate SSRs can be used in marker-assisted genetic improvement for developing Fe and Zn fortified lentil varieties. Favorable alleles and promising genotypes identified in this study can be utilized for lentil biofortification. PMID:29161321

  17. Transcriptome analysis of the sea cucumber (Apostichopus japonicus) with variation in individual growth.

    PubMed

    Gao, Lei; He, Chongbo; Bao, Xiangbo; Tian, Meilin; Ma, Zhen

    2017-01-01

    The sea cucumber (Apostichopus japonicus) is an economically important aquaculture species in China. However, serious individual growth variation often causes financial losses to farmers, and the underlying genetic mechanisms are poorly understood. In the present study, an extensive transcriptome-level analysis of individual growth variation in the sea cucumber was carried out. A total of 118,946 unigenes were assembled from 255,861 transcripts, with an N50 of 1,700. Of all unigenes, about 23% were identified with at least one significant match to known databases. Across all four pairwise comparisons, 1,840 genes were found to be differentially expressed. Global hypometabolism was found in the slow-growing population, leading to the hypothesis that growth retardation in the individual growth variation of sea cucumber is a type of dormancy used to withstand adverse circumstances. In addition, pathways such as ECM-receptor interaction and focal adhesion, involved in the maintenance of cell and tissue structure and communication, were enriched. Furthermore, 76,645 SSRs, 765,242 SNPs and 146,886 indels were detected in the current study, providing an extensive set of data for future studies of genetic mapping and selective breeding. In summary, these results provide deep insight into the molecular basis of individual growth variation in marine invertebrates and will be valuable for understanding the physiological differences of the growth process.

  18. Four applications of a software data collection and analysis methodology

    NASA Technical Reports Server (NTRS)

    Basili, Victor R.; Selby, Richard W., Jr.

    1985-01-01

    The evaluation of software technologies suffers because of the lack of quantitative assessment of their effect on software development and modification. A seven-step data collection and analysis methodology couples software technology evaluation with software measurement. Four in-depth applications of the methodology are presented. The four studies represent each of the general categories of analyses on the software product and development process: blocked subject-project studies, replicated project studies, multi-project variation studies, and single project strategies. The four applications are in the areas of, respectively, software testing, cleanroom software development, characteristic software metric sets, and software error analysis.

  19. Variational methods for direct/inverse problems of atmospheric dynamics and chemistry

    NASA Astrophysics Data System (ADS)

    Penenko, Vladimir; Penenko, Alexey; Tsvetova, Elena

    2013-04-01

    We present a variational approach for solving direct and inverse problems of atmospheric hydrodynamics and chemistry. It is important that accurate matching of numerical schemes is provided along the chain of objects: direct/adjoint problems - sensitivity relations - inverse problems, including assimilation of all available measurement data. To solve these problems we have developed a new, enhanced set of cost-effective algorithms. The matched description of the multi-scale processes is provided by a specific choice of the variational principle functionals for the whole set of integrated models. All functionals of the variational principle are then approximated in space and time by splitting and decomposition methods. This approach allows us to consider separately, for example, the space-time problems of atmospheric chemistry within decomposition schemes for the integral identity sum analogs of the variational principle at each time step and in each 3D finite volume. To enhance efficiency, the set of chemical reactions is divided into subsets related to the operators of production and destruction. The idea of Euler's integrating factors is then applied within the local adjoint problem technique [1]-[3]. The analytical solutions of these adjoint problems play the role of integrating factors for the differential equations describing atmospheric chemistry. With their help, the system of differential equations is transformed into an equivalent system of integral equations. As a result we avoid the construction and inversion of preconditioning operators containing the Jacobian matrices that arise in traditional implicit schemes for ODE solution. This is the main advantage of our schemes. At the same time step, but at different stages of the "global" splitting scheme, the system of atmospheric dynamics equations is solved. For the convection-diffusion equations for all state functions in the integrated models we have developed monotone and stable discrete-analytical numerical schemes [1]-[3] that conserve the positivity of the chemical substance concentrations and possess the energy- and mass-balance properties postulated in the general variational principle for integrated models. All algorithms for the solution of transport, diffusion and transformation problems are direct (without iterations). The work is partially supported by Program No. 4 of the Presidium of RAS and Program No. 3 of the Mathematical Department of RAS, by RFBR project 11-01-00187, and by Integration Projects No. 8 and 35 of SD RAS. Our studies are in line with the goals of COST Action ES1004. References: [1] Penenko V., Tsvetova E. Discrete-analytical methods for the implementation of variational principles in environmental applications // Journal of Computational and Applied Mathematics, 2009, v. 226, 319-330. [2] Penenko A.V. Discrete-analytic schemes for solving an inverse coefficient heat conduction problem in a layered medium with gradient methods // Numerical Analysis and Applications, 2012, v. 5, pp. 326-341. [3] Penenko V., Tsvetova E. Variational methods for constructing monotone approximations for atmospheric chemistry models // Numerical Analysis and Applications, 2013 (in press).
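
    To make the integrating-factor idea concrete, here is a minimal sketch (not the authors' discrete-analytical scheme) of an exponential update for a single production-destruction equation dc/dt = P - D*c, with coefficients frozen over one splitting step; the exact local solution keeps concentrations non-negative without forming or inverting a Jacobian.

```python
# Minimal sketch of an integrating-factor (exponential) update for a
# production-destruction equation dc/dt = P - D*c, the building block the
# abstract describes.  P and D are assumed constant over one splitting step.
import numpy as np

def exp_step(c, P, D, dt):
    """Advance dc/dt = P - D*c by dt exactly for frozen P, D (keeps c >= 0)."""
    decay = np.exp(-D * dt)
    safe_D = np.where(D > 0, D, 1.0)                     # avoid division by zero
    relax = np.where(D > 0, (P / safe_D) * (1.0 - decay), P * dt)
    return c * decay + relax

# Toy two-species example with constant production/destruction coefficients.
c = np.array([1.0, 0.0])        # initial concentrations
P = np.array([0.5, 0.2])        # production rates
D = np.array([2.0, 0.0])        # destruction frequencies
for _ in range(100):
    c = exp_step(c, P, D, dt=0.05)
print(c)   # approximately [0.25, 1.0]; concentrations stay non-negative throughout
```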

  20. High level of molecular and phenotypic biodiversity in Jatropha curcas from Central America compared to Africa, Asia and South America

    PubMed Central

    2014-01-01

    Background The main bottleneck to elevate jatropha (Jatropha curcas L.) from a wild species to a profitable biodiesel crop is the low genetic and phenotypic variation found in different regions of the world, hampering efficient plant breeding for productivity traits. In this study, 182 accessions from Asia (91), Africa (35), South America (9) and Central America (47) were evaluated at the genetic and phenotypic level to find genetic variation and important traits for oilseed production. Results Genetic variation was assessed with SSR (Simple Sequence Repeat), TRAP (Target Region Amplification Polymorphism) and AFLP (Amplified Fragment Length Polymorphism) techniques. Phenotypic variation included seed morphological characteristics, seed oil content and fatty acid composition, and early growth traits. Jaccard’s similarity and cluster analysis by UPGMA (Unweighted Pair Group Method with Arithmetic mean) and PCA (Principal Component Analysis) indicated higher variability in Central American accessions compared to Asian, African and South American accessions. Polymorphism Information Content (PIC) values ranged from 0 to 0.65. In the set of Central American accessions, PIC values were higher than in other regions. Accessions from the Central American population contain alleles that were not found in the accessions from other populations. Analysis of Molecular Variance (AMOVA; P < 0.0001) indicated high genetic variation within regions (81.7%) and low variation across regions (18.3%). A high level of genetic variation was found in early growth traits and in components of the relative growth rate (specific leaf area, leaf weight, leaf weight ratio and net assimilation rate), as indicated by significant differences between accessions and by the high heritability values (50–88%). The fatty acid composition of jatropha oil differed significantly (P < 0.05) between regions. Conclusions The pool of Central American accessions showed very large genetic variation as assessed by DNA-marker variation compared to accessions from other regions. Central American accessions also showed the highest phenotypic variation and should be considered as the most important source for plant breeding. Some variation in early growth traits was found within a group of accessions from Asia and Africa, while these accessions did not differ in a single DNA-marker, possibly indicating epigenetic variation. PMID:24666927

  1. Comparative analysis and visualization of multiple collinear genomes

    PubMed Central

    2012-01-01

    Background Genome browsers are a common tool used by biologists to visualize genomic features including genes, polymorphisms, and many others. However, existing genome browsers and visualization tools are not well-suited to perform meaningful comparative analysis among a large number of genomes. With the increasing quantity and availability of genomic data, there is an increased burden to provide useful visualization and analysis tools for comparison of multiple collinear genomes such as the large panels of model organisms which are the basis for much of the current genetic research. Results We have developed a novel web-based tool for visualizing and analyzing multiple collinear genomes. Our tool illustrates genome-sequence similarity through a mosaic of intervals representing local phylogeny, subspecific origin, and haplotype identity. Comparative analysis is facilitated through reordering and clustering of tracks, which can vary throughout the genome. In addition, we provide local phylogenetic trees as an alternate visualization to assess local variations. Conclusions Unlike previous genome browsers and viewers, ours allows for simultaneous and comparative analysis. Our browser provides intuitive selection and interactive navigation about features of interest. Dynamic visualizations adjust to scale and data content making analysis at variable resolutions and of multiple data sets more informative. We demonstrate our genome browser for an extensive set of genomic data sets composed of almost 200 distinct mouse laboratory strains. PMID:22536897

  2. Development of EMS-induced mutation population for amylose and resistant starch variation in bread wheat (Triticum aestivum) and identification of candidate genes responsible for amylose variation.

    PubMed

    Mishra, Ankita; Singh, Anuradha; Sharma, Monica; Kumar, Pankaj; Roy, Joy

    2016-10-06

    Starch is a major part of cereal grain. It comprises two glucose polymer fractions, amylose (AM) and amylopectin (AP), that make up about 25 and 75% of total starch, respectively. The ratio of the two affects processing quality and digestibility of starch-based food products. Digestibility determines nutritional quality, as high amylose starch is considered a resistant or healthy starch (RS type 2) and is highly preferred for preventive measures against obesity and related health conditions. The topic of nutrition security is currently receiving much attention and consumer demand for food products with improved nutritional qualities has increased. In bread wheat (Triticum aestivum L.), variation in amylose content is narrow, hence its limited improvement. Therefore, it is necessary to produce wheat lines or populations showing wide variation in amylose/resistant starch content. In this study, a set of EMS-induced M4 mutant lines showing wide variation in amylose/resistant starch content was produced. Furthermore, two diverse mutant lines for amylose content were used to study quantitative expression patterns of 20 starch metabolic pathway genes and to identify candidate genes for amylose biosynthesis. A population comprising 101 EMS-induced mutation lines (M4 generation) was produced in a bread wheat (Triticum aestivum) variety. Two methods of amylose measurement in grain starch showed variation in amylose content ranging from ~3 to 76% in the population. The method of in vitro digestion showed variation in resistant starch content from 1 to 41%. One-way ANOVA showed significant variation (p < 0.05) in amylose and resistant starch content within the population. A multiple comparison test (Dunnett's test) showed that significant variation in amylose and resistant starch content, with respect to the parent, was observed in about 89 and 38% of the mutant lines, respectively. Expression pattern analysis of 20 starch metabolic pathway genes in two diverse mutant lines (low and high amylose mutants) showed higher expression of key genes of amylose biosynthesis (GBSSI and its isoforms) in the high amylose mutant line, in comparison to the parent. Higher expression of the amylopectin biosynthesis gene (SBE) was observed in the low amylose mutant line. An additional six candidate genes showed over-expression (BMY, SPA) and reduced expression (SSIII, SBEI, SBEIII, ISA3) in the high amylose mutant line, indicating that other starch metabolic genes may also contribute to amylose biosynthesis. In this study a set of 101 EMS-induced mutant lines (M4 generation) showing variation in amylose and resistant starch content in seed was produced. This population serves as useful germplasm or pre-breeding material for genome-wide study and improvement of starch-based processing and nutrition quality in wheat. It is also useful for the study of the genetic and molecular basis of amylose/resistant starch variation in wheat. Furthermore, gene expression analysis of 20 starch metabolic genes in the two diverse mutant lines (low and high amylose mutants) indicates that, in addition to key genes, several other genes (such as phosphorylases, isoamylases, and pullulanases) may also contribute to amylose/amylopectin biosynthesis.
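
    A hedged sketch of the statistical comparison described above: a one-way ANOVA across lines followed by Dunnett's test of each mutant line against the parent. The amylose values are synthetic placeholders, and scipy.stats.dunnett is assumed to be available (SciPy 1.11 or newer).

```python
# Hedged sketch: one-way ANOVA across lines plus Dunnett's comparison of each
# mutant line against the parent, on synthetic amylose percentages.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

parent = rng.normal(25, 2, size=6)                              # % amylose, parent variety
lines = [rng.normal(mu, 2, size=6) for mu in (24, 40, 8, 60)]   # four mutant lines

f_stat, p_anova = stats.f_oneway(parent, *lines)
print(f"one-way ANOVA: F={f_stat:.1f}, p={p_anova:.3g}")

dunnett_res = stats.dunnett(*lines, control=parent)             # SciPy >= 1.11
for i, p in enumerate(dunnett_res.pvalue):
    flag = "differs from parent" if p < 0.05 else "n.s."
    print(f"mutant line {i + 1}: p={p:.3g} ({flag})")
```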

  3. Stability: Conservation laws, Painlevé analysis and exact solutions for S-KP equation in coupled dusty plasma

    NASA Astrophysics Data System (ADS)

    EL-Kalaawy, O. H.; Moawad, S. M.; Wael, Shrouk

    The propagation of nonlinear waves in an unmagnetized strongly coupled dusty plasma with Boltzmann-distributed electrons, iso-nonthermal distributed ions and negatively charged dust grains is considered. The basic set of fluid equations is reduced to the Schamel-Kadomtsev-Petviashvili (S-KP) equation by using the reductive perturbation method. The variational principle and conservation laws of the S-KP equation are obtained. It is shown that the S-KP equation is non-integrable using Painlevé analysis. A set of new exact solutions is obtained by auto-Bäcklund transformations. The stability analysis is discussed for the existence of dust acoustic solitary waves (DASWs), and it is found that the physical parameters have strong effects on the stability criterion. In addition, the electric field and the true Mach number of this solution are investigated. Finally, the physical meanings of the solutions are discussed.

  4. A Variational Principle for Reconstruction of Elastic Deformations in Shear Deformable Plates and Shells

    NASA Technical Reports Server (NTRS)

    Tessler, Alexander; Spangler, Jan L.

    2003-01-01

    A variational principle is formulated for the inverse problem of full-field reconstruction of three-dimensional plate/shell deformations from experimentally measured surface strains. The formulation is based upon the minimization of a least squares functional that uses the complete set of strain measures consistent with linear, first-order shear-deformation theory. The formulation, which accounts for transverse shear deformation, is applicable for the analysis of thin and moderately thick plate and shell structures. The main benefit of the variational principle is that it is well suited for C⁰-continuous displacement finite element discretizations, thus enabling the development of robust algorithms for application to complex civil and aeronautical structures. The methodology is especially aimed at the next generation of aerospace vehicles for use in real-time structural health monitoring systems.

  5. Broadband Spectroscopy Using Two Suzaku Observations of the HMXB GX 301-2

    NASA Technical Reports Server (NTRS)

    Suchy, Slawomir; Fuerst, Felix; Pottschmidt, Katja; Caballero, Isabel; Kreykenbohm, Ingo; Wilms, Joern; Markowitz, Alex; Rothschild, Richard E.

    2012-01-01

    We present the analysis of two Suzaku observations of GX 301-2 at two orbital phases after the periastron passage. Variations in the column density of the line-of-sight absorber are observed, consistent with accretion from a clumpy wind. In addition to a cyclotron resonance scattering feature (CRSF), multiple fluorescence emission lines were detected in both observations. The variations in the pulse profiles and the CRSF throughout the pulse phase have a signature of a magnetic dipole field. Using a simple dipole model we calculated the expected magnetic field values for different pulse phases and were able to extract a set of geometrical angles, loosely constraining the dipole geometry in the neutron star. From the variation of the CRSF width and energy, we found a geometrical solution for the dipole, making the inclination consistent with previously published values.

  6. Broadband Spectroscopy Using Two Suzaku Observations of the HMXB GX 301-2

    NASA Astrophysics Data System (ADS)

    Suchy, Slawomir; Fürst, Felix; Pottschmidt, Katja; Caballero, Isabel; Kreykenbohm, Ingo; Wilms, Jörn; Markowitz, Alex; Rothschild, Richard E.

    2012-02-01

    We present the analysis of two Suzaku observations of GX 301-2 at two orbital phases after the periastron passage. Variations in the column density of the line-of-sight absorber are observed, consistent with accretion from a clumpy wind. In addition to a cyclotron resonance scattering feature (CRSF), multiple fluorescence emission lines were detected in both observations. The variations in the pulse profiles and the CRSF throughout the pulse phase have a signature of a magnetic dipole field. Using a simple dipole model we calculated the expected magnetic field values for different pulse phases and were able to extract a set of geometrical angles, loosely constraining the dipole geometry in the neutron star. From the variation of the CRSF width and energy, we found a geometrical solution for the dipole, making the inclination consistent with previously published values.

  7. Thin, Soft, Skin-Mounted Microfluidic Networks with Capillary Bursting Valves for Chrono-Sampling of Sweat.

    PubMed

    Choi, Jungil; Kang, Daeshik; Han, Seungyong; Kim, Sung Bong; Rogers, John A

    2017-03-01

    Systems for time sequential capture of microliter volumes of sweat released from targeted regions of the skin offer the potential to enable analysis of temporal variations in electrolyte balance and biomarker concentration throughout a period of interest. Current methods that rely on absorbent pads taped to the skin do not offer the ease of use in sweat capture needed for quantitative tracking; emerging classes of electronic wearable sweat analysis systems do not directly manage sweat-induced fluid flows for sample isolation. Here, a thin, soft, "skin-like" microfluidic platform is introduced that bonds to the skin to allow for collection and storage of sweat in an interconnected set of microreservoirs. Pressure induced by the sweat glands drives flow through a network of microchannels that incorporates capillary bursting valves designed to open at different pressures, for the purpose of passively guiding sweat through the system in sequential fashion. A representative device recovers 1.8 µL volumes of sweat each from 0.8 min of sweating into a set of separate microreservoirs, collected from a 0.03 cm² area of skin with approximately five glands, corresponding to a sweat rate of 0.60 µL min⁻¹ per gland. Human studies demonstrate applications in the accurate chemical analysis of lactate, sodium, and potassium concentrations and their temporal variations.

  8. Impact of human population history on distributions of individual-level genetic distance

    PubMed Central

    2005-01-01

    Summaries of human genomic variation shed light on human evolution and provide a framework for biomedical research. Variation is often summarised in terms of one or a few statistics (e.g. FST and gene diversity). Now that multilocus genotypes for hundreds of autosomal loci are available for thousands of individuals, new approaches are applicable. Recently, trees of individuals and other clustering approaches have demonstrated the power of an individual-focused analysis. We propose analysing the distributions of genetic distances between individuals. Each distribution, or common ancestry profile (CAP), is unique to an individual, and does not require a priori assignment of individuals to populations. Here, we consider a range of models of population history and, using coalescent simulation, reveal the potential insights gained from a set of CAPs. Information lies in the shapes of individual profiles -- sometimes captured by variance of individual CAPs -- and the variation across profiles. Analysis of short tandem repeat genotype data for over 1,000 individuals from 52 populations is consistent with dramatic differences in population histories across human groups. PMID:15814064

  9. A New Look at the Eclipse Timing Variation Diagram Analysis of Selected 3-body W UMa Systems

    NASA Astrophysics Data System (ADS)

    Christopoulou, P.-E.; Papageorgiou, A.

    2015-07-01

    The light-travel time effect produced by the presence of tertiary components can reveal much about the origin and evolution of over-contact binaries. Monitoring of W UMa systems over the last decade and/or the use of publicly available photometric surveys (NSVS, ASAS, etc.) has uncovered or suggested the presence of many unseen companions, which calls for an in-depth investigation of the parameters derived from cyclic period variations in order to confirm or reject the assumption of hidden companion(s). Progress in the analysis of eclipse timing variations is summarized here both from the empirical and the theoretical points of view, and a more extensive investigation of the proposed orbital parameters of third bodies is proposed. The code we have developed for this, implemented in Python, is set up to handle heuristic scanning with parameter perturbation in parameter space, and to establish realistic uncertainties from the least squares fitting. A computational example is given for TZ Boo, a W UMa system with a spectroscopically detected third component. Future options to be implemented include MCMC and bootstrapping.
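
    For context, the light-travel time delay that such a code fits to the eclipse timings can be written down compactly; the sketch below (not the authors' Python code) evaluates the classical Irwin-type O-C delay for an assumed, made-up set of third-body orbital elements.

```python
# Hedged sketch of the light-travel-time effect a third body imprints on
# eclipse timings: O-C delay for given orbital elements (Irwin 1952 form).
import numpy as np

def ltte_delay(t, P3, T0, a12_sini_au, e, omega_deg):
    """O-C delay in days from a tertiary companion.

    P3: outer period [d]; T0: periastron passage [d];
    a12_sini_au: projected semi-major axis of the binary's orbit about the
    common centre of mass [au]; e: eccentricity; omega_deg: argument of
    periastron [deg].
    """
    AU_DAY = 499.004784 / 86400.0            # light-travel time of 1 au, in days
    omega = np.radians(omega_deg)
    M = 2.0 * np.pi * (t - T0) / P3          # mean anomaly
    E = M.copy()
    for _ in range(50):                      # Newton iteration for Kepler's equation
        E -= (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
    nu = 2.0 * np.arctan2(np.sqrt(1 + e) * np.sin(E / 2),
                          np.sqrt(1 - e) * np.cos(E / 2))
    return a12_sini_au * AU_DAY * (
        (1 - e**2) * np.sin(nu + omega) / (1 + e * np.cos(nu)) + e * np.sin(omega))

# Example: delays over one outer orbit for illustrative (made-up) elements.
t = np.linspace(0.0, 4000.0, 200)
print(ltte_delay(t, P3=4000.0, T0=0.0, a12_sini_au=2.0, e=0.3, omega_deg=90.0)[:3])
```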

  10. Diversity of bile salts in fish and amphibians: evolution of a complex biochemical pathway.

    PubMed

    Hagey, Lee R; Møller, Peter R; Hofmann, Alan F; Krasowski, Matthew D

    2010-01-01

    Bile salts are the major end metabolites of cholesterol and are also important in lipid and protein digestion, as well as shaping of the gut microflora. Previous studies had demonstrated variation of bile salt structures across vertebrate species. We greatly extend prior surveys of bile salt variation in fish and amphibians, particularly in analysis of the biliary bile salts of Agnatha and Chondrichthyes. While there is significant structural variation of bile salts across all fish orders, bile salt profiles are generally stable within orders of fish and do not correlate with differences in diet. This large data set allowed us to infer evolutionary changes in the bile salt synthetic pathway. The hypothesized ancestral bile salt synthetic pathway, likely exemplified in extant hagfish, is simpler and much shorter than the pathway of most teleost fish and terrestrial vertebrates. Thus, the bile salt synthetic pathway has become longer and more complex throughout vertebrate evolution. Analysis of the evolution of bile salt synthetic pathways provides a rich model system for the molecular evolution of a complex biochemical pathway in vertebrates.

  11. Non-lambertian reflectance modeling and shape recovery of faces using tensor splines.

    PubMed

    Kumar, Ritwik; Barmpoutis, Angelos; Banerjee, Arunava; Vemuri, Baba C

    2011-03-01

    Modeling illumination effects and pose variations of a face is of fundamental importance in the field of facial image analysis. Most of the conventional techniques that simultaneously address both of these problems work with the Lambertian assumption and thus fall short of accurately capturing the complex intensity variation that the facial images exhibit or recovering their 3D shape in the presence of specularities and cast shadows. In this paper, we present a novel Tensor-Spline-based framework for facial image analysis. We show that, using this framework, the facial apparent BRDF field can be accurately estimated while seamlessly accounting for cast shadows and specularities. Further, using local neighborhood information, the same framework can be exploited to recover the 3D shape of the face (to handle pose variation). We quantitatively validate the accuracy of the Tensor Spline model using a more general model based on the mixture of single-lobed spherical functions. We demonstrate the effectiveness of our technique by presenting extensive experimental results for face relighting, 3D shape recovery, and face recognition using the Extended Yale B and CMU PIE benchmark data sets.

  12. CROSS-DISCIPLINARY PHYSICS AND RELATED AREAS OF SCIENCE AND TECHNOLOGY: Simulation of SET Operation in Phase-Change Random Access Memories with Heater Addition and Ring-Type Contactor for Low-Power Consumption by Finite Element Modeling

    NASA Astrophysics Data System (ADS)

    Gong, Yue-Feng; Song, Zhi-Tang; Ling, Yun; Liu, Yan; Feng, Song-Lin

    2009-11-01

    A three-dimensional finite element model of phase change random access memory (PCRAM) is established for comprehensive electrical and thermal analysis during the SET operation. The SET behaviours of the heater addition structure (HS) and the ring-type contact in bottom electrode (RIB) structure are compared with each other. There are two ways to reduce the RESET current: applying a high-resistivity interfacial layer and building a new device structure. The simulation results indicate that the SET current varies little between these power-reduction approaches. Taking both the RESET and SET operation currents into consideration, this study shows that the RIB-structure PCRAM cell is suitable for future high-density devices owing to its high heating efficiency in the RESET operation.

  13. Investigations of SPS Orbit Drifts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Drøsdal, Lene; Bracco, Chiara; Cornelis, Karel

    2014-07-01

    The LHC is filled from the last pre-injector, the Super Proton Synchrotron (SPS), via two 3 km long transfer lines, TI 2 and TI 8. Over the LHC injection processes, a drift of the beam trajectories has been observed in TI 2 and TI 8, requiring regular correction of the trajectories in order to ensure clean injection into the LHC. Investigations of the trajectory variations in the transfer lines showed that the main source of short-term trajectory drifts is current variation of the SPS extraction septa (MSE). The stability of the power converters has been improved, but the variations are still present and further improvements are being investigated. The stability over a longer period of time cannot be explained by this source alone. The analysis of trajectory variations shows that there are also slow variations in the SPS closed orbit at extraction. A set of SPS orbit measurements has been saved and analysed. These observations will be used together with simulations and observed field errors to locate the second source of variations.

  14. Variational formulation for dissipative continua and an incremental J-integral

    NASA Astrophysics Data System (ADS)

    Rahaman, Md. Masiur; Dhas, Bensingh; Roy, D.; Reddy, J. N.

    2018-01-01

    Our aim is to rationally formulate a proper variational principle for dissipative (viscoplastic) solids in the presence of inertia forces. As a first step, a consistent linearization of the governing nonlinear partial differential equations (PDEs) is carried out. An additional set of complementary (adjoint) equations is then formed to recover an underlying variational structure for the augmented system of linearized balance laws. This makes it possible to introduce an incremental Lagrangian such that the linearized PDEs, including the complementary equations, become the Euler-Lagrange equations. Continuous groups of symmetries of the linearized PDEs are computed and an analysis is undertaken to identify the variational groups of symmetries of the linearized dissipative system. Application of Noether's theorem leads to the conservation laws (conserved currents) of motion corresponding to the variational symmetries. As a specific outcome, we exploit translational symmetries of the functional in the material space and recover, via Noether's theorem, an incremental J-integral for viscoplastic solids in the presence of inertia forces. Numerical demonstrations are provided through a two-dimensional plane strain numerical simulation of a compact tension specimen of annealed mild steel under dynamic loading.

  15. Comparison of logging residue from lump sum and log scale timber sales.

    Treesearch

    James O Howard; Donald J. DeMars

    1985-01-01

    Data from 1973 and 1980 logging residue studies were used to compare the volume of residue from lump sum and log scale timber sales. Covariance analysis was used to adjust the mean volume for each data set for potential variation resulting from differences in stand conditions. Mean residue volumes from the two sale types were significantly different at the 5-percent...

  16. A protocol for searching the most probable phase-retrieved maps in coherent X-ray diffraction imaging by exploiting the relationship between convergence of the retrieved phase and success of calculation.

    PubMed

    Sekiguchi, Yuki; Hashimoto, Saki; Kobayashi, Amane; Oroguchi, Tomotaka; Nakasako, Masayoshi

    2017-09-01

    Coherent X-ray diffraction imaging (CXDI) is a technique for visualizing the structures of non-crystalline particles with sizes in the submicrometer to micrometer range in materials science and biology. In the structural analysis of CXDI, the electron density map of a specimen particle projected along the direction of the incident X-rays can be reconstructed only from the diffraction pattern by using phase-retrieval (PR) algorithms. However, in practice, the reconstruction, relying entirely on the computational procedure, sometimes fails because diffraction patterns miss the data in small-angle regions owing to the beam stop and saturation of the detector pixels, and are modified by Poisson noise in X-ray detection. To date, X-ray free-electron lasers have allowed us to collect a large number of diffraction patterns within a short period of time. Therefore, the reconstruction of correct electron density maps is the bottleneck for efficiently conducting structure analyses of non-crystalline particles. To automatically address the correctness of retrieved electron density maps, a data analysis protocol is proposed to extract the most probable electron density maps from a set of maps retrieved from 1000 different random seeds for a single diffraction pattern. By monitoring the variations of the phase values during the PR calculations, we found that PR calculations tend to succeed when the retrieved phase sets converge on a certain value, whereas persistent variation of the phase set tends to indicate failure to yield the correct electron density map. To quantify this tendency, a figure of merit for the variation of the phase values during the PR calculation is introduced. In addition, a PR protocol is proposed to evaluate the similarity between the map with the highest figure of merit and other independently reconstructed maps. The protocol is implemented and practically examined in the structure analyses of diffraction patterns from aggregates of gold colloidal particles. Furthermore, the feasibility of the protocol in the structure analysis of organelles from biological cells is examined.
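
    As a rough illustration of the convergence-monitoring idea (the specific figure of merit in the paper may differ), the snippet below tracks how much the retrieved phases still change over the final iterations; a run whose phases keep wandering would score high and be treated as a likely failure.

```python
# Hedged sketch of a convergence check in the spirit of the figure of merit
# described above: measure how much retrieved phases still change over the
# last phase-retrieval iterations.  Illustration only, not the published protocol.
import numpy as np

def phase_variation(phase_history):
    """phase_history: array (n_iterations, n_pixels) of retrieved phases [rad]."""
    diffs = np.angle(np.exp(1j * np.diff(phase_history, axis=0)))  # wrapped increments
    return np.mean(np.abs(diffs))        # mean absolute phase change per iteration

rng = np.random.default_rng(4)
converged = np.cumsum(rng.normal(0, 0.01, (50, 1000)), axis=0)   # settles quickly
wandering = np.cumsum(rng.normal(0, 0.5, (50, 1000)), axis=0)    # keeps drifting
print(phase_variation(converged[-10:]), phase_variation(wandering[-10:]))
```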

  17. Relevant Feature Set Estimation with a Knock-out Strategy and Random Forests

    PubMed Central

    Ganz, Melanie; Greve, Douglas N.; Fischl, Bruce; Konukoglu, Ender

    2015-01-01

    Group analysis of neuroimaging data is a vital tool for identifying anatomical and functional variations related to diseases as well as normal biological processes. The analyses are often performed on a large number of highly correlated measurements using a relatively smaller number of samples. Despite the correlation structure, the most widely used approach is to analyze the data using univariate methods followed by post-hoc corrections that try to account for the data’s multivariate nature. Although widely used, this approach may fail to recover from the adverse effects of the initial analysis when local effects are not strong. Multivariate pattern analysis (MVPA) is a powerful alternative to the univariate approach for identifying relevant variations. Jointly analyzing all the measures, MVPA techniques can detect global effects even when individual local effects are too weak to detect with univariate analysis. Current approaches are successful in identifying variations that yield highly predictive and compact models. However, they suffer from lessened sensitivity and instabilities in identification of relevant variations. Furthermore, current methods’ user-defined parameters are often unintuitive and difficult to determine. In this article, we propose a novel MVPA method for group analysis of high-dimensional data that overcomes the drawbacks of the current techniques. Our approach explicitly aims to identify all relevant variations using a “knock-out” strategy and the Random Forest algorithm. In evaluations with synthetic datasets the proposed method achieved substantially higher sensitivity and accuracy than the state-of-the-art MVPA methods, and outperformed the univariate approach when the effect size is low. In experiments with real datasets the proposed method identified regions beyond the univariate approach, while other MVPA methods failed to replicate the univariate results. More importantly, in a reproducibility study with the well-known ADNI dataset the proposed method yielded higher stability and power than the univariate approach. PMID:26272728
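
    The following is a hedged, simplified sketch of the knock-out idea described above: fit a Random Forest, record its top-ranked feature, remove ("knock out") that feature, and refit, so relevant features masked by correlated ones can still be recovered. The dataset, the fixed selection budget, and the importance ranking used here are illustrative choices, not the authors' exact algorithm or stopping rule.

```python
# Hedged sketch of a "knock-out" relevant-feature search with Random Forests.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=300, n_features=40, n_informative=6,
                           n_redundant=4, random_state=0)

remaining = list(range(X.shape[1]))
selected = []
for _ in range(10):                                   # fixed budget for the sketch
    rf = RandomForestClassifier(n_estimators=300, random_state=0)
    rf.fit(X[:, remaining], y)
    top = remaining[int(np.argmax(rf.feature_importances_))]
    selected.append(top)
    remaining.remove(top)                             # knock the feature out

print("features selected in order of discovery:", selected)
```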

  18. A multifactorial analysis of obesity as CVD risk factor: use of neural network based methods in a nutrigenetics context.

    PubMed

    Valavanis, Ioannis K; Mougiakakou, Stavroula G; Grimaldi, Keith A; Nikita, Konstantina S

    2010-09-08

    Obesity is a multifactorial trait, which comprises an independent risk factor for cardiovascular disease (CVD). The aim of the current work is to study the complex etiology beneath obesity and identify genetic variations and/or factors related to nutrition that contribute to its variability. To this end, a set of more than 2300 white subjects who participated in a nutrigenetics study was used. For each subject a total of 63 factors describing genetic variants related to CVD (24 in total), gender, and nutrition (38 in total), e.g. average daily intake in calories and cholesterol, were measured. Each subject was categorized according to body mass index (BMI) as normal (BMI ≤ 25) or overweight (BMI > 25). Two artificial neural network (ANN) based methods were designed and used towards the analysis of the available data. These corresponded to i) a multi-layer feed-forward ANN combined with a parameter decreasing method (PDM-ANN), and ii) a multi-layer feed-forward ANN trained by a hybrid method (GA-ANN) which combines genetic algorithms and the popular back-propagation training algorithm. PDM-ANN and GA-ANN were comparatively assessed in terms of their ability to identify the most important factors among the initial 63 variables describing genetic variations, nutrition and gender, able to classify a subject into one of the BMI related classes: normal and overweight. The methods were designed and evaluated using appropriate training and testing sets provided by 3-fold Cross Validation (3-CV) resampling. Classification accuracy, sensitivity, specificity and area under the receiver operating characteristic curve were utilized to evaluate the resulting predictive ANN models. The most parsimonious set of factors was obtained by the GA-ANN method and included gender, six genetic variations and 18 nutrition-related variables. The corresponding predictive model was characterized by a mean accuracy of 61.46% in the 3-CV testing sets. The ANN based methods revealed factors that interactively contribute to the obesity trait and provided predictive models with a promising generalization ability. In general, the results showed that ANNs and their hybrids can provide useful tools for the study of complex traits in the context of nutrigenetics.
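
    A minimal sketch of the kind of ANN classification-with-cross-validation pipeline the abstract describes, on synthetic data: a feed-forward network evaluated with 3-fold stratified cross-validation. It uses standard back-propagation training only (no genetic-algorithm hybrid or parameter-decreasing step) and stands in for, rather than reproduces, the PDM-ANN/GA-ANN methods.

```python
# Hedged sketch: feed-forward ANN classifier with 3-fold cross-validation on
# synthetic features standing in for the 63 genetic/nutritional variables.
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=2300, n_features=63, n_informative=25,
                           random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000,
                                    random_state=0))
scores = cross_val_score(model, X, y,
                         cv=StratifiedKFold(3, shuffle=True, random_state=0),
                         scoring="accuracy")
print("3-fold CV accuracy:", scores.round(3))
```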

  19. Comparison between crustal density and velocity variations in Southern California

    USGS Publications Warehouse

    Langenheim, V.E.; Hauksson, E.

    2001-01-01

    We predict gravity from a three-dimensional Vp model of the upper crust and compare it to the observed isostatic residual gravity field. In general this comparison shows that the isostatic residual gravity field reflects the density variations in the upper to middle crust. Both data sets show similar density variations for the upper crust in areas such as the Peninsular Ranges and the Los Angeles basin. Both show similar variations across major faults, such as the San Andreas and Garlock faults in the Mojave Desert. The difference between the two data sets in regions such as the Salton Trough, the Eastern California Shear Zone, and the eastern Ventura basin (where depth to Moho is <30 km), however, suggests high-density middle to lower crust beneath these regions. Hence the joint interpretation of these data sets improves the depth constraints of crustal density variations.

  20. Independent Confirmation of the Pioneer 10 Anomalous Acceleration

    NASA Technical Reports Server (NTRS)

    Markwardt, Craig B.

    2002-01-01

    I perform an independent analysis of radio Doppler tracking data from the Pioneer 10 spacecraft for the time period 1987-1994. All of the tracking data were taken from public archive sources, and the analysis tools were developed independently by myself. I confirm that an apparent anomalous acceleration is acting on the Pioneer 10 spacecraft, which is not accounted for by present physical models of spacecraft navigation. My best fit value for the acceleration, including corrections for systematic biases and uncertainties, is (8.60 ± 1.34) × 10⁻⁸ cm s⁻², directed towards the Sun. This value compares favorably to previous results. I examine the robustness of my result to various perturbations of the analysis method, and find agreement to within ±5%. The anomalous acceleration is reasonably constant with time, with a characteristic variation time scale of greater than 70 yr. Such a variation timescale is still too short to rule out on-board thermal radiation effects, based on this particular Pioneer 10 data set.

  1. Progress on automated data analysis algorithms for ultrasonic inspection of composites

    NASA Astrophysics Data System (ADS)

    Aldrin, John C.; Forsyth, David S.; Welter, John T.

    2015-03-01

    Progress is presented on the development and demonstration of automated data analysis (ADA) software to address the burden in interpreting ultrasonic inspection data for large composite structures. The automated data analysis algorithm is presented in detail, which follows standard procedures for analyzing signals for time-of-flight indications and backwall amplitude dropout. New algorithms have been implemented to reliably identify indications in time-of-flight images near the front and back walls of composite panels. Adaptive call criteria have also been applied to address sensitivity to variation in backwall signal level, panel thickness variation, and internal signal noise. ADA processing results are presented for a variety of test specimens that include inserted materials and discontinuities produced under poor manufacturing conditions. Software tools have been developed to support both ADA algorithm design and certification, producing a statistical evaluation of indication results and false calls using a matching process with predefined truth tables. Parametric studies were performed to evaluate detection and false call results with respect to varying algorithm settings.
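
    A hedged sketch of the truth-table matching used to score detections and false calls: each automated indication is matched to the nearest known flaw within a tolerance, and unmatched indications are counted as false calls. The coordinates, tolerance, and greedy matching rule are illustrative assumptions, not the software's certification procedure.

```python
# Hedged sketch of truth-table scoring: match automated indications to known
# flaw locations within a tolerance and tally detections, misses, false calls.
import numpy as np

def score_indications(indications, truth, tol=5.0):
    """indications, truth: (N, 2) arrays of (x, y) positions in mm."""
    indications = np.asarray(indications, dtype=float)
    truth = np.asarray(truth, dtype=float)
    detected = np.zeros(len(truth), dtype=bool)
    false_calls = 0
    for ind in indications:
        d = np.linalg.norm(truth - ind, axis=1)
        if len(truth) and d.min() <= tol:
            detected[np.argmin(d)] = True      # counts as a detection of that flaw
        else:
            false_calls += 1                   # no flaw nearby: false call
    return int(detected.sum()), int(len(truth) - detected.sum()), false_calls

hits, misses, fc = score_indications([(10, 10), (40, 42), (80, 15)],
                                     [(11, 9), (40, 40)])
print(f"detections={hits}, misses={misses}, false calls={fc}")
```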

  2. Statistical analysis of modal parameters of a suspension bridge based on Bayesian spectral density approach and SHM data

    NASA Astrophysics Data System (ADS)

    Li, Zhijun; Feng, Maria Q.; Luo, Longxi; Feng, Dongming; Xu, Xiuli

    2018-01-01

    Uncertainty in modal parameter estimation appears to a significant extent in structural health monitoring (SHM) practice in civil engineering, due to environmental influences and modeling errors. Reasonable methodologies are needed for processing this uncertainty. Bayesian inference can provide a promising and feasible identification solution for the purposes of SHM. However, there are relatively few studies on the application of the Bayesian spectral method to modal identification using SHM data sets. To extract modal parameters from the large data sets collected by an SHM system, the Bayesian spectral density algorithm was applied to address the uncertainty of mode extraction from the output-only response of a long-span suspension bridge. The posterior most probable values of the modal parameters and their uncertainties were estimated through Bayesian inference. A long-term variation and statistical analysis was performed using the sensor data sets collected from the SHM system of the suspension bridge over a one-year period. The t location-scale distribution was shown to be a better candidate function for the frequencies of the lower modes. On the other hand, the Burr distribution provided the best fit to the higher modes, which are sensitive to temperature. In addition, wind-induced variation of the modal parameters was also investigated. It was observed that both the damping ratios and the modal forces increased during periods of typhoon excitation. Meanwhile, the modal damping ratios exhibited significant correlation with the spectral intensities of the corresponding modal forces.
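
    To illustrate the distribution comparison mentioned above, the snippet below fits a t location-scale model and a normal model to a mock sample of identified natural frequencies and compares their log-likelihoods; the numbers are synthetic, not the bridge monitoring data.

```python
# Hedged sketch: compare a t location-scale fit with a normal fit for a
# synthetic sample of identified modal frequencies.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
freqs = 0.36 + 0.004 * rng.standard_t(df=4, size=2000)   # mock first-mode frequencies [Hz]

df_, loc_t, scale_t = stats.t.fit(freqs)
mu, sigma = stats.norm.fit(freqs)

# Higher log-likelihood indicates the better candidate family for these data.
ll_t = np.sum(stats.t.logpdf(freqs, df_, loc_t, scale_t))
ll_n = np.sum(stats.norm.logpdf(freqs, mu, sigma))
print(f"t location-scale: logL={ll_t:.1f} (df={df_:.1f}); normal: logL={ll_n:.1f}")
```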

  3. Appropriate threshold levels of cardiac beat-to-beat variation in semi-automatic analysis of equine ECG recordings.

    PubMed

    Flethøj, Mette; Kanters, Jørgen K; Pedersen, Philip J; Haugaard, Maria M; Carstensen, Helena; Olsen, Lisbeth H; Buhl, Rikke

    2016-11-28

    Although premature beats are a matter of concern in horses, the interpretation of equine ECG recordings is complicated by a lack of standardized analysis criteria and a limited knowledge of the normal beat-to-beat variation of equine cardiac rhythm. The purpose of this study was to determine the appropriate threshold levels of maximum acceptable deviation of RR intervals in equine ECG analysis, and to evaluate a novel two-step timing algorithm by quantifying the frequency of arrhythmias in a cohort of healthy adult endurance horses. Beat-to-beat variation differed considerably with heart rate (HR), and an adaptable model consisting of three different HR ranges with separate threshold levels of maximum acceptable RR deviation was consequently defined. For resting HRs <60 beats/min (bpm) the threshold level of RR deviation was set at 20%, for HRs in the intermediate range between 60 and 100 bpm the threshold was 10%, and for exercising HRs >100 bpm, the threshold level was 4%. Supraventricular premature beats represented the most prevalent arrhythmia category with varying frequencies in seven horses at rest (median 7, range 2-86) and six horses during exercise (median 2, range 1-24). Beat-to-beat variation of equine cardiac rhythm varies according to HR, and threshold levels in equine ECG analysis should be adjusted accordingly. Standardization of the analysis criteria will enable comparisons of studies and follow-up examinations of patients. A small number of supraventricular premature beats appears to be a normal finding in endurance horses. Further studies are required to validate the findings and determine the clinical significance of premature beats in horses.
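
    The HR-adaptive rule can be written down directly from the thresholds above (20%, 10% and 4% for heart rates below 60, between 60 and 100, and above 100 beats/min). The sketch below is a single-pass illustration of that rule, not the study's exact two-step timing algorithm; the function names are hypothetical.

        import numpy as np

        def rr_threshold(hr_bpm):
            """Maximum acceptable RR deviation for a given heart rate."""
            if hr_bpm < 60:
                return 0.20
            elif hr_bpm <= 100:
                return 0.10
            return 0.04

        def flag_irregular_beats(rr_intervals_s):
            """Flag beats whose RR interval deviates from the preceding one by
            more than the HR-dependent threshold."""
            rr = np.asarray(rr_intervals_s, dtype=float)
            flags = np.zeros(rr.size, dtype=bool)
            for i in range(1, rr.size):
                hr = 60.0 / rr[i - 1]                      # instantaneous HR, bpm
                deviation = abs(rr[i] - rr[i - 1]) / rr[i - 1]
                flags[i] = deviation > rr_threshold(hr)
            return flags

        # Resting rhythm (~40 bpm) with one premature beat.
        rr = [1.5, 1.5, 1.5, 1.0, 1.5, 1.5]
        print(flag_irregular_beats(rr))   # the premature beat and the following
                                          # recovery interval are both flagged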

  4. Host genetic determinants of microbiota-dependent nutrition revealed by genome-wide analysis of Drosophila melanogaster

    PubMed Central

    Dobson, Adam J.; Chaston, John M.; Newell, Peter D.; Donahue, Leanne; Hermann, Sara L.; Sannino, David R.; Westmiller, Stephanie; Wong, Adam C.-N.; Clark, Andrew G.; Lazzaro, Brian P.; Douglas, Angela E.

    2015-01-01

    Animals bear communities of gut microorganisms with substantial effects on animal nutrition, but the host genetic basis of these effects is unknown. Here, we use Drosophila to demonstrate substantial among-genotype variation in the effects of eliminating the gut microbiota on five host nutritional indices (weight, and protein, lipid, glucose and glycogen contents); this includes variation in both the magnitude and direction of microbiota-dependent effects. Genome-wide associations to identify the genetic basis of the microbiota-dependent variation reveal polymorphisms in largely non-overlapping sets of genes associated with variation in the nutritional traits, including strong representation of conserved genes functioning in signaling. Key genes identified by the GWA study are validated by loss-of-function mutations that altered microbiota-dependent nutritional effects. We conclude that the microbiota interacts with the animal at multiple points in the signaling and regulatory networks that determine animal nutrition. These interactions with the microbiota are likely conserved across animals, including humans. PMID:25692519

  5. Impact Of The Material Variability On The Stamping Process: Numerical And Analytical Analysis

    NASA Astrophysics Data System (ADS)

    Ledoux, Yann; Sergent, Alain; Arrieux, Robert

    2007-05-01

    Finite element simulation is a very useful tool in the deep drawing industry, particularly for the development and validation of new stamping tools, where it reduces the cost and time of tooling design and set-up. One of the main difficulties in obtaining good agreement between the simulation and the real process, however, lies in the definition of the numerical conditions (mesh, punch travel speed, boundary conditions, ...) and of the parameters that model the material behavior. Indeed, in the press shop, a change of sheet batch often produces a variation in the formed part geometry because of the variability of material properties between batches. This variability is probably one of the main sources of process deviation during process set-up, which is why it is important to study the influence of material data variation on the geometry of a typical stamped part. An omega-shaped part was chosen for its simplicity and because it is representative of automotive applications (car body reinforcement); it also exhibits significant springback deviations. An isotropic behaviour law is assumed, and the impact of statistical deviations of the three law coefficients characterizing the material and of the friction coefficient around their nominal values is tested. Gaussian distributions are assumed, and their impact on the geometry variation is studied by FE simulation. A second approach models the process variability mathematically: given the variability of the input parameters, an analytical model is defined that yields the variability of the part geometry around the nominal shape. Together, these two approaches allow the process capability to be predicted as a function of the material parameter variability.
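
    The analytical approach outlined above can be sketched as a Monte Carlo propagation: draw Gaussian samples of the three material-law coefficients and the friction coefficient and push them through a surrogate model of springback. The surrogate function and every numerical value below are hypothetical placeholders, standing in for a model fitted to FE simulations of the omega-shaped part.

        import numpy as np

        rng = np.random.default_rng(42)
        n_trials = 10_000

        # Nominal values and standard deviations (illustrative numbers only).
        K = rng.normal(500.0, 15.0, n_trials)       # hardening modulus, MPa
        n = rng.normal(0.20, 0.01, n_trials)        # hardening exponent
        sigma_y = rng.normal(180.0, 8.0, n_trials)  # yield stress, MPa
        mu = rng.normal(0.12, 0.01, n_trials)       # friction coefficient

        def springback_deviation(K, n, sigma_y, mu):
            # Hypothetical response surface (e.g. fitted to FE runs) returning
            # the flange opening-angle deviation in degrees.
            return 2.0 + 0.004 * (K - 500.0) + 12.0 * (n - 0.20) \
                       + 0.010 * (sigma_y - 180.0) - 6.0 * (mu - 0.12)

        angles = springback_deviation(K, n, sigma_y, mu)
        print(f"mean springback deviation: {angles.mean():.3f} deg")
        print(f"process variability (1 sigma): {angles.std(ddof=1):.3f} deg")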

  6. Global duricrust on Mars - Analysis of remote-sensing data

    NASA Technical Reports Server (NTRS)

    Jakosky, B. M.; Christensen, P. R.

    1986-01-01

    A study is conducted of the infrared thermal emission, radio thermal emission, and radar reflection data sets, with the objective of obtaining a simple and self-consistent model of the Martian surface. The results are compared with in situ observations at the Viking Lander sites. Attention is given to thermal inertia values, the abundance of surface rocks, radar/thermal correlations, diurnal temperature deviations, and radio emission data. It is suggested that all of the global remote-sensing data sets considered can be reconciled on the basis of variations in the degree of formation of a case-hardened crust or duricrust. On the other hand, no other model proposed in conjunction with any individual data set can satisfy all of the constraints discussed.

  7. Variation block-based genomics method for crop plants.

    PubMed

    Kim, Yul Ho; Park, Hyang Mi; Hwang, Tae-Young; Lee, Seuk Ki; Choi, Man Soo; Jho, Sungwoong; Hwang, Seungwoo; Kim, Hak-Min; Lee, Dongwoo; Kim, Byoung-Chul; Hong, Chang Pyo; Cho, Yun Sung; Kim, Hyunmin; Jeong, Kwang Ho; Seo, Min Jung; Yun, Hong Tai; Kim, Sun Lim; Kwon, Young-Up; Kim, Wook Han; Chun, Hye Kyung; Lim, Sang Jong; Shin, Young-Ah; Choi, Ik-Young; Kim, Young Sun; Yoon, Ho-Sung; Lee, Suk-Ha; Lee, Sunghoon

    2014-06-15

    In contrast with wild species, cultivated crop genomes consist of reshuffled recombination blocks, which occurred by crossing and selection processes. Accordingly, recombination block-based genomics analysis can be an effective approach for the screening of target loci for agricultural traits. We propose the variation block method, which is a three-step process for recombination block detection and comparison. The first step is to detect variations by comparing the short-read DNA sequences of the cultivar to the reference genome of the target crop. Next, sequence blocks with variation patterns are examined and defined. The boundaries between the variation-containing sequence blocks are regarded as recombination sites. All the assumed recombination sites in the cultivar set are used to split the genomes, and the resulting sequence regions are termed variation blocks. Finally, the genomes are compared using the variation blocks. The variation block method identified recurring recombination blocks accurately and successfully represented block-level diversities in the publicly available genomes of 31 soybean and 23 rice accessions. The practicality of this approach was demonstrated by the identification of a putative locus determining soybean hilum color. We suggest that the variation block method is an efficient genomics method for the recombination block-level comparison of crop genomes. We expect that this method will facilitate the development of crop genomics by bringing genomics technologies to the field of crop breeding.
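
    The core of the block-detection step can be sketched in a few lines: given a matrix of variant genotypes for a cultivar set, place a boundary wherever the pattern across cultivars changes and report the intervening segments as variation blocks. This is a toy illustration of the idea, not the published pipeline, and the genotype matrix is an invented example.

        import numpy as np

        def variation_blocks(genotypes):
            """genotypes: rows = cultivars, columns = ordered variant positions
            (0 = reference allele, 1 = alternate allele).
            Returns (start, end) column index ranges, one per block."""
            g = np.asarray(genotypes)
            # A boundary falls between columns whose cultivar-wise patterns differ.
            change = np.any(g[:, 1:] != g[:, :-1], axis=0)
            boundaries = np.flatnonzero(change) + 1
            edges = np.concatenate(([0], boundaries, [g.shape[1]]))
            return [(int(s), int(e - 1)) for s, e in zip(edges[:-1], edges[1:])]

        # Toy example: 4 cultivars, 8 variant positions, two recombination points.
        g = np.array([[0, 0, 0, 1, 1, 1, 0, 0],
                      [0, 0, 0, 0, 0, 0, 1, 1],
                      [1, 1, 1, 1, 1, 1, 0, 0],
                      [1, 1, 1, 0, 0, 0, 1, 1]])
        print(variation_blocks(g))   # [(0, 2), (3, 5), (6, 7)]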

  8. Data preprocessing methods of FT-NIR spectral data for the classification of cooking oil

    NASA Astrophysics Data System (ADS)

    Ruah, Mas Ezatul Nadia Mohd; Rasaruddin, Nor Fazila; Fong, Sim Siong; Jaafar, Mohd Zuli

    2014-12-01

    This work describes data pre-processing methods for FT-NIR spectroscopy data sets of cooking oil and its quality parameters using chemometric methods. Pre-processing of near-infrared (NIR) spectral data has become an integral part of chemometric modelling, so this work investigates the utility and effectiveness of pre-processing algorithms, namely row scaling, column scaling, and single scaling with Standard Normal Variate (SNV). Combinations of these scaling methods affect both exploratory analysis and classification via Principal Component Analysis (PCA) plots. The samples were divided into palm oil and non-palm cooking oil. The classification model was built using FT-NIR cooking oil spectra in absorbance mode over the range 4000 cm-1 to 14000 cm-1. A Savitzky-Golay derivative was applied before developing the classification model, and the data were then split into a training set and a test set using the Duplex method, with the size of each class kept equal to 2/3 of the class with the minimum number of samples. The t-statistic was employed as a variable selection method to identify which variables were significant for the classification models. The data pre-processing was evaluated using the modified silhouette width (mSW), PCA, and the percentage correctly classified (%CC). The results show that different pre-processing strategies lead to substantial differences in model performance; the effects of row scaling, column standardisation, and single scaling with SNV are indicated by the mSW and %CC values. For the two-PC model, all five classifiers gave high %CC except Quadratic Distance Analysis.
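
    One of the pre-processing routes evaluated above, row-wise Standard Normal Variate scaling followed by PCA, can be sketched as follows. The synthetic spectra and the palm / non-palm split are placeholders for the FT-NIR data set, so this shows the mechanics rather than reproducing the reported results.

        import numpy as np
        from sklearn.decomposition import PCA

        def snv(spectra):
            """Standard Normal Variate: centre and scale each spectrum (row)."""
            x = np.asarray(spectra, dtype=float)
            return (x - x.mean(axis=1, keepdims=True)) / x.std(axis=1, keepdims=True)

        rng = np.random.default_rng(7)
        wn = np.linspace(4000, 14000, 500)                     # wavenumbers, cm-1
        palm = 1.0 + 0.2 * np.exp(-((wn - 5800) / 300) ** 2) \
               + 0.05 * rng.normal(size=(20, wn.size))
        other = 1.0 + 0.2 * np.exp(-((wn - 7200) / 300) ** 2) \
                + 0.05 * rng.normal(size=(20, wn.size))
        X = np.vstack([palm, other])                           # 40 spectra

        scores = PCA(n_components=2).fit_transform(snv(X))
        print("PC1 mean, palm oil:    ", scores[:20, 0].mean().round(2))
        print("PC1 mean, non-palm oil:", scores[20:, 0].mean().round(2))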

  9. Patient-physician discussions about costs: definitions and impact on cost conversation incidence estimates.

    PubMed

    Hunter, Wynn G; Hesson, Ashley; Davis, J Kelly; Kirby, Christine; Williamson, Lillie D; Barnett, Jamison A; Ubel, Peter A

    2016-03-31

    Nearly one in three Americans is financially burdened by their medical expenses. To mitigate financial distress, experts recommend routine physician-patient cost conversations. However, the content and incidence of these conversations are unclear, and rigorous definitions are lacking. We sought to develop a novel set of cost conversation definitions, and determine the impact of definitional variation on cost conversation incidence in three clinical settings. Retrospective, mixed-methods analysis of transcribed dialogue from 1,755 outpatient encounters for routine clinical management of breast cancer, rheumatoid arthritis, and depression, occurring between 2010 and 2014. We developed cost conversation definitions using summative content analysis. Transcripts were evaluated independently by at least two members of our multi-disciplinary team to determine cost conversation incidence using each definition. Incidence estimates were compared using Pearson's Chi-Square Tests. Three cost conversation definitions emerged from our analysis: (a) Out-of-Pocket (OoP) Cost--discussion of the patient's OoP costs for a healthcare service; (b) Cost/Coverage--discussion of the patient's OoP costs or insurance coverage; (c) Cost of Illness--discussion of financial costs or insurance coverage related to health or healthcare. These definitions were hierarchical; OoP Cost was a subset of Cost/Coverage, which was a subset of Cost of Illness. In each clinical setting, we observed significant variation in the incidence of cost conversations when using different definitions; breast oncology: 16, 22, 24% of clinic visits contained cost conversation (OoP Cost, Cost/Coverage, Cost of Illness, respectively; P < 0.001); depression: 30, 38, 43% (P < 0.001); and rheumatoid arthritis: 26, 33, 35% (P < 0.001). The estimated incidence of physician-patient cost conversation varied significantly depending on the definition used. Our findings and proposed definitions may assist in retrospective interpretation and prospective design of investigations on this topic.
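
    The incidence comparison can be illustrated with a Pearson chi-square test on counts of visits with and without a cost conversation under each definition. The visit total and counts below are illustrative only, derived from the reported percentages rather than the study's data, and the hierarchical (nested) nature of the definitions is ignored in this simple sketch.

        import numpy as np
        from scipy.stats import chi2_contingency

        n_visits = 600                    # hypothetical number of clinic visits
        rates = {"OoP Cost": 0.16, "Cost/Coverage": 0.22, "Cost of Illness": 0.24}

        # Rows: definitions; columns: visits with / without a cost conversation.
        table = np.array([[round(n_visits * p), n_visits - round(n_visits * p)]
                          for p in rates.values()])

        chi2, p_value, dof, _ = chi2_contingency(table)
        print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")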

  10. Variational Bayesian Learning for Wavelet Independent Component Analysis

    NASA Astrophysics Data System (ADS)

    Roussos, E.; Roberts, S.; Daubechies, I.

    2005-11-01

    In an exploratory approach to data analysis, it is often useful to consider the observations as generated from a set of latent generators or "sources" via a generally unknown mapping. For the noisy overcomplete case, where we have more sources than observations, the problem becomes extremely ill-posed. Solutions to such inverse problems can, in many cases, be achieved by incorporating prior knowledge about the problem, captured in the form of constraints. This setting is a natural candidate for the application of the Bayesian methodology, allowing us to incorporate "soft" constraints in a natural manner. The work described in this paper is mainly driven by problems in functional magnetic resonance imaging of the brain, for the neuro-scientific goal of extracting relevant "maps" from the data. This can be stated as a `blind' source separation problem. Recent experiments in the field of neuroscience show that these maps are sparse, in some appropriate sense. The separation problem can be solved by independent component analysis (ICA), viewed as a technique for seeking sparse components, assuming appropriate distributions for the sources. We derive a hybrid wavelet-ICA model, transforming the signals into a domain where the modeling assumption of sparsity of the coefficients with respect to a dictionary is natural. We follow a graphical modeling formalism, viewing ICA as a probabilistic generative model. We use hierarchical source and mixing models and apply Bayesian inference to the problem. This allows us to perform model selection in order to infer the complexity of the representation, as well as automatic denoising. Since exact inference and learning in such a model is intractable, we follow a variational Bayesian mean-field approach in the conjugate-exponential family of distributions, for efficient unsupervised learning in multi-dimensional settings. The performance of the proposed algorithm is demonstrated on some representative experiments.
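
    The underlying wavelet-domain ICA idea can be illustrated on a toy square, noiseless mixture; the paper's variational Bayesian treatment of the noisy, overcomplete case is not reproduced here. The sketch assumes PyWavelets and scikit-learn are available and uses FastICA in place of the hierarchical Bayesian model.

        import numpy as np
        import pywt
        from sklearn.decomposition import FastICA

        rng = np.random.default_rng(3)
        t = np.linspace(0, 1, 1024)
        sources = np.vstack([np.sign(np.sin(2 * np.pi * 7 * t)),   # sparse-ish square wave
                             np.sin(2 * np.pi * 23 * t)])
        mixtures = np.array([[1.0, 0.6], [0.4, 1.0]]) @ sources    # observed signals

        # Transform each mixture into the wavelet domain, where the sources are
        # approximately sparse, and unmix the coefficients there.
        coeffs = [pywt.wavedec(m, "db4", level=5) for m in mixtures]
        lengths = [c.size for c in coeffs[0]]
        flat = np.vstack([np.concatenate(c) for c in coeffs])
        unmixed = FastICA(n_components=2, random_state=0).fit_transform(flat.T).T

        def unflatten(vec, lengths):
            return list(np.split(vec, np.cumsum(lengths)[:-1]))

        # Invert the wavelet transform to obtain time-domain source estimates.
        estimates = np.vstack([pywt.waverec(unflatten(u, lengths), "db4")
                               for u in unmixed])
        print("recovered source estimates:", estimates.shape)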

  11. A Comparison of Cut Scores Using Multiple Standard Setting Methods.

    ERIC Educational Resources Information Center

    Impara, James C.; Plake, Barbara S.

    This paper reports the results of using several alternative methods of setting cut scores. The methods used were: (1) a variation of the Angoff method (1971); (2) a variation of the borderline group method; and (3) an advanced impact method (G. Dillon, 1996). The results discussed are from studies undertaken to set the cut scores for fourth grade…

  12. Tectonically controlled sedimentation: impact on sediment supply and basin evolution of the Kashafrud Formation (Middle Jurassic, Kopeh-Dagh Basin, northeast Iran)

    NASA Astrophysics Data System (ADS)

    Sardar Abadi, Mehrdad; Da Silva, Anne-Christine; Amini, Abdolhossein; Aliabadi, Ali Akbar; Boulvain, Frédéric; Sardar Abadi, Mohammad Hossein

    2014-11-01

    The Kashafrud Formation was deposited in the extensional Kopeh-Dagh Basin during the Late Bajocian to Bathonian (Middle Jurassic) and is potentially the most important siliciclastic unit from NE Iran for petroleum geology. This extensional setting allowed the accumulation of about 1,700 m of siliciclastic sediments during a limited period of time (Upper Bajocian-Bathonian). Here, we present a detailed facies analysis combined with magnetic susceptibility (MS) results focusing on the exceptional record of the Pol-e-Gazi section in the southeastern part of the basin. MS is classically interpreted as related to the amount of detrital input. The amount of these detrital inputs and then the MS being classically influenced by sea-level changes, climate changes and tectonic activity. Facies analysis reveals that the studied rocks were deposited in shallow marine, slope to pro-delta settings. A major transgressive-regressive cycle is recorded in this formation, including fluvial-dominated delta to turbiditic pro-delta settings (transgressive phase), followed by siliciclastic to mixed siliciclastic and carbonate shoreface rocks (regressive phase). During the transgressive phase, hyperpycnal currents were feeding the basin. These hyperpycnal currents are interpreted as related to important tectonic variations, in relation to significant uplift of the hinterland during opening of the basin. This tectonic activity was responsible for stronger erosion, providing a higher amount of siliciclastic input into the basin, leading to a high MS signal. During the regressive phase, the tectonic activity strongly decreased. Furthermore, the depositional setting changed to a wave- to tide-dominated, mixed carbonate-siliciclastic setting. Because of the absence of strong tectonic variations, bulk MS was controlled by other factors such as sea-level and climatic changes. Fluctuations in carbonate production, possibly related to sea-level variations, influenced the MS of the siliciclastic/carbonate cycles. Carbonate intervals are characterized by a strong decrease of MS values indicates a gradual reduction of detrital influx. Therefore, the intensity of tectonic movement is thought to be the dominant factor in controlling sediment supply, changes in accommodation space and modes of deposition throughout the Middle Jurassic sedimentary succession in the Pol-e-Gazi section and possibly in the Kopeh-Dagh Basin in general.

  13. Meteorology-induced variations in the spatial behavior of summer ozone pollution in Central California

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Ling; Harley, Robert A.; Brown, Nancy J.

    Cluster analysis was applied to daily 8 h ozone maxima modeled for a summer season to characterize meteorology-induced variations in the spatial distribution of ozone. Principal component analysis is employed to form a reduced-dimension set to describe and interpret ozone spatial patterns. The first three principal components (PCs) capture ~85% of the total variance, with PC1 describing a general spatial trend, and PC2 and PC3 each describing a spatial contrast. Six clusters were identified for California's San Joaquin Valley (SJV), with two low, three moderate, and one high-ozone cluster. The moderate ozone clusters are distinguished by elevated ozone levels in different parts of the valley: northern, western, and eastern, respectively. The SJV ozone clusters have stronger coupling with the San Francisco Bay area (SFB) than with the Sacramento Valley (SV). Variations in ozone spatial distributions induced by anthropogenic emission changes are small relative to the overall variations in ozone anomalies observed for the whole summer. The ozone regimes identified here are mostly determined by direct and indirect meteorological effects. Existing measurement sites are sufficiently representative to capture ozone spatial patterns in the SFB and SV, but the western side of the SJV is under-sampled.
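
    The two analysis steps named above, PCA for dimension reduction followed by clustering of days, can be sketched with scikit-learn. The synthetic ozone array (days by grid cells) is a placeholder for the modeled summer season, and six clusters follow the abstract.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(11)
        n_days, n_cells = 92, 400
        ozone = 60 + 10 * rng.normal(size=(n_days, 1)) * rng.normal(size=(1, n_cells)) \
                + 5 * rng.normal(size=(n_days, n_cells))          # ppb, synthetic

        pca = PCA(n_components=3)
        scores = pca.fit_transform(ozone - ozone.mean(axis=0))
        print("variance captured by 3 PCs:", pca.explained_variance_ratio_.sum().round(2))

        labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(scores)
        print("days per cluster:", np.bincount(labels))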

  14. Localized Principal Component Analysis based Curve Evolution: A Divide and Conquer Approach

    PubMed Central

    Appia, Vikram; Ganapathy, Balaji; Yezzi, Anthony; Faber, Tracy

    2014-01-01

    We propose a novel localized principal component analysis (PCA) based curve evolution approach which evolves the segmenting curve semi-locally within various target regions (divisions) in an image and then combines these locally accurate segmentation curves to obtain a global segmentation. The training data for our approach consists of training shapes and associated auxiliary (target) masks. The masks indicate the various regions of the shape exhibiting highly correlated variations locally which may be rather independent of the variations in the distant parts of the global shape. Thus, in a sense, we are clustering the variations exhibited in the training data set. We then use a parametric model to implicitly represent each localized segmentation curve as a combination of the local shape priors obtained by representing the training shapes and the masks as a collection of signed distance functions. We also propose a parametric model to combine the locally evolved segmentation curves into a single hybrid (global) segmentation. Finally, we combine the evolution of these semilocal and global parameters to minimize an objective energy function. The resulting algorithm thus provides a globally accurate solution, which retains the local variations in shape. We present some results to illustrate how our approach performs better than the traditional approach with fully global PCA. PMID:25520901
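
    The local shape-prior construction can be sketched independently of the curve-evolution machinery: mask the training signed distance functions with one auxiliary target mask and compute a PCA basis for that region alone. The shapes, mask and function name below are synthetic placeholders for illustration.

        import numpy as np
        from sklearn.decomposition import PCA

        def local_pca_basis(sdfs, mask, n_modes=3):
            """sdfs: (n_shapes, H, W) signed distance functions of training shapes.
            mask: (H, W) boolean array marking one auxiliary target region.
            Returns the region mean and the leading local variation modes."""
            region = np.array([s[mask] for s in sdfs])     # (n_shapes, n_pixels)
            pca = PCA(n_components=n_modes).fit(region)
            return region.mean(axis=0), pca.components_

        # Synthetic training set: circular SDFs with varying radius.
        H = W = 64
        yy, xx = np.mgrid[0:H, 0:W]
        rng = np.random.default_rng(5)
        sdfs = np.array([np.hypot(yy - 32, xx - 32) - (20 + 3 * rng.normal())
                         for _ in range(25)])
        left_mask = xx < 32                                # one "auxiliary" region

        mean_sdf, modes = local_pca_basis(sdfs, left_mask)
        print("local modes shape:", modes.shape)           # (3, pixels in region)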

  15. Variations in Cathodoluminescent Intensity of Spacecraft Materials Exposed to Energetic Electron Bombardment

    NASA Technical Reports Server (NTRS)

    Dekany, Justin; Christensen, Justin; Dennison, J. R.; Jensen, Amberly Evans; Wilson, Gregory; Schneider, Todd; Bowers, Charles W.; Meloy, Robert

    2015-01-01

    Many contemporary spacecraft materials exhibit cathodoluminescence when exposed to electron flux from the space plasma environment. A quantitative, physics-based model has been developed to predict the intensity of the total glow as a function of incident electron current density and energy, temperature, and intrinsic material properties. We present a comparative study of the absolute spectral radiance for more than 20 types of dielectric and composite materials based on this model which spans more than three orders of magnitude. Variations in intensity are contrasted for different electron environments, different sizes of samples and sample sets, different testing and analysis methods, and data acquired at different test facilities. Together, these results allow us to estimate the accuracy and precision to which laboratory studies may be able to determine the response of spacecraft materials in the actual space environment. It also provides guidance as to the distribution of emissions that may be expected for sets of similar flight hardware under similar environmental conditions.

  16. Variations in Cathodoluminescent Intensity of Spacecraft Materials Exposed to Energetic Electron Bombardment

    NASA Technical Reports Server (NTRS)

    Dekany, Justin; Christensen, Justin; Dennison, J. R.; Jensen, Amberly Evans; Wilson, Gregory; Schneider, Todd A.; Bowers, Charles W.; Meloy, Robert

    2014-01-01

    Many contemporary spacecraft materials exhibit cathodoluminescence when exposed to electron flux from the space plasma environment. A quantitative, physics-based model has been developed to predict the intensity of the glow as a function of incident electron current density and energy, temperature, and intrinsic material properties. We present a comparative study of the absolute spectral radiance for several types of dielectric and composite materials based on this model which spans three orders of magnitude. Variations in intensity are contrasted for different electron environments, different sizes of samples and sample sets, different testing and analysis methods, and data acquired at different test facilities. Together, these results allow us to estimate the accuracy and precision to which laboratory studies may be able to determine the response of spacecraft materials in the actual space environment. It also provides guidance as to the distribution of emissions that may be expected for sets of similar flight hardware under similar environmental conditions.

  17. Sensitivity studies with a coupled ice-ocean model of the marginal ice zone

    NASA Technical Reports Server (NTRS)

    Roed, L. P.

    1983-01-01

    An analytical coupled ice-ocean model is considered which is forced by a specified wind stress acting on the open ocean as well as the ice. The analysis supports the conjecture that the upwelling dynamics at ice edges can be understood by means of a simple analytical model. In similarity with coastal problems it is shown that the ice edge upwelling is determined by the net mass flux at the boundaries of the considered region. The model is used to study the sensitivity of the upwelling dynamics in the marginal ice zone to variation in the controlling parameters. These parameters consist of combinations of the drag coefficients used in the parameterization of the stresses on the three interfaces atmosphere-ice, atmosphere-ocean, and ice-ocean. The response is shown to be sensitive to variations in these parameters in that one set of parameters may give upwelling while a slightly different set of parameters may give downwelling.

  18. The management of scabies outbreaks in residential care facilities for the elderly in England: a review of current health protection guidelines.

    PubMed

    White, L C J; Lanza, S; Middleton, J; Hewitt, K; Freire-Moran, L; Edge, C; Nicholls, M; Rajan-Iyer, J; Cassell, J A

    2016-11-01

    Commonly thought of as a disease of poverty and overcrowding in resource-poor settings globally, scabies is also an important public health issue in residential care facilities for the elderly (RCFE) in high-income countries such as the UK. We compared and contrasted current local Health Protection Team (HPT) guidelines for the management of scabies outbreaks in RCFE throughout England. We performed content analysis on 20 guidelines, and used this to create a quantitative report of their variation in key dimensions. Although the guidelines were generally consistent on issues such as the treatment protocols for individual patients, there was substantial variation in their recommendations regarding the prophylactic treatment of contacts, infection control measures and the roles and responsibilities of individual stakeholders. Most guidelines did not adequately address the logistical challenges associated with mass treatment in this setting. We conclude that the heterogeneous nature of the guidelines reviewed is an argument in favour of national guidelines being produced.

  19. Search for possible solar influences in Ra-226 decays

    NASA Astrophysics Data System (ADS)

    Stancil, Daniel D.; Balci Yegen, Sümeyra; Dickey, David A.; Gould, Chris R.

    Measurements of Ra-226 activity from eight HPGe gamma ray detectors at the NC State University PULSTAR Reactor were analyzed for evidence of periodic variations, with particular attention to annual variations. All measurements were made using the same reference source, and data sets were of varying length taken over the time period from September 1996 through August 2014. Clear evidence of annual variations was observed in data from four of the detectors. Short time periodograms from the data sets suggest temporal variability of both the amplitude and frequency of these variations. The annual variations in two of the data sets show peak values near the first of February, while surprisingly, the annual variations in the other two are roughly out of phase with the first two. Three of the four detectors exhibited annual variations over approximately the same time period. A joint statistic constructed by combining spectra from these three shows peaks approximating the frequencies of solar r-mode oscillations with νR = 11.74 cpy, m = 1, and l = 3, 5, 6. The fact that similar variations were not present in all detectors covering similar time periods rules out variations in activity as the cause, and points to differing sensitivities to unspecified environmental parameters instead. In addition to seasonal variations, the modulation of environmental parameters by solar processes remains a possible explanation of periodogram features, but without requiring new physics.
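
    The periodogram search for an annual term can be sketched with SciPy's Lomb-Scargle routine, which handles the unevenly spaced measurement times. The synthetic series below (times in years with a 0.1% annual modulation) is only a stand-in for the detector count-rate data.

        import numpy as np
        from scipy.signal import lombscargle

        rng = np.random.default_rng(19)
        t = np.sort(rng.uniform(0.0, 18.0, 900))            # measurement times, years
        rate = 1.0 + 1e-3 * np.cos(2 * np.pi * (t - 0.085)) \
               + 5e-4 * rng.normal(size=t.size)             # relative count rate

        freqs_cpy = np.linspace(0.2, 15.0, 4000)            # cycles per year
        power = lombscargle(t, rate - rate.mean(), 2 * np.pi * freqs_cpy)

        peak = freqs_cpy[np.argmax(power)]
        print(f"strongest periodicity: {peak:.2f} cycles per year")   # expect ~1 cpy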

  20. Patient Turnover: A Concept Analysis.

    PubMed

    VanFosson, Christopher A; Yoder, Linda H; Jones, Terry L

    Patient turnover influences the quality and safety of patient care. However, variations in the conceptual underpinnings of patient turnover limit the understanding of the phenomenon. A concept analysis was completed to clarify the role of patient turnover in relation to outcomes in the acute care hospital setting. The defining attributes, antecedents, consequences, and empirical referents of patient turnover were proposed. Nursing leaders should account for patient turnover in workload and staffing calculations. Further research is needed to clarify the influence of patient turnover on the quality and safety of nursing care using a unified understanding of the phenomenon.

  1. Investigating the role of the land surface in explaining the interannual variation of the net radiation balance over the Western Sahara and sub-Sahara

    NASA Technical Reports Server (NTRS)

    Smith, Eric A.; Nicholson, Sharon

    1987-01-01

    The status of the data sets is discussed. Progress was made in both the data analysis and modeling areas. The atmospheric and land surface contributions to the net radiation budget over the Sahara-Sahel region are being decoupled. The interannual variability of these two processes was investigated and related to seasonal rainfall fluctuations. A modified Barnes objective analysis scheme was developed which uses an elliptic scan pattern and a 3-pass iteration of the difference fields.
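
    For reference, a minimal Barnes objective analysis is a Gaussian-weighted interpolation of scattered observations to a grid followed by a correction pass on the observation-minus-analysis differences. The sketch below is that standard isotropic, two-pass form, not the modified elliptic-scan, 3-pass scheme described above; all parameter values are illustrative.

        import numpy as np

        def barnes_pass(obs_xy, obs_val, grid_xy, kappa):
            d2 = ((grid_xy[:, None, :] - obs_xy[None, :, :]) ** 2).sum(axis=2)
            w = np.exp(-d2 / kappa)
            return (w * obs_val).sum(axis=1) / w.sum(axis=1)

        def barnes_analysis(obs_xy, obs_val, grid_xy, kappa=2.0, gamma=0.3):
            first = barnes_pass(obs_xy, obs_val, grid_xy, kappa)
            # Correction pass: analyse the observation-minus-analysis difference
            # field with a sharper weight (gamma * kappa) and add it back.
            diff = obs_val - barnes_pass(obs_xy, obs_val, obs_xy, kappa)
            return first + barnes_pass(obs_xy, diff, grid_xy, gamma * kappa)

        rng = np.random.default_rng(8)
        obs_xy = rng.uniform(0, 10, size=(50, 2))
        obs_val = np.sin(obs_xy[:, 0]) + 0.1 * rng.normal(size=50)
        gx, gy = np.meshgrid(np.linspace(0, 10, 21), np.linspace(0, 10, 21))
        grid_xy = np.column_stack([gx.ravel(), gy.ravel()])

        field = barnes_analysis(obs_xy, obs_val, grid_xy).reshape(gx.shape)
        print("analysis grid:", field.shape)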

  2. Numerical analysis method for linear induction machines.

    NASA Technical Reports Server (NTRS)

    Elliott, D. G.

    1972-01-01

    A numerical analysis method has been developed for linear induction machines such as liquid metal MHD pumps and generators and linear motors. Arbitrary phase currents or voltages can be specified and the moving conductor can have arbitrary velocity and conductivity variations from point to point. The moving conductor is divided into a mesh and coefficients are calculated for the voltage induced at each mesh point by unit current at every other mesh point. Combining the coefficients with the mesh resistances yields a set of simultaneous equations which are solved for the unknown currents.
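
    The final solution step lends itself to a short sketch: assemble the matrix of induced-voltage coefficients, combine it with the mesh resistances, and solve the simultaneous equations for the unknown currents. The coefficient values and driving voltages below are arbitrary placeholders, not the result of the electromagnetic calculation.

        import numpy as np

        n_mesh = 6
        rng = np.random.default_rng(2)

        # M[i, j]: voltage induced at mesh point i by unit current at mesh point j
        # (placeholder values: symmetric coupling that decays with separation).
        idx = np.arange(n_mesh)
        M = 0.05 / (1.0 + np.abs(idx[:, None] - idx[None, :]))

        R = np.diag(np.full(n_mesh, 0.8))          # mesh resistances, ohms
        V_applied = rng.normal(0.0, 1.0, n_mesh)   # driving voltages from the windings

        # Voltage balance at each mesh point: (R + M) I = V_applied.
        currents = np.linalg.solve(R + M, V_applied)
        print("mesh currents (A):", np.round(currents, 3))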

  3. Implementation uncertainty when using recreational hunting to manage carnivores

    PubMed Central

    Bischof, Richard; Nilsen, Erlend B; Brøseth, Henrik; Männil, Peep; Ozoliņš, Jaānis; Linnell, John D C; Bode, Michael

    2012-01-01

    1. Wildlife managers often rely on resource users, such as recreational or commercial hunters, to achieve management goals. The use of hunters to control wildlife populations is especially common for predators and ungulates, but managers cannot assume that hunters will always fill annual quotas set by the authorities. It has been advocated that resource management models should account for uncertainty in how harvest rules are realized, requiring that this implementation uncertainty be estimated. 2. We used a survival analysis framework and long-term harvest data from large carnivore management systems in three countries (Estonia, Latvia and Norway) involving four species (brown bear, grey wolf, Eurasian lynx and wolverine) to estimate the performance of hunters with respect to harvest goals set by managers. 3. Variation in hunter quota-filling performance was substantial, ranging from 40% for wolverine in Norway to nearly 100% for lynx in Latvia. Seasonal and regional variation was also high within country–species pairs. We detected a positive relationship between the instantaneous potential to fill a quota slot and the relative availability of the target species for both wolverine and lynx in Norway. 4. Survivor curves and hazards – with survival time measured as the time from the start of a season until a quota slot is filled – can indicate the extent to which managers can influence harvest through adjustments of season duration and quota limits. 5. Synthesis and applications. We investigated seven systems where authorities use recreational hunting to manage large carnivore populations. The variation and magnitude of deviation from harvest goals were substantial, underlining the need to incorporate implementation uncertainty into resource management models and decision-making. We illustrate how survival analysis can be used by managers to estimate the performance of resource users with respect to achieving harvest goals set by managers. The findings in this study come at an opportune time given the growing popularity of management strategy evaluation (MSE) models in fisheries and a push towards incorporating MSE into terrestrial harvest management. PMID:23197878
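
    The survival-analysis idea in points 2 to 4 can be illustrated with a small Kaplan-Meier estimate of the time from the start of a season until a quota slot is filled, treating slots still open at season end as right-censored. The quota-slot data below are invented for illustration, not the study's harvest records.

        import numpy as np

        def kaplan_meier(times, filled):
            """times: day each slot was filled or censored; filled: 1 = filled, 0 = censored."""
            order = np.argsort(times)
            t, d = np.asarray(times)[order], np.asarray(filled)[order]
            survival, s = [], 1.0
            for ti in np.unique(t):
                at_risk = np.sum(t >= ti)
                events = np.sum((t == ti) & (d == 1))
                s *= 1.0 - events / at_risk
                survival.append((ti, s))
            return survival

        # 12 quota slots in a 90-day season; three were never filled.
        days   = [5, 9, 14, 20, 22, 31, 40, 55, 71, 90, 90, 90]
        filled = [1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0]

        for day, s in kaplan_meier(days, filled):
            print(f"day {day:3d}: P(slot still unfilled) = {s:.2f}")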

  4. The Costs of Delivering Integrated HIV and Sexual Reproductive Health Services in Limited Resource Settings

    PubMed Central

    Obure, Carol Dayo; Sweeney, Sedona; Darsamo, Vanessa; Michaels-Igbokwe, Christine; Guinness, Lorna; Terris-Prestholt, Fern; Muketo, Esther; Nhlabatsi, Zelda; Warren, Charlotte E.; Mayhew, Susannah; Watts, Charlotte; Vassall, Anna

    2015-01-01

    Objective To present evidence on the total costs and unit costs of delivering six integrated sexual reproductive health and HIV services in a high and medium HIV prevalence setting, in order to support policy makers and planners scaling up these essential services. Design A retrospective facility based costing study conducted in 40 non-government organization and public health facilities in Kenya and Swaziland. Methods Economic and financial costs were collected retrospectively for the year 2010/11, from each study site with an aim to estimate the cost per visit of six integrated HIV and SRH services. A full cost analysis using a combination of bottom-up and step-down costing methods was conducted from the health provider’s perspective. The main unit of analysis is the economic unit cost per visit for each service. Costs are converted to 2013 International dollars. Results The mean cost per visit for the HIV/SRH services ranged from $Int 14.23 (PNC visit) to $Int 74.21 (HIV treatment visit). We found considerable variation in the unit costs per visit across settings with family planning services exhibiting the least variation ($Int 6.71-52.24) and STI treatment and HIV treatment visits exhibiting the highest variation in unit cost ranging from ($Int 5.44-281.85) and ($Int 0.83-314.95), respectively. Unit costs of visits were driven by fixed costs while variability in visit costs across facilities was explained mainly by technology used and service maturity. Conclusion For all services, variability in unit costs and cost components suggest that potential exists to reduce costs through better use of both human and capital resources, despite the high proportion of expenditure on drugs and medical supplies. Further work is required to explore the key drivers of efficiency and interventions that may facilitate efficiency improvements. PMID:25933414

  5. Spatial and temporal variation of seismic velocity during earthquakes and volcanic eruptions in western Japan: Insight into mechanism for seismic velocity variation

    NASA Astrophysics Data System (ADS)

    Tsuji, T.; Ikeda, T.; Nimiya, H.

    2017-12-01

    We report spatio-temporal variations of seismic velocity around the seismogenic faults in western Japan. We mainly focus on the seismic velocity variation during (1) the 2016 Off-Mie earthquake in the Nankai subduction zone (Mw5.8) and (2) the 2016 Kumamoto earthquake in Kyushu Island (Mw7.0). We applied seismic interferometry and surface wave analysis to the ambient noise data recorded by Hi-net and DONET seismometers of National Research Institute for Earth Science and Disaster Resilience (NIED). Seismic velocity near the rupture faults and volcano decreased during the earthquake. For example, we observed velocity reduction around the seismogenic Futagawa-Hinagu fault system and Mt Aso in the 2016 Kumamoto earthquake. We also identified velocity increase after the eruptions of Mt Aso. During the 2016 Off-Mie earthquake, we observed seismic velocity variation in the Nankai accretionary prism. After the earthquakes, the seismic velocity gradually returned to the pre-earthquake value. The velocity recovering process (healing process) is caused by several mechanisms, such as pore pressure reduction, strain change, and crack sealing. By showing the velocity variations obtained at different geologic settings (volcano, seismogenic fault, unconsolidated sediment), we discuss the mechanism of seismic velocity variation as well as the post-seismic fault healing process.
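
    One common processing step behind such dv/v monitoring is the waveform-stretching measurement between a reference and a current noise cross-correlation function. The sketch below shows only that step on synthetic waveforms; the full interferometry and surface-wave analysis described above is not reproduced, and the sign convention is verified only for this synthetic example.

        import numpy as np

        def stretching_dvv(reference, current, t, eps_max=0.01, n_eps=201):
            """Grid-search the stretch factor maximizing correlation; dv/v = -eps."""
            best_cc, best_eps = -np.inf, 0.0
            for eps in np.linspace(-eps_max, eps_max, n_eps):
                stretched = np.interp(t * (1.0 + eps), t, current)
                cc = np.corrcoef(reference, stretched)[0, 1]
                if cc > best_cc:
                    best_cc, best_eps = cc, eps
            return -best_eps, best_cc

        t = np.linspace(0, 20, 2000)                       # lapse time, s
        reference = np.sin(2 * np.pi * 1.0 * t) * np.exp(-0.1 * t)
        true_dvv = -0.002                                  # 0.2% velocity drop
        current = np.sin(2 * np.pi * 1.0 * t * (1 + true_dvv)) * np.exp(-0.1 * t)

        dvv, cc = stretching_dvv(reference, current, t)
        print(f"estimated dv/v = {dvv * 100:.2f}% (cc = {cc:.3f})")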

  6. Determination of stores pointing error due to wing flexibility under flight load

    NASA Technical Reports Server (NTRS)

    Lokos, William A.; Bahm, Catherine M.; Heinle, Robert A.

    1995-01-01

    The in-flight elastic wing twist of a fighter-type aircraft was studied to provide for an improved on-board real-time computed prediction of pointing variations of three wing store stations. This is an important capability to correct sensor pod alignment variation or to establish initial conditions of iron bombs or smart weapons prior to release. The original algorithm was based upon coarse measurements. The electro-optical Flight Deflection Measurement System measured the deformed wing shape in flight under maneuver loads to provide a higher resolution database from which an improved twist prediction algorithm could be developed. The FDMS produced excellent repeatable data. In addition, a NASTRAN finite-element analysis was performed to provide additional elastic deformation data. The FDMS data combined with the NASTRAN analysis indicated that an improved prediction algorithm could be derived by using a different set of aircraft parameters, namely normal acceleration, stores configuration, Mach number, and gross weight.

  7. Meta-analysis and Harmonization of Life Cycle Assessment Studies for Algae Biofuels.

    PubMed

    Tu, Qingshi; Eckelman, Matthew; Zimmerman, Julie

    2017-09-05

    Algae biodiesel (BioD) and renewable diesel (RD) have been recognized as potential solutions to mitigating fossil-fuel consumption and the associated environmental issues. Life cycle assessment (LCA) has been used by many researchers to evaluate the potential environmental impacts of these algae-derived fuels, yielding a wide range of results and, in some cases, even differing on indicating whether these fuels are preferred to petroleum-derived fuels or not. This meta-analysis reviews the methodological preferences and results for energy consumption, greenhouse gas emissions, and water consumption for 54 LCA studies that considered algae BioD and RD. The significant variation in reported results can be primarily attributed to the difference in scope, assumptions, and data sources. To minimize the variation in life cycle inventory calculations, a harmonized inventory data set including both nominal and uncertainty data is calculated for each stage of the algae-derived fuel life cycle.

  8. Automatic neuron segmentation and neural network analysis method for phase contrast microscopy images.

    PubMed

    Pang, Jincheng; Özkucur, Nurdan; Ren, Michael; Kaplan, David L; Levin, Michael; Miller, Eric L

    2015-11-01

    Phase Contrast Microscopy (PCM) is an important tool for the long term study of living cells. Unlike fluorescence methods which suffer from photobleaching of fluorophore or dye molecules, PCM image contrast is generated by the natural variations in optical index of refraction. Unfortunately, the same physical principles which allow for these studies give rise to complex artifacts in the raw PCM imagery. Of particular interest in this paper are neuron images where these image imperfections manifest in very different ways for the two structures of specific interest: cell bodies (somas) and dendrites. To address these challenges, we introduce a novel parametric image model using the level set framework and an associated variational approach which simultaneously restores and segments this class of images. Using this technique as the basis for an automated image analysis pipeline, results for both the synthetic and real images validate and demonstrate the advantages of our approach.

  9. The business value and cost-effectiveness of genomic medicine.

    PubMed

    Crawford, James M; Aspinall, Mara G

    2012-05-01

    Genomic medicine offers the promise of more effective diagnosis and treatment of human diseases. Genome sequencing early in the course of disease may enable more timely and informed intervention, with reduced healthcare costs and improved long-term outcomes. However, genomic medicine strains current models for demonstrating value, challenging efforts to achieve fair payment for services delivered, both for laboratory diagnostics and for use of molecular information in clinical management. Current models of healthcare reform stipulate that care must be delivered at equal or lower cost, with better patient and population outcomes. To achieve demonstrated value, genomic medicine must overcome many uncertainties: the clinical relevance of genomic variation; potential variation in technical performance and/or computational analysis; management of massive information sets; and must have available clinical interventions that can be informed by genomic analysis, so as to attain more favorable cost management of healthcare delivery and demonstrate improvements in cost-effectiveness.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Noecker, Cecilia; Eng, Alexander; Srinivasan, Sujatha

    ABSTRACT Multiple molecular assays now enable high-throughput profiling of the ecology, metabolic capacity, and activity of the human microbiome. However, to date, analyses of such multi-omic data typically focus on statistical associations, often ignoring extensive prior knowledge of the mechanisms linking these various facets of the microbiome. Here, we introduce a comprehensive framework to systematically link variation in metabolomic data with community composition by utilizing taxonomic, genomic, and metabolic information. Specifically, we integrate available and inferred genomic data, metabolic network modeling, and a method for predicting community-wide metabolite turnover to estimate the biosynthetic and degradation potential of a given community. Our framework then compares variation in predicted metabolic potential with variation in measured metabolites’ abundances to evaluate whether community composition can explain observed shifts in the community metabolome, and to identify key taxa and genes contributing to the shifts. Focusing on two independent vaginal microbiome data sets, each pairing 16S community profiling with large-scale metabolomics, we demonstrate that our framework successfully recapitulates observed variation in 37% of metabolites. Well-predicted metabolite variation tends to result from disease-associated metabolism. We further identify several disease-enriched species that contribute significantly to these predictions. Interestingly, our analysis also detects metabolites for which the predicted variation negatively correlates with the measured variation, suggesting environmental control points of community metabolism. Applying this framework to gut microbiome data sets reveals similar trends, including prediction of bile acid metabolite shifts. This framework is an important first step toward a system-level multi-omic integration and an improved mechanistic understanding of the microbiome activity and dynamics in health and disease. IMPORTANCE Studies characterizing both the taxonomic composition and metabolic profile of various microbial communities are becoming increasingly common, yet new computational methods are needed to integrate and interpret these data in terms of known biological mechanisms. Here, we introduce an analytical framework to link species composition and metabolite measurements, using a simple model to predict the effects of community ecology on metabolite concentrations and evaluating whether these predictions agree with measured metabolomic profiles. We find that a surprisingly large proportion of metabolite variation in the vaginal microbiome can be predicted based on species composition (including dramatic shifts associated with disease), identify putative mechanisms underlying these predictions, and evaluate the roles of individual bacterial species and genes. Analysis of gut microbiome data using this framework recovers similar community metabolic trends. This framework lays the foundation for model-based multi-omic integrative studies, ultimately improving our understanding of microbial community metabolism.

  11. New insights into the genetics of primary open-angle glaucoma based on meta-analyses of intraocular pressure and optic disc characteristics

    PubMed Central

    Springelkamp, Henriët; Iglesias, Adriana I.; Mishra, Aniket; Höhn, René; Wojciechowski, Robert; Khawaja, Anthony P.; Nag, Abhishek; Wang, Ya Xing; Wang, Jie Jin; Cuellar-Partida, Gabriel; Gibson, Jane; Bailey, Jessica N. Cooke; Vithana, Eranga N.; Gharahkhani, Puya; Boutin, Thibaud; Ramdas, Wishal D.; Zeller, Tanja; Luben, Robert N.; Yonova-Doing, Ekaterina; Viswanathan, Ananth C.; Yazar, Seyhan; Cree, Angela J.; Haines, Jonathan L.; Koh, Jia Yu; Souzeau, Emmanuelle; Wilson, James F.; Amin, Najaf; Müller, Christian; Venturini, Cristina; Kearns, Lisa S.; Kang, Jae Hee; Tham, Yih Chung; Zhou, Tiger; van Leeuwen, Elisabeth M.; Nickels, Stefan; Sanfilippo, Paul; Liao, Jiemin; van der Linde, Herma; Zhao, Wanting; van Koolwijk, Leonieke M.E.; Zheng, Li; Rivadeneira, Fernando; Baskaran, Mani; van der Lee, Sven J.; Perera, Shamira; de Jong, Paulus T.V.M.; Oostra, Ben A.; Uitterlinden, André G.; Fan, Qiao; Hofman, Albert; Tai, E-Shyong; Vingerling, Johannes R.; Sim, Xueling; Wolfs, Roger C.W.; Teo, Yik Ying; Lemij, Hans G.; Khor, Chiea Chuen; Willemsen, Rob; Lackner, Karl J.; Aung, Tin; Jansonius, Nomdo M.; Montgomery, Grant; Wild, Philipp S.; Young, Terri L.; Burdon, Kathryn P.; Hysi, Pirro G.; Pasquale, Louis R.; Wong, Tien Yin; Klaver, Caroline C.W.; Hewitt, Alex W.; Jonas, Jost B.; Mitchell, Paul; Lotery, Andrew J.; Foster, Paul J.; Vitart, Veronique; Pfeiffer, Norbert; Craig, Jamie E.; Mackey, David A.; Hammond, Christopher J.; Wiggs, Janey L.; Cheng, Ching-Yu; van Duijn, Cornelia M.; MacGregor, Stuart

    2017-01-01

    Abstract Primary open-angle glaucoma (POAG), the most common optic neuropathy, is a heritable disease. Siblings of POAG cases have a ten-fold increased risk of developing the disease. Intraocular pressure (IOP) and optic nerve head characteristics are used clinically to predict POAG risk. We conducted a genome-wide association meta-analysis of IOP and optic disc parameters and validated our findings in multiple sets of POAG cases and controls. Using imputation to the 1000 genomes (1000G) reference set, we identified 9 new genomic regions associated with vertical cup-disc ratio (VCDR) and 1 new region associated with IOP. Additionally, we found 5 novel loci for optic nerve cup area and 6 for disc area. Previously it was assumed that genetic variation influenced POAG either through IOP or via changes to the optic nerve head; here we present evidence that some genomic regions affect both IOP and the disc parameters. We characterized the effect of the novel loci through pathway analysis and found that pathways involved are not entirely distinct as assumed so far. Further, we identified a novel association between CDKN1A and POAG. Using a zebrafish model we show that six6b (associated with POAG and optic nerve head variation) alters the expression of cdkn1a. In summary, we have identified several novel genes influencing the major clinical risk predictors of POAG and showed that genetic variation in CDKN1A is important in POAG risk. PMID:28073927

  12. Geographic variation in the age- and gender-specific prevalence and incidence of epilepsy: analysis of Taiwanese National Health Insurance-based data.

    PubMed

    Chen, Chih-Chuan; Chen, Li-Sheng; Yen, Ming-Fang; Chen, Hsiu-Hsi; Liou, Horng-Huei

    2012-02-01

    We studied geographic variation in the age- and gender-specific prevalence and incidence of epilepsy in four different areas of Taiwan. Using large-scale, National Health Insurance (NHI)-based data from 2000-2003 in Taiwan, we identified 131,287 patients diagnosed with epilepsy (ICD code 345) receiving at least one of 11 antiepileptic drugs (AEDs). Information on age, gender, and location was also collected. Multivariable Poisson regression analysis was used to assess the heterogeneity of the morbidity of epilepsy in different regions. External data validation was also performed to assess the accuracy of capturing epilepsy cases through our NHI data set. The age-adjusted prevalence and incidence of epilepsy were 5.85 (per 1,000) between 2000 and 2003 and 97 (per 100,000 person-years) during the follow-up period from 2001 to 2003 in Taiwan. The sensitivity and specificity of ICD-9 coding for epilepsy in the NHI data set were 83.91% and 99.83%, respectively, resulting in a slight overestimation. Male patients had a higher probability of having epilepsy than did females. East Taiwan had significantly higher prevalence and incidence than the other areas. The age-specific incidence pattern in east Taiwan was atypical in that it revealed clustering in young and middle-aged groups. Our study demonstrated geographic variation in the epidemiologic patterns of epilepsy within Taiwan. The findings are informative and provide insight into the clinical management of epilepsy based on consideration of different target groups in different areas. Wiley Periodicals, Inc. © 2011 International League Against Epilepsy.
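
    The rate-modelling step can be sketched as a Poisson regression of case counts with log person-years as an offset, comparing regions after adjusting for age group and sex. The aggregated counts below are synthetic placeholders rather than NHI data, and the covariate coding is purely illustrative.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(12)
        rows = []
        for region in ["north", "central", "south", "east"]:
            for age in ["0-19", "20-39", "40-59", "60+"]:
                for sex in ["M", "F"]:
                    py = int(rng.integers(50_000, 500_000))     # person-years at risk
                    base = {"north": 90, "central": 95, "south": 100, "east": 130}[region]
                    rows.append({"region": region, "age": age, "sex": sex,
                                 "cases": rng.poisson(base * py / 100_000), "py": py})
        df = pd.DataFrame(rows)

        model = smf.glm("cases ~ C(region, Treatment('north')) + C(age) + C(sex)",
                        data=df, family=sm.families.Poisson(),
                        offset=np.log(df["py"])).fit()
        print(np.exp(model.params).round(2))   # incidence-rate ratios vs. the reference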

  13. Association between genetic variation within vitamin D receptor-DNA binding sites and risk of basal cell carcinoma.

    PubMed

    Lin, Yuan; Chahal, Harvind S; Wu, Wenting; Cho, Hyunje G; Ransohoff, Katherine J; Dai, Hongji; Tang, Jean Y; Sarin, Kavita Y; Han, Jiali

    2017-05-01

    An increasing number of studies have reported a protective association between vitamin D and cancer risk. The vitamin D endocrine system regulates transcriptional programs involved in inflammation, cell growth and differentiation through the binding of vitamin D receptor (VDR) to specific VDR elements. However, limited attention has been given to the role of variation within VDR binding sites in the development of basal cell carcinoma (BCC). Across 2,776 previously identified VDR binding sites, we identified 2,540 independent single-nucleotide polymorphisms (SNPs) and examined their associations with BCC risk in a genome-wide association meta-analysis totaling 17,187 BCC cases and 287,054 controls from two data sets. After multiple testing corrections, we identified two SNPs at new loci (rs16917546 at 10q21.1: odds ratio (OR) = 1.06, p = 3.16 × 10^-7 and rs79824801 at 12q13.3: OR = 1.10, p = 1.88 × 10^-5) for the first time as independently related to BCC risk in meta-analysis; and both SNPs were nominally significant in two data sets. In addition, the SNP rs3769823 within a VDR binding site at a previously reported BCC susceptibility locus (2q33.1, rs13014235) also exhibited a significant association (OR = 1.12, p = 3.99 × 10^-18). A mutually adjusted model suggested that rs3769823 explained the signal in this region. Our findings support the hypothesis that inherited common variation in VDR binding sites affects the development of BCC. © 2017 UICC.

  14. Supervised Detection of Anomalous Light Curves in Massive Astronomical Catalogs

    NASA Astrophysics Data System (ADS)

    Nun, Isadora; Pichara, Karim; Protopapas, Pavlos; Kim, Dae-Won

    2014-09-01

    The development of synoptic sky surveys has led to a massive amount of data for which resources needed for analysis are beyond human capabilities. In order to process this information and to extract all possible knowledge, machine learning techniques become necessary. Here we present a new methodology to automatically discover unknown variable objects in large astronomical catalogs. With the aim of taking full advantage of all information we have about known objects, our method is based on a supervised algorithm. In particular, we train a random forest classifier using known variability classes of objects and obtain votes for each of the objects in the training set. We then model this voting distribution with a Bayesian network and obtain the joint voting distribution among the training objects. Consequently, an unknown object is considered as an outlier insofar it has a low joint probability. By leaving out one of the classes on the training set, we perform a validity test and show that when the random forest classifier attempts to classify unknown light curves (the class left out), it votes with an unusual distribution among the classes. This rare voting is detected by the Bayesian network and expressed as a low joint probability. Our method is suitable for exploring massive data sets given that the training process is performed offline. We tested our algorithm on 20 million light curves from the MACHO catalog and generated a list of anomalous candidates. After analysis, we divided the candidates into two main classes of outliers: artifacts and intrinsic outliers. Artifacts were principally due to air mass variation, seasonal variation, bad calibration, or instrumental errors and were consequently removed from our outlier list and added to the training set. After retraining, we selected about 4000 objects, which we passed to a post-analysis stage by performing a cross-match with all publicly available catalogs. Within these candidates we identified certain known but rare objects such as eclipsing Cepheids, blue variables, cataclysmic variables, and X-ray sources. For some outliers there was no additional information. Among them we identified three unknown variability types and a few individual outliers that will be followed up in order to perform a deeper analysis.
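
    The outlier-scoring idea can be sketched with scikit-learn: train a random forest on known variability classes, collect the per-tree votes for each object, and flag objects whose vote pattern has low probability under a density model fitted to the training votes. A kernel density estimate stands in here for the paper's Bayesian network, and the feature vectors are synthetic placeholders for real light-curve features.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.neighbors import KernelDensity

        rng = np.random.default_rng(4)
        # Synthetic features for three known classes plus objects from an
        # unrepresented region of feature space.
        X_known = np.vstack([rng.normal(c, 0.5, size=(200, 5)) for c in (0.0, 2.0, 4.0)])
        y_known = np.repeat([0, 1, 2], 200)
        X_anom = rng.normal(3.0, 0.2, size=(10, 5))

        forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_known, y_known)

        def vote_fractions(model, X):
            votes = np.stack([tree.predict(X) for tree in model.estimators_])
            return np.stack([(votes == k).mean(axis=0) for k in model.classes_], axis=1)

        kde = KernelDensity(bandwidth=0.05).fit(vote_fractions(forest, X_known))
        scores = kde.score_samples(vote_fractions(forest, np.vstack([X_known[:5], X_anom])))
        print("log-probability of vote patterns:", np.round(scores, 1))
        # Confident vote patterns of known objects score high; the split votes of
        # the unfamiliar objects typically receive much lower log-probability.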

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nun, Isadora; Pichara, Karim; Protopapas, Pavlos

    The development of synoptic sky surveys has led to a massive amount of data for which resources needed for analysis are beyond human capabilities. In order to process this information and to extract all possible knowledge, machine learning techniques become necessary. Here we present a new methodology to automatically discover unknown variable objects in large astronomical catalogs. With the aim of taking full advantage of all information we have about known objects, our method is based on a supervised algorithm. In particular, we train a random forest classifier using known variability classes of objects and obtain votes for each of the objects in the training set. We then model this voting distribution with a Bayesian network and obtain the joint voting distribution among the training objects. Consequently, an unknown object is considered as an outlier insofar it has a low joint probability. By leaving out one of the classes on the training set, we perform a validity test and show that when the random forest classifier attempts to classify unknown light curves (the class left out), it votes with an unusual distribution among the classes. This rare voting is detected by the Bayesian network and expressed as a low joint probability. Our method is suitable for exploring massive data sets given that the training process is performed offline. We tested our algorithm on 20 million light curves from the MACHO catalog and generated a list of anomalous candidates. After analysis, we divided the candidates into two main classes of outliers: artifacts and intrinsic outliers. Artifacts were principally due to air mass variation, seasonal variation, bad calibration, or instrumental errors and were consequently removed from our outlier list and added to the training set. After retraining, we selected about 4000 objects, which we passed to a post-analysis stage by performing a cross-match with all publicly available catalogs. Within these candidates we identified certain known but rare objects such as eclipsing Cepheids, blue variables, cataclysmic variables, and X-ray sources. For some outliers there was no additional information. Among them we identified three unknown variability types and a few individual outliers that will be followed up in order to perform a deeper analysis.

  16. Variations in levels of care between nursing home patients in a public health care system

    PubMed Central

    2014-01-01

    Background Within the setting of a public health service we analyse the distribution of resources between individuals in nursing homes funded by global budgets. Three questions are pursued. Firstly, whether there are systematic variations between nursing homes in the level of care given to patients. Secondly, whether such variations can be explained by nursing home characteristics. And thirdly, how individual need-related variables are associated with differences in the level of care given. Methods The study included 1204 residents in 35 nursing homes and extra care sheltered housing facilities. Direct time spent with patients was recorded. On average, each patient received 14.8 hours of direct care each week. Multilevel regression analysis was used to analyse the relationship between individual characteristics, nursing home characteristics and time spent with patients in nursing homes. The study setting is the city of Trondheim, with a population of approximately 180 000. Results There are large variations between nursing homes in the total amount of individual care given to patients. As much as 24 percent of the variation in individual care between patients could be explained by variation between nursing homes. Adjusting for structural nursing home characteristics did not substantially reduce the variation between nursing homes. As expected, a negative association was found between individual care and case-mix, implying that at the nursing home level a more resource-demanding case-mix is compensated by lowering the average amount of care. At the individual level, ADL-disability is the strongest predictor of the use of resources in nursing homes. For the average user, a one-point increase in ADL-disability increases the use of resources by 27 percent. Conclusion In a financial reimbursement model for nursing homes with no adjustment for case-mix, the amount of care patients receive does not solely depend on the patients’ own needs, but also on the needs of all the other residents. PMID:24597468

  17. Weevil x Insecticide: Does 'Personality' Matter?

    PubMed

    Morales, Juliana A; Cardoso, Danúbia G; Della Lucia, Terezinha Maria C; Guedes, Raul Narciso C

    2013-01-01

    An insect's behavior is the expression of its integrated physiology in response to external and internal stimuli, turning insect behavior into a potential determinant of insecticide exposure. Behavioral traits may therefore influence insecticide efficacy against insects, compromising the validity of standard bioassays of insecticide activity, which are fundamentally based on lethality alone. By extension, insect 'personality' (i.e., an individual's integrated set of behavioral tendencies that is inferred from multiple empirical measures) may also be an important determinant of insecticide exposure and activity. This has yet to be considered because the behavioral studies involving insects and insecticides focus on populations rather than on individuals. Even among studies of animal 'personality', the relative contributions of individual and population variation are usually neglected. Here, we assessed behavioral traits (within the categories: activity, boldness/shyness, and exploration/avoidance) of individuals from 15 populations of the maize weevil (Sitophilus zeamais), an important stored-grain pest with serious problems of insecticide resistance, and correlated the behavioral responses with the activity of the insecticide deltamethrin. This analysis was performed at both the population and individual levels. There was significant variation in weevil 'personality' among individuals and populations, but variation among individuals within populations accounted for most of the observed variation (92.57%). This result emphasizes the importance of individual variation in behavioral and 'personality' studies. When the behavioral traits assessed were correlated with median lethal time (LT50) at the population level and with the survival time under insecticide exposure, activity traits, particularly the distance walked, significantly increased survival time. Therefore, behavioral traits are important components of insecticide efficacy, and individual variation should be considered in such studies. This is because population differences provide only a crude approximation of individual 'personality' in a constrained experimental setting that is likely to restrict individual behavior, favoring the transposition of individual variation to the population level.

  18. Variations in levels of care between nursing home patients in a public health care system.

    PubMed

    Døhl, Øystein; Garåsen, Helge; Kalseth, Jorid; Magnussen, Jon

    2014-03-05

    Within the setting of a public health service we analyse the distribution of resources between individuals in nursing homes funded by global budgets. Three questions are pursued. Firstly, whether there are systematic variations between nursing homes in the level of care given to patients. Secondly, whether such variations can be explained by nursing home characteristics. And thirdly, how individual need-related variables are associated with differences in the level of care given. The study included 1204 residents in 35 nursing homes and extra care sheltered housing facilities. Direct time spent with patients was recorded. On average, each patient received 14.8 hours of direct care each week. Multilevel regression analysis was used to analyse the relationship between individual characteristics, nursing home characteristics and time spent with patients in nursing homes. The study setting is the city of Trondheim, with a population of approximately 180 000. There are large variations between nursing homes in the total amount of individual care given to patients. As much as 24 percent of the variation in individual care between patients could be explained by variation between nursing homes. Adjusting for structural nursing home characteristics did not substantially reduce the variation between nursing homes. As expected, a negative association was found between individual care and case-mix, implying that at the nursing home level a more resource-demanding case-mix is compensated by lowering the average amount of care. At the individual level, ADL-disability is the strongest predictor of the use of resources in nursing homes. For the average user, a one-point increase in ADL-disability increases the use of resources by 27 percent. In a financial reimbursement model for nursing homes with no adjustment for case-mix, the amount of care patients receive does not solely depend on the patients' own needs, but also on the needs of all the other residents.
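
    For readers unfamiliar with the multilevel setup used in the two nursing-home records above, the snippet below fits a two-level random-intercept model to synthetic data with statsmodels and reports the share of variation attributable to the nursing-home level (the intraclass correlation). Variable names, effect sizes and the data are invented for illustration; this is not the study's analysis.

```python
# Sketch only (synthetic data, hypothetical variable names): a random-intercept
# model of direct care time with patient-level ADL disability and a
# nursing-home grouping factor.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_homes, n_per_home = 35, 34
home = np.repeat(np.arange(n_homes), n_per_home)
home_effect = rng.normal(scale=2.0, size=n_homes)[home]   # between-home variation
adl = rng.uniform(1, 5, size=home.size)                   # ADL disability score
care_hours = 10 + 2.5 * adl + home_effect + rng.normal(scale=3.0, size=home.size)

df = pd.DataFrame({"care_hours": care_hours, "adl": adl, "home": home})
model = smf.mixedlm("care_hours ~ adl", df, groups=df["home"]).fit()
print(model.summary())

# Share of residual variation attributable to the nursing-home level
# (intraclass correlation), analogous to the 24 percent reported in the study.
icc = model.cov_re.iloc[0, 0] / (model.cov_re.iloc[0, 0] + model.scale)
print(f"ICC ~ {icc:.2f}")
```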

  19. Analysis of overdispersed count data by mixtures of Poisson variables and Poisson processes.

    PubMed

    Hougaard, P; Lee, M L; Whitmore, G A

    1997-12-01

    Count data often show overdispersion compared to the Poisson distribution. Overdispersion is typically modeled by a random effect for the mean, based on the gamma distribution, leading to the negative binomial distribution for the count. This paper considers a larger family of mixture distributions, including the inverse Gaussian mixture distribution. It is demonstrated that it gives a significantly better fit for a data set on the frequency of epileptic seizures. The same approach can be used to generate counting processes from Poisson processes, where the rate or the time is random. A random rate corresponds to variation between patients, whereas a random time corresponds to variation within patients.
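
    The mixture idea in the record above can be made concrete with a few lines of Python: counts generated as a gamma mixture of Poissons are overdispersed (variance well above the mean) and are fit markedly better by the negative binomial, the distribution that the gamma mixture induces. The parameters below are illustrative and are not taken from the seizure data.

```python
# Minimal sketch: gamma-mixed Poisson counts are overdispersed relative to a
# plain Poisson; a negative binomial (the gamma mixture) fits them better.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
mu, shape = 8.0, 2.0                       # mean and gamma shape (dispersion)
rates = rng.gamma(shape, mu / shape, size=1000)
counts = rng.poisson(rates)

print(f"mean={counts.mean():.2f}, var={counts.var():.2f}")  # var >> mean

# Log-likelihood under Poisson vs negative binomial (method-of-moments fit).
ll_pois = stats.poisson(counts.mean()).logpmf(counts).sum()
m, v = counts.mean(), counts.var()
p = m / v                                  # scipy nbinom: p = mean / variance
r = m * p / (1 - p)                        # number-of-successes parameter
ll_nb = stats.nbinom(r, p).logpmf(counts).sum()
print(f"logL Poisson={ll_pois:.1f}, logL NegBin={ll_nb:.1f}")
```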

  20. Measurement of the human allele frequency spectrum demonstrates greater genetic drift in East Asians than in Europeans.

    PubMed

    Keinan, Alon; Mullikin, James C; Patterson, Nick; Reich, David

    2007-10-01

    Large data sets on human genetic variation have been collected recently, but their usefulness for learning about history and natural selection has been limited by biases in the ways polymorphisms were chosen. We report large subsets of SNPs from the International HapMap Project that allow us to overcome these biases and to provide accurate measurement of a quantity of crucial importance for understanding genetic variation: the allele frequency spectrum. Our analysis shows that East Asian and northern European ancestors shared the same population bottleneck expanding out of Africa but that both also experienced more recent genetic drift, which was greater in East Asians.
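
    As a minimal illustration of the central quantity in the record above, the snippet below computes a folded allele frequency spectrum from a synthetic genotype matrix. It ignores the SNP ascertainment corrections that are the substance of the paper; it only shows what the spectrum itself is.

```python
# Minimal sketch: folded allele frequency spectrum from a genotype matrix
# (individuals x SNPs, coded 0/1/2 copies of the non-reference allele).
import numpy as np

rng = np.random.default_rng(3)
n_ind, n_snp = 60, 5000
freqs = rng.beta(0.2, 0.8, size=n_snp)              # toy population frequencies
geno = rng.binomial(2, freqs, size=(n_ind, n_snp))  # synthetic genotypes

counts = geno.sum(axis=0)                             # allele counts per SNP
counts = counts[(counts > 0) & (counts < 2 * n_ind)]  # keep polymorphic sites
folded = np.minimum(counts, 2 * n_ind - counts)       # fold to minor-allele counts
spectrum = np.bincount(folded, minlength=n_ind + 1)[1:]
print(spectrum[:10])                                  # singletons, doubletons, ...
```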

  1. Monitoring of endogenous carbon monoxide dynamics in human breath by tunable diode laser

    NASA Astrophysics Data System (ADS)

    Stepanov, Eugene V.; Daraselia, Mikhail V.; Zyrianov, Pavel V.; Shulagin, Yurii A.; Skrupskii, Vladimir A.

    1996-01-01

    A highly sensitive CO gas analyzer based on a tunable diode laser (TDL) was used as a real-time monitor of endogenous carbon monoxide in a set of breath physiology experiments. The measurements of CO content dynamics in exhaled air with 10 ppb sensitivity were accompanied by detection of carbon dioxide and O2 in breath, measurement of lung ventilation parameters and heart rate, and blood analysis using conventional techniques. Temporal variations of endogenous CO in human breath caused by hyperoxia, hypoxia, hyperventilation and sport loading were studied in real time for the first time. Scatter in the CO variation time constants was observed among the tested subjects. Possible reasons for this scatter, related to the physiological peculiarities of the individual subjects, are discussed.

  2. Endogenous CO dynamics monitoring in breath by tunable diode laser

    NASA Astrophysics Data System (ADS)

    Kouznetsov, Andrian I.; Stepanov, Eugene V.; Shulagin, Yurii A.; Skrupskii, Vladimir A.

    1996-04-01

    A highly sensitive CO gas analyzer based on a tunable diode laser (TDL) was used as a real-time monitor of endogenous carbon monoxide in a set of breath physiology experiments. The measurements of CO content dynamics in exhaled air with 10 ppb sensitivity were accompanied by detection of carbon dioxide and O2 in breath, measurement of lung ventilation parameters and heart rate, and blood analysis using conventional techniques. Variations of endogenous CO in human breath caused by hyperoxia, hypoxia, hyperventilation, as well as sport loading, were studied in real time. Scatter in the CO variation time constants was observed among the tested subjects. Possible reasons for this scatter, related to the physiological peculiarities of the individual subjects, are discussed.

  3. Variational Bayesian Parameter Estimation Techniques for the General Linear Model

    PubMed Central

    Starke, Ludger; Ostwald, Dirk

    2017-01-01

    Variational Bayes (VB), variational maximum likelihood (VML), restricted maximum likelihood (ReML), and maximum likelihood (ML) are cornerstone parametric statistical estimation techniques in the analysis of functional neuroimaging data. However, the theoretical underpinnings of these model parameter estimation techniques are rarely covered in introductory statistical texts. Because of the widespread practical use of VB, VML, ReML, and ML in the neuroimaging community, we reasoned that a theoretical treatment of their relationships and their application in a basic modeling scenario may be helpful for both neuroimaging novices and practitioners alike. In this technical study, we thus revisit the conceptual and formal underpinnings of VB, VML, ReML, and ML and provide a detailed account of their mathematical relationships and implementational details. We further apply VB, VML, ReML, and ML to the general linear model (GLM) with non-spherical error covariance as commonly encountered in the first-level analysis of fMRI data. To this end, we explicitly derive the corresponding free energy objective functions and ensuing iterative algorithms. Finally, in the applied part of our study, we evaluate the parameter and model recovery properties of VB, VML, ReML, and ML, first in an exemplary setting and then in the analysis of experimental fMRI data acquired from a single participant under visual stimulation. PMID:28966572

  4. Efficient rehabilitation care for joint replacement patients: skilled nursing facility or inpatient rehabilitation facility?

    PubMed

    Tian, Wenqiang; DeJong, Gerben; Horn, Susan D; Putman, Koen; Hsieh, Ching-Hui; DaVanzo, Joan E

    2012-01-01

    There has been lengthy debate as to which setting, skilled nursing facility (SNF) or inpatient rehabilitation facility (IRF), is more efficient in treating joint replacement patients. This study aims to determine the efficiency of rehabilitation care provided by SNF and IRF to joint replacement patients with respect to both payment and length of stay (LOS). This study used a prospective multisite observational cohort design. Tobit models were used to examine the association between setting of care and efficiency. The study enrolled 948 knee replacement patients and 618 hip replacement patients from 11 IRFs and 7 SNFs between February 2006 and February 2007. Output was measured by motor functional independence measure (FIM) score at discharge. Efficiency was measured in 3 ways: payment efficiency, LOS efficiency, and stochastic frontier analysis efficiency. IRF patients incurred higher expenditures per case but also achieved larger motor FIM gains in shorter LOS than did SNF patients. Setting of care was not a strong predictor of overall efficiency of rehabilitation care. Great variation in characteristics existed within IRFs or SNFs and severity groups. Medium-volume facilities among both SNFs and IRFs were most efficient. Early rehabilitation was consistently predictive of efficient treatment. The advantage of either setting is not clear-cut. Definition of efficiency depends in part on preference between cost and time. SNFs are more payment efficient; IRFs are more LOS efficient. Variation within SNFs and IRFs blurred setting differences; a simple comparison between SNF and IRF may not be appropriate.

  5. Use of Order Sets in Inpatient Computerized Provider Order Entry Systems: A Comparative Analysis of Usage Patterns at Seven Sites

    PubMed Central

    Wright, Adam; Feblowitz, Joshua C.; Pang, Justine E.; Carpenter, James D.; Krall, Michael A.; Middleton, Blackford; Sittig, Dean F.

    2012-01-01

    Background Many computerized provider order entry (CPOE) systems include the ability to create electronic order sets: collections of clinically-related orders grouped by purpose. Order sets promise to make CPOE systems more efficient, improve care quality and increase adherence to evidence-based guidelines. However, the development and implementation of order sets can be expensive and time-consuming and limited literature exists about their utilization. Methods Based on analysis of order set usage logs from a diverse purposive sample of seven sites with commercially- and internally-developed inpatient CPOE systems, we developed an original order set classification system. Order sets were categorized across seven non-mutually exclusive axes: admission/discharge/transfer (ADT), perioperative, condition-specific, task-specific, service-specific, convenience, and personal. In addition, 731 unique subtypes were identified within five axes: four in ADT (S=4), three in perioperative, 144 in condition-specific, 513 in task-specific, and 67 in service-specific. Results Order sets (n=1,914) were used a total of 676,142 times at the participating sites during a one-year period. ADT and perioperative order sets accounted for 27.6% and 24.2% of usage respectively. Peripartum/labor, chest pain/Acute Coronary Syndrome/Myocardial Infarction and diabetes order sets accounted for 51.6% of condition-specific usage. Insulin, angiography/angioplasty and arthroplasty order sets accounted for 19.4% of task-specific usage. Emergency/trauma, Obstetrics/Gynecology/Labor Delivery and anesthesia accounted for 32.4% of service-specific usage. Overall, the top 20% of order sets accounted for 90.1% of all usage. Additional salient patterns are identified and described. Conclusion We observed recurrent patterns in order set usage across multiple sites as well as meaningful variations between sites. Vendors and institutional developers should identify high-value order set types through concrete data analysis in order to optimize the resources devoted to development and implementation. PMID:22819199

  6. Ozone reference models for the middle atmosphere

    NASA Technical Reports Server (NTRS)

    Keating, G. M.; Pitts, M. C.; Young, D. F.

    1990-01-01

    Data on monthly latitudinal variations in middle-atmosphere vertical ozone profiles are presented, based on extensive Nimbus-7, AE-2, and SME satellite measurements from the period 1978-1982. The coverage of the data sets, the characteristics of the sensors, and the analysis techniques applied are described, and the results are compiled in tables and graphs. These ozone data are intended to supplement the models of the 1986 COSPAR International Reference Atmosphere.

  7. [Automated analysis of bacterial preparations manufactured on automatic heat fixation and staining equipment].

    PubMed

    2012-01-01

    Heat fixation of preparations was carried out in the fixation bath designed by EMKO (Russia). The programmable "Emkosteiner" (EMKO, Russia) was used for trial staining. The reagent set Micko-GRAM-NITsF was applied for Gram's method of staining. It was demonstrated that automatic smear fixation equipment and programmable staining ensure high-quality imaging (1% chromaticity variation), good enough for standardization of Gram's staining of microbial preparations.

  8. Drinking Water Turbidity and Emergency Department Visits for Gastrointestinal Illness in New York City, 2002-2009

    PubMed Central

    Hsieh, Jennifer L.; Nguyen, Trang Quyen; Matte, Thomas; Ito, Kazuhiko

    2015-01-01

    Background Studies have examined whether there is a relationship between drinking water turbidity and gastrointestinal (GI) illness indicators, and results have varied possibly due to differences in methods and study settings. Objectives As part of a water security improvement project we conducted a retrospective analysis of the relationship between drinking water turbidity and GI illness in New York City (NYC) based on emergency department chief complaint syndromic data that are available in near-real-time. Methods We used a Poisson time-series model to estimate the relationship of turbidity measured at distribution system and source water sites to diarrhea emergency department (ED) visits in NYC during 2002-2009. The analysis assessed age groups and was stratified by season and adjusted for sub-seasonal temporal trends, year-to-year variation, ambient temperature, day-of-week, and holidays. Results Seasonal variation unrelated to turbidity dominated (~90% deviance) the variation of daily diarrhea ED visits, with an additional 0.4% deviance explained with turbidity. Small yet significant multi-day lagged associations were found between NYC turbidity and diarrhea ED visits in the spring only, with approximately 5% excess risk per inter-quartile-range of NYC turbidity peaking at a 6 day lag. This association was strongest among those aged 0-4 years and was explained by the variation in source water turbidity. Conclusions Integrated analysis of turbidity and syndromic surveillance data, as part of overall drinking water surveillance, may be useful for enhanced situational awareness of possible risk factors that can contribute to GI illness. Elucidating the causes of turbidity-GI illness associations including seasonal and regional variations would be necessary to further inform surveillance needs. PMID:25919375

  9. Drinking water turbidity and emergency department visits for gastrointestinal illness in New York City, 2002-2009.

    PubMed

    Hsieh, Jennifer L; Nguyen, Trang Quyen; Matte, Thomas; Ito, Kazuhiko

    2015-01-01

    Studies have examined whether there is a relationship between drinking water turbidity and gastrointestinal (GI) illness indicators, and results have varied possibly due to differences in methods and study settings. As part of a water security improvement project we conducted a retrospective analysis of the relationship between drinking water turbidity and GI illness in New York City (NYC) based on emergency department chief complaint syndromic data that are available in near-real-time. We used a Poisson time-series model to estimate the relationship of turbidity measured at distribution system and source water sites to diarrhea emergency department (ED) visits in NYC during 2002-2009. The analysis assessed age groups and was stratified by season and adjusted for sub-seasonal temporal trends, year-to-year variation, ambient temperature, day-of-week, and holidays. Seasonal variation unrelated to turbidity dominated (~90% deviance) the variation of daily diarrhea ED visits, with an additional 0.4% deviance explained with turbidity. Small yet significant multi-day lagged associations were found between NYC turbidity and diarrhea ED visits in the spring only, with approximately 5% excess risk per inter-quartile-range of NYC turbidity peaking at a 6 day lag. This association was strongest among those aged 0-4 years and was explained by the variation in source water turbidity. Integrated analysis of turbidity and syndromic surveillance data, as part of overall drinking water surveillance, may be useful for enhanced situational awareness of possible risk factors that can contribute to GI illness. Elucidating the causes of turbidity-GI illness associations including seasonal and regional variations would be necessary to further inform surveillance needs.
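
    The modelling approach described in the two turbidity records above can be sketched as a Poisson regression with lagged exposure and simple seasonal and day-of-week controls. The snippet below uses synthetic data and hypothetical column names; the actual study additionally adjusted for sub-seasonal trends, year-to-year variation, temperature and holidays.

```python
# Sketch only (synthetic data): Poisson time-series regression of daily ED
# visits on 6-day-lagged turbidity with seasonal and day-of-week controls.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 365 * 3
t = np.arange(n)
turbidity = 1 + 0.5 * np.sin(2 * np.pi * t / 365) + rng.gamma(2, 0.2, n)
season = np.sin(2 * np.pi * t / 365)
lam = np.exp(3.0 + 0.05 * np.roll(turbidity, 6) + 0.3 * season)  # built-in lag-6 effect
visits = rng.poisson(lam)

df = pd.DataFrame({"visits": visits,
                   "turb_lag6": np.roll(turbidity, 6),   # np.roll wraps at the start; fine for a sketch
                   "season": season,
                   "dow": t % 7})
fit = smf.glm("visits ~ turb_lag6 + season + C(dow)", df,
              family=sm.families.Poisson()).fit()

# Excess risk per unit turbidity at a 6-day lag (rate ratio).
print(f"rate ratio per unit lag-6 turbidity: {np.exp(fit.params['turb_lag6']):.3f}")
```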

  10. Parenchymal texture analysis in digital mammography: robust texture feature identification and equivalence across devices.

    PubMed

    Keller, Brad M; Oustimov, Andrew; Wang, Yan; Chen, Jinbo; Acciavatti, Raymond J; Zheng, Yuanjie; Ray, Shonket; Gee, James C; Maidment, Andrew D A; Kontos, Despina

    2015-04-01

    An analytical framework is presented for evaluating the equivalence of parenchymal texture features across different full-field digital mammography (FFDM) systems using a physical breast phantom. Phantom images (FOR PROCESSING) are acquired from three FFDM systems using their automated exposure control setting. A panel of texture features, including gray-level histogram, co-occurrence, run length, and structural descriptors, are extracted. To identify features that are robust across imaging systems, a series of equivalence tests are performed on the feature distributions, in which the extent of their intersystem variation is compared to their intrasystem variation via the Hodges-Lehmann test statistic. Overall, histogram and structural features tend to be most robust across all systems, and certain features, such as edge enhancement, tend to be more robust to intergenerational differences between detectors of a single vendor than to intervendor differences. Texture features extracted from larger regions of interest and with a larger offset length, when applicable, also appear to be more robust across imaging systems. This framework and observations from our experiments may benefit applications utilizing mammographic texture analysis on images acquired in multivendor settings, such as in multicenter studies of computer-aided detection and breast cancer risk assessment.
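
    A minimal sketch of the kind of comparison described above: a two-sample Hodges-Lehmann shift estimate (the median of all pairwise differences) for one texture feature measured on two systems, set against the feature's intra-system spread. The data, the MAD-based spread measure and the robustness threshold are all illustrative assumptions, not the paper's equivalence-test procedure.

```python
# Sketch only (synthetic feature values): Hodges-Lehmann shift between two
# FFDM systems compared against intra-system spread.
import numpy as np

def hodges_lehmann_shift(a, b):
    """Median of all pairwise differences b_j - a_i."""
    return np.median(np.subtract.outer(b, a))

rng = np.random.default_rng(5)
feat_sys_a = rng.normal(10.0, 1.0, size=200)   # feature values, FFDM system A
feat_sys_b = rng.normal(10.3, 1.0, size=200)   # same feature, FFDM system B

shift = hodges_lehmann_shift(feat_sys_a, feat_sys_b)
intra = np.median(np.abs(feat_sys_a - np.median(feat_sys_a)))  # robust spread (MAD)

# One might call a feature robust across systems when the inter-system shift
# is small relative to its intra-system variation (threshold is illustrative).
print(f"HL shift={shift:.3f}, intra-system MAD={intra:.3f}, "
      f"robust={abs(shift) < intra}")
```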

  11. Statistical variation in progressive scrambling

    NASA Astrophysics Data System (ADS)

    Clark, Robert D.; Fox, Peter C.

    2004-07-01

    The two methods most often used to evaluate the robustness and predictivity of partial least squares (PLS) models are cross-validation and response randomization. Both methods may be overly optimistic for data sets that contain redundant observations, however. The kinds of perturbation analysis widely used for evaluating model stability in the context of ordinary least squares regression are only applicable when the descriptors are independent of each other and errors are independent and normally distributed; neither assumption holds for QSAR in general and for PLS in particular. Progressive scrambling is a novel, non-parametric approach to perturbing models in the response space in a way that does not disturb the underlying covariance structure of the data. Here, we introduce adjustments for two of the characteristic values produced by a progressive scrambling analysis - the deprecated predictivity ($Q_s^{*2}$) and standard error of prediction ($\mathrm{SDEP}_s^*$) - that correct for the effect of introduced perturbation. We also explore the statistical behavior of the adjusted values ($Q_0^{*2}$ and $\mathrm{SDEP}_0^*$) and the sensitivity to perturbation ($dq^2/dr_{yy'}^2$). It is shown that the three statistics are all robust for stable PLS models, in terms of the stochastic component of their determination and of their variation due to sampling effects involved in training set selection.

  12. Spatial and temporal variation of water quality of a segment of Marikina River using multivariate statistical methods.

    PubMed

    Chounlamany, Vanseng; Tanchuling, Maria Antonia; Inoue, Takanobu

    2017-09-01

    Payatas landfill in Quezon City, Philippines, releases leachate to the Marikina River through a creek. Multivariate statistical techniques were applied to study temporal and spatial variations in water quality of a segment of the Marikina River. The data set included 12 physico-chemical parameters for five monitoring stations over a year. Cluster analysis grouped the monitoring stations into four clusters and identified January-May as dry season and June-September as wet season. Principal components analysis showed that three latent factors are responsible for the data set, explaining 83% of its total variance. The chemical oxygen demand, biochemical oxygen demand, total dissolved solids, Cl⁻ and PO₄³⁻ are influenced by anthropogenic impact/eutrophication pollution from point sources. Total suspended solids, turbidity and SO₄²⁻ are influenced by rain and soil erosion. The highest state of pollution is at the Payatas creek outfall from March to May, whereas at downstream stations it is in May. The current study indicates that the river monitoring requires only four stations, nine water quality parameters and testing over three specific months of the year. The findings of this study imply that Payatas landfill requires a proper leachate collection and treatment system to reduce its impact on the Marikina River.
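
    The two multivariate steps named in the record above, hierarchical cluster analysis and principal components analysis, can be sketched as follows on a synthetic station-by-parameter matrix; the data and cluster count are invented for illustration.

```python
# Sketch only (synthetic data): cluster analysis of monitoring samples and PCA
# of standardized water-quality parameters.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(6)
# Rows: station-month samples; columns: 12 physico-chemical parameters.
X = rng.normal(size=(60, 12))
X[:30, :4] += 3.0                       # pretend the first 30 samples are polluted

Z = StandardScaler().fit_transform(X)

# Cluster analysis groups similar samples (e.g., wet vs dry season, station groups).
clusters = fcluster(linkage(Z, method="ward"), t=4, criterion="maxclust")

# PCA extracts latent factors; the paper reports three factors explaining ~83%.
pca = PCA(n_components=3).fit(Z)
print("explained variance ratios:", pca.explained_variance_ratio_.round(2))
print("cluster sizes:", np.bincount(clusters)[1:])
```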

  13. Brightness Variations in the Solar Atmosphere as Seen by SOHO

    NASA Astrophysics Data System (ADS)

    Brkovic, A.; Rüedi, I.; Solanki, S. K.; Huber, M. C. E.; Stenflo, J. O.; Stucki, K.; Harrison, R.; Fludra, A.

    We present preliminary results of a statistical analysis of the brightness variations of solar features at different levels in the solar atmosphere. We observed quiet Sun regions at disc centre using the Coronal Diagnostic Spectrometer (CDS) onboard the Solar and Heliospheric Observatory (SOHO). We find significant variability at all time scales in all parts of the quiet Sun, from darkest intranetwork to brightest network. Such variations are observed simultaneously in the chromospheric He I 584.33 Å line (2 × 10⁴ K), the transition-region O V 629.74 Å line (2.5 × 10⁵ K) and the coronal Mg IX 368.06 Å line (10⁶ K). The relative variability is independent of brightness, and most of the variability appears to take place on time scales longer than 5 minutes for all three spectral lines. No significant differences are observed between the different data sets.

  14. The first orbital solution for the massive colliding-wind binary HD 93162 (≡WR 25)

    NASA Astrophysics Data System (ADS)

    Gamen, R.; Gosset, E.; Morrell, N.; Niemela, V.; Sana, H.; Nazé, Y.; Rauw, G.; Barbá, R.; Solivella, G.

    2006-12-01

    Context: Since the discovery, with the EINSTEIN satellite, of strong X-ray emission associated with HD 93162 (≡WR 25), this object has been predicted to be a colliding-wind binary system. However, radial-velocity variations that would prove the suspected binary nature have yet to be found. Aims: We spectroscopically monitored this object to investigate its possible variability to address this discordance. Methods: We compiled the largest available radial-velocity data set for this star to look for variations that might be due to binary motion. We derived radial velocities from spectroscopic data acquired mainly between 1994 and 2006, and searched these radial velocities for periodicities using different numerical methods. Results: For the first time, periodic radial-velocity variations are detected. Our analysis definitively shows that the Wolf-Rayet star WR 25 is an eccentric binary system with a probable period of about 208 days.

  15. BROADBAND SPECTROSCOPY USING TWO SUZAKU OBSERVATIONS OF THE HMXB GX 301-2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suchy, Slawomir; Markowitz, Alex; Rothschild, Richard E.

    2012-02-01

    We present the analysis of two Suzaku observations of GX 301-2 at two orbital phases after the periastron passage. Variations in the column density of the line-of-sight absorber are observed, consistent with accretion from a clumpy wind. In addition to a cyclotron resonance scattering feature (CRSF), multiple fluorescence emission lines were detected in both observations. The variations in the pulse profiles and the CRSF throughout the pulse phase have a signature of a magnetic dipole field. Using a simple dipole model we calculated the expected magnetic field values for different pulse phases and were able to extract a set of geometrical angles, loosely constraining the dipole geometry in the neutron star. From the variation of the CRSF width and energy, we found a geometrical solution for the dipole, making the inclination consistent with previously published values.

  16. Genome-wide association study identifies SNPs in the MHC class II loci that are associated with self-reported history of whooping cough

    PubMed Central

    McMahon, George; Ring, Susan M.; Davey-Smith, George; Timpson, Nicholas J.

    2015-01-01

    Whooping cough is currently seeing a resurgence in countries despite high vaccine coverage. There is considerable variation in subject-specific response to infection and vaccine efficacy, but little is known about the role of human genetics. We carried out a case–control genome-wide association study of adult or parent-reported history of whooping cough in two cohorts from the UK: the ALSPAC cohort and the 1958 British Birth Cohort (815/758 cases and 6341/4308 controls, respectively). We also imputed HLA alleles using dense SNP data in the MHC region and carried out gene-based and gene-set tests of association and estimated the amount of additive genetic variation explained by common SNPs. We observed a novel association at SNPs in the MHC class II region in both cohorts [lead SNP rs9271768 after meta-analysis; odds ratio (95% confidence interval, CI) 1.47 (1.35, 1.6); P-value 1.21E-18]. Multiple strong associations were also observed at alleles at the HLA class II loci. The majority of these associations were explained by the lead SNP rs9271768. Gene-based and gene-set tests and estimates of explainable common genetic variation could not establish the presence of additional associations in our sample. Genetic variation at the MHC class II region plays a role in susceptibility to whooping cough. These findings provide additional perspective on mechanisms of whooping cough infection and vaccine efficacy. PMID:26231221

  17. Planning tiger recovery: Understanding intraspecific variation for effective conservation

    PubMed Central

    Wilting, Andreas; Courtiol, Alexandre; Christiansen, Per; Niedballa, Jürgen; Scharf, Anne K.; Orlando, Ludovic; Balkenhol, Niko; Hofer, Heribert; Kramer-Schadt, Stephanie; Fickel, Jörns; Kitchener, Andrew C.

    2015-01-01

    Although significantly more money is spent on the conservation of tigers than on any other threatened species, today only 3200 to 3600 tigers roam the forests of Asia, occupying only 7% of their historical range. Despite the global significance of and interest in tiger conservation, global approaches to plan tiger recovery are partly impeded by the lack of a consensus on the number of tiger subspecies or management units, because a comprehensive analysis of tiger variation is lacking. We analyzed variation among all nine putative tiger subspecies, using extensive data sets of several traits [morphological (craniodental and pelage), ecological, molecular]. Our analyses revealed little variation and large overlaps in each trait among putative subspecies, and molecular data showed extremely low diversity because of a severe Late Pleistocene population decline. Our results support recognition of only two subspecies: the Sunda tiger, Panthera tigris sondaica, and the continental tiger, Panthera tigris tigris, which consists of two (northern and southern) management units. Conservation management programs, such as captive breeding, reintroduction initiatives, or trans-boundary projects, rely on a durable, consistent characterization of subspecies as taxonomic units, defined by robust multiple lines of scientific evidence rather than single traits or ad hoc descriptions of one or few specimens. Our multiple-trait data set supports a fundamental rethinking of the conventional tiger taxonomy paradigm, which will have profound implications for the management of in situ and ex situ tiger populations and boost conservation efforts by facilitating a pragmatic approach to tiger conservation management worldwide. PMID:26601191

  18. Planning tiger recovery: Understanding intraspecific variation for effective conservation.

    PubMed

    Wilting, Andreas; Courtiol, Alexandre; Christiansen, Per; Niedballa, Jürgen; Scharf, Anne K; Orlando, Ludovic; Balkenhol, Niko; Hofer, Heribert; Kramer-Schadt, Stephanie; Fickel, Jörns; Kitchener, Andrew C

    2015-06-01

    Although significantly more money is spent on the conservation of tigers than on any other threatened species, today only 3200 to 3600 tigers roam the forests of Asia, occupying only 7% of their historical range. Despite the global significance of and interest in tiger conservation, global approaches to plan tiger recovery are partly impeded by the lack of a consensus on the number of tiger subspecies or management units, because a comprehensive analysis of tiger variation is lacking. We analyzed variation among all nine putative tiger subspecies, using extensive data sets of several traits [morphological (craniodental and pelage), ecological, molecular]. Our analyses revealed little variation and large overlaps in each trait among putative subspecies, and molecular data showed extremely low diversity because of a severe Late Pleistocene population decline. Our results support recognition of only two subspecies: the Sunda tiger, Panthera tigris sondaica, and the continental tiger, Panthera tigris tigris, which consists of two (northern and southern) management units. Conservation management programs, such as captive breeding, reintroduction initiatives, or trans-boundary projects, rely on a durable, consistent characterization of subspecies as taxonomic units, defined by robust multiple lines of scientific evidence rather than single traits or ad hoc descriptions of one or few specimens. Our multiple-trait data set supports a fundamental rethinking of the conventional tiger taxonomy paradigm, which will have profound implications for the management of in situ and ex situ tiger populations and boost conservation efforts by facilitating a pragmatic approach to tiger conservation management worldwide.

  19. Geostatistical analysis of centimeter-scale hydraulic conductivity variations at the MADE site

    NASA Astrophysics Data System (ADS)

    Bohling, Geoffrey C.; Liu, Gaisheng; Knobbe, Steven J.; Reboulet, Edward C.; Hyndman, David W.; Dietrich, Peter; Butler, James J., Jr.

    2012-02-01

    Spatial variations in hydraulic conductivity (K) provide critical controls on solute transport in the subsurface. Recently, new direct-push tools were developed for high-resolution characterization of K variations in unconsolidated settings. These tools were applied to obtain 58 profiles (vertical resolution of 1.5 cm) from the heavily studied macrodispersion experiment (MADE) site. We compare the data from these 58 profiles with those from the 67 flowmeter profiles that have served as the primary basis for characterizing the heterogeneous aquifer at the site. Overall, the patterns of variation displayed by the two data sets are quite similar, in terms of both large-scale structure and autocorrelation characteristics. The direct-push K values are, on average, roughly a factor of 5 lower than the flowmeter values. This discrepancy appears to be attributable, at least in part, to opposite biases between the two methods, with the current versions of the direct-push tools underestimating K in the highly permeable upper portions of the aquifer and the flowmeter overestimating K in the less permeable lower portions. The vertically averaged K values from a series of direct-push profiles in the vicinity of two pumping tests at the site are consistent with the K estimates from those tests, providing evidence that the direct-push estimates are of a reasonable magnitude. The results of this field demonstration show that direct-push profiling has the potential to characterize highly heterogeneous aquifers with a speed and resolution that has not previously been possible.

  20. “They Have to Adapt to Learn”: Surgeons’ Perspectives on the Role of Procedural Variation in Surgical Education

    PubMed Central

    Apramian, Tavis; Cristancho, Sayra; Watling, Chris; Ott, Michael; Lingard, Lorelei

    2017-01-01

    OBJECTIVE Clinical research increasingly acknowledges the existence of significant procedural variation in surgical practice. This study explored surgeons’ perspectives regarding the influence of intersurgeon procedural variation on the teaching and learning of surgical residents. DESIGN AND SETTING This qualitative study used a grounded theory-based analysis of observational and interview data. Observational data were collected in 3 tertiary care teaching hospitals in Ontario, Canada. Semistructured interviews explored potential procedural variations arising during the observations and prompts from an iteratively refined guide. Ongoing data analysis refined the theoretical framework and informed data collection strategies, as prescribed by the iterative nature of grounded theory research. PARTICIPANTS Our sample included 99 hours of observation across 45 cases with 14 surgeons. Semistructured, audio-recorded interviews (n = 14) occurred immediately following observational periods. RESULTS Surgeons endorsed the use of intersurgeon procedural variations to teach residents about adapting to the complexity of surgical practice and the norms of surgical culture. Surgeons suggested that residents’ efforts to identify thresholds of principle and preference are crucial to professional development. Principles that emerged from the study included the following: (1) knowing what comes next, (2) choosing the right plane, (3) handling tissue appropriately, (4) recognizing the abnormal, and (5) making safe progress. Surgeons suggested that learning to follow these principles while maintaining key aspects of surgical culture, like autonomy and individuality, are important social processes in surgical education. CONCLUSIONS Acknowledging intersurgeon variation has important implications for curriculum development and workplace-based assessment in surgical education. Adapting to intersurgeon procedural variations may foster versatility in surgical residents. However, the existence of procedural variations and their active use in surgeons’ teaching raises questions about the lack of attention to this form of complexity in current workplace-based assessment strategies. Failure to recognize the role of such variations may threaten the implementation of competency-based medical education in surgery. PMID:26705062

  1. Evidence of linkage of HDL level variation to APOC3 in two samples with different ascertainment.

    PubMed

    Gagnon, France; Jarvik, Gail P; Motulsky, Arno G; Deeb, Samir S; Brunzell, John D; Wijsman, Ellen M

    2003-11-01

    The APOA1-C3-A4-A5 gene complex encodes genes whose products are implicated in the metabolism of HDL and/or triglycerides. Although the relationship between polymorphisms in this gene cluster and dyslipidemias was first reported more than 15 years ago, association and linkage results have remained inconclusive. This is due, in part, to the oligogenic and multivariate nature of dyslipidemic phenotypes. Therefore, we investigate evidence of linkage of APOC3 and HDL using two samples of dyslipidemic pedigrees: familial combined hyperlipidemia (FCHL) and isolated low-HDL (ILHDL). We used a strategy that deals with several difficulties inherent in the study of complex traits: by using a Bayesian Markov Chain Monte Carlo (MCMC) approach we allow for oligogenic trait models, as well as simultaneous incorporation of covariates, in the context of multipoint analysis. By using this approach on extended pedigrees we provide evidence of linkage of APOC3 and HDL level variation in two samples with different ascertainment. In addition to APOC3, we estimate that two to three genes, each with a substantial effect on total variance, are responsible for HDL variation in both data sets. We also provide evidence, using the FCHL data set, for a pleiotropic effect between HDL, HDL3 and triglycerides at the APOC3 locus.

  2. Life-History Patterns of Lizards of the World.

    PubMed

    Mesquita, Daniel O; Costa, Gabriel C; Colli, Guarino R; Costa, Taís B; Shepard, Donald B; Vitt, Laurie J; Pianka, Eric R

    2016-06-01

    Identification of mechanisms that promote variation in life-history traits is critical to understand the evolution of divergent reproductive strategies. Here we compiled a large life-history data set (674 lizard populations, representing 297 species from 263 sites globally) to test a number of hypotheses regarding the evolution of life-history traits in lizards. We found significant phylogenetic signal in most life-history traits, although phylogenetic signal was not particularly high. Climatic variables influenced the evolution of many traits, with clutch frequency being positively related to precipitation and clutches of tropical lizards being smaller than those of temperate species. This result supports the hypothesis that in tropical and less seasonal climates, many lizards tend to reproduce repeatedly throughout the season, producing smaller clutches during each reproductive episode. Our analysis also supported the hypothesis that viviparity has evolved in lizards as a response to cooler climates. Finally, we also found that variation in trait values explained by clade membership is unevenly distributed among lizard clades, with basal clades and a few younger clades showing the most variation. Our global analyses are largely consistent with life-history theory and previous results based on smaller and scattered data sets, suggesting that these patterns are remarkably consistent across geographic and taxonomic scales.

  3. Fourier spatial frequency analysis for image classification: training the training set

    NASA Astrophysics Data System (ADS)

    Johnson, Timothy H.; Lhamo, Yigah; Shi, Lingyan; Alfano, Robert R.; Russell, Stewart

    2016-04-01

    The Directional Fourier Spatial Frequencies (DFSF) of a 2D image can identify similarity in spatial patterns within groups of related images. A Support Vector Machine (SVM) can then be used to classify images if the inter-image variance of the FSF in the training set is bounded. However, if variation in FSF increases with training set size, accuracy may decrease as the size of the training set increases. This calls for a method to identify a set of training images from among the originals that can form a vector basis for the entire class. Applying the Cauchy product method we extract the DFSF spectrum from radiographs of osteoporotic bone, and use it as a matched filter set to eliminate noise and image specific frequencies, and demonstrate that selection of a subset of superclassifiers from within a set of training images improves SVM accuracy. Central to this challenge is that the size of the search space can become computationally prohibitive for all but the smallest training sets. We are investigating methods to reduce the search space to identify an optimal subset of basis training images.
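
    A simplified stand-in for the pipeline described above is sketched below: Fourier spatial-frequency magnitudes pooled into radial bins serve as features for an SVM classifier. It omits the directional decomposition and the Cauchy-product matched filtering of the paper and uses synthetic images throughout.

```python
# Sketch only (synthetic images): radially binned FFT magnitude features fed
# to an SVM, a simplified stand-in for the DFSF pipeline.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def radial_fft_features(img, n_bins=16):
    """Average FFT magnitude in concentric spatial-frequency bins."""
    mag = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    h, w = img.shape
    yy, xx = np.indices((h, w))
    r = np.hypot(yy - h / 2, xx - w / 2)
    bins = np.linspace(0, r.max(), n_bins + 1)
    return np.array([mag[(r >= lo) & (r < hi)].mean()
                     for lo, hi in zip(bins[:-1], bins[1:])])

rng = np.random.default_rng(7)
imgs = rng.normal(size=(80, 64, 64))
imgs[:40] += np.sin(np.linspace(0, 8 * np.pi, 64))   # add texture to one class
labels = np.array([0] * 40 + [1] * 40)

X = np.array([radial_fft_features(im) for im in imgs])
print(cross_val_score(SVC(kernel="linear"), X, labels, cv=5).mean())
```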

  4. Comparing safety climate for nurses working in operating theatres, critical care and ward areas in the UK: a mixed methods study

    PubMed Central

    Tarling, Maggie; Jones, Anne; Murrells, Trevor; McCutcheon, Helen

    2017-01-01

    Objectives The main aim of the study was to explore the potential sources of variation and understand the meaning of safety climate for nursing practice in acute hospital settings in the UK. Design A sequential mixed methods design included a cross-sectional survey using the Safety Climate Questionnaire (SCQ) and thematic analysis of focus group discussions. Confirmatory factor analysis (CFA) was used to validate the factor structure of the SCQ. Factor scores were compared between nurses working in operating theatres, critical care and ward areas. Results from the survey and the thematic analysis were then compared and synthesised. Setting A London University. Participants 319 registered nurses working in acute hospital settings completed the SCQ and a further 23 nurses participated in focus groups. Results CFA indicated that there was a good model fit on some criteria (χ²=1683.699, df=824, p<0.001; χ²/df=2.04; root mean square error of approximation=0.058) but a less acceptable fit on the comparative fit index, which was 0.804. There was a statistically significant difference between clinical specialisms in management commitment (F(4,266)=4.66, p=0.001). Nurses working in operating theatres had lower scores compared with ward areas and they also reported negative perceptions about management in their focus group. There was significant variation in scores for communication across clinical specialisms (F(4,266)=2.62, p=0.035) but none of the pairwise comparisons achieved statistical significance. Thematic analysis identified themes of human factors, clinical management and protecting patients. The system and the human side of caring was identified as a meta-theme. Conclusions The results suggest that the SCQ has some utility but requires further exploration. The findings indicate that safety in nursing practice is a complex interaction between safety systems and the social and interpersonal aspects of clinical practice. PMID:29084793

  5. Global Precipitation Analyses (3-Hourly to Monthly) Using TRMM, SSM/I and other Satellite Information

    NASA Technical Reports Server (NTRS)

    Adler, Robert F.; Huffman, George; Curtis, Scott; Bolvin, David; Nelkin, Eric

    2002-01-01

    Global precipitation analysis covering the last few decades and the impact of the new TRMM precipitation observations are discussed. The 20+ year, monthly, globally complete precipitation analysis of the World Climate Research Program's (WCRP/GEWEX) Global Precipitation Climatology Project (GPCP) is used to explore global and regional variations and trends and is compared to the much shorter TRMM (Tropical Rainfall Measuring Mission) tropical data set. The GPCP data set shows no significant trend in precipitation over the twenty years, unlike the positive trend in global surface temperatures over the past century. Regional trends are also analyzed. A trend pattern that is a combination of both El Niño and La Niña precipitation features is evident in the 20-year data set. This pattern is related to an increase with time in the number of combined months of El Niño and La Niña during the 20-year period. Monthly anomalies of precipitation are related to ENSO variations with clear signals extending into middle and high latitudes of both hemispheres. The GPCP daily, 1 deg. latitude-longitude analysis, which is available from January 1997 to the present, is described and the evolution of precipitation patterns on this time scale related to El Niño and La Niña is discussed. Finally, a TRMM-based 3-hr analysis is described that uses TRMM to calibrate polar-orbit microwave observations from SSM/I and geosynchronous IR observations and merges the various calibrated observations into a final, 3-hr resolution map. This TRMM standard product will be available for the entire TRMM period (January 1998-present). A real-time version of this merged product is being produced and is available at 0.25 deg. latitude-longitude resolution over the latitude range from 50 deg. N-50 deg. S. Examples are shown, including its use in monitoring flood conditions.

  6. Lung sound analysis for wheeze episode detection.

    PubMed

    Jain, Abhishek; Vepa, Jithendra

    2008-01-01

    Listening to and interpreting lung sounds with a stethoscope has long been an important component of screening for and diagnosing lung diseases. However, this practice has always been vulnerable to poor audibility, inter-observer variation (between different physicians) and poor reproducibility. Computerized analysis of lung sounds is therefore seen as a promising aid to the objective diagnosis of lung diseases. In this paper we aim at automatic analysis of lung sounds for wheeze episode detection and quantification. The proposed algorithm integrates and analyses a set of parameters based on the ATS (American Thoracic Society) definition of wheezes. It is robust and computationally simple, and yielded a sensitivity of 84% and a specificity of 86%.

  7. Preliminary Study on Appearance-Based Detection of Anatomical Point Landmarks in Body Trunk CT Images

    NASA Astrophysics Data System (ADS)

    Nemoto, Mitsutaka; Nomura, Yukihiro; Hanaoka, Shohei; Masutani, Yoshitaka; Yoshikawa, Takeharu; Hayashi, Naoto; Yoshioka, Naoki; Ohtomo, Kuni

    Anatomical point landmarks, as a most primitive form of anatomical knowledge, are useful for medical image understanding. In this study, we propose a detection method for anatomical point landmarks based on appearance models, which capture gray-level statistical variations at point landmarks and in their surrounding area. The models are built from the results of Principal Component Analysis (PCA) of sample data sets. In addition, we employed a generative learning method by transforming the ROIs of the sample data. We evaluated our method on 24 data sets of body trunk CT images and obtained an average sensitivity of 95.8 ± 7.3% over 28 landmarks.
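
    The appearance-model idea in the record above can be illustrated with a PCA over vectorized gray-level patches and a reconstruction-error score for candidate positions, as sketched below with synthetic patches; the patch size, component count and scoring rule are assumptions, not the authors' implementation.

```python
# Sketch only (synthetic patches): PCA appearance model of the gray-level
# pattern around a landmark, scored by reconstruction error.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(8)
patch_size = 15 * 15 * 15                            # ROI voxels around a landmark
train_patches = rng.normal(size=(24, patch_size))    # 24 training data sets

pca = PCA(n_components=10).fit(train_patches)

def landmark_score(patch):
    """Lower reconstruction error -> more landmark-like appearance."""
    coeff = pca.transform(patch[None, :])
    recon = pca.inverse_transform(coeff)
    return np.linalg.norm(patch - recon[0])

candidate = rng.normal(size=patch_size)
print(f"reconstruction error: {landmark_score(candidate):.1f}")
```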

  8. Assessment of the performance of electrode arrays using an image processing technique

    NASA Astrophysics Data System (ADS)

    Usman, N.; Khiruddin, A.; Nawawi, Mohd

    2017-08-01

    Interpreting an inverted resistivity section is time-consuming and tedious, and requires other sources of information to be geologically relevant. An image processing technique was therefore used to perform post-inversion processing, which makes geophysical data interpretation easier. The inverted data sets were imported into PCI Geomatica 9.0.1 for further processing. The data sets were clipped and merged together to match the coordinates of the three layers and permit pixel-to-pixel analysis. The dipole-dipole array is more sensitive to resistivity variation with depth than the Wenner-Schlumberger and pole-dipole arrays. Image processing thus serves as a good post-inversion tool in geophysical data processing.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trofimov, A; Carpenter, K; Shih, HA

    Purpose: To quantify daily set-up variations in fractionated proton therapy of ocular melanomas, and to assess the effect on the fidelity of the delivered distribution to the plan. Methods: In a typical five-fraction course, daily set-up is achieved by matching the position of fiducial markers in orthogonal radiographs to the images generated by the treatment planning program. A patient maintains the required gaze direction voluntarily, without the aid of fixation devices. Confirmation radiographs are acquired to assess intrafractional changes. For this study, daily radiographs were analyzed to determine the daily iso-center position and apparent gaze direction, which were then transferred to the planning system to calculate the dose delivered in individual fractions, and the accumulated dose for the entire course. Dose-volume metrics were compared between the planned and accumulated distributions for the tumor and organs at risk, for representative cases that varied by location within the ocular globe. Results: The analysis of the first set of cases (3 posterior, 3 transequatorial and 4 anterior tumors) revealed varying dose deviation patterns, depending on the tumor location. For anterior and posterior tumors, the largest dose increases were observed in the lens and ciliary body, while for the equatorial tumors, the macula, optic nerve and disk were most often affected. The iso-center position error was below 1.3 mm (95%-confidence interval), and the standard deviations of the daily polar and azimuthal gaze set-up were 1.5 and 3 degrees, respectively. Conclusion: We quantified interfractional and intrafractional set-up variation, and estimated their effect on the delivered dose for representative cases. Current safety margins are sufficient to maintain the target coverage; however, the dose delivered to critical structures often deviates from the plan. The ongoing analysis will further explore the patterns of dose deviation, and may help to identify particular treatment scenarios which are at a higher risk for such deviations.

  10. Applying Deep Learning in Medical Images: The Case of Bone Age Estimation.

    PubMed

    Lee, Jang Hyung; Kim, Kwang Gi

    2018-01-01

    A diagnostic need often arises to estimate bone age from X-ray images of the hand of a subject during the growth period. Together with measured physical height, such information may be used as an indicator for the height growth prognosis of the subject. We present a way to apply the deep learning technique to medical image analysis using hand bone age estimation as an example. Age estimation was formulated as a regression problem with hand X-ray images as input and estimated age as output. A set of hand X-ray images was used to form a training set with which a regression model was trained. An image preprocessing procedure is described which reduces image variations across data instances that are unrelated to age-wise variation. The use of Caffe, a deep learning tool, is demonstrated. A rather simple deep learning network was adopted and trained for tutorial purposes. A test set distinct from the training set was formed to assess the validity of the approach. The measured mean absolute difference value was 18.9 months, and the concordance correlation coefficient was 0.78. It is shown that the proposed deep learning-based neural network can be used to estimate a subject's age from hand X-ray images, which eliminates the need for tedious atlas look-ups in clinical environments and should improve the time and cost efficiency of the estimation process.
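
    The study itself used Caffe; as a framework-agnostic illustration of the same formulation (image in, age out, trained with an L1 loss so that the mean absolute difference is optimized directly), the sketch below defines a small convolutional regressor in PyTorch. The architecture, image size and all numbers are assumptions for illustration only.

```python
# Sketch only (PyTorch, not the study's Caffe setup): a small CNN that
# regresses age in months from a preprocessed hand X-ray.
import torch
import torch.nn as nn

class BoneAgeNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 1)    # single output: estimated age in months

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = BoneAgeNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()                   # mean absolute error, in months

# One toy training step on random tensors standing in for preprocessed images.
images = torch.randn(8, 1, 256, 256)
ages = torch.rand(8, 1) * 216           # 0-18 years, expressed in months
optimizer.zero_grad()
loss = loss_fn(model(images), ages)
loss.backward()
optimizer.step()
print(f"MAE on this batch: {loss.item():.1f} months")
```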

  11. A Complete Color Normalization Approach to Histopathology Images Using Color Cues Computed From Saturation-Weighted Statistics.

    PubMed

    Li, Xingyu; Plataniotis, Konstantinos N

    2015-07-01

    In digital histopathology, tasks of segmentation and disease diagnosis are achieved by quantitative analysis of image content. However, color variation in image samples makes it challenging to produce reliable results. This paper introduces a complete normalization scheme to address the problem of color variation in histopathology images jointly caused by inconsistent biopsy staining and nonstandard imaging conditions. Method: Different from existing normalization methods that either address a partial cause of color variation or lump the causes together, our method identifies causes of color variation based on a microscopic imaging model and addresses inconsistency in biopsy imaging and staining by an illuminant normalization module and a spectral normalization module, respectively. In evaluation, we use two public datasets that are representative of histopathology images commonly received in clinics to examine the proposed method from the aspects of robustness to system settings, performance consistency against achromatic pixels, and normalization effectiveness in terms of histological information preservation. As the saturation-weighted statistics proposed in this study generate stable and reliable color cues for stain normalization, our scheme is robust to system parameters and insensitive to image content and achromatic colors. Extensive experimentation suggests that our approach outperforms state-of-the-art normalization methods as the proposed method is the only approach that succeeds in preserving histological information after normalization. The proposed color normalization solution would be useful to mitigate effects of color variation in pathology images on subsequent quantitative analysis.

  12. Multi-model analysis of terrestrial carbon cycles in Japan: limitations and implications of model calibration using eddy flux observations

    NASA Astrophysics Data System (ADS)

    Ichii, K.; Suzuki, T.; Kato, T.; Ito, A.; Hajima, T.; Ueyama, M.; Sasai, T.; Hirata, R.; Saigusa, N.; Ohtani, Y.; Takagi, K.

    2010-07-01

    Terrestrial biosphere models show large differences when simulating carbon and water cycles, and reducing these differences is a priority for developing more accurate estimates of the condition of terrestrial ecosystems and future climate change. To reduce uncertainties and improve the understanding of their carbon budgets, we investigated the utility of eddy flux datasets for improving model simulations and reducing variability among the outputs of multiple terrestrial biosphere models in Japan. Using 9 terrestrial biosphere models (Support Vector Machine-based regressions, TOPS, CASA, VISIT, Biome-BGC, DAYCENT, SEIB, LPJ, and TRIFFID), we conducted two simulations: (1) point simulations at four eddy flux sites in Japan and (2) spatial simulations for Japan with a default model (based on original settings) and a modified model (based on model parameter tuning using eddy flux data). Generally, models run with default settings showed large deviations from observations, with large model-by-model variability. However, after we calibrated the model parameters using eddy flux data (GPP, RE and NEP), most models successfully simulated seasonal variations in the carbon cycle, with less variability among models. We also found that interannual variations in the carbon cycle are mostly consistent among models and observations. Spatial analysis also showed a large reduction in the variability among model outputs. This study demonstrated that careful validation and calibration of models with available eddy flux data reduced model-by-model differences. Nevertheless, site history, analysis of changes in model structure, and a more objective model calibration procedure should be included in further analyses.

  13. Kernel analysis of partial least squares (PLS) regression models.

    PubMed

    Shinzawa, Hideyuki; Ritthiruangdej, Pitiporn; Ozaki, Yukihiro

    2011-05-01

    An analytical technique based on kernel matrix representation is demonstrated to provide further chemically meaningful insight into partial least squares (PLS) regression models. The kernel matrix condenses essential information about scores derived from PLS or principal component analysis (PCA). Thus, it becomes possible to establish the proper interpretation of the scores. A PLS model for the total nitrogen (TN) content in multiple Thai fish sauces is built with a set of near-infrared (NIR) transmittance spectra of the fish sauce samples. The kernel analysis of the scores effectively reveals that the variation of the spectral feature induced by the change in protein content is substantially associated with the total water content and the protein hydration. Kernel analysis is also carried out on a set of time-dependent infrared (IR) spectra representing transient evaporation of ethanol from a binary mixture solution of ethanol and oleic acid. A PLS model to predict the elapsed time is built with the IR spectra and the kernel matrix is derived from the scores. The detailed analysis of the kernel matrix provides penetrating insight into the interaction between the ethanol and the oleic acid.
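    A minimal sketch of the kernel-matrix idea described above, assuming a linear kernel over the PLS latent scores (K = T T^T); the paper's exact kernel construction may differ, and the spectra and nitrogen values below are synthetic stand-ins for the NIR data.

    ```python
    # Derive a kernel (Gram) matrix from PLS scores, under the assumption of a
    # linear kernel over the latent-score vectors.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(40, 200))                         # 40 spectra x 200 wavelength channels
    y = X[:, :5].sum(axis=1) + 0.1 * rng.normal(size=40)   # e.g. total nitrogen content (synthetic)

    pls = PLSRegression(n_components=3)
    pls.fit(X, y)
    T = pls.transform(X)          # sample scores in the latent space

    K = T @ T.T                   # kernel matrix of the scores
    # Samples with similar latent chemistry produce large off-diagonal entries,
    # which is the structure the kernel analysis inspects for chemical meaning.
    print(K.shape)                # (40, 40)
    ```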

  14. GoIFISH: a system for the quantification of single cell heterogeneity from IFISH images.

    PubMed

    Trinh, Anne; Rye, Inga H; Almendro, Vanessa; Helland, Aslaug; Russnes, Hege G; Markowetz, Florian

    2014-08-26

    Molecular analysis has revealed extensive intra-tumor heterogeneity in human cancer samples, but cannot identify cell-to-cell variations within the tissue microenvironment. In contrast, in situ analysis can identify genetic aberrations in phenotypically defined cell subpopulations while preserving tissue-context specificity. GoIFISH is a widely applicable, user-friendly system tailored for the objective and semi-automated visualization, detection and quantification of genomic alterations and protein expression obtained from fluorescence in situ analysis. In a sample set of HER2-positive breast cancers, GoIFISH is highly robust in visual analysis and its accuracy compares favorably to other leading image analysis methods. GoIFISH is freely available at www.sourceforge.net/projects/goifish/.

  15. Changes in US extreme sea levels and the role of large scale climate variations

    NASA Astrophysics Data System (ADS)

    Wahl, T.; Chambers, D. P.

    2015-12-01

    We analyze a set of 20 tide gauge records covering the contiguous United States (US) coastline and the period from 1929 to 2013 to identify long-term trends and multi-decadal variations in extreme sea levels (ESLs) relative to changes in mean sea level (MSL). Significant but small long-term trends in ESLs above/below MSL are found at individual sites along most coastline stretches, but are mostly confined to the southeast coast and the winter season when storm surges are primarily driven by extra-tropical cyclones. We identify six regions with broadly coherent and considerable multi-decadal ESL variations unrelated to MSL changes. Using a quasi-non-stationary extreme value analysis approach, we show that the latter would have caused variations in design-relevant return water levels (RWLs; 50 to 200 year return periods) ranging from ~10 cm to as much as 110 cm across the six regions. To explore the origin of these temporal changes and the role of large-scale climate variability, we develop different sets of simple and multiple linear regression models with RWLs as dependent variables and with climate indices (or versions of them tailored toward predicting multi-decadal RWL changes) and wind stress curl as independent predictors. The models, after being tested for spatial and temporal stability, explain up to 97% of the observed variability at individual sites and almost 80% on average. Using the model predictions as covariates in the quasi-non-stationary extreme value analysis also significantly reduces the range of change in the 100-year RWLs over time, turning a non-stationary process into a stationary one. This highlights that the models - when driven with regional and global climate model output of the predictors - should also be capable of projecting future RWL changes for use by decision makers for improved flood preparedness and long-term resiliency.
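    A hedged sketch of the regression step described above: return water levels regressed on climate indices and wind stress curl, with the fitted prediction then usable as a covariate in the extreme value model. The predictor names and data are placeholders, not the study's indices.

    ```python
    # Multiple linear regression of RWLs on climate predictors (synthetic data).
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(1)
    n_years = 60
    predictors = np.column_stack([
        rng.normal(size=n_years),   # e.g. an NAO-like index (assumed)
        rng.normal(size=n_years),   # e.g. an ENSO-like index (assumed)
        rng.normal(size=n_years),   # regional wind stress curl (assumed)
    ])
    rwl = (1.5 + 0.3 * predictors[:, 0] - 0.2 * predictors[:, 2]
           + 0.05 * rng.normal(size=n_years))        # synthetic 100-yr RWL series (m)

    model = LinearRegression().fit(predictors, rwl)
    print("explained variance R^2:", model.score(predictors, rwl))
    # model.predict(predictors) can then serve as a covariate in a
    # non-stationary extreme value analysis, as the abstract describes.
    ```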

  16. Spatial variations of sea level along the coast of Thailand: Impacts of extreme land subsidence, earthquakes and the seasonal monsoon

    NASA Astrophysics Data System (ADS)

    Saramul, Suriyan; Ezer, Tal

    2014-11-01

    The study addresses two important issues associated with sea level along the coasts of Thailand: first, the fast sea level rise and its spatial variation, and second, the monsoon-driven seasonal variations in sea level. Tide gauge data that are more extensive than in past studies were obtained from several different local and global sources, and relative sea level rise (RSLR) rates were obtained from two different methods, linear regressions and non-linear Empirical Mode Decomposition/Hilbert-Huang Transform (EMD/HHT) analysis. The results show extremely large spatial variations in RSLR, with rates varying from ~ 1 mm y-1 to ~ 20 mm y-1; the maximum RSLR is found in the upper Gulf of Thailand (GOT) near Bangkok, where local land subsidence due to groundwater extraction dominates the trend. Furthermore, there are indications that RSLR rates increased significantly at all locations after the 2004 Sumatra-Andaman Earthquake and the Indian Ocean tsunami that followed, so that recent RSLR rates show smaller spatial differences than in the past, but with high rates of ~ 20-30 mm y-1 almost everywhere. The seasonal sea level cycle was found to be very different between stations in the GOT, which have minimum sea level in June-July, and stations in the Andaman Sea, which have minimum sea level in February. The seasonal sea-level variations in the GOT are driven mostly by large-scale wind-driven set-up/set-down processes associated with the seasonal monsoon and have amplitudes about ten times larger than either typical steric changes at those latitudes or the astronomical annual tides.

  17. Deep epistasis in human metabolism

    NASA Astrophysics Data System (ADS)

    Imielinski, Marcin; Belta, Calin

    2010-06-01

    We extend and apply a method that we have developed for deriving high-order epistatic relationships in large biochemical networks to a published genome-scale model of human metabolism. In our analysis we compute 33 328 reaction sets whose knockout synergistically disables one or more of 43 important metabolic functions. We also design minimal knockouts that remove flux through fumarase, an enzyme that has previously been shown to play an important role in human cancer. Most of these knockout sets employ more than eight mutually buffering reactions, spanning multiple cellular compartments and metabolic subsystems. These reaction sets suggest that human metabolic pathways possess a striking degree of parallelism, inducing "deep" epistasis between diversely annotated genes. Our results prompt specific chemical and genetic perturbation follow-up experiments that could be used to query in vivo pathway redundancy. They also suggest directions for future statistical studies of epistasis in genetic variation data sets.

  18. Head-and-face shape variations of U.S. civilian workers

    PubMed Central

    Zhuang, Ziqing; Shu, Chang; Xi, Pengcheng; Bergman, Michael; Joseph, Michael

    2016-01-01

    The objective of this study was to quantify head-and-face shape variations of U.S. civilian workers using modern methods of shape analysis. The study was motivated by previously highlighted changes in U.S. civilian worker head-and-face shape over the last few decades – which point to the need for new and better-fitting respirators – as well as by its usefulness in designing more effective personal protective equipment (PPE), specifically in the field of respirator design. The raw three-dimensional (3D) scan data for 1169 subjects were parameterized using geometry processing techniques. This process allowed the individual scans to be put in correspondence with each other in such a way that statistical shape analysis could be performed on a dense set of 3D points. It also cleaned up the original scan data, reducing noise and filling in holes. The next step, statistical analysis of the variability of head-and-face shape in the 3D database, was conducted using Principal Component Analysis (PCA) techniques. Through these analyses, it was shown that the space of head-and-face shape is spanned by a small number of basis vectors: fewer than 50 components explained more than 90% of the variability. Furthermore, the main modes of variation could be visualized by animating the shape changes along the PCA axes with computer software in executable form for Windows XP. The results from this study could in turn feed back into respirator design to achieve safer, more efficient product style and sizing. Future study is needed to determine the overall utility of the point cloud-based approach for the quantification of facial morphology variation and its relationship to respirator performance. PMID:23399025
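    A sketch of the PCA step on corresponded scans: each subject becomes a flattened vector of (x, y, z) point coordinates, and the cumulative explained variance tells how many components cover 90% of the shape variability. The array sizes and data below are synthetic stand-ins, not the 1169-subject database.

    ```python
    # PCA over flattened 3D point sets and visualization of a mode of variation.
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(2)
    n_subjects, n_points = 300, 5000
    shapes = rng.normal(size=(n_subjects, n_points * 3))   # subjects x flattened 3D points

    pca = PCA()
    pca.fit(shapes)
    cumvar = np.cumsum(pca.explained_variance_ratio_)
    k90 = int(np.searchsorted(cumvar, 0.90)) + 1
    print("components needed for 90% of shape variability:", k90)

    # A "mode of variation" is visualized by moving along one PCA axis:
    mean_shape = pca.mean_
    mode_1 = pca.components_[0]
    shape_plus = (mean_shape + 3 * np.sqrt(pca.explained_variance_[0]) * mode_1).reshape(-1, 3)
    ```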

  19. Head-and-face shape variations of U.S. civilian workers.

    PubMed

    Zhuang, Ziqing; Shu, Chang; Xi, Pengcheng; Bergman, Michael; Joseph, Michael

    2013-09-01

    The objective of this study was to quantify head-and-face shape variations of U.S. civilian workers using modern methods of shape analysis. The study was motivated by previously highlighted changes in U.S. civilian worker head-and-face shape over the last few decades - which point to the need for new and better-fitting respirators - as well as by its usefulness in designing more effective personal protective equipment (PPE), specifically in the field of respirator design. The raw three-dimensional (3D) scan data for 1169 subjects were parameterized using geometry processing techniques. This process allowed the individual scans to be put in correspondence with each other in such a way that statistical shape analysis could be performed on a dense set of 3D points. It also cleaned up the original scan data, reducing noise and filling in holes. The next step, statistical analysis of the variability of head-and-face shape in the 3D database, was conducted using Principal Component Analysis (PCA) techniques. Through these analyses, it was shown that the space of head-and-face shape is spanned by a small number of basis vectors: fewer than 50 components explained more than 90% of the variability. Furthermore, the main modes of variation could be visualized by animating the shape changes along the PCA axes with computer software in executable form for Windows XP. The results from this study could in turn feed back into respirator design to achieve safer, more efficient product style and sizing. Future study is needed to determine the overall utility of the point cloud-based approach for the quantification of facial morphology variation and its relationship to respirator performance. Copyright © 2013 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  20. AGAPE (Automated Genome Analysis PipelinE) for Pan-Genome Analysis of Saccharomyces cerevisiae

    PubMed Central

    Song, Giltae; Dickins, Benjamin J. A.; Demeter, Janos; Engel, Stacia; Dunn, Barbara; Cherry, J. Michael

    2015-01-01

    The characterization and public release of genome sequences from thousands of organisms is expanding the scope for genetic variation studies. However, understanding the phenotypic consequences of genetic variation remains a challenge in eukaryotes due to the complexity of the genotype-phenotype map. One approach to this is the intensive study of model systems for which diverse sources of information can be accumulated and integrated. Saccharomyces cerevisiae is an extensively studied model organism, with well-known protein functions and thoroughly curated phenotype data. To develop and expand the available resources linking genomic variation with function in yeast, we aim to model the pan-genome of S. cerevisiae. To initiate the yeast pan-genome, we newly sequenced or re-sequenced the genomes of 25 strains that are commonly used in the yeast research community using advanced sequencing technology at high quality. We also developed a pipeline for automated pan-genome analysis, which integrates the steps of assembly, annotation, and variation calling. To assign strain-specific functional annotations, we identified genes that were not present in the reference genome. We classified these according to their presence or absence across strains and characterized each group of genes with known functional and phenotypic features. The functional roles of novel genes not found in the reference genome and associated with strains or groups of strains appear to be consistent with anticipated adaptations in specific lineages. As more S. cerevisiae strain genomes are released, our analysis can be used to collate genome data and relate it to lineage-specific patterns of genome evolution. Our new tool set will enhance our understanding of genomic and functional evolution in S. cerevisiae, and will be available to the yeast genetics and molecular biology community. PMID:25781462

  1. Signal processing for the detection of explosive residues on varying substrates using laser-induced breakdown spectroscopy

    NASA Astrophysics Data System (ADS)

    Morton, Kenneth D., Jr.; Torrione, Peter A.; Collins, Leslie

    2011-05-01

    Laser induced breakdown spectroscopy (LIBS) can provide rapid, minimally destructive chemical analysis of substances with the benefit of little to no sample preparation. Therefore, LIBS is a viable technology for the detection of substances of interest in near real-time fielded remote sensing scenarios. Of particular interest to military and security operations is the detection of explosive residues on various surfaces. It has been demonstrated that LIBS is capable of detecting such residues; however, the surface or substrate on which the residue is present can alter the observed spectra. Standard chemometric techniques such as principal components analysis and partial least squares discriminant analysis have previously been applied to explosive residue detection; however, the classification techniques developed on such data perform best against residue/substrate pairs that were included in model training and do not perform well when the residue/substrate pairs are not in the training set. Specifically, residues in the training set may not be correctly detected if they are presented on a previously unseen substrate. In this work, we explicitly model the LIBS spectra resulting from the residue and the substrate to attempt to separate the response from each of the two components. This separation process is performed jointly with classifier design to ensure that the resulting classifier is able to detect residues of interest without being confused by variations in the substrate. We demonstrate that the proposed classification algorithm provides improved robustness to variations in substrate compared to standard chemometric techniques for residue detection.

  2. Variation objective analyses for cyclone studies

    NASA Technical Reports Server (NTRS)

    Achtemeier, G. L.; Kidder, S. Q.; Ochs, H. T.

    1985-01-01

    The objectives were to: (1) develop an objective analysis technique that will maximize the information content of data available from diverse sources, with particular emphasis on the incorporation of observations from satellites with those from more traditional immersion techniques; and (2) develop a diagnosis of the state of the synoptic-scale atmosphere on a much finer scale, and over a much broader region, than is presently possible, to permit studies of the interactions and energy transfers between global, synoptic and regional scale atmospheric processes. The variational objective analysis model consists of the two horizontal momentum equations, the hydrostatic equation, and the integrated continuity equation for a dry hydrostatic atmosphere. Preliminary tests of the model with the SESAME I data set are underway for 12 GMT 10 April 1979. At this stage, the purpose of the analysis is not the diagnosis of atmospheric structures but rather the validation of the model. Model runs for rawinsonde data, with the precision modulus weights set to force most of the adjustment of the wind field to the mass field, have produced 90 to 95 percent reductions in the imbalance of the initial data after only 4 cycles through the Euler-Lagrange equations. Sensitivity tests for linear stability of the 11 Euler-Lagrange equations that make up the VASP Model 1 indicate that there will be a lower limit to the scales of motion that can be resolved by this method. Linear stability criteria are violated where there is large horizontal wind shear near the upper tropospheric jet.

  3. Costing Alternative Birth Settings for Women at Low Risk of Complications: A Systematic Review.

    PubMed

    Scarf, Vanessa; Catling, Christine; Viney, Rosalie; Homer, Caroline

    2016-01-01

    There is demand from women for alternatives to giving birth in a standard hospital setting; however, access to these services is limited. This systematic review examines the literature relating to economic evaluations of birth settings for women at low risk of complications. Searches to identify economic evaluations of different birth settings were conducted in the following electronic databases: MEDLINE, CINAHL, EconLit, Business Source Complete, and Maternity and Infant Care. Relevant English-language publications from 1995 to 2015 were chosen using keywords and MeSH terms. Inclusion criteria required studies to focus on comparisons of birth settings. Data were extracted with respect to study design, perspective, PICO principles, and resource use and cost data. Eleven studies were included, from Australia, Canada, the Netherlands, Norway, the USA, and the UK. Four studies compared costs between homebirth and the hospital setting, and the remaining seven focussed on the cost of birth centre care versus the hospital setting. Six studies used a cost-effectiveness analysis and the remaining five used cost analysis and cost comparison methods. Eight of the 11 studies found a cost saving in the alternative settings, two found no difference in cost between settings, and one found an increased cost for birth centre care. There are few studies that compare the cost of birth settings. The variation in results may be attributable to the cost data collection processes, differences in health systems, and differences in which costs were included. A better understanding of the cost of birth settings is needed to inform policy makers and service providers.

  4. Short-Term Intra-Subject Variation in Exhaled Volatile Organic Compounds (VOCs) in COPD Patients and Healthy Controls and Its Effect on Disease Classification

    PubMed Central

    Phillips, Christopher; Mac Parthaláin, Neil; Syed, Yasir; Deganello, Davide; Claypole, Timothy; Lewis, Keir

    2014-01-01

    Exhaled volatile organic compounds (VOCs) are of interest for their potential to diagnose disease non-invasively. However, most breath VOC studies have analyzed single breath samples from an individual and assumed them to be wholly consistent and representative of the person. This provided the motivation for an investigation of the variability of breath profiles when three breath samples are taken over a short time period (two-minute intervals between samples) from 118 stable patients with Chronic Obstructive Pulmonary Disease (COPD) and 63 healthy controls and analyzed by gas chromatography and mass spectrometry (GC/MS). The extent of the variation in VOC levels differed between COPD and healthy subjects, and the patterns of variation differed for isoprene versus the bulk of other VOCs. In addition, machine learning approaches were applied to the breath data to establish whether these samples differed in their ability to discriminate COPD from healthy states and whether aggregation of multiple samples into single data sets could offer improved discrimination. The three breath samples gave similar classification accuracy to one another when evaluated separately (66.5% to 68.3% of subjects classified correctly, depending on the breath repetition used). Combining multiple breath samples into single data sets gave better discrimination (73.4% of subjects classified correctly). Although this accuracy is not sufficient for COPD diagnosis in a clinical setting, enhanced sampling and analysis may improve it further. Variability in samples, and short-term effects of practice or exertion, need to be considered in any breath testing program to improve reliability and optimize discrimination. PMID:24957028

  5. Short-Term Intra-Subject Variation in Exhaled Volatile Organic Compounds (VOCs) in COPD Patients and Healthy Controls and Its Effect on Disease Classification.

    PubMed

    Phillips, Christopher; Mac Parthaláin, Neil; Syed, Yasir; Deganello, Davide; Claypole, Timothy; Lewis, Keir

    2014-05-09

    Exhaled volatile organic compounds (VOCs) are of interest for their potential to diagnose disease non-invasively. However, most breath VOC studies have analyzed single breath samples from an individual and assumed them to be wholly consistent and representative of the person. This provided the motivation for an investigation of the variability of breath profiles when three breath samples are taken over a short time period (two-minute intervals between samples) from 118 stable patients with Chronic Obstructive Pulmonary Disease (COPD) and 63 healthy controls and analyzed by gas chromatography and mass spectrometry (GC/MS). The extent of the variation in VOC levels differed between COPD and healthy subjects, and the patterns of variation differed for isoprene versus the bulk of other VOCs. In addition, machine learning approaches were applied to the breath data to establish whether these samples differed in their ability to discriminate COPD from healthy states and whether aggregation of multiple samples into single data sets could offer improved discrimination. The three breath samples gave similar classification accuracy to one another when evaluated separately (66.5% to 68.3% of subjects classified correctly, depending on the breath repetition used). Combining multiple breath samples into single data sets gave better discrimination (73.4% of subjects classified correctly). Although this accuracy is not sufficient for COPD diagnosis in a clinical setting, enhanced sampling and analysis may improve accuracy further. Variability in samples, and short-term effects of practice or exertion, need to be considered in any breath testing program to improve reliability and optimize discrimination.
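    An illustrative sketch (not the authors' pipeline) of the aggregation idea: the three repeat breath profiles per subject are concatenated into one feature vector before classification and compared against using a single sample. The classifier choice, feature counts, and data are assumptions.

    ```python
    # Compare single-sample vs. combined-sample discrimination on synthetic VOC data.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(3)
    n_subjects, n_vocs = 181, 40                    # 118 COPD + 63 controls, 40 VOCs (assumed)
    labels = np.array([1] * 118 + [0] * 63)
    # subjects x repeats x VOC abundances (synthetic, with a small class shift)
    repeats = rng.normal(size=(n_subjects, 3, n_vocs)) + labels[:, None, None] * 0.3

    single_sample = repeats[:, 0, :]                # use only the first breath
    combined = repeats.reshape(n_subjects, -1)      # aggregate all three breaths

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    print("single-sample accuracy  :", cross_val_score(clf, single_sample, labels, cv=5).mean())
    print("combined-sample accuracy:", cross_val_score(clf, combined, labels, cv=5).mean())
    ```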

  6. Copy number variation signature to predict human ancestry

    PubMed Central

    2012-01-01

    Background Copy number variations (CNVs) are genomic structural variants that are found in healthy populations and have been observed to be associated with disease susceptibility. Existing methods for CNV detection are often performed on a sample-by-sample basis, which is not ideal for large datasets where common CNVs must be estimated by comparing the frequency of CNVs in the individual samples. Here we describe a simple and novel approach to locate genome-wide CNVs common to a specific population, using human ancestry as the phenotype. Results We utilized our previously published Genome Alteration Detection Analysis (GADA) algorithm to identify common ancestry CNVs (caCNVs) and built a caCNV model to predict population structure. We identified a 73 caCNV signature using a training set of 225 healthy individuals from European, Asian, and African ancestry. The signature was validated on an independent test set of 300 individuals with similar ancestral background. The error rate in predicting ancestry in this test set was 2% using the 73 caCNV signature. Among the caCNVs identified, several were previously confirmed experimentally to vary by ancestry. Our signature also contains a caCNV region with a single microRNA (MIR270), which represents the first reported variation of microRNA by ancestry. Conclusions We developed a new methodology to identify common CNVs and demonstrated its performance by building a caCNV signature to predict human ancestry with high accuracy. The utility of our approach could be extended to large case–control studies to identify CNV signatures for other phenotypes such as disease susceptibility and drug response. PMID:23270563

  7. Does the Genetic Code Have A Eukaryotic Origin?

    PubMed Central

    Zhang, Zhang; Yu, Jun

    2013-01-01

    In the RNA world, RNA is assumed to be the dominant macromolecule performing most, if not all, core "house-keeping" functions. The ribo-cell hypothesis suggests that the genetic code and the translation machinery may both be born of the RNA world, and that the introduction of DNA to ribo-cells gradually took over the informational role of RNA, providing a mature set of genetic code and a mechanism enabling stable inheritance of sequence and its variation. In this context, we modeled the genetic code in two content variables—GC and purine contents—of protein-coding sequences and measured the purine content sensitivity for each codon when the sensitivity (% usage) is plotted as a function of GC content variation. The analysis leads to a new pattern—the symmetric pattern—in which the sensitivity to purine content variation shows diagonal symmetry in the codon table, most significantly in the two GC-content-invariable quarters, in addition to the two existing patterns in which the table is divided into either four GC content sensitivity quarters or two amino acid diversity halves. The most insensitive codon sets are GUN (valine) and CAN (CAR for glutamine and CAY for histidine), and the most biased amino acid is valine (always over-estimated) followed by alanine (always under-estimated). The unique position of valine and its codons suggests its key roles in the final recruitment of the complete codon set of the canonical table. The distinct choice may only be attributable to sequence signatures or signals of splice sites for spliceosomal introns shared by all extant eukaryotes. PMID:23402863

  8. Genomic analysis reveals major determinants of cis-regulatory variation in Capsella grandiflora

    PubMed Central

    Steige, Kim A.; Laenen, Benjamin; Reimegård, Johan; Slotte, Tanja

    2017-01-01

    Understanding the causes of cis-regulatory variation is a long-standing aim in evolutionary biology. Although cis-regulatory variation has long been considered important for adaptation, we still have a limited understanding of the selective importance and genomic determinants of standing cis-regulatory variation. To address these questions, we studied the prevalence, genomic determinants, and selective forces shaping cis-regulatory variation in the outcrossing plant Capsella grandiflora. We first identified a set of 1,010 genes with common cis-regulatory variation using analyses of allele-specific expression (ASE). Population genomic analyses of whole-genome sequences from 32 individuals showed that genes with common cis-regulatory variation (i) are under weaker purifying selection and (ii) undergo less frequent positive selection than other genes. We further identified genomic determinants of cis-regulatory variation. Gene body methylation (gbM) was a major factor constraining cis-regulatory variation, whereas presence of nearby transposable elements (TEs) and tissue specificity of expression increased the odds of ASE. Our results suggest that most common cis-regulatory variation in C. grandiflora is under weak purifying selection, and that gene-specific functional constraints are more important for the maintenance of cis-regulatory variation than genome-scale variation in the intensity of selection. Our results agree with previous findings that suggest TE silencing affects nearby gene expression, and provide evidence for a link between gbM and cis-regulatory constraint, possibly reflecting greater dosage sensitivity of body-methylated genes. Given the extensive conservation of gbM in flowering plants, this suggests that gbM could be an important predictor of cis-regulatory variation in a wide range of plant species. PMID:28096395

  9. Interventions to improve water quality for preventing diarrhoea: systematic review and meta-analysis

    PubMed Central

    Schmidt, Wolf-Peter; Rabie, Tamer; Roberts, Ian; Cairncross, Sandy

    2007-01-01

    Objective: To assess the effectiveness of interventions to improve the microbial quality of drinking water for preventing diarrhoea. Design: Systematic review. Data sources: Cochrane Infectious Diseases Group's trials register, CENTRAL, Medline, Embase, LILACS; hand searching; and correspondence with experts and relevant organisations. Study selection: Randomised and quasirandomised controlled trials of interventions to improve the microbial quality of drinking water for preventing diarrhoea in adults and in children in settings with endemic disease. Data extraction: Allocation concealment, blinding, losses to follow-up, type of intervention, outcome measures, and measures of effect. Pooled effect estimates were calculated within the appropriate subgroups. Data synthesis: 33 reports from 21 countries documenting 42 comparisons were included. Variations in design, setting, and type and point of intervention, and variations in defining, assessing, calculating, and reporting outcomes limited the comparability of study results and pooling of results by meta-analysis. In general, interventions to improve the microbial quality of drinking water are effective in preventing diarrhoea. Effectiveness was not conditioned on the presence of improved water supplies or sanitation in the study setting and was not enhanced by combining the intervention with instructions on basic hygiene, a water storage vessel, or improved sanitation or water supplies—other common environmental interventions intended to prevent diarrhoea. Conclusion: Interventions to improve water quality are generally effective for preventing diarrhoea in all ages and in under 5s. Significant heterogeneity among the trials suggests that the level of effectiveness may depend on a variety of conditions that research to date cannot fully explain. PMID:17353208

  10. A comparison of a two-dimensional variational analysis method and a median filter for NSCAT ambiguity removal

    NASA Astrophysics Data System (ADS)

    Henderson, J. M.; Hoffman, R. N.; Leidner, S. M.; Atlas, R.; Brin, E.; Ardizzone, J. V.

    2003-06-01

    The ocean surface vector wind can be measured from space by scatterometers. For a set of measurements observed from several viewing directions and collocated in space and time, there will usually exist two, three, or four consistent wind vectors. These multiple wind solutions are known as ambiguities. Ambiguity removal procedures select one ambiguity at each location. We compare results of two different ambiguity removal algorithms, the operational median filter (MF) used by the Jet Propulsion Laboratory (JPL) and a two-dimensional variational analysis method (2d-VAR). We applied 2d-VAR to the entire NASA Scatterometer (NSCAT) mission, orbit by orbit, using European Centre for Medium-Range Weather Forecasts (ECMWF) 10-m wind analyses as background fields. We also applied 2d-VAR to a 51-day subset of the NSCAT mission using National Centers for Environmental Prediction (NCEP) 1000-hPa wind analyses as background fields. This second data set uses the same background fields as the MF data set. When both methods use the same NCEP background fields as a starting point for ambiguity removal, agreement is very good: only approximately 3% of the wind vector cells (WVCs) have different ambiguity selections; however, most of the WVCs with changes occur in coherent patches. Since at least one of the selections is in error, this implies that errors due to ambiguity selection are not isolated but are horizontally correlated. When we examine ambiguity selection differences at synoptic scales, we often find that the 2d-VAR selections are more meteorologically reasonable and more consistent with cloud imagery.
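    A toy sketch of ambiguity selection at a single wind vector cell: pick the candidate wind closest to a background analysis wind. This only illustrates the role of the background field; it is not the operational median filter or the 2d-VAR algorithm, and the numbers are invented.

    ```python
    # Select the scatterometer ambiguity nearest to the background analysis wind.
    import numpy as np

    def select_ambiguity(candidates, background):
        """candidates: (k, 2) array of (u, v) ambiguities; background: (2,) wind."""
        distances = np.linalg.norm(candidates - background, axis=1)
        return int(np.argmin(distances))

    ambiguities = np.array([[8.0, 2.0], [-7.5, -1.5], [2.0, 8.5], [-1.8, -8.0]])  # m/s
    background_wind = np.array([7.0, 3.0])                                        # m/s
    chosen = select_ambiguity(ambiguities, background_wind)
    print("selected ambiguity:", ambiguities[chosen])   # -> [8.0, 2.0]
    ```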

  11. Defining micro-epidemiology for malaria elimination: systematic review and meta-analysis.

    PubMed

    Bannister-Tyrrell, Melanie; Verdonck, Kristien; Hausmann-Muela, Susanna; Gryseels, Charlotte; Muela Ribera, Joan; Peeters Grietens, Koen

    2017-04-20

    Malaria risk can vary markedly between households in the same village, or between villages, but the determinants of this "micro-epidemiological" variation in malaria risk remain poorly understood. This study aimed to identify factors that explain fine-scale variation in malaria risk across settings and improve definitions and methods for malaria micro-epidemiology. A systematic review of studies that examined risk factors for variation in malaria infection between individuals, households, clusters, hotspots, or villages in any malaria-endemic setting was conducted. Four databases were searched for studies published up until 6th October 2015. Crude and adjusted effect estimates for risk factors for malaria infection were combined in random effects meta-analyses. Bias was assessed using the Newcastle-Ottawa Quality Assessment Scale. From 743 retrieved records, 51 studies were selected, representing populations comprising over 160,000 individuals in 21 countries, in high- and low-endemicity settings. Sixty-five risk factors were identified and meta-analyses were conducted for 11 risk factors. Most studies focused on environmental factors, especially increasing distance from a breeding site (OR 0.89, 95% CI 0.86-0.92, 10 studies). Individual bed net use was protective (OR 0.63, 95% CI 0.52-0.77, 12 studies), but not household bed net ownership. Increasing household size (OR 1.08, 95% CI 1.01-1.15, 4 studies) and household crowding (OR 1.79, 95% CI 1.48-2.16, 4 studies) were associated with malaria infection. Health seeking behaviour, medical history and genetic traits were less frequently studied. Only six studies examined whether individual-level risk factors explained differences in malaria risk at village or hotspot level, and five studies reported different risk factors at different levels of analysis. The risk of bias varied from low to high in individual studies. Insufficient reporting and comparability of measurements limited the number of meta-analyses conducted. Several variables associated with individual-level malaria infection were identified, but there was limited evidence that these factors explain variation in malaria risk at village or hotspot level. Social, population and other factors may confound estimates of environmental risk factors, yet these variables are not included in many studies. A structured framework of malaria risk factors is proposed to improve study design and quality of evidence in future micro-epidemiological studies.
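    A generic sketch of the random-effects pooling used to combine study-level odds ratios, here written as the DerSimonian-Laird estimator on the log scale; the four ORs and confidence intervals below are invented for illustration and are not the review's data.

    ```python
    # DerSimonian-Laird random-effects meta-analysis of log odds ratios.
    import numpy as np

    or_point = np.array([0.70, 0.55, 0.80, 0.60])                       # study odds ratios
    ci_low = np.array([0.50, 0.40, 0.60, 0.45])
    ci_high = np.array([0.98, 0.76, 1.07, 0.80])

    y = np.log(or_point)                                 # effect sizes on the log scale
    se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96) # SE back-calculated from 95% CI width
    w = 1.0 / se**2                                      # fixed-effect (inverse-variance) weights

    # Between-study variance (DerSimonian-Laird estimator)
    y_fixed = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - y_fixed) ** 2)
    df = len(y) - 1
    tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

    w_re = 1.0 / (se**2 + tau2)                          # random-effects weights
    pooled = np.sum(w_re * y) / np.sum(w_re)
    pooled_se = np.sqrt(1.0 / np.sum(w_re))
    print("pooled OR: %.2f (95%% CI %.2f-%.2f)"
          % (np.exp(pooled), np.exp(pooled - 1.96 * pooled_se), np.exp(pooled + 1.96 * pooled_se)))
    ```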

  12. Trophic disruption: a meta-analysis of how habitat fragmentation affects resource consumption in terrestrial arthropod systems.

    PubMed

    Martinson, Holly M; Fagan, William F

    2014-09-01

    Habitat fragmentation is a complex process that affects ecological systems in diverse ways, altering everything from population persistence to ecosystem function. Despite widespread recognition that habitat fragmentation can influence food web interactions, consensus on the factors underlying variation in the impacts of fragmentation across systems remains elusive. In this study, we conduct a systematic review and meta-analysis to quantify the effects of habitat fragmentation and spatial habitat structure on resource consumption in terrestrial arthropod food webs. Across 419 studies, we found a negative overall effect of fragmentation on resource consumption. Variation in effect size was extensive but predictable. Specifically, resource consumption was reduced on small, isolated habitat fragments, higher at patch edges, and neutral with respect to landscape-scale spatial variables. In general, resource consumption increased in fragmented settings for habitat generalist consumers but decreased for specialist consumers. Our study demonstrates widespread disruption of trophic interactions in fragmented habitats and describes variation among studies that is largely predictable based on the ecological traits of the interacting species. We highlight future prospects for understanding how changes in spatial habitat structure may influence trophic modules and food webs. © 2014 John Wiley & Sons Ltd/CNRS.

  13. Quality and rigor of the concept mapping methodology: a pooled study analysis.

    PubMed

    Rosas, Scott R; Kane, Mary

    2012-05-01

    The use of concept mapping in research and evaluation has expanded dramatically over the past 20 years. Researchers in academic, organizational, and community-based settings have applied concept mapping successfully without the benefit of systematic analyses across studies to identify the features of a methodologically sound study. Quantitative characteristics and estimates of quality and rigor that may guide future studies are lacking. To address this gap, we conducted a pooled analysis of 69 concept mapping studies to describe characteristics across study phases, generate specific indicators of validity and reliability, and examine the relationship between select study characteristics and quality indicators. Individual study characteristics and estimates were pooled and quantitatively summarized, describing the distribution, variation, and parameters for each. In addition, variation in concept mapping data collection in relation to these characteristics and estimates was examined. Overall, the results suggest that concept mapping yields strong internal representational validity and very strong sorting and rating reliability estimates. Validity and reliability were consistently high despite variation in participation and task completion percentages across data collection modes. The implications of these findings as a practical reference for assessing the quality and rigor of future concept mapping studies are discussed. Copyright © 2011 Elsevier Ltd. All rights reserved.

  14. A gamma variate model that includes stretched exponential is a better fit for gastric emptying data from mice

    PubMed Central

    Bajzer, Željko; Gibbons, Simon J.; Coleman, Heidi D.; Linden, David R.

    2015-01-01

    Noninvasive breath tests for gastric emptying are important techniques for understanding the changes in gastric motility that occur in disease or in response to drugs. Mice are often used as an animal model; however, the gamma variate model currently used for data analysis does not always fit the data appropriately. The aim of this study was to determine appropriate mathematical models to better fit mouse gastric emptying data including when two peaks are present in the gastric emptying curve. We fitted 175 gastric emptying data sets with two standard models (gamma variate and power exponential), with a gamma variate model that includes stretched exponential and with a proposed two-component model. The appropriateness of the fit was assessed by the Akaike Information Criterion. We found that extension of the gamma variate model to include a stretched exponential improves the fit, which allows for a better estimation of T1/2 and Tlag. When two distinct peaks in gastric emptying are present, a two-component model is required for the most appropriate fit. We conclude that use of a stretched exponential gamma variate model and when appropriate a two-component model will result in a better estimate of physiologically relevant parameters when analyzing mouse gastric emptying data. PMID:26045615
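    A hedged sketch of comparing emptying-curve models by AIC with scipy. The gamma variate is written here as a*t^b*exp(-c*t) and the stretched-exponential extension as a*t^b*exp(-(c*t)^d); these parameterizations, the synthetic data, and the AIC form are assumptions for illustration and may differ in detail from the paper's definitions.

    ```python
    # Fit two candidate emptying models to a synthetic breath-test curve and compare AIC.
    import numpy as np
    from scipy.optimize import curve_fit

    def gamma_variate(t, a, b, c):
        return a * t**b * np.exp(-c * t)

    def stretched_gamma(t, a, b, c, d):
        return a * t**b * np.exp(-(c * t)**d)

    def aic(y, y_fit, n_params):
        rss = np.sum((y - y_fit) ** 2)
        n = len(y)
        return n * np.log(rss / n) + 2 * n_params

    # Synthetic excretion-rate curve standing in for measured breath-test data.
    t = np.linspace(0.1, 240.0, 60)                       # minutes
    rng = np.random.default_rng(4)
    y = stretched_gamma(t, 0.05, 1.5, 0.03, 0.9) + 0.1 * rng.normal(size=t.size)

    p_gv, _ = curve_fit(gamma_variate, t, y, p0=[0.05, 1.5, 0.03], maxfev=10000)
    p_sg, _ = curve_fit(stretched_gamma, t, y, p0=[0.05, 1.5, 0.03, 1.0], maxfev=10000)

    print("AIC gamma variate      :", aic(y, gamma_variate(t, *p_gv), 3))
    print("AIC stretched extension:", aic(y, stretched_gamma(t, *p_sg), 4))
    ```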

  15. A Perfect Match Genomic Landscape Provides a Unified Framework for the Precise Detection of Variation in Natural and Synthetic Haploid Genomes

    PubMed Central

    Palacios-Flores, Kim; García-Sotelo, Jair; Castillo, Alejandra; Uribe, Carina; Aguilar, Luis; Morales, Lucía; Gómez-Romero, Laura; Reyes, José; Garciarubio, Alejandro; Boege, Margareta; Dávila, Guillermo

    2018-01-01

    We present a conceptually simple, sensitive, precise, and essentially nonstatistical solution for the analysis of genome variation in haploid organisms. The generation of a Perfect Match Genomic Landscape (PMGL), which computes intergenome identity with single nucleotide resolution, reveals signatures of variation wherever a query genome differs from a reference genome. Such signatures encode the precise location of different types of variants, including single nucleotide variants, deletions, insertions, and amplifications, effectively introducing the concept of a general signature of variation. The precise nature of variants is then resolved through the generation of targeted alignments between specific sets of sequence reads and known regions of the reference genome. Thus, the perfect match logic decouples the identification of the location of variants from the characterization of their nature, providing a unified framework for the detection of genome variation. We assessed the performance of the PMGL strategy via simulation experiments. We determined the variation profiles of natural genomes and of a synthetic chromosome, both in the context of haploid yeast strains. Our approach uncovered variants that have previously escaped detection. Moreover, our strategy is ideally suited for further refining high-quality reference genomes. The source codes for the automated PMGL pipeline have been deposited in a public repository. PMID:29367403

  16. Geographic variation in the black bear (Ursus americanus) in the eastern United States and Canada

    USGS Publications Warehouse

    Kennedy, M.L.; Kennedy, P.K.; Bogan, M.A.; Waits, J.L.

    2002-01-01

    The pattern of geographic variation in morphologic characters of the black bear (Ursus americanus) was assessed at 13 sites in the eastern United States and Canada. Thirty measurements from 206 males and 207 females were recorded to the nearest 0.01 mm using digital calipers and subjected to principal components analysis. A matrix of correlations among skull characters was computed, and the first 3 principal components were extracted. These accounted for 90.5% of the variation in the character set for males and 87.1% for females. Three-dimensional projection of localities onto principal components showed that, for males and females, largest individuals occurred in the more southern localities (e.g., males--Louisiana-Mississippi, eastern Texas; females--Louisiana-eastern Texas) and the smallest animals occurred in the northernmost locality (Quebec). Generally, bears were similar morphologically to those in nearby geographic areas. For males, correlations between morphologic variation and environmental factors indicated a significant relationship between size variation and mean January temperature, mean July temperature, mean annual precipitation, latitude, and actual evapotranspiration; for females, a significant relationship was observed between morphologic variation and mean annual temperature, mean January temperature, mean July temperature, latitude, and actual evapotranspiration. There was no significant correlation for either sex between environmental factors and projections onto components II and III.

  17. Evidence of Dynamic Crustal Deformation in Tohoku, Japan, From Time-Varying Receiver Functions

    NASA Astrophysics Data System (ADS)

    Porritt, R. W.; Yoshioka, S.

    2017-10-01

    Temporal variation of crustal structure is key to our understanding of Earth processes on human timescales. Often, we expect the most significant structural variations to be caused by strong ground shaking associated with large earthquakes, and recent studies seem to confirm this. Here we test the possibility of using P receiver functions (PRFs) to isolate structural variations over time. Synthetic receiver function tests indicate that structural variation could produce PRF changes of the same order of magnitude as random noise or contamination by local earthquakes. Nonetheless, we find significant variability in observed receiver functions over time at several stations located in northeastern Honshu. Immediately following the Tohoku-oki earthquake, we observe high PRF variation clustering spatially, especially in two regions near the beginning and end of the rupture plane. Given the depth sensitivity of PRFs and the timescales over which this variability is observed, we infer that the effect is primarily due to fluid migration in volcanic regions and shear stress/strength reorganization. Although the noise levels in PRFs are high for this type of analysis, the method samples small data sets, so its computational cost is lower than that of other methods, such as ambient noise, making PRFs a useful tool for estimating temporal variations in crustal structure.

  18. Neural network post-processing of grayscale optical correlator

    NASA Technical Reports Server (NTRS)

    Lu, Thomas T; Hughlett, Casey L.; Zhoua, Hanying; Chao, Tien-Hsin; Hanan, Jay C.

    2005-01-01

    In this paper we present the use of a radial basis function neural network (RBFNN) as a post-processor to assist the optical correlator in identifying objects and rejecting false alarms. Image plane features near the correlation peaks are extracted and fed to the neural network for analysis. The approach is capable of handling a large number of object variations and filter sets. Preliminary experimental results are presented and the performance is analyzed.

  19. Using the range to calculate the coefficient of variation.

    PubMed

    Rhiel, G Steven

    2004-12-01

    In this research a coefficient of variation (CVhigh-low) is calculated from the highest and lowest values in a set of data. Use of CVhigh-low when the population is normal, leptokurtic, or skewed is discussed. The statistic is most effective when sampling from the normal distribution. With leptokurtic distributions, CVhigh-low works well for comparing the relative variability between two or more distributions but does not provide a very "good" point estimate of the population coefficient of variation. With skewed distributions, CVhigh-low works well in identifying which data set has the greater relative variation but does not specify how much the variation differs, nor does it provide a "good" point estimate.
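    The abstract does not give the exact estimator, so the sketch below assumes a common range-based form: the standard deviation is estimated as range / d2(n), where d2 is the expected range of n standard normal variates (the control-chart constant), and the CV is that estimate divided by the sample mean. This is an illustrative assumption, not necessarily the paper's definition of CVhigh-low.

    ```python
    # Range-based coefficient of variation, compared with the conventional CV.
    import numpy as np

    D2 = {2: 1.128, 3: 1.693, 4: 2.059, 5: 2.326, 6: 2.534,
          7: 2.704, 8: 2.847, 9: 2.970, 10: 3.078}   # tabulated d2 constants

    def cv_high_low(data):
        data = np.asarray(data, dtype=float)
        d2 = D2[len(data)]                   # this illustration assumes n <= 10
        sigma_est = (data.max() - data.min()) / d2
        return sigma_est / data.mean()

    sample = [9.8, 10.4, 10.1, 9.6, 10.6]
    print("range-based CV :", cv_high_low(sample))
    print("conventional CV:", np.std(sample, ddof=1) / np.mean(sample))
    ```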

  20. Partial sequencing of sodA gene and its application to identification of Streptococcus dysgalactiae subsp. dysgalactiae isolated from farmed fish.

    PubMed

    Nomoto, R; Kagawa, H; Yoshida, T

    2008-01-01

    To investigate the differences between Lancefield group C Streptococcus dysgalactiae (GCSD) strains isolated from diseased fish and those isolated from animals by sequencing and phylogenetic analysis of the sodA gene. The sodA genes of Strep. dysgalactiae strains isolated from fish and animals were amplified and their nucleotide sequences determined. Although 100% sequence identity was observed among fish GCSD strains, the sequences from animal isolates showed variations relative to the fish isolate sequences. Thus, all fish GCSD strains were clearly separated from GCSD strains of other origins by phylogenetic tree analysis. In addition, an original primer set was designed, based on the determined sequences, to specifically amplify the sodA gene of fish GCSD strains. The primer set yielded amplification products from fish GCSD strains only. Sequencing analysis of the sodA gene demonstrated the genetic divergence between Strep. dysgalactiae strains isolated from fish and those from mammals. Moreover, an original oligonucleotide primer set that can simply detect the genotype of fish GCSD strains was designed. This study shows that Strep. dysgalactiae isolated from diseased fish can be distinguished from conventional GCSD strains by differences in the sequence of the sodA gene.

  1. Numerical analysis of the dynamic interaction between wheel set and turnout crossing using the explicit finite element method

    NASA Astrophysics Data System (ADS)

    Xin, L.; Markine, V. L.; Shevtsov, I. Y.

    2016-03-01

    A three-dimensional (3-D) explicit dynamic finite element (FE) model is developed to simulate the impact of the wheel on the crossing nose. The model consists of a wheel set moving over the turnout crossing. Realistic wheel, wing rail and crossing geometries have been used in the model. Using this model, the dynamic responses of the system, such as the contact forces between the wheel and the crossing, crossing nose displacements and accelerations, and stresses in the rail material as well as in the sleepers and ballast, can be obtained. Detailed analysis of the wheel set and crossing interaction using the local contact stress state in the rail is possible as well, which provides a good basis for prediction of the long-term behaviour of the crossing (fatigue analysis). In order to tune and validate the FE model, field measurements conducted on several turnouts in the railway network of the Netherlands are used here. The parametric study performed here, including variations of the crossing nose geometry, demonstrates the capabilities of the developed model. The results of the validation and parametric study are presented and discussed.

  2. Evidence for multidecadal variability in US extreme sea level records

    NASA Astrophysics Data System (ADS)

    Wahl, Thomas; Chambers, Don P.

    2015-03-01

    We analyze a set of 20 tide gauge records covering the contiguous United States (US) coastline and the period from 1929 to 2013 to identify long-term trends and multidecadal variations in extreme sea levels (ESLs) relative to changes in mean sea level (MSL). Different data sampling and analysis techniques are applied to test the robustness of the results against the selected methodology. Significant but small long-term trends in ESLs above/below MSL are found at individual sites along most coastline stretches, but are mostly confined to the southeast coast and the winter season when storm surges are primarily driven by extratropical cyclones. We identify six regions with broadly coherent and considerable multidecadal ESL variations unrelated to MSL changes. Using a quasi-nonstationary extreme value analysis, we show that the latter would have caused variations in design relevant return water levels (50-200 year return periods) ranging from ˜10 cm to as much as 110 cm across the six regions. The results raise questions as to the applicability of the "MSL offset method," assuming that ESL changes are primarily driven by changes in MSL without allowing for distinct long-term trends or low-frequency variations. Identifying the coherent multidecadal ESL variability is crucial in order to understand the physical driving factors. Ultimately, this information must be included into coastal design and adaptation processes.
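    A sketch of the extreme value step referenced above: fit a GEV distribution to annual maximum sea levels and compute a 100-year return water level. The quasi-nonstationary treatment (time-varying parameters or covariates) used in the study is beyond this sketch, and the annual maxima below are synthetic.

    ```python
    # Stationary GEV fit to annual maxima and a 100-year return level estimate.
    import numpy as np
    from scipy.stats import genextreme

    # Synthetic annual maximum sea levels (m) standing in for tide gauge data.
    annual_maxima = genextreme.rvs(c=-0.1, loc=1.2, scale=0.25, size=85, random_state=5)

    shape, loc, scale = genextreme.fit(annual_maxima)
    return_period = 100.0
    rwl_100 = genextreme.ppf(1.0 - 1.0 / return_period, shape, loc=loc, scale=scale)
    print("estimated 100-year return water level (m):", round(float(rwl_100), 2))
    ```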

  3. Measurements of wind-waves under transient wind conditions.

    NASA Astrophysics Data System (ADS)

    Shemer, Lev; Zavadsky, Andrey

    2015-11-01

    Wind forcing in nature is always unsteady, resulting in a complicated evolution pattern that involves numerous time and space scales. In the present work, wind waves in a laboratory wind-wave flume are studied under unsteady forcing. The variation of the surface elevation is measured by capacitance wave gauges, while the components of the instantaneous surface slope in the across-wind and along-wind directions are determined by a regular or scanning laser slope gauge. The locations of the wave gauge and of the laser slope gauge are separated by a few centimeters in the across-wind direction. The instantaneous wind velocity is recorded simultaneously using a Pitot tube. Measurements are performed at a number of fetches and for different patterns of wind velocity variation. For each wind velocity variation pattern, at least 100 independent realizations were recorded. The accumulated data sets allow calculating ensemble-averaged values of the measured parameters. Significant differences between the evolution patterns of the surface elevation and of the slope components were found. Wavelet analysis was applied to determine the dominant wave frequency of the surface elevation and of the slope variation at each instant. The corresponding ensemble-averaged values acquired by the different sensors were computed and compared. Analysis of the measured ensemble-averaged quantities at different fetches makes it possible to identify different stages in the wind-wave evolution and to estimate the appropriate time and length scales.

  4. Thermal-stress analysis for a wood composite blade

    NASA Technical Reports Server (NTRS)

    Fu, K. C.; Harb, A.

    1984-01-01

    A thermal-stress analysis of a wind turbine blade made of a wood composite material is reported. First, the governing partial differential equation for heat conduction is derived; then a finite element procedure using a variational approach is developed for the solution of the governing equation. Thus, the temperature distribution throughout the blade is determined. Next, based on the temperature distribution, a finite element procedure using a potential energy approach is applied to determine the thermal-stress distribution. A set of results, considered to be satisfactory, is obtained by computer. All computer programs are contained in the report.

  5. Arctic sea-ice variations from time-lapse passive microwave imagery

    USGS Publications Warehouse

    Campbell, W.J.; Ramseier, R.O.; Zwally, H.J.; Gloersen, P.

    1980-01-01

    This paper presents: (1) a short historical review of the passive microwave research on sea ice which established the observational and theoretical base permitting the interpretation of the first passive microwave images of Earth obtained by the Nimbus-5 ESMR; (2) the construction of a time-lapse motion picture film of a 16-month set of serial ESMR images to aid in the formidable data analysis task; and (3) a few of the most significant findings resulting from an early analysis of these data, using selected ESMR images to illustrate these findings. ?? 1980 D. Reidel Publishing Co.

  6. Accurate estimates of 3D Ising critical exponents using the coherent-anomaly method

    NASA Astrophysics Data System (ADS)

    Kolesik, Miroslav; Suzuki, Masuo

    1995-02-01

    An analysis of the critical behavior of the three-dimensional Ising model using the coherent-anomaly method (CAM) is presented. Various sources of errors in CAM estimates of critical exponents are discussed, and an improved scheme for the CAM data analysis is tested. Using a set of mean-field type approximations based on the variational series expansion approach, accuracy comparable to the most precise conventional methods has been achieved. Our results for the critical exponents are given by α = 0.108(5), β = 0.327(4), γ = 1.237(4) and δ = 4.77(5).

  7. Thermal Inspection of a Composite Fuselage Section Using a Fixed Eigenvector Principal Component Analysis Method

    NASA Technical Reports Server (NTRS)

    Zalameda, Joseph N.; Bolduc, Sean; Harman, Rebecca

    2017-01-01

    A composite fuselage aircraft forward section was inspected with flash thermography. The fuselage section is 24 feet long and approximately 8 feet in diameter. The structure is primarily configured as a composite sandwich structure of carbon fiber face sheets with a Nomex(Trademark) honeycomb core. The outer surface area was inspected. The thermal data consisted of 477 data sets totaling over 227 gigabytes. Principal component analysis (PCA) was used to process the data sets for substructure and defect detection. A fixed eigenvector approach using a global covariance matrix was used and compared to a varying eigenvector approach. The fixed eigenvector approach was demonstrated to be a practical analysis method for the detection and interpretation of various defects such as paint thickness variation, possible water intrusion damage, and delamination damage. In addition, inspection considerations are discussed, including coordinate system layout, manipulation of the fuselage section, and the manual scanning technique used for full coverage.
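
    The fixed-eigenvector idea can be sketched as follows: eigenvectors are computed once from a covariance matrix pooled over all data sets, and every data set is then projected onto those same components. The Python sketch below is a minimal illustration of that idea under assumed array layouts (n_pixels x n_frames per data set, equal frame counts); it is not the analysis code used in the inspection.

```python
import numpy as np

def fixed_eigenvector_pct(datasets, n_components=5):
    """Sketch of a fixed-eigenvector principal component analysis.

    datasets: list of arrays shaped (n_pixels, n_frames); all data sets are
    assumed to share the same number of frames (hypothetical layout).
    """
    # Mean-centre each data set along the time axis.
    centred = [d - d.mean(axis=1, keepdims=True) for d in datasets]

    # Global covariance over the time dimension, pooled across all data sets.
    stacked = np.vstack(centred)                      # (total_pixels, n_frames)
    cov = (stacked.T @ stacked) / (stacked.shape[0] - 1)

    # Fixed eigenvectors of the global covariance, largest eigenvalues first.
    _, eigvecs = np.linalg.eigh(cov)
    components = eigvecs[:, ::-1][:, :n_components]

    # Project every data set onto the same fixed components.
    return [c @ components for c in centred], components

# Example with two hypothetical data sets of 1,000 pixels x 50 frames each.
rng = np.random.default_rng(0)
data = [rng.normal(size=(1000, 50)) for _ in range(2)]
maps, comps = fixed_eigenvector_pct(data)
print(maps[0].shape, comps.shape)                     # (1000, 5) (50, 5)
```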

  8. Analysis of Variations in Hospital Use by Medicare Patients in PSRO Areas, 1974-1977

    PubMed Central

    Deacon, Ronald; Lubitz, James; Gornick, Marian; Newton, Marilyn

    1979-01-01

    A study of the use of short-stay hospitals in PSRO areas by Medicare enrollees aged 65 and over for the period 1974 through 1977 revealed that discharge rates increased, average length of stay (ALOS) decreased, and days-of-care rates remained relatively constant in nearly all of the PSRO areas. The data show large variations in hospital use in PSRO areas within States and HEW regions, and suggest that factors within the area are critical determinants of hospital utilization. This study has important implications for PSRO program policy, because it suggests that factors other than physician and hospital behavior should also be considered when setting objectives for reducing misutilization and improving the quality of health care. PMID:10309054

  9. Uniting Tricholoma sulphureum and T. bufonium.

    PubMed

    Comandini, Ornella; Haug, Ingeborg; Rinaldi, Andrea C; Kuyper, Thomas W

    2004-10-01

    The taxonomic status and relationship of Tricholoma sulphureum and the similar T. bufonium were investigated using different sets of characters. These included morphological data on fruit bodies, ecological and chorological data, and analysis of the sequence data obtained for the ITS of basidiomes of different ecological and geographic origin. Moreover, the ectomycorrhizas formed by T. bufonium on Abies alba and Quercus sp. were characterised, and anatomical features compared with those of T. sulphureum mycorrhizas on coniferous and broad-leaved host trees. Our results revealed extensive ITS variation in members of the T. sulphureum group, but this variation was not correlated with morphology, ecology, or geographical distribution. We conclude that T. bufonium cannot be maintained as an autonomous taxon and should be treated as an infraspecific variant of T. sulphureum.

  10. Sensitivity Analysis of the Sheet Metal Stamping Processes Based on Inverse Finite Element Modeling and Monte Carlo Simulation

    NASA Astrophysics Data System (ADS)

    Yu, Maolin; Du, R.

    2005-08-01

    Sheet metal stamping is one of the most commonly used manufacturing processes and has therefore attracted much research. A search of the literature, however, shows that many problems remain unsolved. For example, it is well known that for the same press, the same workpiece material, and the same die set, product quality may still vary owing to a number of factors, such as inhomogeneity of the workpiece material, loading errors, and lubrication. At present, few methods can predict this quality variation, let alone identify what contributes to it. As a result, trial-and-error is still needed on the shop floor, causing additional cost and time delay. This paper introduces a new approach to predict product quality variation and identify the sensitive design/process parameters. The new approach is based on a combination of inverse Finite Element Modeling (FEM) and Monte Carlo simulation (more specifically, the Latin Hypercube Sampling (LHS) approach). With acceptable accuracy, the inverse FEM (also called one-step FEM) requires much less computational load than the usual incremental FEM and hence can be used to predict the quality variations under various conditions. LHS is a statistical method through which the sensitivity analysis can be carried out. The result of the sensitivity analysis has clear physical meaning and can be used to optimize the die design and/or the process design. Two simulation examples are presented: drawing a rectangular box and drawing a two-step rectangular box.
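
    As a rough illustration of the sampling step, the sketch below draws a Latin Hypercube design over three hypothetical stamping parameters and screens their influence on a placeholder quality metric; the surrogate model, parameter names, and bounds are illustrative assumptions, not the inverse-FEM model used by the authors.

```python
import numpy as np
from scipy.stats import qmc

# Hypothetical stamping parameters: blank-holder force (kN), friction
# coefficient, and sheet thickness (mm); the bounds are illustrative only.
l_bounds = [100.0, 0.05, 0.8]
u_bounds = [300.0, 0.15, 1.2]

sampler = qmc.LatinHypercube(d=3, seed=42)
unit_samples = sampler.random(n=200)
params = qmc.scale(unit_samples, l_bounds, u_bounds)

def stamping_quality(force, mu, thickness):
    """Placeholder for the inverse-FEM quality prediction (e.g. max thinning)."""
    return 0.002 * force + 1.5 * mu - 0.1 * thickness   # illustrative surrogate

quality = np.array([stamping_quality(*p) for p in params])

# Crude sensitivity screen: correlation of each parameter with the output.
for name, column in zip(["force", "friction", "thickness"], params.T):
    r = np.corrcoef(column, quality)[0, 1]
    print(f"{name:9s} correlation with quality metric: {r:+.2f}")
```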

  11. A Bioinformatics Workflow for Variant Peptide Detection in Shotgun Proteomics*

    PubMed Central

    Li, Jing; Su, Zengliu; Ma, Ze-Qiang; Slebos, Robbert J. C.; Halvey, Patrick; Tabb, David L.; Liebler, Daniel C.; Pao, William; Zhang, Bing

    2011-01-01

    Shotgun proteomics data analysis usually relies on database search. However, commonly used protein sequence databases do not contain information on protein variants and thus prevent variant peptides and proteins from being identified. Including known coding variations into protein sequence databases could help alleviate this problem. Based on our recently published human Cancer Proteome Variation Database, we have created a protein sequence database that comprehensively annotates thousands of cancer-related coding variants collected in the Cancer Proteome Variation Database as well as noncancer-specific ones from the Single Nucleotide Polymorphism Database (dbSNP). Using this database, we then developed a data analysis workflow for variant peptide identification in shotgun proteomics. The high risk of false positive variant identifications was addressed by a modified false discovery rate estimation method. Analysis of colorectal cancer cell lines SW480, RKO, and HCT-116 revealed a total of 81 peptides that contain either noncancer-specific or cancer-related variations. Twenty-three out of 26 variants randomly selected from the 81 were confirmed by genomic sequencing. We further applied the workflow on data sets from three individual colorectal tumor specimens. A total of 204 distinct variant peptides were detected, and five carried known cancer-related mutations. Each individual showed a specific pattern of cancer-related mutations, suggesting potential use of this type of information for personalized medicine. Compatibility of the workflow has been tested with four popular database search engines including Sequest, Mascot, X!Tandem, and MyriMatch. In summary, we have developed a workflow that effectively uses existing genomic data to enable variant peptide detection in proteomics. PMID:21389108
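
    For background, the standard target-decoy estimate of the false discovery rate at a given score threshold is simply the ratio of passing decoy hits to passing target hits; the paper modifies this estimator for the much smaller variant-peptide search space. The snippet below shows only the standard estimate with made-up scores.

```python
def target_decoy_fdr(target_scores, decoy_scores, threshold):
    """Standard target-decoy FDR estimate at a score threshold:
    FDR ~= (# decoy hits passing) / (# target hits passing).
    Background only; the paper applies a modified estimator suited to the
    much smaller variant-peptide search space."""
    n_target = sum(s >= threshold for s in target_scores)
    n_decoy = sum(s >= threshold for s in decoy_scores)
    return n_decoy / n_target if n_target else 0.0

# Illustrative scores only.
print(target_decoy_fdr([3.1, 2.8, 2.2, 1.9, 1.5],
                       [2.0, 1.4, 1.1, 0.9, 0.7], threshold=1.8))   # 0.25
```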

  12. Monitoring variations of dimethyl sulfide and dimethylsulfoniopropionate in seawater and the atmosphere based on sequential vapor generation and ion molecule reaction mass spectrometry.

    PubMed

    Iyadomi, Satoshi; Ezoe, Kentaro; Ohira, Shin-Ichi; Toda, Kei

    2016-04-01

    To monitor the fluctuations of dimethyl sulfur compounds at the seawater/atmosphere interface, an automated system was developed based on sequential injection analysis coupled with vapor generation-ion molecule reaction mass spectrometry (SIA-VG-IMRMS). Using this analytical system, dissolved dimethyl sulfide (DMS(aq)) and dimethylsulfoniopropionate (DMSP), a precursor to DMS in seawater, were monitored together sequentially with atmospheric dimethyl sulfide (DMS(g)). A shift from the equilibrium point between DMS(aq) and DMS(g) results in the emission of DMS to the atmosphere. Atmospheric DMS emitted from seawater plays an important role as a source of cloud condensation nuclei, which influences the oceanic climate. Water samples were taken periodically and dissolved DMS(aq) was vaporized for analysis by IMRMS. After that, DMSP was hydrolyzed to DMS and acrylic acid, and analyzed in the same manner as DMS(aq). The vaporization behavior and hydrolysis of DMSP to DMS were investigated to optimize these conditions. Frequent (every 30 min) determination of the three components, DMS(aq)/DMSP (nanomolar) and DMS(g) (ppbv), was carried out by SIA-VG-IMRMS. Field analysis of the dimethyl sulfur compounds was undertaken at a coastal station, which succeeded in showing detailed variations of the compounds in a natural setting. Observed concentrations of the dimethyl sulfur compounds both in the atmosphere and seawater largely changed with time and similar variations were repeatedly observed over several days, suggesting diurnal variations in the DMS flux at the seawater/atmosphere interface.

  13. The Mouse Genomes Project: a repository of inbred laboratory mouse strain genomes.

    PubMed

    Adams, David J; Doran, Anthony G; Lilue, Jingtao; Keane, Thomas M

    2015-10-01

    The Mouse Genomes Project was initiated in 2009 with the goal of using next-generation sequencing technologies to catalogue molecular variation in the common laboratory mouse strains and a selected set of wild-derived inbred strains. The initial sequencing and survey of sequence variation in 17 inbred strains was completed in 2011 and included a comprehensive catalogue of single nucleotide polymorphisms, short insertions/deletions, larger structural variants (including their fine-scale architecture), the landscape of transposable element variation, and genomic sites subject to post-transcriptional alteration of RNA. From this beginning, the resource has expanded significantly to include 36 fully sequenced inbred laboratory mouse strains, a refined and updated data processing pipeline, and new variation querying and data visualisation tools, which are available on the project's website ( http://www.sanger.ac.uk/resources/mouse/genomes/ ). The focus of the project is now the completion of de novo assembled chromosome sequences and strain-specific gene structures for the core strains. We discuss how the assembled chromosomes will power comparative analysis, data access tools and future directions of mouse genetics.

  14. al mena: a comprehensive resource of human genetic variants integrating genomes and exomes from Arab, Middle Eastern and North African populations.

    PubMed

    Koshy, Remya; Ranawat, Anop; Scaria, Vinod

    2017-10-01

    The Middle East and North Africa (MENA) region encompasses very distinct populations with a rich history and characteristic ethnic, linguistic and genetic diversity. The genetic diversity of the MENA region has remained largely unknown. The recent availability of whole-exome and whole-genome sequences from the region has made it possible to collect population-specific allele frequencies. The integration of data sets from this region provides insights into the landscape of genetic variants in the region. We systematically integrated genetic variants from multiple data sets available from this region to create a compendium of over 26 million genetic variations. The variants were systematically annotated, and their allele frequencies in the data sets were computed and made available through a web interface that enables quick queries. As a proof of principle for application of the compendium to genetic epidemiology, we analyzed the allele frequencies for variants in the transglutaminase 1 (TGM1) gene, associated with autosomal recessive lamellar ichthyosis. Our analysis revealed that the carrier frequency of selected variants differed widely, with significant interethnic differences. To the best of our knowledge, al mena is the first and most comprehensive repertoire of genetic variations from the Arab, Middle Eastern and North African region. We hope al mena will accelerate Precision Medicine in the region.
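
    As a reminder of the arithmetic behind carrier-frequency estimates, under Hardy-Weinberg equilibrium a recessive allele with frequency q yields a heterozygous-carrier frequency of 2q(1 - q). The snippet below applies this textbook relation to illustrative allele frequencies, not to values from the al mena resource.

```python
def carrier_frequency(allele_freq):
    """Heterozygous-carrier frequency under Hardy-Weinberg equilibrium."""
    q = allele_freq
    return 2 * q * (1 - q)

# Illustrative allele frequencies only (not values from the al mena resource).
for q in (0.001, 0.005, 0.01):
    print(f"allele frequency {q:.3f} -> carrier frequency {carrier_frequency(q):.4f}")
```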

  15. disLocate: tools to rapidly quantify local intermolecular structure to assess two-dimensional order in self-assembled systems.

    PubMed

    Bumstead, Matt; Liang, Kunyu; Hanta, Gregory; Hui, Lok Shu; Turak, Ayse

    2018-01-24

    Order classification is particularly important in photonics, optoelectronics, nanotechnology, biology, and biomedicine, as self-assembled and living systems tend to be well ordered but not perfectly ordered. Engineering sets of experimental protocols that can accurately reproduce specific desired patterns can be a challenge when (dis)ordered outcomes look visually similar. Robust comparisons between similar samples, especially with limited data sets, need a finely tuned ensemble of accurate analysis tools. Here we introduce our numerical Mathematica package disLocate, a suite of tools to rapidly quantify the spatial structure of a two-dimensional dispersion of objects. The full range of tools available in disLocate gives different insights into the quality and type of order present in a given dispersion, accessing the translational, orientational and entropic order. The package allows researchers to extract the variation and confidence range within finite sets of data (single images), using different structure metrics to quantify local variation in disorder. Because all metrics are contained within one package, researchers can easily and rapidly extract many different parameters simultaneously, allowing robust conclusions to be drawn on the order of a given system. Quantifying the experimental trends which produce desired morphologies enables the engineering of novel methods to direct self-assembly.
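
    One widely used local order metric of the kind disLocate provides is the hexatic bond-orientational order parameter |psi_6|. The Python sketch below computes a simplified version (using a fixed neighbour count rather than Voronoi neighbours) and is not the Mathematica package itself.

```python
import numpy as np
from scipy.spatial import cKDTree

def psi6(points, n_neighbors=6):
    """Local hexatic bond-orientational order |psi_6| for 2D point patterns.

    Simplified sketch using a fixed neighbour count rather than Voronoi
    neighbours; not the disLocate implementation itself.
    """
    tree = cKDTree(points)
    # Nearest neighbours; the first returned index is the point itself.
    _, idx = tree.query(points, k=n_neighbors + 1)
    order = np.empty(len(points))
    for i, neighbours in enumerate(idx[:, 1:]):
        vectors = points[neighbours] - points[i]
        angles = np.arctan2(vectors[:, 1], vectors[:, 0])
        order[i] = np.abs(np.mean(np.exp(6j * angles)))
    return order

# Example: a slightly perturbed triangular lattice scores near 1 in the bulk.
rng = np.random.default_rng(1)
x, y = np.meshgrid(np.arange(20.0), np.arange(20.0) * np.sqrt(3) / 2)
x[1::2] += 0.5                        # offset alternate rows
pts = np.column_stack([x.ravel(), y.ravel()]) + rng.normal(0, 0.02, (400, 2))
print(f"mean |psi_6| = {psi6(pts).mean():.2f}")
```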

  16. Lujiatun Psittacosaurids: Understanding Individual and Taphonomic Variation Using 3D Geometric Morphometrics

    PubMed Central

    Hedrick, Brandon P.; Dodson, Peter

    2013-01-01

    Psittacosaurus is one of the most abundant and speciose genera in the Dinosauria, with fifteen named species. The genus is geographically and temporally widespread with large sample sizes of several of the nominal species allowing detailed analysis of intra- and interspecific variation. We present a reanalysis of three separate, coeval species within the Psittacosauridae; P. lujiatunensis, P. major, and Hongshanosaurus houi from the Lujiatun beds of the Yixian Formation, northeastern China, using three-dimensional geometric morphometrics on a sample set of thirty skulls in combination with a reevaluation of the proposed character states for each species. Using these complementary methods, we show that individual and taphonomic variation are the joint causes of a large range of variation among the skulls when they are plotted in a morphospace. Our results demonstrate that there is only one species of Psittacosaurus within the Lujiatun beds and that the three nominal species represent different taphomorphotypes of P. lujiatunensis. The wide range of geometric morphometric variation in a single species of Psittacosaurus implies that the range of variation found in other dinosaurian groups may also be related to taphonomic distortion rather than interspecific variation. As the morphospace is driven primarily by variation resulting from taphonomic distortion, this study demonstrates that the geometric morphometric approach can only be used with great caution to delineate interspecific variation in Psittacosaurus and likely other dinosaur groups without a complementary evaluation of character states. This study presents the first application of 3D geometric morphometrics to the dinosaurian morphospace and the first attempt to quantify taphonomic variation in dinosaur skulls. PMID:23950887

  17. You Cannot Step Into the Same River Twice: When Power Analyses Are Optimistic.

    PubMed

    McShane, Blakeley B; Böckenholt, Ulf

    2014-11-01

    Statistical power depends on the size of the effect of interest. However, effect sizes are rarely fixed in psychological research: Study design choices, such as the operationalization of the dependent variable or the treatment manipulation, the social context, the subject pool, or the time of day, typically cause systematic variation in the effect size. Ignoring this between-study variation, as standard power formulae do, results in assessments of power that are too optimistic. Consequently, when researchers attempting replication set sample sizes using these formulae, their studies will be underpowered and will thus fail at a greater than expected rate. We illustrate this with both hypothetical examples and data on several well-studied phenomena in psychology. We provide formulae that account for between-study variation and suggest that researchers set sample sizes with respect to our generally more conservative formulae. Our formulae generalize to settings in which there are multiple effects of interest. We also introduce an easy-to-use website that implements our approach to setting sample sizes. Finally, we conclude with recommendations for quantifying between-study variation. © The Author(s) 2014.
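
    The core argument can be reproduced numerically: if the realized standardized effect varies across studies, the expected power is generally lower than the power computed at the mean effect. The sketch below illustrates this for a two-sided one-sample z-test with an assumed normal distribution of effects; it is not the authors' exact formulae.

```python
import numpy as np
from scipy.stats import norm

def power_fixed(delta, n, alpha=0.05):
    """Two-sided one-sample z-test power for a standardized effect delta."""
    z = norm.ppf(1 - alpha / 2)
    return norm.cdf(delta * np.sqrt(n) - z) + norm.cdf(-delta * np.sqrt(n) - z)

def power_heterogeneous(delta0, tau, n, alpha=0.05, draws=100_000, seed=0):
    """Expected power when the realized effect varies across studies,
    delta ~ Normal(delta0, tau^2); a sketch of the argument, not the
    authors' exact formulae."""
    rng = np.random.default_rng(seed)
    deltas = rng.normal(delta0, tau, draws)
    return power_fixed(deltas, n, alpha).mean()

n = 50   # chosen so the standard formula gives roughly 80% power at delta = 0.4
print(f"standard power: {power_fixed(0.4, n):.2f}")
print(f"with between-study s.d. of 0.2: {power_heterogeneous(0.4, 0.2, n):.2f}")
```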

  18. Process Improvement of Reactive Dye Synthesis Using Six Sigma Concept

    NASA Astrophysics Data System (ADS)

    Suwanich, Thanapat; Chutima, Parames

    2017-06-01

    This research addresses a problem in the reactive dye synthesis process of a global manufacturer in Thailand that produces various chemicals for reactive dye products supplied to global industries such as chemicals, textiles and garments. The product named “Reactive Blue Base” was selected for this study because it has the highest demand and its chemical yield shows a high variation, i.e. a yield range of 90.4% - 99.1% (S.D. = 2.405 and Cpk = -0.08) with an average yield of 94.5% (lower than the 95% standard set by the company). The Six Sigma concept is applied with the aim of increasing the yield and reducing the variation of this process. This approach is suitable since it provides a systematic guideline with five improvement phases (DMAIC) to effectively tackle the problem and find the appropriate parameter settings for the process. Under the new parameter settings, the process yield variation is reduced to a range of 96.5% - 98.5% (S.D. = 0.525 and Cpk = 1.83) and the average yield is increased to 97.5% (higher than the 95% standard set by the company).
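
    The capability index quoted above follows the usual definition Cpk = min(USL - mu, mu - LSL) / (3 * sigma). The snippet below computes it for simulated yield data against a hypothetical one-sided 95% lower specification; the data and limits are illustrative, not the plant's records.

```python
import numpy as np

def cpk(data, lsl=None, usl=None):
    """Process capability index Cpk from the sample mean and standard deviation.

    lsl / usl are lower / upper specification limits; either may be omitted
    for a one-sided specification.
    """
    mu, sigma = np.mean(data), np.std(data, ddof=1)
    indices = []
    if usl is not None:
        indices.append((usl - mu) / (3 * sigma))
    if lsl is not None:
        indices.append((mu - lsl) / (3 * sigma))
    return min(indices)

# Illustrative yield data only (not the plant's measurements), against a
# hypothetical one-sided lower specification of 95%.
rng = np.random.default_rng(7)
yields = rng.normal(97.5, 0.5, size=60)
print(f"Cpk = {cpk(yields, lsl=95.0):.2f}")
```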

  19. Bayesian models for comparative analysis integrating phylogenetic uncertainty.

    PubMed

    de Villemereuil, Pierre; Wells, Jessie A; Edwards, Robert D; Blomberg, Simon P

    2012-06-28

    Uncertainty in comparative analyses can come from at least two sources: a) phylogenetic uncertainty in the tree topology or branch lengths, and b) uncertainty due to intraspecific variation in trait values, either due to measurement error or natural individual variation. Most phylogenetic comparative methods do not account for such uncertainties. Not accounting for these sources of uncertainty leads to false perceptions of precision (confidence intervals will be too narrow) and inflated significance in hypothesis testing (e.g. p-values will be too small). Although there is some application-specific software for fitting Bayesian models accounting for phylogenetic error, more general and flexible software is desirable. We developed models to directly incorporate phylogenetic uncertainty into a range of analyses that biologists commonly perform, using a Bayesian framework and Markov Chain Monte Carlo analyses. We demonstrate applications in linear regression, quantification of phylogenetic signal, and measurement error models. Phylogenetic uncertainty was incorporated by applying a prior distribution for the phylogeny, where this distribution consisted of the posterior tree sets from Bayesian phylogenetic tree estimation programs. The models were analysed using simulated data sets, and applied to a real data set on plant traits, from rainforest plant species in Northern Australia. Analyses were performed using the free and open source software OpenBUGS and JAGS. Incorporating phylogenetic uncertainty through an empirical prior distribution of trees leads to more precise estimation of regression model parameters than using a single consensus tree and enables a more realistic estimation of confidence intervals. In addition, models incorporating measurement errors and/or individual variation, in one or both variables, are easily formulated in the Bayesian framework. We show that BUGS is a useful, flexible general purpose tool for phylogenetic comparative analyses, particularly for modelling in the face of phylogenetic uncertainty and accounting for measurement error or individual variation in explanatory variables. Code for all models is provided in the BUGS model description language.

  20. Bayesian models for comparative analysis integrating phylogenetic uncertainty

    PubMed Central

    2012-01-01

    Background Uncertainty in comparative analyses can come from at least two sources: a) phylogenetic uncertainty in the tree topology or branch lengths, and b) uncertainty due to intraspecific variation in trait values, either due to measurement error or natural individual variation. Most phylogenetic comparative methods do not account for such uncertainties. Not accounting for these sources of uncertainty leads to false perceptions of precision (confidence intervals will be too narrow) and inflated significance in hypothesis testing (e.g. p-values will be too small). Although there is some application-specific software for fitting Bayesian models accounting for phylogenetic error, more general and flexible software is desirable. Methods We developed models to directly incorporate phylogenetic uncertainty into a range of analyses that biologists commonly perform, using a Bayesian framework and Markov Chain Monte Carlo analyses. Results We demonstrate applications in linear regression, quantification of phylogenetic signal, and measurement error models. Phylogenetic uncertainty was incorporated by applying a prior distribution for the phylogeny, where this distribution consisted of the posterior tree sets from Bayesian phylogenetic tree estimation programs. The models were analysed using simulated data sets, and applied to a real data set on plant traits, from rainforest plant species in Northern Australia. Analyses were performed using the free and open source software OpenBUGS and JAGS. Conclusions Incorporating phylogenetic uncertainty through an empirical prior distribution of trees leads to more precise estimation of regression model parameters than using a single consensus tree and enables a more realistic estimation of confidence intervals. In addition, models incorporating measurement errors and/or individual variation, in one or both variables, are easily formulated in the Bayesian framework. We show that BUGS is a useful, flexible general purpose tool for phylogenetic comparative analyses, particularly for modelling in the face of phylogenetic uncertainty and accounting for measurement error or individual variation in explanatory variables. Code for all models is provided in the BUGS model description language. PMID:22741602

  1. A space-time multiscale modelling of Earth's gravity field variations

    NASA Astrophysics Data System (ADS)

    Wang, Shuo; Panet, Isabelle; Ramillien, Guillaume; Guilloux, Frédéric

    2017-04-01

    The mass distribution within the Earth varies over a wide range of spatial and temporal scales, generating variations in the Earth's gravity field in space and time. These variations are monitored by satellites such as the GRACE mission, with a 400 km spatial resolution and a temporal resolution of 10 days to 1 month. They are expressed in the form of gravity field models, often with a fixed spatial or temporal resolution. The analysis of these models allows us to study mass transfers within the Earth system. Here, we have developed space-time multi-scale models of the gravity field in order to optimize the estimation of gravity signals resulting from local processes at different spatial and temporal scales, and to adapt the time resolution of the model to its spatial resolution according to the satellite sampling. To do so, we first build a 4D wavelet family combining spatial Poisson wavelets with temporal Haar wavelets. Then, we set up a regularized inversion of inter-satellite gravity potential differences in a Bayesian framework to estimate the model parameters. To build the prior, we develop a spectral analysis, localized in time and space, of geophysical models of mass transport and associated gravity variations. Finally, we apply our approach to the reconstruction of space-time variations of the gravity field due to hydrology. We first consider a global distribution of observations along the orbit, from a simplified synthetic hydrology signal comprising only annual variations at large spatial scales. Then, we consider a regional distribution of observations in Africa and a larger number of spatial and temporal scales. We test the influence of an imperfect prior and discuss our results.

  2. Support in Clinical Settings as Perceived by Nursing Students in Iran: A Qualitative Study

    PubMed Central

    Joolaee, Soodabeh; Ashghali Farahani, Mansoureh; Jafarian Amiri, Seyedeh Roghayeh; Varaei, Shokoh

    2016-01-01

    Background Although support is one of the most substantial needs of nursing students during clinical education, it is not clearly defined in the literature. Objectives The current study aimed to explore the concept of support in clinical settings as perceived by nursing students. Materials and Methods A qualitative content analysis was used to explore the meaning of student support in clinical settings. Purposive sampling with maximum variation was used to select the participants from among bachelor nursing students in the nursing school of Babol University of Medical Sciences in the north of Iran. Semi-structured interviews were conducted to gather the perceptions and experiences of seventeen nursing students. Conventional content analysis was applied to analyze the data. Results The main theme, nurturance, emerged with seven subthemes: humanistic behavior with the student, respectful communication with students, accepting the student in the clinical setting, sustaining confidence, need-based supervision, acceptance of the profession in society, and empowerment. Conclusions Supporting nursing students in clinical education requires nurturing care: care that leads to a sense of worthiness and respectability in students and contributes to the improvement of their clinical abilities. PMID:27331057

  3. Multi-Omics Factor Analysis-a framework for unsupervised integration of multi-omics data sets.

    PubMed

    Argelaguet, Ricard; Velten, Britta; Arnol, Damien; Dietrich, Sascha; Zenz, Thorsten; Marioni, John C; Buettner, Florian; Huber, Wolfgang; Stegle, Oliver

    2018-06-20

    Multi-omics studies promise the improved characterization of biological processes across molecular layers. However, methods for the unsupervised integration of the resulting heterogeneous data sets are lacking. We present Multi-Omics Factor Analysis (MOFA), a computational method for discovering the principal sources of variation in multi-omics data sets. MOFA infers a set of (hidden) factors that capture biological and technical sources of variability. It disentangles axes of heterogeneity that are shared across multiple modalities and those specific to individual data modalities. The learnt factors enable a variety of downstream analyses, including identification of sample subgroups, data imputation and the detection of outlier samples. We applied MOFA to a cohort of 200 patient samples of chronic lymphocytic leukaemia, profiled for somatic mutations, RNA expression, DNA methylation and ex vivo drug responses. MOFA identified major dimensions of disease heterogeneity, including immunoglobulin heavy-chain variable region status, trisomy of chromosome 12 and previously underappreciated drivers, such as response to oxidative stress. In a second application, we used MOFA to analyse single-cell multi-omics data, identifying coordinated transcriptional and epigenetic changes along cell differentiation. © 2018 The Authors. Published under the terms of the CC BY 4.0 license.

  4. Early Triassic fluctuations of the global carbon cycle: New evidence from paired carbon isotopes in the western USA basin

    NASA Astrophysics Data System (ADS)

    Caravaca, Gwénaël; Thomazo, Christophe; Vennin, Emmanuelle; Olivier, Nicolas; Cocquerez, Théophile; Escarguel, Gilles; Fara, Emmanuel; Jenks, James F.; Bylund, Kevin G.; Stephen, Daniel A.; Brayard, Arnaud

    2017-07-01

    In the aftermath of the catastrophic end-Permian mass extinction, the Early Triassic records recurrent perturbations in the carbon isotope signal, most notably during the Smithian and through the Smithian/Spathian Boundary (SSB; 1.5 myr after the Permian/Triassic boundary), which show some of the largest excursions of the Phanerozoic. The late Smithian also corresponds to major biotic turnovers and environmental changes, such as temperature fluctuations, that deeply impacted the recovery after the end-Permian mass extinction. Here we document the paired carbon isotope signal along with an analysis of the trace and major elements at the long-known Hot Springs section (southeastern Idaho, USA). This section records Early Triassic sediments from the Griesbachian-Dienerian up to the lower Spathian. We show that the organic and carbonate δ13C variations mirror the signals identified at a global scale. Particularly, the middle Smithian-SSB event represented by a negative-positive isotopic couplet is well identified and is not of diagenetic origin. We also document a positive excursion potentially corresponding to the Dienerian/Smithian Boundary. Observed Smithian-Spathian excursions are recorded similarly in both the organic and carbonate reservoirs, but the organic matter signal systematically shows unexpectedly dampened variations compared to its carbonate counterpart. Additionally, we show that variations in the net isotopic effect (i.e., Δ13C) probably resulted from a complex set of forcing parameters including either a mixing between terrestrial and marine organic matter depending on the evolution of the depositional setting, or variations in the biological fractionation. We establish that the Δ13C signal cannot be directly related to CO2-driven temperature variations at Hot Springs. Even though the carbon isotope signal mirrors the Early Triassic variations known at the global scale, the Hot Springs signal probably also reflects local influences on the carbon isotopes that are neither diagenetic nor representative of the global exogenic carbon cycle.

  5. A Discovery Resource of Rare Copy Number Variations in Individuals with Autism Spectrum Disorder

    PubMed Central

    Prasad, Aparna; Merico, Daniele; Thiruvahindrapuram, Bhooma; Wei, John; Lionel, Anath C.; Sato, Daisuke; Rickaby, Jessica; Lu, Chao; Szatmari, Peter; Roberts, Wendy; Fernandez, Bridget A.; Marshall, Christian R.; Hatchwell, Eli; Eis, Peggy S.; Scherer, Stephen W.

    2012-01-01

    The identification of rare inherited and de novo copy number variations (CNVs) in human subjects has proven a productive approach to highlight risk genes for autism spectrum disorder (ASD). A variety of microarrays are available to detect CNVs, including single-nucleotide polymorphism (SNP) arrays and comparative genomic hybridization (CGH) arrays. Here, we examine a cohort of 696 unrelated ASD cases using a high-resolution one-million feature CGH microarray, the majority of which were previously genotyped with SNP arrays. Our objective was to discover new CNVs in ASD cases that were not detected by SNP microarray analysis and to delineate novel ASD risk loci via combined analysis of CGH and SNP array data sets on the ASD cohort and CGH data on an additional 1000 control samples. Of the 615 ASD cases analyzed on both SNP and CGH arrays, we found that 13,572 of 21,346 (64%) of the CNVs were exclusively detected by the CGH array. Several of the CGH-specific CNVs are rare in population frequency and impact previously reported ASD genes (e.g., NRXN1, GRM8, DPYD), as well as novel ASD candidate genes (e.g., CIB2, DAPP1, SAE1), and all were inherited except for a de novo CNV in the GPHN gene. A functional enrichment test of gene-sets in ASD cases over controls revealed nucleotide metabolism as a potential novel pathway involved in ASD, which includes several candidate genes for follow-up (e.g., DPYD, UPB1, UPP1, TYMP). Finally, this extensively phenotyped and genotyped ASD clinical cohort serves as an invaluable resource for the next step of genome sequencing for complete genetic variation detection. PMID:23275889

  6. Quantifying Variations In Multi-parameter Models With The Photon Clean Method (PCM) And Bootstrap Methods

    NASA Astrophysics Data System (ADS)

    Carpenter, Matthew H.; Jernigan, J. G.

    2007-05-01

    We present examples of an analysis progression consisting of a synthesis of the Photon Clean Method (Carpenter, Jernigan, Brown, Beiersdorfer 2007) and bootstrap methods to quantify errors and variations in many-parameter models. The Photon Clean Method (PCM) works well for model spaces with large numbers of parameters proportional to the number of photons; therefore, a Monte Carlo paradigm is a natural numerical approach. Consequently, PCM, an "inverse Monte Carlo" method, requires a new approach for quantifying errors compared to common analysis methods for fitting models of low dimensionality. This presentation explores the methodology and presentation of analysis results derived from a variety of public data sets, including observations with XMM-Newton, Chandra, and other NASA missions. Special attention is given to the visualization of both data and models, including dynamic interactive presentations. This work was performed under the auspices of the Department of Energy under contract No. W-7405-Eng-48. We thank Peter Beiersdorfer and Greg Brown for their support of this technical portion of a larger program related to science with the LLNL EBIT program.
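
    The bootstrap half of the progression is conceptually simple: resample the photon list with replacement, recompute the quantity of interest, and summarize the spread. The sketch below shows a generic percentile bootstrap on simulated photon energies; it stands in for, and is not, the PCM pipeline.

```python
import numpy as np

def bootstrap_ci(data, statistic, n_boot=2000, alpha=0.05, seed=0):
    """Nonparametric percentile-bootstrap confidence interval for a statistic;
    a generic sketch of the resampling step, not the PCM pipeline itself."""
    data = np.asarray(data)
    rng = np.random.default_rng(seed)
    n = len(data)
    stats = np.array([statistic(data[rng.integers(0, n, n)])
                      for _ in range(n_boot)])
    lo, hi = np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi

# Example: uncertainty on the mean photon energy of a simulated event list.
rng = np.random.default_rng(1)
photon_energies = rng.exponential(scale=2.0, size=500)   # keV, illustrative
print(bootstrap_ci(photon_energies, np.mean))
```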

  7. Pattern variation of fish fingerling abundance in the Na Thap Tidal river of Southern Thailand: 2005-2015

    NASA Astrophysics Data System (ADS)

    Donroman, T.; Chesoh, S.; Lim, A.

    2018-04-01

    This study aimed to investigate the variation patterns of fish fingerling abundance based on month, year and sampling site. Monthly data sets collected from the Na Thap tidal river of southern Thailand were obtained from June 2005 to October 2015. A square root transformation was employed to maintain the normality of the fingerling data. Factor analysis was applied to cluster the fingerling species and multiple linear regression was used to examine the association between fingerling density and year, month and site. Results from the factor analysis classified fingerlings into three factors based on saline preference: saline water, freshwater and ubiquitous species. The results showed a highly significant relation between fingerling density and month, year and site. The abundance patterns of saline water and ubiquitous fingerling densities were similar. The downstream site showed the highest fingerling density, whereas most freshwater fingerlings occurred upstream. These findings confirm that factor analysis and the general linear regression method can be used as effective tools for predicting and monitoring wild fingerling density in order to sustain fish stock management.

  8. Variational Identification of Markovian Transition States

    NASA Astrophysics Data System (ADS)

    Martini, Linda; Kells, Adam; Covino, Roberto; Hummer, Gerhard; Buchete, Nicolae-Viorel; Rosta, Edina

    2017-07-01

    We present a method that enables the identification and analysis of conformational Markovian transition states from atomistic or coarse-grained molecular dynamics (MD) trajectories. Our algorithm is presented by using both analytical models and examples from MD simulations of the benchmark system helix-forming peptide Ala5 , and of larger, biomedically important systems: the 15-lipoxygenase-2 enzyme (15-LOX-2), the epidermal growth factor receptor (EGFR) protein, and the Mga2 fungal transcription factor. The analysis of 15-LOX-2 uses data generated exclusively from biased umbrella sampling simulations carried out at the hybrid ab initio density functional theory (DFT) quantum mechanics/molecular mechanics (QM/MM) level of theory. In all cases, our method automatically identifies the corresponding transition states and metastable conformations in a variationally optimal way, with the input of a set of relevant coordinates, by accurately reproducing the intrinsic slowest relaxation rate of each system. Our approach offers a general yet easy-to-implement analysis method that provides unique insight into the molecular mechanism and the rare but crucial (i.e., rate-limiting) transition states occurring along conformational transition paths in complex dynamical systems such as molecular trajectories.

  9. Spatio-Temporal Analysis of Smear-Positive Tuberculosis in the Sidama Zone, Southern Ethiopia

    PubMed Central

    Dangisso, Mesay Hailu; Datiko, Daniel Gemechu; Lindtjørn, Bernt

    2015-01-01

    Background Tuberculosis (TB) is a disease of public health concern, with a varying distribution across settings depending on socio-economic status, HIV burden, and the availability and performance of the health system. Ethiopia is a country with a high burden of TB, with regional variations in TB case notification rates (CNRs). However, TB program reports are often compiled and reported at higher administrative units that do not show the burden at lower units, so there is limited information about the spatial distribution of the disease. We therefore aim to assess the spatial distribution and presence of spatio-temporal clustering of the disease in different geographic settings over 10 years in the Sidama Zone in southern Ethiopia. Methods Retrospective space-time and spatial analyses were carried out at the kebele level (the lowest administrative unit within a district) to identify spatial and space-time clusters of smear-positive pulmonary TB (PTB). Scan statistics, Global Moran's I, and Getis-Ord (Gi*) statistics were used to analyze the spatial distribution and clusters of the disease across settings. Results A total of 22,545 smear-positive PTB cases notified over 10 years were used for spatial analysis. In a purely spatial analysis, we identified the most likely cluster of smear-positive PTB in 192 kebeles in eight districts (RR = 2, p < 0.001), with 12,155 observed and 8,668 expected cases. The Gi* statistic also identified clusters in the same areas, and the spatial clusters showed stability in most areas in each year during the study period. The space-time analysis also detected the most likely cluster in 193 kebeles in the same eight districts (RR = 1.92, p < 0.001), with 7,584 observed and 4,738 expected cases in 2003-2012. Conclusion The study found variations in CNRs and significant spatio-temporal clusters of smear-positive PTB in the Sidama Zone. The findings can be used to guide TB control programs to devise effective TB control strategies for the geographic areas characterized by the highest CNRs. Further studies are required to understand the factors associated with clustering based on individual-level locations and investigation of cases. PMID:26030162
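
    Of the statistics mentioned, Global Moran's I is the most compact to state: I = (n / S0) * sum_ij w_ij (x_i - x_bar)(x_j - x_bar) / sum_i (x_i - x_bar)^2, where w_ij are spatial weights and S0 is their sum. The sketch below is a textbook implementation on a toy adjacency structure, not the scan-statistic software used in the study.

```python
import numpy as np

def morans_i(values, weights):
    """Global Moran's I for spatial autocorrelation.

    values:  1-D array of rates (e.g. case notification rates per kebele).
    weights: (n, n) spatial weights matrix (e.g. 1 for adjacent units, else 0).
    A textbook sketch, not the scan-statistic software used in the study.
    """
    x = np.asarray(values, dtype=float)
    w = np.asarray(weights, dtype=float)
    n = len(x)
    z = x - x.mean()
    s0 = w.sum()
    return (n / s0) * (z @ w @ z) / (z @ z)

# Tiny illustrative example: four units on a line, neighbours share an edge.
w = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(morans_i([10.0, 12.0, 30.0, 33.0], w))   # positive: neighbours are similar
```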

  10. Generating Vegetation Leaf Area Index Earth System Data Record from Multiple Sensors. Part 2; Implementation, Analysis and Validation

    NASA Technical Reports Server (NTRS)

    Ganguly, Sangram; Samanta, Arindam; Schull, Mitchell A.; Shabanov, Nikolay V.; Milesi, Cristina; Nemani, Ramakrishna R.; Knyazikhin, Yuri; Myneni, Ranga B.

    2008-01-01

    The evaluation of a new global monthly leaf area index (LAI) data set for the period July 1981 to December 2006 derived from AVHRR Normalized Difference Vegetation Index (NDVI) data is described. The physically based algorithm is detailed in the first paper of this two-part series. Here, the implementation, production and evaluation of the data set are described. The data set is evaluated both by direct comparisons to ground data and indirectly through inter-comparisons with similar data sets. This indirect validation showed satisfactory agreement with existing LAI products, notably MODIS, at a range of spatial scales, and significant correlations with key climate variables in areas where temperature and precipitation limit plant growth. The data set successfully reproduced well-documented spatio-temporal trends and inter-annual variations in vegetation activity in the northern latitudes and semi-arid tropics. Comparison with plot-scale field measurements over homogeneous vegetation patches indicated a 7% underestimation when all major vegetation types are taken into account. The error in mean values obtained from distributions of AVHRR LAI and high-resolution field LAI maps for different biomes is within 0.5 LAI for six out of the ten selected sites. These validation exercises, though limited by the amount of field data and thus less than comprehensive, indicated satisfactory agreement between the LAI product and field measurements. Overall, the intercomparison with short-term LAI data sets, the evaluation of long-term trends against known variations in climate variables, and the validation with field measurements together build confidence in the utility of this new 26-year LAI record for long-term vegetation monitoring and modeling studies.

  11. Multi-Pulse Excitation for Underwater Analysis of Copper-Based Alloys Using a Novel Remote Laser-Induced Breakdown Spectroscopy (LIBS) System.

    PubMed

    Guirado, Salvador; Fortes, Francisco J; Laserna, J Javier

    2016-04-01

    In this work, the use of multi-pulse excitation has been evaluated as an effective solution to mitigate the preferential ablation of the most volatile elements, namely Sn, Pb, and Zn, observed during laser-induced breakdown spectroscopy (LIBS) analysis of copper-based alloys. The novel remote LIBS prototype used in these experiments features both single-pulse (SP-LIBS) and multi-pulse (MP-LIBS) excitation. The remote instrument is capable of performing chemical analysis of submersed materials up to a depth of 50 m. LIBS analysis was performed at air pressure settings simulating the conditions of a real subsea analysis. A set of five certified bronze standards with variable concentrations of Cu, As, Sn, Pb, and Zn was used. In SP-LIBS, signal emission is strongly sensitive to ambient pressure, and in this case a fractionation effect was observed. Multi-pulse excitation circumvents the effect of pressure on the quantitative analysis, thus avoiding the fractionation phenomena observed in single-pulse LIBS. The use of copper as an internal standard minimizes matrix effects and discrepancies due to variation in ablated mass. © The Author(s) 2016.

  12. Statistical tables and charts showing geochemical variation in the Mesoproterozoic Big Creek, Apple Creek, and Gunsight formations, Lemhi group, Salmon River Mountains and Lemhi Range, central Idaho

    USGS Publications Warehouse

    Lindsey, David A.; Tysdal, Russell G.; Taggart, Joseph E.

    2002-01-01

    The principal purpose of this report is to provide a reference archive for results of a statistical analysis of geochemical data for metasedimentary rocks of Mesoproterozoic age of the Salmon River Mountains and Lemhi Range, central Idaho. Descriptions of geochemical data sets, statistical methods, rationale for interpretations, and references to the literature are provided. Three methods of analysis are used: R-mode factor analysis of major oxide and trace element data for identifying petrochemical processes, analysis of variance for effects of rock type and stratigraphic position on chemical composition, and major-oxide ratio plots for comparison with the chemical composition of common clastic sedimentary rocks.

  13. Modern Era Retrospective-analysis for Research and Applications (MERRA) Global Water and Energy Budgets

    NASA Technical Reports Server (NTRS)

    Bosilovich, Michael G.; Robertson, Franklin R.; Chen, Junye

    2008-01-01

    The Modern Era Retrospective-analysis for Research and Applications (MERRA) reanalysis has produced several years of data, on the way to completing the 1979-present modern satellite era. Here, we present a preliminary evaluation of the years currently available, including comparisons with the existing long reanalyses (ERA40, JRA25 and NCEP I and II) as well as with global data sets for the water and energy cycle. Time series show that the MERRA budgets can change with some of the variations in observing systems. We will present all terms of the budgets in MERRA, including the time rates of change and analysis increments (the tendency due to the analysis of observations).

  14. Variation in assessment and standard setting practices across UK undergraduate medicine and the need for a benchmark.

    PubMed

    MacDougall, Margaret

    2015-10-31

    The principal aim of this study is to provide an account of variation in UK undergraduate medical assessment styles and corresponding standard setting approaches with a view to highlighting the importance of a UK national licensing exam in recognizing a common standard. Using a secure online survey system, response data were collected during the period 13 - 30 January 2014 from selected specialists in medical education assessment, who served as representatives for their respective medical schools. Assessment styles and corresponding choices of standard setting methods vary markedly across UK medical schools. While there is considerable consensus on the application of compensatory approaches, individual schools display their own nuances through use of hybrid assessment and standard setting styles, uptake of less popular standard setting techniques and divided views on norm referencing. The extent of variation in assessment and standard setting practices across UK medical schools validates the concern that there is a lack of evidence that UK medical students achieve a common standard on graduation. A national licensing exam is therefore a viable option for benchmarking the performance of all UK undergraduate medical students.

  15. Variation in Nephrologist Visits to Patients on Hemodialysis across Dialysis Facilities and Geographic Locations

    PubMed Central

    Tan, Kelvin B.; Winkelmayer, Wolfgang C.; Chertow, Glenn M.; Bhattacharya, Jay

    2013-01-01

    Summary Background and objectives Geographic and other variations in medical practices lead to differences in medical costs, often without a clear link to health outcomes. This work examined variation in the frequency of physician visits to patients receiving hemodialysis to measure the relative importance of provider practice patterns (including those patterns linked to geographic region) and patient health in determining visit frequency. Design, setting, participants, & measurements This work analyzed a nationally representative 2006 database of patients receiving hemodialysis in the United States. A variation decomposition analysis of the relative importance of facility, geographic region, and patient characteristics—including demographics, socioeconomic status, and indicators of health status—in explaining physician visit frequency variation was conducted. Finally, the associations between facility, geographic and patient characteristics, and provider visit frequency were measured using multivariable regression. Results Patient characteristics accounted for only 0.9% of the total visit frequency variation. Accounting for case-mix differences, patients’ hemodialysis facilities explained about 24.9% of visit frequency variation, of which 9.3% was explained by geographic region. Visit frequency was more closely associated with many facility and geographic characteristics than indicators of health status. More recent dialysis initiation and recent hospitalization were associated with decreased visit frequency. Conclusions In hemodialysis, provider visit frequency depends more on geography and facility location and characteristics than patients’ health status or acuity of illness. The magnitude of variation unrelated to patient health suggests that provider visit frequency practices do not reflect optimal management of patients on dialysis. PMID:23430207

  16. Evaluating abundance and trends in a Hawaiian avian community using state-space analysis

    USGS Publications Warehouse

    Camp, Richard J.; Brinck, Kevin W.; Gorresen, P.M.; Paxton, Eben H.

    2016-01-01

    Estimating population abundances and patterns of change over time are important in both ecology and conservation. Trend assessment typically entails fitting a regression to a time series of abundances to estimate population trajectory. However, changes in abundance estimates from year-to-year across time are due to both true variation in population size (process variation) and variation due to imperfect sampling and model fit. State-space models are a relatively new method that can be used to partition the error components and quantify trends based only on process variation. We compare a state-space modelling approach with a more traditional linear regression approach to assess trends in uncorrected raw counts and detection-corrected abundance estimates of forest birds at Hakalau Forest National Wildlife Refuge, Hawai‘i. Most species demonstrated similar trends using either method. In general, evidence for trends using state-space models was less strong than for linear regression, as measured by estimates of precision. However, while the state-space models may sacrifice precision, the expectation is that these estimates provide a better representation of the real world biological processes of interest because they are partitioning process variation (environmental and demographic variation) and observation variation (sampling and model variation). The state-space approach also provides annual estimates of abundance which can be used by managers to set conservation strategies, and can be linked to factors that vary by year, such as climate, to better understand processes that drive population trends.
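
    A minimal example of the partitioning described above is the local-level state-space model, which separates a random-walk process variance from an observation variance. The sketch below fits such a model to simulated counts with statsmodels; the data and variance values are illustrative, not the Hakalau estimates.

```python
import numpy as np
import statsmodels.api as sm

# Simulated annual abundance index: the true population follows a random walk
# (process variation) and surveys add observation noise (illustrative values).
rng = np.random.default_rng(3)
true_level = 100 + np.cumsum(rng.normal(0, 2.0, 25))    # process variation
observed = true_level + rng.normal(0, 5.0, 25)          # observation error

# Local-level state-space model: estimates the two variance components
# separately and returns smoothed estimates of the underlying abundance.
model = sm.tsa.UnobservedComponents(observed, level='local level')
fit = model.fit(disp=False)
print(dict(zip(model.param_names, fit.params)))   # observation vs process variance
print(fit.smoothed_state[0, -5:])                 # smoothed abundance, last 5 years
```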

  17. How healthcare states matter: comparing the introduction of clinical standards in Britain and Germany.

    PubMed

    Burau, Viola; Fenton, Laura

    2009-01-01

    This paper aims to identify variation in the introduction of New Public Management reforms in healthcare and how this variation is related to country-specific healthcare states. The analysis uses the introduction of clinical standards in Britain and Germany as cases. The two countries are characterised by interesting differences in relation to the institutional set-up of healthcare states and as such present ideal cases to explore the specific ways of how healthcare states filter clinical standards as tools of a generic managerialism. Both countries have introduced clinical standards but, importantly, the substantive nature of clinical standards differs, reflecting differences in initial institutional conditions. More specifically, in Britain clinical standards have taken the form of two parallel policies, which strengthen hierarchy-based governing and redefine professional self-regulation. In Germany, by contrast, clinical standards come in one single policy, which strengthens the hybrid of network- and hierarchy-based governing and to some extent also pure hierarchy-based forms of governing. First, with its cross-country comparative focus, the analysis is able to identify systematic variations across healthcare states and the specific ways in which they impact on the introduction of New Public Management. Second, with its focus on clinical standards, the analysis deals with the governance of medical practice as one of the central areas of healthcare states.

  18. A Spatio-Temporal Approach for Global Validation and Analysis of MODIS Aerosol Products

    NASA Technical Reports Server (NTRS)

    Ichoku, Charles; Chu, D. Allen; Mattoo, Shana; Kaufman, Yoram J.; Remer, Lorraine A.; Tanre, Didier; Slutsker, Ilya; Holben, Brent N.; Lau, William K. M. (Technical Monitor)

    2001-01-01

    With the launch of the MODIS sensor on the Terra spacecraft, new data sets of the global distribution and properties of aerosol are being retrieved, and need to be validated and analyzed. A system has been put in place to generate spatial statistics (mean, standard deviation, direction and rate of spatial variation, and spatial correlation coefficient) of the MODIS aerosol parameters over more than 100 validation sites spread around the globe. Corresponding statistics are also computed from temporal subsets of AERONET-derived aerosol data. The means and standard deviations of identical parameters from MODIS and AERONET are compared. Although their means compare favorably, their standard deviations reveal some influence of surface effects on the MODIS aerosol retrievals over land, especially at low aerosol loading. The direction and rate of spatial variation from MODIS are used to study the spatial distribution of aerosols at various locations, either individually or comparatively. This paper introduces the methodology for generating and analyzing the data sets used by the two MODIS aerosol validation papers in this issue.

  19. Relationship between health services, socioeconomic variables and inadequate weight gain among Brazilian children.

    PubMed Central

    de Souza, A. C.; Peterson, K. E.; Cufino, E.; Gardner, J.; Craveiro, M. V.; Ascherio, A.

    1999-01-01

    This ecological analysis assessed the relative contribution of behavioural, health services and socioeconomic variables to inadequate weight gain in infants (0-11 months) and children (12-23 months) in 140 municipalities in the State of Ceara, north-east Brazil. To assess the total effect of selected variables, we fitted three unique sets of multivariate linear regression models to the prevalence of inadequate weight gain in infants and in children. The final predictive models included variables from the three sets. Findings showed that participation in growth monitoring and urbanization were inversely and significantly associated with the prevalence of inadequate weight gain in infants, accounting for 38.3% of the variation. Female illiteracy rate, participation in growth monitoring and degree of urbanization were all positively associated with prevalence of inadequate weight gain in children. Together, these factors explained 25.6% of the variation. Our results suggest that efforts to reduce the average municipality-specific female illiteracy rate, in combination with participation in growth monitoring, may be effective in reducing municipality-level prevalence of inadequate weight gain in infants and children in Ceara. PMID:10612885

  20. Relationship between health services, socioeconomic variables and inadequate weight gain among Brazilian children.

    PubMed

    de Souza, A C; Peterson, K E; Cufino, E; Gardner, J; Craveiro, M V; Ascherio, A

    1999-01-01

    This ecological analysis assessed the relative contribution of behavioural, health services and socioeconomic variables to inadequate weight gain in infants (0-11 months) and children (12-23 months) in 140 municipalities in the State of Ceara, north-east Brazil. To assess the total effect of selected variables, we fitted three unique sets of multivariate linear regression models to the prevalence of inadequate weight gain in infants and in children. The final predictive models included variables from the three sets. Findings showed that participation in growth monitoring and urbanization were inversely and significantly associated with the prevalence of inadequate weight gain in infants, accounting for 38.3% of the variation. Female illiteracy rate, participation in growth monitoring and degree of urbanization were all positively associated with prevalence of inadequate weight gain in children. Together, these factors explained 25.6% of the variation. Our results suggest that efforts to reduce the average municipality-specific female illiteracy rate, in combination with participation in growth monitoring, may be effective in reducing municipality-level prevalence of inadequate weight gain in infants and children in Ceara.

  1. Improving OCD time to solution using Signal Response Metrology

    NASA Astrophysics Data System (ADS)

    Fang, Fang; Zhang, Xiaoxiao; Vaid, Alok; Pandev, Stilian; Sanko, Dimitry; Ramanathan, Vidya; Venkataraman, Kartik; Haupt, Ronny

    2016-03-01

    In recent technology nodes, advanced processes and novel integration schemes have challenged the precision limits of conventional metrology as critical dimensions (CD) of devices shrink into the sub-nanometer regime. Optical metrology has proven its capability to precisely detect intricate details of complex structures; however, conventional RCWA-based (rigorous coupled wave analysis) scatterometry suffers from long time-to-results and a lack of flexibility to adapt to wide process variations. Signal Response Metrology (SRM) is a new metrology technique that aims to reduce the consumption of engineering and computation resources by eliminating geometric/dispersion modeling and spectral simulation from the workflow. This is achieved by directly correlating the spectra acquired from a set of wafers with known, encoded process variations. At SPIE 2015, we presented the results of an SRM application in lithography metrology and control [1], in which a new measurement recipe for focus/dose monitoring was set up in hours. This work demonstrates our recent field exploration of SRM implementation at the 20nm technology node and beyond, including focus metrology for scanner control, post-etch geometric profile measurement, and actual device profile metrology.

  2. Analysis of Nonlinear Dynamics in Linear Compressors Driven by Linear Motors

    NASA Astrophysics Data System (ADS)

    Chen, Liangyuan

    2018-03-01

    The analysis of the dynamic characteristics of the mechatronic system is of great significance for linear motor design and control. The steady-state nonlinear response characteristics of a linear compressor are investigated theoretically based on linearized and nonlinear models. First, the influence factors were analyzed considering the nonlinear gas force load. Then, a simple linearized model was set up to analyze the influence on the stroke and resonance frequency. Finally, the nonlinear model was set up to analyze the effects of piston mass, spring stiffness, and driving force as examples of design parameter variation. The simulation results show that the stroke can be controlled by adjusting the excitation amplitude and frequency, the equilibrium position can be adjusted through the DC input, and, for the most efficient operation, the operating frequency must always equal the resonance frequency.
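
    As a worked illustration of the resonance condition stated above, the sketch below computes the natural frequency of a simple spring-mass model of the moving assembly, treating the gas load as an additional linearized spring. All numerical values are assumed for illustration and are not taken from the paper.

    ```python
    import numpy as np

    # Undamped natural frequency of a spring-mass model of the moving assembly,
    # f = sqrt(k_total / m) / (2*pi); values are illustrative only.
    m = 0.5                 # moving mass (piston plus mover), kg
    k_mech = 1.0e4          # mechanical spring stiffness, N/m
    k_gas = 2.0e3           # linearized gas-spring stiffness, N/m
    f_res = np.sqrt((k_mech + k_gas) / m) / (2 * np.pi)
    print(f_res)            # Hz; the drive frequency should track this value
    ```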

  3. Extended Poisson process modelling and analysis of grouped binary data.

    PubMed

    Faddy, Malcolm J; Smith, David M

    2012-05-01

    A simple extension of the Poisson process results in binomially distributed counts of events in a time interval. A further extension generalises this to probability distributions under- or over-dispersed relative to the binomial distribution. Substantial levels of under-dispersion are possible with this modelling, but only modest levels of over-dispersion - up to Poisson-like variation. Although simple analytical expressions for the moments of these probability distributions are not available, approximate expressions for the mean and variance are derived, and used to re-parameterise the models. The modelling is applied in the analysis of two published data sets, one showing under-dispersion and the other over-dispersion. More appropriate assessment of the precision of estimated parameters and reliable model checking diagnostics follow from this more general modelling of these data sets. © 2012 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. A method to eliminate the influence of incident light variations in spectral analysis

    NASA Astrophysics Data System (ADS)

    Luo, Yongshun; Li, Gang; Fu, Zhigang; Guan, Yang; Zhang, Shengzhao; Lin, Ling

    2018-06-01

    The intensity of the light source and consistency of the spectrum are the most important factors influencing the accuracy in quantitative spectrometric analysis. An efficient "measuring in layer" method was proposed in this paper to limit the influence of inconsistencies in the intensity and spectrum of the light source. In order to verify the effectiveness of this method, a light source with a variable intensity and spectrum was designed according to Planck's law and Wien's displacement law. Intra-lipid samples with 12 different concentrations were prepared and divided into modeling sets and prediction sets according to different incident lights and solution concentrations. The spectra of each sample were measured with five different light intensities. The experimental results showed that the proposed method was effective in eliminating the influence caused by incident light changes and was more effective than normalized processing.

  5. A comment on measuring the Hurst exponent of financial time series

    NASA Astrophysics Data System (ADS)

    Couillard, Michel; Davison, Matt

    2005-03-01

    A fundamental hypothesis of quantitative finance is that stock price variations are independent and can be modeled using Brownian motion. In recent years, it was proposed to use rescaled range analysis and its characteristic value, the Hurst exponent, to test for independence in financial time series. Theoretically, independent time series should be characterized by a Hurst exponent of 1/2. However, finite Brownian motion data sets will always give a value of the Hurst exponent larger than 1/2 and without an appropriate statistical test such a value can mistakenly be interpreted as evidence of long term memory. We obtain a more precise statistical significance test for the Hurst exponent and apply it to real financial data sets. Our empirical analysis shows no long-term memory in some financial returns, suggesting that Brownian motion cannot be rejected as a model for price dynamics.
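
    A minimal rescaled-range (R/S) estimator of the Hurst exponent, in the spirit of the analysis described above, can be sketched as follows; the window scheme and the synthetic white-noise test are illustrative choices, not the authors' exact procedure.

    ```python
    import numpy as np

    def hurst_rs(series, min_chunk=8):
        """Estimate the Hurst exponent of a 1-D series by rescaled-range (R/S)
        analysis: regress log(R/S) against log(window size)."""
        series = np.asarray(series, dtype=float)
        n = series.size
        sizes, rs_values = [], []
        size = min_chunk
        while size <= n // 2:
            rs_per_window = []
            for start in range(0, n - size + 1, size):
                window = series[start:start + size]
                dev = np.cumsum(window - window.mean())   # cumulative deviations
                r = dev.max() - dev.min()                 # range
                s = window.std(ddof=1)                    # standard deviation
                if s > 0:
                    rs_per_window.append(r / s)
            if rs_per_window:
                sizes.append(size)
                rs_values.append(np.mean(rs_per_window))
            size *= 2
        slope, _ = np.polyfit(np.log(sizes), np.log(rs_values), 1)
        return slope

    # Hurst exponent of Gaussian white noise; finite samples tend to give a
    # value somewhat above 0.5, which is exactly the bias discussed above.
    rng = np.random.default_rng(1)
    print(hurst_rs(rng.standard_normal(4096)))
    ```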

  6. Automated X-Ray Diffraction of Irradiated Materials

    DOE PAGES

    Rodman, John; Lin, Yuewei; Sprouster, David; ...

    2017-10-26

    Synchrotron-based X-ray diffraction (XRD) and small-angle X-ray scattering (SAXS) characterization techniques used on unirradiated and irradiated reactor pressure vessel steels yield large amounts of data. Machine learning techniques, including PCA, offer a novel method of analyzing and visualizing these large data sets in order to determine the effects of chemistry and irradiation conditions on the formation of radiation-induced precipitates. In order to run analysis on these data sets, preprocessing must be carried out to convert the data to a usable format and mask the 2-D detector images to account for experimental variations. Once the data have been preprocessed, they can be organized and visualized using principal component analysis (PCA), multi-dimensional scaling, and k-means clustering. In conclusion, these techniques show that sample chemistry has a notable effect on the formation of radiation-induced precipitates in reactor pressure vessel steels.
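
    A compact sketch of the PCA plus k-means workflow mentioned above is given below, using scikit-learn on a hypothetical matrix of integrated diffraction patterns; the data shapes and cluster count are assumptions for illustration only.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler
    from sklearn.cluster import KMeans

    # Hypothetical matrix of azimuthally integrated XRD patterns:
    # one row per sample, one column per 2-theta bin.
    rng = np.random.default_rng(2)
    patterns = rng.random((60, 500))

    # Standardize, reduce with PCA, then group samples with k-means.
    scaled = StandardScaler().fit_transform(patterns)
    scores = PCA(n_components=3).fit_transform(scaled)
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scores)
    print(labels[:10])
    ```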

  7. Tensor-Dictionary Learning with Deep Kruskal-Factor Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stevens, Andrew J.; Pu, Yunchen; Sun, Yannan

    We introduce new dictionary learning methods for tensor-variate data of any order. We represent each data item as a sum of Kruskal decomposed dictionary atoms within the framework of beta-process factor analysis (BPFA). Our model is nonparametric and can infer the tensor rank of each dictionary atom. This Kruskal-Factor Analysis (KFA) is a natural generalization of BPFA. We also extend KFA to a deep convolutional setting and develop online learning methods. We test our approach on image processing and classification tasks, achieving state-of-the-art results for 2D and 3D inpainting and Caltech 101. The experiments also show that atom rank impacts both overcompleteness and sparsity.

  8. Practical Methods for the Analysis of Voltage Collapse in Electric Power Systems: a Stationary Bifurcations Viewpoint.

    NASA Astrophysics Data System (ADS)

    Jean-Jumeau, Rene

    1993-03-01

    Voltage collapse (VC) is generally caused by either of two types of system disturbances: load variations and contingencies. In this thesis, we study VC resulting from load variations, termed static voltage collapse. The thesis deals with this type of voltage collapse in electrical power systems from a stationary bifurcations viewpoint, associating it with the occurrence of saddle node bifurcations (SNB) in the system. Approximate models are generically used in most VC analyses; we consider the validity of these models for the study of SNB and, thus, of voltage collapse. We justify the use of the saddle node bifurcation as a model for VC in power systems. In particular, we prove that this leads to the definition of a model and, since load demand is used as a parameter for that model, of a mode of parameterization of that model to represent actual power demand variations within the power system network. Ill-conditioning of the set of nonlinear equations defining a dynamical system is a generic occurrence near the SNB point; we suggest a reparameterization of the set of nonlinear equations that avoids this problem. A new indicator for the proximity of voltage collapse, the voltage collapse index (VCI), is developed. A new (n + 1)-dimensional set of characteristic equations for the computation of the exact SNB point, replacing the standard (2n + 1)-dimensional one, is presented for general parameter-dependent nonlinear dynamical systems. These results are then applied to electric power systems for the analysis and prediction of voltage collapse. The new methods offer the potential of faster computation and greater flexibility. For clarity of theoretical development, the preceding methodologies are developed under the assumption of no constraints on the system parameters and states and of full differentiability of the functions defining the power system model. In the latter part of this thesis, we relax these assumptions in order to develop a framework and new formulation for applying the tools previously developed to the analysis and prediction of voltage collapse in practical power system models, which include numerous constraints and discontinuities. Illustrations and numerical simulations throughout the thesis support our results.

  9. Exploring and Analyzing Climate Variations Online by Using MERRA-2 data at GES DISC

    NASA Astrophysics Data System (ADS)

    Shen, S.; Ostrenga, D.; Vollmer, B.; Kempler, S.

    2016-12-01

    NASA Giovanni (Geospatial Interactive Online Visualization ANd aNalysis Infrastructure) (http://giovanni.sci.gsfc.nasa.gov/giovanni/) is a web-based data visualization and analysis system developed by the Goddard Earth Sciences Data and Information Services Center (GES DISC). Current data analysis functions include Lat-Lon map, time series, scatter plot, correlation map, difference, cross-section, vertical profile, animation, etc. The system enables basic statistical analysis and comparisons of multiple variables. This web-based tool facilitates data discovery, exploration and analysis of large amounts of global and regional remote sensing and model data sets from a number of NASA data centers. Recently, long-term global assimilated atmospheric, land, and ocean data have been integrated into the system, enabling quick exploration and analysis of climate data without downloading and preprocessing the data. Example data include climate reanalysis from the NASA Modern-Era Retrospective analysis for Research and Applications, Version 2 (MERRA-2), which provides data from 1980 to the present; land data from the NASA Global Land Data Assimilation System (GLDAS), which assimilates data from 1948 to 2012; as well as ocean biological data from the NASA Ocean Biogeochemical Model (NOBM), which assimilates data from 1998 to 2012. This presentation, using surface air temperature, precipitation, ozone, and aerosol, etc. from MERRA-2, demonstrates climate variation analysis with Giovanni at selected regions.

  10. Exploring and Analyzing Climate Variations Online by Using NASA MERRA-2 Data at GES DISC

    NASA Technical Reports Server (NTRS)

    Shen, Suhung; Ostrenga, Dana M.; Vollmer, Bruce E.; Kempler, Steven J.

    2016-01-01

    NASA Giovanni (Goddard Interactive Online Visualization ANd aNalysis Infrastructure) (http://giovanni.sci.gsfc.nasa.gov/giovanni/) is a web-based data visualization and analysis system developed by the Goddard Earth Sciences Data and Information Services Center (GES DISC). Current data analysis functions include Lat-Lon map, time series, scatter plot, correlation map, difference, cross-section, vertical profile, animation, etc. The system enables basic statistical analysis and comparisons of multiple variables. This web-based tool facilitates data discovery, exploration and analysis of large amounts of global and regional remote sensing and model data sets from a number of NASA data centers. Long-term global assimilated atmospheric, land, and ocean data have been integrated into the system, enabling quick exploration and analysis of climate data without downloading and preprocessing the data. Example data include climate reanalysis data from the NASA Modern-Era Retrospective analysis for Research and Applications, Version 2 (MERRA-2), which provides data from 1980 to the present; land data from the NASA Global Land Data Assimilation System (GLDAS), which assimilates data from 1948 to 2012; as well as ocean biological data from the NASA Ocean Biogeochemical Model (NOBM), which provides data from 1998 to 2012. This presentation, using surface air temperature, precipitation, ozone, and aerosol, etc. from MERRA-2, demonstrates climate variation analysis with Giovanni at selected regions.

  11. Next generation sequencing to dissect the genetic architecture of KNG1 and F11 loci using factor XI levels as an intermediate phenotype of thrombosis.

    PubMed

    Martin-Fernandez, Laura; Gavidia-Bovadilla, Giovana; Corrales, Irene; Brunel, Helena; Ramírez, Lorena; López, Sonia; Souto, Juan Carlos; Vidal, Francisco; Soria, José Manuel

    2017-01-01

    Venous thromboembolism is a complex disease with a high heritability. There are significant associations between Factor XI (FXI) levels and SNPs in the KNG1 and F11 loci. Our aim was to identify the genetic variation of KNG1 and F11 that might account for the variability of FXI levels. The KNG1 and F11 loci were sequenced completely in 110 unrelated individuals from the GAIT-2 (Genetic Analysis of Idiopathic Thrombophilia 2) Project using Next Generation Sequencing on an Illumina MiSeq. The GAIT-2 Project is a study of 935 individuals in 35 extended Spanish families selected through a proband with idiopathic thrombophilia. Among the 110 individuals, a subset of 40 individuals was chosen as a discovery sample for identifying variants. A total of 762 genetic variants were detected. Several significant associations between common variants and sets of low-frequency variants in KNG1 and F11 and FXI levels were established using the PLINK and SKAT packages. Among these associations, those of rs710446 and five low-frequency variant sets in KNG1 with FXI level variation remained significant after multiple testing correction and permutation. Also, two putative pathogenic mutations related to high and low FXI levels were identified by data filtering and in silico predictions. This study of the KNG1 and F11 loci should help to understand the connection between genotypic variation and variation in FXI levels. The functional genetic variants should be useful as markers of thromboembolic risk.

  12. Genome-wide association study identifies SNPs in the MHC class II loci that are associated with self-reported history of whooping cough.

    PubMed

    McMahon, George; Ring, Susan M; Davey-Smith, George; Timpson, Nicholas J

    2015-10-15

    Whooping cough is currently seeing a resurgence in some countries despite high vaccine coverage. There is considerable variation in subject-specific response to infection and vaccine efficacy, but little is known about the role of human genetics. We carried out a case-control genome-wide association study of adult- or parent-reported history of whooping cough in two cohorts from the UK: the ALSPAC cohort and the 1958 British Birth Cohort (815/758 cases and 6341/4308 controls, respectively). We also imputed HLA alleles using dense SNP data in the MHC region, carried out gene-based and gene-set tests of association, and estimated the amount of additive genetic variation explained by common SNPs. We observed a novel association at SNPs in the MHC class II region in both cohorts (lead SNP rs9271768 after meta-analysis; odds ratio [95% confidence interval (CI)] 1.47 [1.35, 1.60]; P-value 1.21E-18). Multiple strong associations were also observed at alleles of the HLA class II loci. The majority of these associations were explained by the lead SNP rs9271768. Gene-based and gene-set tests and estimates of explainable common genetic variation could not establish the presence of additional associations in our sample. Genetic variation at the MHC class II region plays a role in susceptibility to whooping cough. These findings provide additional perspective on mechanisms of whooping cough infection and vaccine efficacy. © The Author 2015. Published by Oxford University Press.

  13. Evaluation of multiple forcing data sets for precipitation and shortwave radiation over major land areas of China

    NASA Astrophysics Data System (ADS)

    Yang, Fan; Lu, Hui; Yang, Kun; He, Jie; Wang, Wei; Wright, Jonathon S.; Li, Chengwei; Han, Menglei; Li, Yishan

    2017-11-01

    Precipitation and shortwave radiation play important roles in climatic, hydrological and biogeochemical cycles. Several global and regional forcing data sets currently provide historical estimates of these two variables over China, including the Global Land Data Assimilation System (GLDAS), the China Meteorological Administration (CMA) Land Data Assimilation System (CLDAS) and the China Meteorological Forcing Dataset (CMFD). The CN05.1 precipitation data set, a gridded analysis based on CMA gauge observations, also provides high-resolution historical precipitation data for China. In this study, we present an intercomparison of precipitation and shortwave radiation data from CN05.1, CMFD, CLDAS and GLDAS during 2008-2014. We also validate all four data sets against independent ground station observations. All four forcing data sets capture the spatial distribution of precipitation over major land areas of China, although CLDAS indicates smaller annual-mean precipitation amounts than CN05.1, CMFD or GLDAS. Time series of precipitation anomalies are largely consistent among the data sets, except for a sudden decrease in CMFD after August 2014. All forcing data indicate greater temporal variations relative to the mean in dry regions than in wet regions. Validation against independent precipitation observations provided by the Ministry of Water Resources (MWR) in the middle and lower reaches of the Yangtze River indicates that CLDAS provides the most realistic estimates of spatiotemporal variability in precipitation in this region. CMFD also performs well with respect to annual mean precipitation, while GLDAS fails to accurately capture much of the spatiotemporal variability and CN05.1 contains significant high biases relative to the MWR observations. Estimates of shortwave radiation from CMFD are largely consistent with station observations, while CLDAS and GLDAS greatly overestimate shortwave radiation. All three forcing data sets capture the key features of the spatial distribution, but estimates from CLDAS and GLDAS are systematically higher than those from CMFD over most of mainland China. Based on our evaluation metrics, CLDAS slightly outperforms GLDAS. CLDAS is also closer than GLDAS to CMFD with respect to temporal variations in shortwave radiation anomalies, with substantial differences among the time series. Differences in temporal variations are especially pronounced south of 34° N. Our findings provide valuable guidance for a variety of stakeholders, including land-surface modelers and data providers.
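
    Validation of a forcing product against station observations of the kind described above typically reduces to a few summary metrics; the sketch below computes bias, RMSE and correlation for a hypothetical precipitation series and is not the evaluation code used in the study.

    ```python
    import numpy as np

    def validation_metrics(gridded, observed):
        """Bias, root-mean-square error and correlation of a forcing data set
        against co-located station observations (both 1-D time series)."""
        gridded = np.asarray(gridded, dtype=float)
        observed = np.asarray(observed, dtype=float)
        bias = np.mean(gridded - observed)
        rmse = np.sqrt(np.mean((gridded - observed) ** 2))
        corr = np.corrcoef(gridded, observed)[0, 1]
        return bias, rmse, corr

    # Synthetic daily precipitation at one station and a biased product estimate
    rng = np.random.default_rng(11)
    station = rng.gamma(2.0, 2.0, 365)
    product = station * 1.1 + rng.normal(0, 1.0, 365)
    print(validation_metrics(product, station))
    ```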

  14. Data Sharing: Convert Challenges into Opportunities.

    PubMed

    Figueiredo, Ana Sofia

    2017-01-01

    Initiatives for sharing research data are opportunities to increase the pace of knowledge discovery and scientific progress. The reuse of research data has the potential to avoid the duplication of data sets and to bring new views from multiple analysis of the same data set. For example, the study of genomic variations associated with cancer profits from the universal collection of such data and helps in selecting the most appropriate therapy for a specific patient. However, data sharing poses challenges to the scientific community. These challenges are of ethical, cultural, legal, financial, or technical nature. This article reviews the impact that data sharing has in science and society and presents guidelines to improve the efficient sharing of research data.

  15. Data Sharing: Convert Challenges into Opportunities

    PubMed Central

    Figueiredo, Ana Sofia

    2017-01-01

    Initiatives for sharing research data are opportunities to increase the pace of knowledge discovery and scientific progress. The reuse of research data has the potential to avoid the duplication of data sets and to bring new views from multiple analysis of the same data set. For example, the study of genomic variations associated with cancer profits from the universal collection of such data and helps in selecting the most appropriate therapy for a specific patient. However, data sharing poses challenges to the scientific community. These challenges are of ethical, cultural, legal, financial, or technical nature. This article reviews the impact that data sharing has in science and society and presents guidelines to improve the efficient sharing of research data. PMID:29270401

  16. An approach to analyze the breast tissues in infrared images using nonlinear adaptive level sets and Riesz transform features.

    PubMed

    Prabha, S; Suganthi, S S; Sujatha, C M

    2015-01-01

    Breast thermography is a potential imaging method for the early detection of breast cancer. Pathological conditions can be determined by measuring temperature variations in the abnormal breast regions. Accurate delineation of breast tissues is reported to be a challenging task due to inherent limitations of infrared images such as low contrast, low signal to noise ratio and the absence of clear edges. A segmentation technique is used to delineate the breast tissues by detecting proper lower breast boundaries and inframammary folds. Characteristic features are extracted to analyze the asymmetrical thermal variations in normal and abnormal breast tissues. An automated analysis of thermal variations of breast tissues is attempted using nonlinear adaptive level sets and the Riesz transform. Breast thermal images are initially subjected to Stein's unbiased risk estimate based orthonormal wavelet denoising. These denoised images are enhanced using the contrast-limited adaptive histogram equalization method. The breast tissues are then segmented using a non-linear adaptive level set method. The phase map of the enhanced image is integrated into the level set framework for final boundary estimation. The segmented results are validated against the corresponding ground truth images using overlap and regional similarity metrics. The segmented images are further processed with the Riesz transform, and structural texture features are derived from the transformed coefficients to analyze pathological conditions of breast tissues. Results show that the estimated average signal to noise ratio of denoised images and the average sharpness of enhanced images are improved by 38% and 6%, respectively. The interscale consideration adopted in the denoising algorithm is able to improve the signal to noise ratio by preserving edges. The proposed segmentation framework could delineate the breast tissues with a high degree of correlation (97%) between the segmented and ground truth areas. Also, the average segmentation accuracy and sensitivity are found to be 98%. Similarly, the maximum regional overlap between segmented and ground truth images, obtained using the volume similarity measure, is observed to be 99%. Directionality as a feature showed a considerable difference (11%) between normal and abnormal tissues. The proposed framework for breast thermal image analysis, aided with the necessary preprocessing, is found to be useful in assisting the early diagnosis of breast abnormalities.
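
    One step of the preprocessing pipeline described above, contrast-limited adaptive histogram equalization, can be sketched with OpenCV; the synthetic image and CLAHE parameters below are illustrative assumptions rather than the authors' settings.

    ```python
    import numpy as np
    import cv2

    # CLAHE enhancement step on a synthetic low-contrast 8-bit "thermal" image.
    rng = np.random.default_rng(12)
    thermal = (120 + 10 * rng.standard_normal((256, 256))).clip(0, 255).astype(np.uint8)

    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(thermal)
    print(thermal.std(), enhanced.std())   # the intensity spread (contrast) increases
    ```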

  17. A multifactorial analysis of obesity as CVD risk factor: Use of neural network based methods in a nutrigenetics context

    PubMed Central

    2010-01-01

    Background Obesity is a multifactorial trait, which comprises an independent risk factor for cardiovascular disease (CVD). The aim of the current work is to study the complex etiology beneath obesity and identify genetic variations and/or factors related to nutrition that contribute to its variability. To this end, a set of more than 2300 white subjects who participated in a nutrigenetics study was used. For each subject a total of 63 factors describing genetic variants related to CVD (24 in total), gender, and nutrition (38 in total), e.g. average daily intake of calories and cholesterol, were measured. Each subject was categorized according to body mass index (BMI) as normal (BMI ≤ 25) or overweight (BMI > 25). Two artificial neural network (ANN) based methods were designed and used for the analysis of the available data. These corresponded to i) a multi-layer feed-forward ANN combined with a parameter decreasing method (PDM-ANN), and ii) a multi-layer feed-forward ANN trained by a hybrid method (GA-ANN) which combines genetic algorithms and the popular back-propagation training algorithm. Results PDM-ANN and GA-ANN were comparatively assessed in terms of their ability to identify, among the initial 63 variables describing genetic variations, nutrition and gender, the most important factors for classifying a subject into one of the BMI-related classes: normal and overweight. The methods were designed and evaluated using appropriate training and testing sets provided by 3-fold Cross Validation (3-CV) resampling. Classification accuracy, sensitivity, specificity and area under the receiver operating characteristics curve were utilized to evaluate the resulting predictive ANN models. The most parsimonious set of factors was obtained by the GA-ANN method and included gender, six genetic variations and 18 nutrition-related variables. The corresponding predictive model was characterized by a mean accuracy equal to 61.46% in the 3-CV testing sets. Conclusions The ANN based methods revealed factors that interactively contribute to the obesity trait and provided predictive models with a promising generalization ability. In general, results showed that ANNs and their hybrids can provide useful tools for the study of complex traits in the context of nutrigenetics. PMID:20825661
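
    The 3-fold cross-validated classification setup described above can be approximated with standard tools; the sketch below uses a generic scikit-learn multi-layer perceptron on synthetic data rather than the PDM-ANN or GA-ANN methods of the paper, and all data shapes and hyperparameters are assumptions.

    ```python
    import numpy as np
    from sklearn.model_selection import cross_val_score, StratifiedKFold
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.neural_network import MLPClassifier

    # Hypothetical predictor matrix: 63 columns mixing genetic variants, gender
    # and nutrition variables; binary target normal/overweight.
    rng = np.random.default_rng(3)
    X = rng.random((300, 63))
    y = (rng.random(300) > 0.5).astype(int)

    model = make_pipeline(StandardScaler(),
                          MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000,
                                        random_state=0))
    cv = StratifiedKFold(n_splits=3, shuffle=True, random_state=0)
    scores = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
    print(scores.mean())   # mean 3-fold test accuracy
    ```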

  18. Adaptation of the Practice Environment Scale for military nurses: a psychometric analysis.

    PubMed

    Swiger, Pauline A; Raju, Dheeraj; Breckenridge-Sproat, Sara; Patrician, Patricia A

    2017-09-01

    The aim of this study was to confirm the psychometric properties of the Practice Environment Scale of the Nursing Work Index in a military population. The study also demonstrates association rule analysis, a contemporary exploratory technique. One of the instruments most commonly used to evaluate the nursing practice environment is the Practice Environment Scale of the Nursing Work Index. Although the instrument has been widely used, its reliability, validity and individual item function are not commonly evaluated. Gaps exist with regard to confirmatory evaluation of the subscale factors, individual item analysis, and evaluation in the outpatient setting and with non-registered nursing staff. This was a secondary analysis of survey data collected in 2014, using multiple psychometric methods. First, descriptive analyses were conducted, including exploration using association rules. Next, internal consistency was tested and confirmatory factor analysis was performed to test the factor structure. The specified factor structure did not hold; therefore, exploratory factor analysis was performed. Finally, item analysis was executed using item response theory. The differential item functioning technique allowed the comparison of responses by care setting and nurse type. The results of this study indicate that responses differ between groups and that several individual items could be removed without altering the psychometric properties of the instrument. The instrument functions moderately well in a military population; however, researchers may want to consider nurse type and care setting during analysis to identify any meaningful variation in responses. © 2017 John Wiley & Sons Ltd.
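
    The internal-consistency step of such a psychometric analysis is commonly summarized by Cronbach's alpha; a minimal sketch on synthetic Likert-type responses is given below, and it is not the authors' analysis code.

    ```python
    import numpy as np

    def cronbach_alpha(items):
        """Internal-consistency estimate for a respondents-by-items matrix."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_variances = items.var(axis=0, ddof=1).sum()
        total_variance = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_variances / total_variance)

    # Five Likert-type items answered by 100 hypothetical respondents who share
    # a common latent trait plus item-level noise.
    rng = np.random.default_rng(4)
    latent = rng.normal(size=(100, 1))
    responses = np.clip(np.round(3 + latent + 0.8 * rng.normal(size=(100, 5))), 1, 5)
    print(cronbach_alpha(responses))
    ```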

  19. Hybrid least squares multivariate spectral analysis methods

    DOEpatents

    Haaland, David M.

    2002-01-01

    A set of hybrid least squares multivariate spectral analysis methods in which spectral shapes of components or effects not present in the original calibration step are added in a following estimation or calibration step to improve the accuracy of the estimation of the amount of the original components in the sampled mixture. The "hybrid" method herein means a combination of an initial classical least squares analysis calibration step with subsequent analysis by an inverse multivariate analysis method. A "spectral shape" herein means normally the spectral shape of a non-calibrated chemical component in the sample mixture but can also mean the spectral shapes of other sources of spectral variation, including temperature drift, shifts between spectrometers, spectrometer drift, etc. The "shape" can be continuous, discontinuous, or even discrete points illustrative of the particular effect.
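
    The initial classical least squares (CLS) step of such a hybrid approach, and the effect of appending an extra spectral shape, can be illustrated with a small numerical sketch; the Gaussian component spectra and linear baseline drift below are invented for illustration and are not taken from the patent.

    ```python
    import numpy as np

    # Hypothetical pure-component spectra (rows) over 200 wavelength channels.
    rng = np.random.default_rng(5)
    wavelengths = np.linspace(0, 1, 200)
    pure = np.vstack([np.exp(-((wavelengths - c) / 0.05) ** 2) for c in (0.3, 0.6)])

    # Mixture spectrum with a baseline drift not present in the calibration.
    true_conc = np.array([0.7, 0.2])
    drift = 0.1 * wavelengths                      # un-modelled spectral shape
    mixture = true_conc @ pure + drift + 0.01 * rng.standard_normal(200)

    # Plain CLS estimate, then CLS with the drift shape appended to the model.
    est_plain, *_ = np.linalg.lstsq(pure.T, mixture, rcond=None)
    augmented = np.vstack([pure, wavelengths])     # add the extra spectral shape
    est_aug, *_ = np.linalg.lstsq(augmented.T, mixture, rcond=None)
    print(est_plain, est_aug[:2])                  # the augmented fit is less biased
    ```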

  20. Organizational performance impacting patient satisfaction in Ontario hospitals: a multilevel analysis.

    PubMed

    Koné Péfoyo, Anna J; Wodchis, Walter P

    2013-12-05

    Patient satisfaction in health care constitutes an important component of organizational performance in the hospital setting. Satisfaction measures have been developed and used to evaluate and improve hospital performance, quality of care and physician practice. In order to direct improvement strategies, it is necessary to evaluate both individual and organizational factors that can impact patients' perception of care. The study aims were to determine the dimensions of patient satisfaction, and to analyze the individual and organizational determinants of satisfaction dimensions in hospitals. We used patient and hospital survey data as well as administrative data collected for a 2008 public hospital report in Ontario, Canada. We evaluated the clustering of patient survey items with exploratory factor analysis and derived plausible dimensions of satisfaction. A two-level multivariate model was fitted to analyze the determinants of satisfaction. We found eight satisfaction factors, with acceptable to good level of loadings and good reliability. More than 95% of variation in patient satisfaction scores was attributable to patient-level variation, with less than 5% attributable to hospital-level variation. The hierarchical models explain 5 to 17% of variation at the patient level and up to 52% of variation between hospitals. Individual patient characteristics had the strongest association with all dimensions of satisfaction. Few organizational performance indicators are associated with patient satisfaction and significant determinants differ according to the satisfaction dimension. The research findings highlight the importance of adjusting for both patient-level and organization-level characteristics when evaluating patient satisfaction. Better understanding and measurement of organization-level activities and processes associated with patient satisfaction could contribute to improved satisfaction ratings and care quality.
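
    A two-level random-intercept model of the kind used above can be fitted with statsmodels; the sketch below uses synthetic patients nested in hospitals, with covariate names chosen for illustration only.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical data: patients nested within hospitals, a patient-level
    # covariate (age) and a hospital-level covariate (teaching status).
    rng = np.random.default_rng(6)
    n_hosp, n_per = 20, 50
    hosp = np.repeat(np.arange(n_hosp), n_per)
    hosp_effect = rng.normal(0, 0.3, n_hosp)[hosp]
    age = rng.normal(60, 15, n_hosp * n_per)
    teaching = np.repeat(rng.integers(0, 2, n_hosp), n_per)
    satisfaction = (70 + 0.05 * age + 2 * teaching + hosp_effect
                    + rng.normal(0, 5, n_hosp * n_per))
    df = pd.DataFrame(dict(satisfaction=satisfaction, age=age,
                           teaching=teaching, hospital=hosp))

    # Random-intercept model: variance is split into hospital-level and
    # patient-level components, mirroring the two-level analysis described above.
    model = smf.mixedlm("satisfaction ~ age + teaching", df, groups=df["hospital"])
    result = model.fit()
    print(result.summary())
    ```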

  1. Continuous monitoring of setting and hardening of mortar using FBG sensors

    NASA Astrophysics Data System (ADS)

    Lima, H.; Ribeiro, R.; Nogueira, R.; Silva, L.; Abe, I.; Pinto, J. L.

    2007-05-01

    The use of fibre Bragg grating sensors to study mortars' dimensional variations during the setting process is reported. When determining a mortar's potential to fissure, it is important to know its total retraction. This means it is necessary to know not only the mortar's retraction after it has hardened, but also how much it retracts during the plastic phase. This work presents a technique which allows dimensional variations, either expansion or retraction, to be measured during the whole setting process. Temperature and strain evolution during both the plastic and hardened phases of the mortar were obtained, allowing the determination of dimensional variations and setting times. Due to its high speed, ease of implementation and low operation costs, this technique will allow a deeper knowledge of the effects of several additives on the mortar's behaviour, making it possible to improve its mechanical properties through the determination of the proper chemical composition.

  2. HIV-1 sequence variation between isolates from mother-infant transmission pairs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wike, C.M.; Daniels, M.R.; Furtado, M.

    1991-12-31

    To examine the sequence diversity of human immunodeficiency virus type 1 (HIV-1) between known transmission sets, sequences from the V3 and V4-V5 regions of the env gene from 4 mother-infant pairs were analyzed. The mean interpatient sequence variation between isolates from linked mother-infant pairs was comparable to the sequence diversity found between isolates from other close contacts. The mean intrapatient variation was significantly less in the infants' isolates than in the isolates from both their mothers and other characterized intrapatient sequence sets. In addition, a distinct and characteristic difference in the glycosylation pattern preceding the V3 loop was found between each linked transmission pair. These findings indicate that selection of specific genotypic variants, which may play a role in some direct transmission sets, and the duration of infection are important factors in the degree of diversity seen between the sequence sets.

  3. Cubic Zig-Zag Enrichment of the Classical Kirchhoff Kinematics for Laminated and Sandwich Plates

    NASA Technical Reports Server (NTRS)

    Nemeth, Michael P.

    2012-01-01

    A detailed analysis and examples are presented that show how to enrich the kinematics of classical Kirchhoff plate theory by appending to them a set of continuous piecewise-cubic functions. This analysis is used to obtain functions that contain the effects of laminate heterogeneity and asymmetry on the variations of the inplane displacements and transverse shearing stresses, for use with a {3, 0} plate theory in which these distributions are specified a priori. The functions used for the enrichment are based on the improved zig-zag plate theory presented recently by Tessler, Di Sciuva, and Gherlone. With the approach presented herein, the inplane displacements are represented by a set of continuous piecewise-cubic functions, and the transverse shearing stresses and strains are represented by a set of piecewise-quadratic functions that are discontinuous at the ply interfaces.

  4. Partial differential equation transform — Variational formulation and Fourier analysis

    PubMed Central

    Wang, Yang; Wei, Guo-Wei; Yang, Siyang

    2011-01-01

    Nonlinear partial differential equation (PDE) models are established approaches for image/signal processing, data analysis and surface construction. Most previous geometric PDEs are utilized as low-pass filters which give rise to image trend information. In an earlier work, we introduced mode decomposition evolution equations (MoDEEs), which behave like high-pass filters and are able to systematically provide intrinsic mode functions (IMFs) of signals and images. Due to their tunable time-frequency localization and perfect reconstruction, the operation of MoDEEs is called a PDE transform. By appropriate selection of PDE transform parameters, we can tune IMFs into trends, edges, textures, noise etc., which can be further utilized in the secondary processing for various purposes. This work introduces the variational formulation, performs the Fourier analysis, and conducts biomedical and biological applications of the proposed PDE transform. The variational formulation offers an algorithm to incorporate two image functions and two sets of low-pass PDE operators in the total energy functional. Two low-pass PDE operators have different signs, leading to energy disparity, while a coupling term, acting as a relative fidelity of two image functions, is introduced to reduce the disparity of two energy components. We construct variational PDE transforms by using Euler-Lagrange equation and artificial time propagation. Fourier analysis of a simplified PDE transform is presented to shed light on the filter properties of high order PDE transforms. Such an analysis also offers insight on the parameter selection of the PDE transform. The proposed PDE transform algorithm is validated by numerous benchmark tests. In one selected challenging example, we illustrate the ability of PDE transform to separate two adjacent frequencies of sin(x) and sin(1.1x). Such an ability is due to PDE transform’s controllable frequency localization obtained by adjusting the order of PDEs. The frequency selection is achieved either by diffusion coefficients or by propagation time. Finally, we explore a large number of practical applications to further demonstrate the utility of proposed PDE transform. PMID:22207904

  5. Spatial variation of volcanic rock geochemistry in the Virunga Volcanic Province: Statistical analysis of an integrated database

    NASA Astrophysics Data System (ADS)

    Barette, Florian; Poppe, Sam; Smets, Benoît; Benbakkar, Mhammed; Kervyn, Matthieu

    2017-10-01

    We present an integrated, spatially-explicit database of existing geochemical major-element analyses available from (post-) colonial scientific reports, PhD Theses and international publications for the Virunga Volcanic Province, located in the western branch of the East African Rift System. This volcanic province is characterised by alkaline volcanism, including silica-undersaturated, alkaline and potassic lavas. The database contains a total of 908 geochemical analyses of eruptive rocks for the entire volcanic province with a localisation for most samples. A preliminary analysis of the overall consistency of the database, using statistical techniques on sets of geochemical analyses with contrasted analytical methods or dates, demonstrates that the database is consistent. We applied a principal component analysis and cluster analysis on whole-rock major element compositions included in the database to study the spatial variation of the chemical composition of eruptive products in the Virunga Volcanic Province. These statistical analyses identify spatially distributed clusters of eruptive products. The known geochemical contrasts are highlighted by the spatial analysis, such as the unique geochemical signature of Nyiragongo lavas compared to other Virunga lavas, the geochemical heterogeneity of the Bulengo area, and the trachyte flows of Karisimbi volcano. Most importantly, we identified separate clusters of eruptive products which originate from primitive magmatic sources. These lavas of primitive composition are preferentially located along NE-SW inherited rift structures, often at distance from the central Virunga volcanoes. Our results illustrate the relevance of a spatial analysis on integrated geochemical data for a volcanic province, as a complement to classical petrological investigations. This approach indeed helps to characterise geochemical variations within a complex of magmatic systems and to identify specific petrologic and geochemical investigations that should be tackled within a study area.

  6. Diagnostic Value of Run Chart Analysis: Using Likelihood Ratios to Compare Run Chart Rules on Simulated Data Series

    PubMed Central

    Anhøj, Jacob

    2015-01-01

    Run charts are widely used in healthcare improvement, but there is little consensus on how to interpret them. The primary aim of this study was to evaluate and compare the diagnostic properties of different sets of run chart rules. A run chart is a line graph of a quality measure over time. The main purpose of the run chart is to detect process improvement or process degradation, which will turn up as non-random patterns in the distribution of data points around the median. Non-random variation may be identified by simple statistical tests including the presence of unusually long runs of data points on one side of the median or if the graph crosses the median unusually few times. However, there is no general agreement on what defines “unusually long” or “unusually few”. Other tests of questionable value are frequently used as well. Three sets of run chart rules (Anhoej, Perla, and Carey rules) have been published in peer reviewed healthcare journals, but these sets differ significantly in their sensitivity and specificity to non-random variation. In this study I investigate the diagnostic values expressed by likelihood ratios of three sets of run chart rules for detection of shifts in process performance using random data series. The study concludes that the Anhoej rules have good diagnostic properties and are superior to the Perla and the Carey rules. PMID:25799549
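
    Two of the simple run chart tests discussed above, the longest run on one side of the median and the number of median crossings, can be computed directly; the sketch below is a generic illustration rather than an implementation of the Anhoej, Perla or Carey rule sets.

    ```python
    import numpy as np

    def run_chart_signals(values):
        """Length of the longest run on one side of the median and number of
        median crossings. Points exactly on the median are ignored, as is
        common practice for these rules."""
        values = np.asarray(values, dtype=float)
        median = np.median(values)
        sides = np.sign(values - median)
        sides = sides[sides != 0]          # drop points exactly on the median

        longest, current = 1, 1
        crossings = 0
        for prev, cur in zip(sides[:-1], sides[1:]):
            if cur == prev:
                current += 1
                longest = max(longest, current)
            else:
                current = 1
                crossings += 1
        return longest, crossings, sides.size

    # Random (in-control) data: no unusually long runs or few crossings expected.
    rng = np.random.default_rng(7)
    print(run_chart_signals(rng.normal(size=24)))
    ```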

  7. Optimization of the Divergent method for genotyping single nucleotide variations using SYBR Green-based single-tube real-time PCR.

    PubMed

    Gentilini, Fabio; Turba, Maria E

    2014-01-01

    A novel technique, called Divergent, for single-tube real-time PCR genotyping of point mutations without the use of fluorescently labeled probes has recently been reported. This novel PCR technique utilizes a set of four primers and a particular denaturation temperature to simultaneously amplify two different amplicons which extend in opposite directions from the point mutation. The two amplicons can readily be detected using melt curve analysis downstream of a closed-tube real-time PCR. In the present study, some critical aspects of the original method were specifically addressed to further implement the technique for genotyping the DNM1 c.G767T mutation responsible for exercise-induced collapse in Labrador retriever dogs. The improved Divergent assay was easily set up using a standard two-step real-time PCR protocol. The melting temperature difference between the mutated and the wild-type amplicons was approximately 5°C, which could be promptly detected by all the thermal cyclers. The upgraded assay yielded accurate results with 157 pg of genomic DNA per reaction. This optimized technique represents a flexible and inexpensive alternative to the minor groove binder fluorescently labeled method and to high resolution melt analysis for high-throughput, robust and cheap genotyping of single nucleotide variations. Copyright © 2014 Elsevier B.V. All rights reserved.

  8. Autoregressive harmonic analysis of the earth's polar motion using homogeneous international latitude service data

    NASA Astrophysics Data System (ADS)

    Fong Chao, B.

    1983-12-01

    The homogeneous set of 80-year-long (1900-1979) International Latitude Service (ILS) polar motion data is analyzed using the autoregressive method (Chao and Gilbert, 1980), which resolves and produces estimates for the complex frequency (or frequency and Q) and complex amplitude (or amplitude and phase) of each harmonic component in the data. The principal conclusions of this analysis are that (1) the ILS data support the multiple-component hypothesis of the Chandler wobble (it is found that the Chandler wobble can be adequately modeled as a linear combination of four (coherent) harmonic components, each of which represents a steady, nearly circular, prograde motion, a behavior that is inconsistent with the hypothesis of a single Chandler period excited in a temporally and/or spatially random fashion); (2) the four-component Chandler wobble model "explains" the apparent phase reversal during 1920-1940 and the pre-1950 empirical period-amplitude relation; (3) the annual wobble is shown to be rather stationary over the years, both in amplitude and in phase, and no evidence is found to support the large variations reported by earlier investigations; and (4) the Markowitz wobble is found to be marginally retrograde and appears to have a complicated behavior which cannot be resolved because of the shortness of the data set.

  9. Direct energy balance based active disturbance rejection control for coal-fired power plant.

    PubMed

    Sun, Li; Hua, Qingsong; Li, Donghai; Pan, Lei; Xue, Yali; Lee, Kwang Y

    2017-09-01

    The conventional direct energy balance (DEB) based PI control can fulfill the fundamental tracking requirements of a coal-fired power plant. However, it is challenging to deal with cases when coal quality variation is present. To this end, this paper introduces active disturbance rejection control (ADRC) into the DEB structure, where the coal quality variation is deemed a kind of unknown disturbance that can be estimated and mitigated promptly. Firstly, the nonlinearity of a recent power plant model is analyzed based on the gap metric, which provides guidance on how to set the pressure set-point in line with the power demand. Secondly, the approximate decoupling effect of the DEB structure is analyzed based on relative gain analysis in the frequency domain. Finally, the synthesis of the DEB based ADRC control system is carried out based on multi-objective optimization. The optimized ADRC results show that the integrated absolute error (IAE) indices of the tracking performance in both loops can be simultaneously improved, in comparison with the DEB based PI and H∞ control systems. The regulation performance in the presence of coal quality variation is significantly improved under the ADRC control scheme. Moreover, the robustness of the proposed strategy is shown to be comparable with that of the H∞ control. Copyright © 2017. Published by Elsevier Ltd.
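
    A minimal first-order ADRC loop illustrates the disturbance-estimation idea described above: an extended state observer estimates the total disturbance (standing in for a coal-quality change) and the control law cancels it. The toy plant, bandwidths and gains below are illustrative assumptions, not the plant model or tuning from the paper.

    ```python
    import numpy as np

    # Toy plant dy/dt = -a*y + b*u + d(t), where d(t) is an unknown step
    # disturbance, regulated by a first-order ADRC with an extended state observer.
    a, b = 1.0, 2.0
    dt, steps = 0.01, 3000
    wo, wc = 20.0, 5.0                 # observer and controller bandwidths
    b0 = b                             # assumed input gain

    y, z1, z2 = 0.0, 0.0, 0.0          # plant output, ESO states (output, disturbance)
    setpoint = 1.0
    outputs = []
    for k in range(steps):
        d = 0.5 if k * dt > 15.0 else 0.0          # step disturbance at t = 15 s
        u = (wc * (setpoint - z1) - z2) / b0       # control law cancelling the estimate
        # Extended state observer update (Euler)
        e = y - z1
        z1 += dt * (z2 + b0 * u + 2 * wo * e)
        z2 += dt * (wo ** 2 * e)
        # Plant update (Euler)
        y += dt * (-a * y + b * u + d)
        outputs.append(y)
    print(outputs[-1])                 # settles close to the set-point despite d(t)
    ```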

  10. Variations in the Prevalence of Obesity Among European Countries, and a Consideration of Possible Causes

    PubMed Central

    Blundell, John E.; Baker, Jennifer Lyn; Boyland, Emma; Blaak, Ellen; Charzewska, Jadwiga; de Henauw, Stefaan; Frühbeck, Gema; Gonzalez-Gross, Marcela; Hebebrand, Johannes; Holm, Lotte; Kriaucioniene, Vilma; Lissner, Lauren; Oppert, Jean-Michel; Schindler, Karin; Silva, Analiza Mónica; Woodward, Euan

    2017-01-01

    Over the last 10 years the prevalence of obesity across the European continent has in general been rising. With the exception of a few countries where a levelling-off can be perceived, albeit at a high level, this upward trend seems likely to continue. However, considerable country to country variation is noticeable, with the proportion of people with obesity varying by 10% or more. This variation is intriguing and suggests the existence of different profiles of risk or protection factors operating in different countries. The identification of such protection factors could indicate suitable targets for interventions to help manage the obesity epidemic in Europe. This report is the output of a 2-day workshop on the ‘Diversity of Obesity in Europe’. The workshop included 14 delegates from 12 different European countries. This report contains the contributions and discussions of the materials and viewpoints provided by these 14 experts; it is not the output of a single mind. However, such is the nature of scientific analysis regarding obesity that it is possible that a different set of 14 experts may have come to a different set of conclusions. Therefore the report should not be seen as a definitive statement of a stable situation. Rather it is a focus for discussion and comment, and a vehicle to drive forward further understanding and management of obesity in Europe. PMID:28190010

  11. An improved level set method for brain MR images segmentation and bias correction.

    PubMed

    Chen, Yunjie; Zhang, Jianwei; Macione, Jim

    2009-10-01

    Intensity inhomogeneities cause considerable difficulty in the quantitative analysis of magnetic resonance (MR) images. Thus, bias field estimation is a necessary step before quantitative analysis of MR data can be undertaken. This paper presents a variational level set approach to bias correction and segmentation for images with intensity inhomogeneities. Our method is based on the observation that intensities in a relatively small local region are separable, despite the inseparability of the intensities in the whole image caused by the overall intensity inhomogeneity. We first define a localized K-means-type clustering objective function for image intensities in a neighborhood around each point. The cluster centers in this objective function have a multiplicative factor that estimates the bias within the neighborhood. The objective function is then integrated over the entire domain to define the data term in the level set framework. Our method is able to capture bias of quite general profiles. Moreover, it is robust to initialization, and thereby allows fully automated applications. The proposed method has been used for images of various modalities with promising results.

  12. Optomechanical design software for segmented mirrors

    NASA Astrophysics Data System (ADS)

    Marrero, Juan

    2016-08-01

    The software package presented in this paper, still under development, was born to help analyze the influence of the many parameters involved in the design of a large segmented mirror telescope. In summary, it is a set of tools which were added to a common framework as they were needed. Great emphasis has been placed on the graphical presentation, as scientific visualization nowadays cannot be conceived without the use of a helpful 3D environment showing the analyzed system as close to reality as possible. Use of third-party software packages is limited to ANSYS, which needs to be available in the system only if FEM results are required. Among the various functionalities of the software, the following are worth mentioning: automatic 3D model construction of a segmented mirror from a set of parameters, geometric ray tracing, automatic 3D model construction of a telescope structure around the defined mirrors from a set of parameters, segmented mirror human access assessment, analysis of integration tolerances, assessment of segment collisions, structural deformation under gravity and thermal variation, mirror support system analysis including warping harness mechanisms, etc.

  13. Motion cues that make an impression: Predicting perceived personality by minimal motion information.

    PubMed

    Koppensteiner, Markus

    2013-11-01

    The current study presents a methodology to analyze first impressions on the basis of minimal motion information. In order to test the applicability of the approach, brief silent video clips of 40 speakers were presented to independent observers (i.e., observers who did not know the speakers), who rated them on measures of the Big Five personality traits. The body movements of the speakers were then captured by placing landmarks on the speakers' forehead, one shoulder and the hands. Analysis revealed that observers ascribe extraversion to variations in the speakers' overall activity, emotional stability to the movements' relative velocity, and openness to variation in motion direction. Although ratings of openness and conscientiousness were related to biographical data of the speakers (i.e., measures of career progress), measures of body motion failed to provide similar results. In conclusion, analysis of motion behavior might be done on the basis of a small set of landmarks that seem to capture important parts of the relevant nonverbal information.
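
    Descriptors such as overall activity, speed and variation in motion direction can be derived from a single landmark trajectory as sketched below; the exact features and their scaling here are illustrative assumptions, not the study's definitions.

    ```python
    import numpy as np

    def motion_features(trajectory, fps=25.0):
        """Simple motion descriptors from one landmark trajectory.

        trajectory : (n_frames, 2) array of x/y positions
        Returns overall activity (total path length), mean speed and the
        circular spread of the frame-to-frame motion direction.
        """
        diffs = np.diff(np.asarray(trajectory, dtype=float), axis=0)
        step_lengths = np.linalg.norm(diffs, axis=1)
        activity = step_lengths.sum()
        mean_speed = step_lengths.mean() * fps
        angles = np.arctan2(diffs[:, 1], diffs[:, 0])
        # Circular standard deviation of motion directions (0 = constant direction)
        r = np.abs(np.mean(np.exp(1j * angles)))
        direction_variation = np.sqrt(-2.0 * np.log(r))
        return activity, mean_speed, direction_variation

    rng = np.random.default_rng(8)
    hand = np.cumsum(rng.normal(size=(250, 2)), axis=0)  # synthetic hand landmark track
    print(motion_features(hand))
    ```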

  14. Motion cues that make an impression

    PubMed Central

    Koppensteiner, Markus

    2013-01-01

    The current study presents a methodology to analyze first impressions on the basis of minimal motion information. In order to test the applicability of the approach, brief silent video clips of 40 speakers were presented to independent observers (i.e., observers who did not know the speakers), who rated them on measures of the Big Five personality traits. The body movements of the speakers were then captured by placing landmarks on the speakers' forehead, one shoulder and the hands. Analysis revealed that observers ascribe extraversion to variations in the speakers' overall activity, emotional stability to the movements' relative velocity, and openness to variation in motion direction. Although ratings of openness and conscientiousness were related to biographical data of the speakers (i.e., measures of career progress), measures of body motion failed to provide similar results. In conclusion, analysis of motion behavior might be done on the basis of a small set of landmarks that seem to capture important parts of the relevant nonverbal information. PMID:24223432

  15. Assessment of local variability by high-throughput e-beam metrology for prediction of patterning defect probabilities

    NASA Astrophysics Data System (ADS)

    Wang, Fuming; Hunsche, Stefan; Anunciado, Roy; Corradi, Antonio; Tien, Hung Yu; Tang, Peng; Wei, Junwei; Wang, Yongjun; Fang, Wei; Wong, Patrick; van Oosten, Anton; van Ingen Schenau, Koen; Slachter, Bram

    2018-03-01

    We present an experimental study of pattern variability and defectivity, based on a large data set with more than 112 million SEM measurements from an HMI high-throughput e-beam tool. The test case is a 10nm node SRAM via array patterned with a DUV immersion LELE process, where we see a variation in mean size and litho sensitivities between different unique via patterns that leads to seemingly qualitative differences in defectivity. The large available data volume enables further analysis to reliably distinguish global and local CDU variations, including a breakdown into local systematics and stochastics. A closer inspection of the tail end of the distributions and estimation of defect probabilities concludes that there is a common defect mechanism and defect threshold despite the observed differences in specific pattern characteristics. We expect that the analysis methodology can be applied for defect probability modeling as well as general process qualification in the future.
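
    Estimating defect probabilities from the tail of a CD distribution, as discussed above, can be contrasted with direct counting in a short sketch; the Gaussian assumption, failure threshold and sample size below are illustrative, not the study's defect model.

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical local CD measurements (nm) for one via pattern and a failure
    # threshold below which a via is assumed not to open.
    rng = np.random.default_rng(9)
    cd = rng.normal(loc=20.0, scale=1.2, size=1_000_000)
    threshold = 14.0

    # Direct counting only works when defects are frequent enough to observe;
    # a fitted distribution extrapolates the tail beyond the observed data.
    empirical = np.mean(cd < threshold)
    mu, sigma = cd.mean(), cd.std(ddof=1)
    model_based = stats.norm.cdf(threshold, loc=mu, scale=sigma)
    print(empirical, model_based)   # defects per via, observed vs. extrapolated
    ```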

  16. A multivariate variational objective analysis-assimilation method. Part 2: Case study results with and without satellite data

    NASA Technical Reports Server (NTRS)

    Achtemeier, Gary L.; Kidder, Stanley Q.; Scott, Robert W.

    1988-01-01

    The variational multivariate assimilation method described in a companion paper by Achtemeier and Ochs is applied to conventional and conventional-plus-satellite data. Ground-based and space-based meteorological data are weighted according to the respective measurement errors and blended into a data set that is a solution of numerical forms of the two nonlinear horizontal momentum equations, the hydrostatic equation, and an integrated continuity equation for a dry atmosphere. The analyses serve, first, to evaluate the accuracy of the model and, second, to contrast the analyses with and without satellite data. Evaluation criteria measure the extent to which: (1) the assimilated fields satisfy the dynamical constraints, (2) the assimilated fields depart from the observations, and (3) the assimilated fields are judged to be realistic through pattern analysis. The last criterion requires that the signs, magnitudes, and patterns of the hypersensitive vertical velocity and local tendencies of the horizontal velocity components be physically consistent with respect to the larger scale weather systems.

  17. Analysis of protein-coding genetic variation in 60,706 humans.

    PubMed

    Lek, Monkol; Karczewski, Konrad J; Minikel, Eric V; Samocha, Kaitlin E; Banks, Eric; Fennell, Timothy; O'Donnell-Luria, Anne H; Ware, James S; Hill, Andrew J; Cummings, Beryl B; Tukiainen, Taru; Birnbaum, Daniel P; Kosmicki, Jack A; Duncan, Laramie E; Estrada, Karol; Zhao, Fengmei; Zou, James; Pierce-Hoffman, Emma; Berghout, Joanne; Cooper, David N; Deflaux, Nicole; DePristo, Mark; Do, Ron; Flannick, Jason; Fromer, Menachem; Gauthier, Laura; Goldstein, Jackie; Gupta, Namrata; Howrigan, Daniel; Kiezun, Adam; Kurki, Mitja I; Moonshine, Ami Levy; Natarajan, Pradeep; Orozco, Lorena; Peloso, Gina M; Poplin, Ryan; Rivas, Manuel A; Ruano-Rubio, Valentin; Rose, Samuel A; Ruderfer, Douglas M; Shakir, Khalid; Stenson, Peter D; Stevens, Christine; Thomas, Brett P; Tiao, Grace; Tusie-Luna, Maria T; Weisburd, Ben; Won, Hong-Hee; Yu, Dongmei; Altshuler, David M; Ardissino, Diego; Boehnke, Michael; Danesh, John; Donnelly, Stacey; Elosua, Roberto; Florez, Jose C; Gabriel, Stacey B; Getz, Gad; Glatt, Stephen J; Hultman, Christina M; Kathiresan, Sekar; Laakso, Markku; McCarroll, Steven; McCarthy, Mark I; McGovern, Dermot; McPherson, Ruth; Neale, Benjamin M; Palotie, Aarno; Purcell, Shaun M; Saleheen, Danish; Scharf, Jeremiah M; Sklar, Pamela; Sullivan, Patrick F; Tuomilehto, Jaakko; Tsuang, Ming T; Watkins, Hugh C; Wilson, James G; Daly, Mark J; MacArthur, Daniel G

    2016-08-18

    Large-scale reference data sets of human genetic variation are critical for the medical and functional interpretation of DNA sequence changes. Here we describe the aggregation and analysis of high-quality exome (protein-coding region) DNA sequence data for 60,706 individuals of diverse ancestries generated as part of the Exome Aggregation Consortium (ExAC). This catalogue of human genetic diversity contains an average of one variant every eight bases of the exome, and provides direct evidence for the presence of widespread mutational recurrence. We have used this catalogue to calculate objective metrics of pathogenicity for sequence variants, and to identify genes subject to strong selection against various classes of mutation, identifying 3,230 genes with near-complete depletion of predicted protein-truncating variants, of which 72% have no currently established human disease phenotype. Finally, we demonstrate that these data can be used for the efficient filtering of candidate disease-causing variants, and for the discovery of human 'knockout' variants in protein-coding genes.

  18. Using independent component analysis for electrical impedance tomography

    NASA Astrophysics Data System (ADS)

    Yan, Peimin; Mo, Yulong

    2004-05-01

    Independent component analysis (ICA) is a way to resolve signals into independent components based on the statistical characteristics of the signals. It is a method for factoring probability densities of measured signals into a set of densities that are as statistically independent as possible under the assumptions of a linear model. Electrical impedance tomography (EIT) is used to detect variations of the electric conductivity of the human body. Because there are variations of the conductivity distributions inside the body, EIT presents multi-channel data. In order to obtain all the information contained in different locations of tissue, it is necessary to image the individual conductivity distributions. In this paper we consider applying ICA to EIT on the signal subspace (the individual conductivity distributions). Using ICA, the signal subspace is then decomposed into statistically independent components. The individual conductivity distribution can be reconstructed by the sensitivity theorem in this paper. Computer simulations show that the full information contained in the multi-conductivity distribution can be obtained by this method.
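
    A minimal sketch of the ICA step, assuming scikit-learn's FastICA as a stand-in for the decomposition described above; the synthetic "conductivity" sources, the 16-channel mixing and all parameter values are illustrative.

        import numpy as np
        from sklearn.decomposition import FastICA

        # Synthetic multi-channel EIT-like data: each measurement channel is a
        # linear mixture of independent sources (stand-ins for individual
        # conductivity variations inside the body).
        rng = np.random.default_rng(1)
        t = np.linspace(0, 10, 2000)
        sources = np.c_[np.sin(2 * np.pi * 0.5 * t),           # respiration-like
                        np.sign(np.sin(2 * np.pi * 1.2 * t)),  # cardiac-like
                        rng.laplace(size=t.size)]              # background activity
        mixing = rng.normal(size=(16, 3))                      # 16 measurement channels
        X = sources @ mixing.T                                 # observed channel data

        # Resolve the measurements into statistically independent components.
        ica = FastICA(n_components=3, random_state=0)
        estimated_sources = ica.fit_transform(X)   # shape (n_samples, 3)
        estimated_mixing = ica.mixing_             # shape (16, 3)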

  19. Climate change, water rights, and water supply: The case of irrigated agriculture in Idaho

    NASA Astrophysics Data System (ADS)

    Xu, Wenchao; Lowe, Scott E.; Adams, Richard M.

    2014-12-01

    We conduct a hedonic analysis to estimate the response of agricultural land use to water supply information under the Prior Appropriation Doctrine by using Idaho as a case study. Our analysis includes long-term climate (weather) trends and water supply conditions as well as seasonal water supply forecasts. A farm-level panel data set, which accounts for the priority effects of water rights and controls for diversified crop mixes and rotation practices, is used. Our results indicate that farmers respond to the long-term surface and ground water conditions as well as to the seasonal water supply variations. Climate change-induced variations in climate and water supply conditions could lead to substantial damages to irrigated agriculture. We project substantial losses (up to 32%) of the average crop revenue for major agricultural areas under future climate scenarios in Idaho. Finally, farmers demonstrate significantly varied responses given their water rights priorities, implying that the distributional impact of climate change is sensitive to institutions such as the Prior Appropriation Doctrine.

  20. High-throughput multiplex HLA-typing by ligase detection reaction (LDR) and universal array (UA) approach.

    PubMed

    Consolandi, Clarissa

    2009-01-01

    One major goal of genetic research is to understand the role of genetic variation in living systems. In humans, by far the most common type of such variation involves differences in single DNA nucleotides, and is thus termed single nucleotide polymorphism (SNP). The need for improvement in throughput and reliability of traditional techniques makes it necessary to develop new technologies. Thus the past few years have witnessed an extraordinary surge of interest in DNA microarray technology. This new technology offers the first great hope for providing a systematic way to explore the genome. It permits very rapid analysis of thousands of genes for the purposes of gene discovery, sequencing, mapping, expression, and polymorphism detection. We generated a series of analytical tools to address the manufacturing, detection and data analysis components of a microarray experiment. In particular, we set up a universal array approach in combination with a PCR-LDR (polymerase chain reaction-ligation detection reaction) strategy for allele identification in the HLA gene.

  1. Evaluating the Efficacy of Wavelet Configurations on Turbulent-Flow Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Shaomeng; Gruchalla, Kenny; Potter, Kristin

    2015-10-25

    I/O is increasingly becoming a significant constraint for simulation codes and visualization tools on modern supercomputers. Data compression is an attractive workaround, and, in particular, wavelets provide a promising solution. However, wavelets can be applied in multiple configurations, and the variations in configuration impact accuracy, storage cost, and execution time. While the variation in these factors over wavelet configurations has been explored in image processing, it is not well understood for visualization and analysis of scientific data. To illuminate this issue, we evaluate multiple wavelet configurations on turbulent-flow data. Our approach is to repeat established analysis routines on uncompressed and lossy-compressed versions of a data set, and then quantitatively compare their outcomes. Our findings show that accuracy varies greatly based on wavelet configuration, while storage cost and execution time vary less. Overall, our study provides new insights for simulation analysts and visualization experts, who need to make tradeoffs between accuracy, storage cost, and execution time.
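
    The sketch below illustrates the kind of configuration sweep described above, assuming PyWavelets and a simple hard-thresholding compressor at a fixed coefficient budget; the wavelet names, decomposition levels, budget and synthetic signal are illustrative, not the study's actual setup.

        import numpy as np
        import pywt

        def compress_and_error(data, wavelet, level, keep_fraction):
            """Wavelet-compress a 1D signal by keeping only the largest
            coefficients, then report the relative reconstruction error."""
            coeffs = pywt.wavedec(data, wavelet, level=level)
            flat = np.concatenate(coeffs)
            threshold = np.quantile(np.abs(flat), 1.0 - keep_fraction)
            kept = [pywt.threshold(c, threshold, mode='hard') for c in coeffs]
            recon = pywt.waverec(kept, wavelet)[:len(data)]
            return np.linalg.norm(data - recon) / np.linalg.norm(data)

        # Illustrative signal; a real study would use turbulent-flow fields.
        rng = np.random.default_rng(2)
        signal = np.cumsum(rng.normal(size=4096))

        # Vary the wavelet configuration at a fixed storage budget (5% of coefficients).
        for wavelet in ('haar', 'db4', 'bior4.4'):
            for level in (3, 5):
                err = compress_and_error(signal, wavelet, level, keep_fraction=0.05)
                print(f"{wavelet:8s} level={level}: relative error = {err:.3f}")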

  2. Altitude-temporal behaviour of atmospheric ozone, temperature and wind velocity observed at Svalbard

    NASA Astrophysics Data System (ADS)

    Petkov, Boyan H.; Vitale, Vito; Svendby, Tove M.; Hansen, Georg H.; Sobolewski, Piotr S.; Láska, Kamil; Elster, Josef; Pavlova, Kseniya; Viola, Angelo; Mazzola, Mauro; Lupi, Angelo; Solomatnikova, Anna

    2018-07-01

    The vertical features of the variations in atmospheric ozone density, temperature and wind velocity observed at Ny-Ålesund, Svalbard were studied by applying principal component analysis to the ozonesounding data collected during the 1992-2016 period. Two data sets, corresponding to intra-seasonal (IS) variations, composed of harmonics with periods shorter than one year, and inter-annual (IA) variations, characterised by longer periods, were extracted and analysed separately. The IS variations in all three parameters were found to be composed mainly of harmonics typical of the Madden-Julian Oscillation (30- to 60-day periods) and, while the first four principal components (PCs) associated with the temperature and wind contributed about 90% to the IS variations, the ozone IS oscillations appeared to be a higher dimensional object for which the first 15 PCs presented almost the same extent of contribution. The IA variations in the three parameters consisted of harmonics that correspond to the widely registered Quasi-Biennial, El Niño-Southern, North Atlantic and Arctic Oscillations, respectively, and the IA variations turned out to be negligible below the tropopause, which characterises the Svalbard troposphere as a comparatively closed system with respect to the long-period global variations. The behaviour of the first and second PCs associated with IS ozone variations during particular events, such as the strong ozone depletion over the Arctic in spring 2011 and solar eclipses, was discussed, and the changes in the amplitude-frequency features of these PCs were taken as signs of the atmosphere's response to the considered phenomena.
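
    A minimal sketch of the PCA step on sounding profiles, assuming scikit-learn; the random stand-in data, the number of altitude levels and the 15-component cut are illustrative only.

        import numpy as np
        from sklearn.decomposition import PCA

        # Illustrative stand-in for sounding data: rows are launches, columns are
        # altitude levels (the study uses 1992-2016 Ny-Alesund ozonesoundings).
        rng = np.random.default_rng(3)
        profiles = rng.normal(size=(800, 60))

        # Remove the mean profile and decompose the anomalies into PCs.
        anomalies = profiles - profiles.mean(axis=0)
        pca = PCA(n_components=15)
        scores = pca.fit_transform(anomalies)        # PC time series per sounding
        explained = pca.explained_variance_ratio_    # contribution of each PC

        # A low-dimensional field concentrates variance in the first few PCs;
        # a higher-dimensional one spreads it over many PCs, as reported above.
        print(np.cumsum(explained))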

  3. Costing Alternative Birth Settings for Women at Low Risk of Complications: A Systematic Review

    PubMed Central

    Scarf, Vanessa; Catling, Christine; Viney, Rosalie; Homer, Caroline

    2016-01-01

    Background There is demand from women for alternatives to giving birth in a standard hospital setting; however, access to these services is limited. This systematic review examines the literature relating to economic evaluations of birth setting for women at low risk of complications. Methods Searches of the following electronic databases were conducted to identify economic evaluations of different birth settings: MEDLINE, CINAHL, EconLit, Business Source Complete and Maternity and Infant Care. Relevant English language publications from 1995 to 2015 were chosen using keywords and MeSH terms. Inclusion criteria included studies focussing on the comparison of birth settings. Data were extracted with respect to study design, perspective, PICO principles, and resource use and cost data. Results Eleven studies were included from Australia, Canada, the Netherlands, Norway, the USA, and the UK. Four studies compared costs between homebirth and the hospital setting and the remaining seven focussed on the cost of birth centre care versus the hospital setting. Six studies used a cost-effectiveness analysis and the remaining five used cost analysis and cost comparison methods. Eight of the 11 studies found a cost saving in the alternative settings. Two found no difference in cost between the settings and one found an increase in the cost of birth centre care. Conclusions There are few studies that compare the cost of birth settings. The variation in the results may be attributable to the cost data collection processes, differences in health systems and differences in which costs were included. A better understanding of the cost of birth settings is needed to inform policy makers and service providers. PMID:26891444

  4. A phylogenetic framework facilitates Y-STR variant discovery and classification via massively parallel sequencing.

    PubMed

    Huszar, Tunde I; Jobling, Mark A; Wetton, Jon H

    2018-04-12

    Short tandem repeats on the male-specific region of the Y chromosome (Y-STRs) are permanently linked as haplotypes, and therefore Y-STR sequence diversity can be considered within the robust framework of a phylogeny of haplogroups defined by single nucleotide polymorphisms (SNPs). Here we use massively parallel sequencing (MPS) to analyse the 23 Y-STRs in Promega's prototype PowerSeq™ Auto/Mito/Y System kit (containing the markers of the PowerPlex® Y23 [PPY23] System) in a set of 100 diverse Y chromosomes whose phylogenetic relationships are known from previous megabase-scale resequencing. Including allele duplications and alleles resulting from likely somatic mutation, we characterised 2311 alleles, demonstrating 99.83% concordance with capillary electrophoresis (CE) data on the same sample set. The set contains 267 distinct sequence-based alleles (an increase of 58% compared to the 169 detectable by CE), including 60 novel Y-STR variants phased with their flanking sequences which have not been reported previously to our knowledge. Variation includes 46 distinct alleles containing non-reference variants of SNPs/indels in both repeat and flanking regions, and 145 distinct alleles containing repeat pattern variants (RPV). For DYS385a,b, DYS481 and DYS390 we observed repeat count variation in short flanking segments previously considered invariable, and suggest new MPS-based structural designations based on these. We considered the observed variation in the context of the Y phylogeny: several specific haplogroup associations were observed for SNPs and indels, reflecting the low mutation rates of such variant types; however, RPVs showed less phylogenetic coherence and more recurrence, reflecting their relatively high mutation rates. In conclusion, our study reveals considerable additional diversity at the Y-STRs of the PPY23 set via MPS analysis, demonstrates high concordance with CE data, facilitates nomenclature standardisation, and places Y-STR sequence variants in their phylogenetic context. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.

  5. Verifying Diurnal Variations of Global Precipitation in Three New Global Reanalyses

    NASA Astrophysics Data System (ADS)

    Wu, S.; Xie, P.; Sun, F.; Joyce, R.

    2013-12-01

    Diurnal variations of global precipitation and their representation in three sets of new-generation global reanalyses are examined using the reprocessed and bias-corrected CMORPH satellite precipitation estimates. The CMORPH satellite precipitation estimates are produced on an 8km by 8km grid over the globe (60°S-60°N) and in a 30-min interval covering a 15-year period from 1998 to the present, combining information from IR and PMW observations from all available satellites. Bias correction is performed for the raw CMORPH precipitation estimates through calibration against a gauge-based analysis over land and against the pentad GPCP analysis over ocean. The reanalyses examined here include the NCEP CFS reanalysis (CFSR), NASA/GSFC MERRA, and ECMWF Interim. The bias-corrected CMORPH is integrated from its original resolution to the reanalyses' grid systems to facilitate the verification. First, quantitative agreements between the reanalysis precipitation fields and the CMORPH satellite observation are examined over the global domain. Precipitation structures associated with the large-scale topography are well reproduced when compared against the observation. Evolution of precipitation patterns with the development of transient weather systems is captured by the CFSR and the two other reanalyses. The reanalyses tend to generate precipitation fields with wider raining areas and reduced intensity for heavy rainfall cases compared to the observations over both land and ocean. Seasonal migration of global precipitation depicted in the 15-year CMORPH satellite observations is very well captured by the three sets of new reanalyses, although the magnitude of precipitation is larger, especially in the CFSR, than in the observations. In general, the three sets of new reanalyses exhibit substantial improvements in their performance in representing global precipitation distributions and variations. In particular, the new reanalyses reproduce precipitation variations at the fine time/space scales seen in the observations. The diurnal cycle of the precipitation is reasonably well reproduced by the reanalyses over many global oceanic and land areas. The diurnal amplitude of the reanalysis precipitation, defined as the standard deviation of the 24 hourly mean values, is smaller than that in the observations over most of the oceanic regions, attributable largely to the continuous weak precipitation throughout the diurnal cycle in all three reanalyses. Over ocean, the pattern of diurnal variations of precipitation in the reanalyses is quite similar to that in the observations, with the timing of maximum precipitation shifted by 1-3 hours. Over land, especially over Africa, the reanalyses tend to produce maximum precipitation around noon, much earlier than in the observations. Particularly noticeable is the diurnal cycle of warm season precipitation over CONUS associated with the eastward propagation of meso-scale systems, which is distinct in the observations. None of the three new reanalyses is capable of detecting this pattern of diurnal variations. A comprehensive description and diagnostic discussions will be given at the AGU meeting.
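
    A minimal sketch of the diurnal-amplitude metric defined above (the standard deviation of the 24 long-term hourly means), computed here on a synthetic hourly precipitation series; the afternoon-peak signal and all values are illustrative.

        import numpy as np

        def diurnal_amplitude(hourly_precip, hour_of_day):
            """Standard deviation of the 24 long-term hourly mean values."""
            hourly_means = np.array([hourly_precip[hour_of_day == h].mean()
                                     for h in range(24)])
            return hourly_means.std()

        # Illustrative hourly series with an afternoon precipitation peak.
        rng = np.random.default_rng(4)
        n_days = 365
        hours = np.tile(np.arange(24), n_days)
        precip = np.clip(rng.normal(1.0 + 0.5 * np.sin(2 * np.pi * (hours - 15) / 24),
                                    0.3), 0.0, None)

        print(f"diurnal amplitude: {diurnal_amplitude(precip, hours):.3f} mm/h")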

  6. SU-E-T-427: Cell Surviving Fractions Derived From Tumor-Volume Variation During Radiotherapy for Non-Small Cell Lung Cancer: Comparison with Predictive Assays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chvetsov, A; Schwartz, J; Mayr, N

    2014-06-01

    Purpose: To show that a distribution of cell surviving fractions S2 in a heterogeneous group of patients can be derived from tumor-volume variation curves during radiotherapy for non-small cell lung cancer. Methods: Our analysis was based on two data sets of tumor-volume variation curves for heterogeneous groups of 17 patients treated for non-small cell lung cancer with conventional dose fractionation. The data sets were obtained previously at two independent institutions by using megavoltage (MV) computed tomography (CT). Statistical distributions of cell surviving fractions S2 and cell clearance half-lives of lethally damaged cells T1/2 have been reconstructed in each patient group by using a version of the two-level cell population tumor response model and a simulated annealing algorithm. The reconstructed statistical distributions of the cell surviving fractions have been compared to the distributions measured using predictive assays in vitro. Results: Non-small cell lung cancer presents certain difficulties for modeling surviving fractions using tumor-volume variation curves because of the relatively large fractional hypoxic volume, the low gradient of tumor-volume response, and possible uncertainties due to breathing motion. Despite these difficulties, cell surviving fractions S2 for non-small cell lung cancer derived from tumor-volume variation measured at different institutions have similar probability density functions (PDFs), with mean values of 0.30 and 0.43 and standard deviations of 0.13 and 0.18, respectively. The PDFs for cell surviving fractions S2 reconstructed from tumor volume variation agree with the PDF measured in vitro. Comparison of the reconstructed cell surviving fractions with patient survival data shows that the patient survival time decreases as the cell surviving fraction increases. Conclusion: The data obtained in this work suggest that the cell surviving fractions S2 can be reconstructed from the tumor volume variation curves measured during radiotherapy with conventional fractionation. The proposed method can be used for treatment evaluation and adaptation.

  7. Geographic Variation of Overweight and Obesity among Women in Nigeria: A Case for Nutritional Transition in Sub-Saharan Africa

    PubMed Central

    Kandala, Ngianga-Bakwin; Stranges, Saverio

    2014-01-01

    Background Nutritional research in sub-Saharan Africa has primarily focused on under-nutrition. However, there is evidence of an ongoing nutritional transition in these settings. This study aimed to examine the geographic variation of overweight and obesity prevalence at the state-level among women in Nigeria, while accounting for individual-level risk factors. Methods The analysis was based on the 2008 Nigerian Demographic and Health Survey (NDHS), including 27,967 women aged 15–49 years. Individual data were collected on socio-demographics, but were aggregated to the country's states. We used a Bayesian geo-additive mixed model to map the geographic distribution of overweight and obesity at the state-level, accounting for individual-level risk factors. Results The overall prevalence of combined overweight and obesity (body mass index ≥25) was 20.9%. In multivariate Bayesian geo-additive models, higher education [odds ratio (OR) & 95% Credible Region (CR): 1.68 (1.38, 2.00)], higher wealth index [3.45 (2.98, 4.05)], living in urban settings [1.24 (1.14, 1.36)] and increasing age were all significantly associated with a higher prevalence of overweight/obesity. There was also a striking variation in overweight/obesity prevalence across ethnic groups and state of residence, the highest being in Cross River State, in south-eastern Nigeria [2.32 (1.62, 3.40)], the lowest in Osun State in south-western Nigeria [0.48 (0.36, 0.61)]. Conclusions This study suggests distinct geographic patterns in the combined prevalence of overweight and obesity among Nigerian women, as well as the role of demographic, socio-economic and environmental factors in the ongoing nutritional transition in these settings. PMID:24979753

  8. Quenched Large Deviations for Simple Random Walks on Percolation Clusters Including Long-Range Correlations

    NASA Astrophysics Data System (ADS)

    Berger, Noam; Mukherjee, Chiranjib; Okamura, Kazuki

    2018-03-01

    We prove a quenched large deviation principle (LDP) for a simple random walk on a supercritical percolation cluster (SRWPC) on {Z^d} ({d ≥ 2}). The models under interest include classical Bernoulli bond and site percolation as well as models that exhibit long range correlations, like the random cluster model, the random interlacement and the vacant set of random interlacements (for {d ≥ 3}) and the level sets of the Gaussian free field ({d≥ 3}). Inspired by the methods developed by Kosygina et al. (Commun Pure Appl Math 59:1489-1521, 2006) for proving quenched LDP for elliptic diffusions with a random drift, and by Yilmaz (Commun Pure Appl Math 62(8):1033-1075, 2009) and Rosenbluth (Quenched large deviations for multidimensional random walks in a random environment: a variational formula. Ph.D. thesis, NYU, arXiv:0804.1444v1) for similar results regarding elliptic random walks in random environment, we take the point of view of the moving particle and prove a large deviation principle for the quenched distribution of the pair empirical measures of the environment Markov chain in the non-elliptic case of SRWPC. Via a contraction principle, this reduces easily to a quenched LDP for the distribution of the mean velocity of the random walk and both rate functions admit explicit variational formulas. The main difficulty in our set up lies in the inherent non-ellipticity as well as the lack of translation-invariance stemming from conditioning on the fact that the origin belongs to the infinite cluster. We develop a unifying approach for proving quenched large deviations for SRWPC based on exploiting coercivity properties of the relative entropies in the context of convex variational analysis, combined with input from ergodic theory and invoking geometric properties of the supercritical percolation cluster.

  9. Quenched Large Deviations for Simple Random Walks on Percolation Clusters Including Long-Range Correlations

    NASA Astrophysics Data System (ADS)

    Berger, Noam; Mukherjee, Chiranjib; Okamura, Kazuki

    2017-12-01

    We prove a quenched large deviation principle (LDP) for a simple random walk on a supercritical percolation cluster (SRWPC) on {Z^d} ({d ≥ 2} ). The models under interest include classical Bernoulli bond and site percolation as well as models that exhibit long range correlations, like the random cluster model, the random interlacement and the vacant set of random interlacements (for {d ≥ 3} ) and the level sets of the Gaussian free field ({d≥ 3} ). Inspired by the methods developed by Kosygina et al. (Commun Pure Appl Math 59:1489-1521, 2006) for proving quenched LDP for elliptic diffusions with a random drift, and by Yilmaz (Commun Pure Appl Math 62(8):1033-1075, 2009) and Rosenbluth (Quenched large deviations for multidimensional random walks in a random environment: a variational formula. Ph.D. thesis, NYU, arXiv:0804.1444v1) for similar results regarding elliptic random walks in random environment, we take the point of view of the moving particle and prove a large deviation principle for the quenched distribution of the pair empirical measures of the environment Markov chain in the non-elliptic case of SRWPC. Via a contraction principle, this reduces easily to a quenched LDP for the distribution of the mean velocity of the random walk and both rate functions admit explicit variational formulas. The main difficulty in our set up lies in the inherent non-ellipticity as well as the lack of translation-invariance stemming from conditioning on the fact that the origin belongs to the infinite cluster. We develop a unifying approach for proving quenched large deviations for SRWPC based on exploiting coercivity properties of the relative entropies in the context of convex variational analysis, combined with input from ergodic theory and invoking geometric properties of the supercritical percolation cluster.

  10. Performance Analysis of Entropy Methods on K Means in Clustering Process

    NASA Astrophysics Data System (ADS)

    Dicky Syahputra Lubis, Mhd.; Mawengkang, Herman; Suwilo, Saib

    2017-12-01

    K Means is a non-hierarchical data clustering method that attempts to partition existing data into one or more clusters/groups. The method partitions the data so that data with the same characteristics are grouped into the same cluster and data with different characteristics are grouped into other clusters. The purpose of this clustering is to minimize the objective function set in the clustering process, which generally attempts to minimize variation within a cluster and maximize the variation between clusters. However, the main disadvantages of this method are that the number k is often not known beforehand and that a randomly chosen starting point may place two initial centroids close to each other. Therefore, the entropy method is used to determine the starting points for K Means; entropy is a method that can assign weights and support a decision among a set of alternatives by measuring the degree of discrimination within a set of data, with the criteria showing the highest variation in values receiving the highest weights. In this way the entropy method assists the K Means process in choosing the starting points, which are usually selected at random, so that the clustering converges with fewer iterations than the standard K Means process. Applied to the postoperative patient data set from the UCI Machine Learning Repository, using only 12 records as a worked example, the entropy-based initialization reached the desired end result in only two iterations.
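
    The sketch below shows one plausible reading of this entropy-weighted initialization, assuming scikit-learn's KMeans; the weighting scheme (entropy weight method on column-normalised values), the 12x4 random data and the choice of the top-scoring records as starting centroids are illustrative assumptions, not the paper's exact procedure.

        import numpy as np
        from sklearn.cluster import KMeans

        def entropy_weights(X):
            """Entropy weight method: criteria whose values show more variation
            (lower entropy of the column-normalised proportions) get larger weights."""
            P = X / X.sum(axis=0)                  # column-normalised proportions
            P = np.where(P > 0, P, 1e-12)          # avoid log(0)
            entropy = -(P * np.log(P)).sum(axis=0) / np.log(len(X))
            diversification = 1.0 - entropy
            return diversification / diversification.sum()

        # Illustrative non-negative data set with 12 records and 4 criteria.
        rng = np.random.default_rng(5)
        X = rng.uniform(0.1, 10.0, size=(12, 4))

        w = entropy_weights(X)
        scores = X @ w                             # entropy-weighted score per record

        # Use the top-k scored records as starting centroids instead of random picks.
        k = 2
        init_centroids = X[np.argsort(scores)[-k:]]
        labels = KMeans(n_clusters=k, init=init_centroids, n_init=1).fit_predict(X)
        print(labels)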

  11. Seasonal assessment and apportionment of surface water pollution using multivariate statistical methods: Sinos River, southern Brazil.

    PubMed

    Alves, Darlan Daniel; Riegel, Roberta Plangg; de Quevedo, Daniela Müller; Osório, Daniela Montanari Migliavacca; da Costa, Gustavo Marques; do Nascimento, Carlos Augusto; Telöken, Franko

    2018-06-08

    Assessment of surface water quality is an issue of currently high importance, especially in polluted rivers which provide water for treatment and distribution as drinking water, as is the case of the Sinos River, southern Brazil. Multivariate statistical techniques allow a better understanding of the seasonal variations in water quality, as well as the source identification and source apportionment of water pollution. In this study, the multivariate statistical techniques of cluster analysis (CA), principal component analysis (PCA), and positive matrix factorization (PMF) were used, along with the Kruskal-Wallis test and Spearman's correlation analysis, in order to interpret a water quality data set resulting from a monitoring program conducted over a period of almost two years (May 2013 to April 2015). The water samples were collected from the raw water inlet of the municipal water treatment plant (WTP) operated by the Water and Sewage Services of Novo Hamburgo (COMUSA). CA allowed the data to be grouped into three periods (autumn and summer (AUT-SUM); winter (WIN); spring (SPR)). Through the PCA, it was possible to identify that the parameters contributing most to water quality variations are total coliforms (TCOLI) in SUM-AUT; water level (WL), water temperature (WT), and electrical conductivity (EC) in WIN; and color (COLOR) and turbidity (TURB) in SPR. PMF was applied to the complete data set and enabled the source apportionment of water pollution through three factors, which are related to anthropogenic sources such as the discharge of domestic sewage (mostly represented by Escherichia coli (ECOLI)), industrial wastewaters, and agriculture runoff. The results provided by this study demonstrate the contribution of integrated statistical techniques to the interpretation and understanding of large water quality data sets, showing also that this approach can be used as an efficient methodology to optimize indicators for water quality assessment.
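
    A minimal sketch of the cluster-analysis step on standardised monitoring data, assuming SciPy's hierarchical clustering with Ward linkage; the monthly matrix, the parameter set and the three-group cut are illustrative stand-ins for the study's data.

        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster
        from scipy.stats import zscore

        # Illustrative monthly water-quality matrix: rows are sampling months,
        # columns are parameters (e.g. TCOLI, WL, WT, EC, COLOR, TURB).
        rng = np.random.default_rng(6)
        monthly = rng.normal(size=(24, 6))

        # Cluster analysis (CA) on standardised data, cut into three groups,
        # analogous to the AUT-SUM / WIN / SPR temporal grouping above.
        Z = linkage(zscore(monthly, axis=0), method='ward')
        season_groups = fcluster(Z, t=3, criterion='maxclust')
        print(season_groups)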

  12. Exploring gravitational lensing model variations in the Frontier Fields galaxy clusters

    NASA Astrophysics Data System (ADS)

    Harris James, Nicholas John; Raney, Catie; Brennan, Sean; Keeton, Charles

    2018-01-01

    Multiple groups have been working on modeling the mass distributions of the six lensing galaxy clusters in the Hubble Space Telescope Frontier Fields data set. The magnification maps produced from these mass models will be important for the future study of the lensed background galaxies, but there exists significant variation in the different groups' models and magnification maps. We explore the use of two-dimensional histograms as a tool for visualizing these magnification map variations. Using a number of simple, one- or two-halo singular isothermal sphere models, we explore the features that are produced in 2D histogram model comparisons when parameters such as halo mass, ellipticity, and location are allowed to vary. Our analysis demonstrates the potential of 2D histograms as a means of observing the full range of differences between the Frontier Fields groups' models. This work has been supported by funding from National Science Foundation grants PHY-1560077 and AST-1211385, and from the Space Telescope Science Institute.
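
    A minimal sketch of the 2D-histogram comparison, assuming two magnification maps on a shared pixel grid; the lognormal stand-in maps and the binning are illustrative.

        import numpy as np

        # Two illustrative magnification maps on the same grid (stand-ins for
        # maps produced by two different lens-modelling groups).
        rng = np.random.default_rng(7)
        base = rng.lognormal(mean=1.0, sigma=0.5, size=(500, 500))
        model_a = base
        model_b = base * rng.lognormal(mean=0.0, sigma=0.1, size=base.shape)

        # 2D histogram of pixel-wise magnifications: counts far from the diagonal
        # reveal where, and by how much, the two models disagree.
        bins = np.logspace(-0.5, 2.5, 100)
        hist2d, xedges, yedges = np.histogram2d(model_a.ravel(), model_b.ravel(),
                                                bins=[bins, bins])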

  13. Seasonal variation of upper mesospheric temperatures from the OH and O2 nightglow over King Sejong Station, Antarctica

    NASA Astrophysics Data System (ADS)

    Kim, J.-H.; Kim, Y. H.; Moon, B.-K.; Chung, J.-K.; Won, Y.-I.

    A spectral airglow temperature imager (SATI) was operated at King Sejong Station (62°22′S, 301°2′E), a Korean Antarctic research station, during the period 2002-2005. Rotational temperatures from the OH(6-2) and O2(0-1) band airglow were obtained for more than 600 nights during the 4-year operation. Both the OH and O2 temperatures show similar seasonal variations, which change significantly year by year. A maximum temperature occurred in early May in 2003 and 2004, whereas two maxima appeared in April and August in 2002. The 2005 data show only a broad and weak maximum during the months of April and May. The data also show oscillations with periods of hours that seem to be related to tides and gravity waves, and fluctuations with timescales of days that could be due to planetary waves. Detailed analysis will be performed on the data set to identify major atmospheric oscillations or variations over hours, days and seasons.

  14. Characterization of Dermanyssus gallinae (Acarina: Dermanissydae) by sequence analysis of the ribosomal internal transcribed spacer regions.

    PubMed

    Potenza, L; Cafiero, M A; Camarda, A; La Salandra, G; Cucchiarini, L; Dachà, M

    2009-10-01

    In the present work, mites previously identified as Dermanyssus gallinae De Geer (Acari, Mesostigmata) using morphological keys were investigated by molecular tools. The complete internal transcribed spacer 1 (ITS1), 5.8S ribosomal DNA, and ITS2 region of the ribosomal DNA from the mites were amplified and sequenced to examine the level of sequence variation and to explore the feasibility of using this region in the identification of this mite. Conserved primers located at the 3' end of the 18S and at the 5' start of the 28S rRNA genes were used first, and the amplified fragments were sequenced. Sequence analyses showed no variation in the 5.8S and ITS2 regions, while slight intraspecific variations, involving substitutions as well as deletions, were concentrated in the ITS1 region. Based on the sequence analyses, a nested PCR of the ITS2 region followed by RFLP analysis has been set up in an attempt to provide a rapid molecular diagnostic tool for D. gallinae.

  15. Plasma protein absolute quantification by nano-LC Q-TOF UDMSE for clinical biomarker verification

    PubMed Central

    ILIES, MARIA; IUGA, CRISTINA ADELA; LOGHIN, FELICIA; DHOPLE, VISHNU MUKUND; HAMMER, ELKE

    2017-01-01

    Background and aims Proteome-based biomarker studies target proteins that could serve as diagnostic, prognostic, and predictive molecules. In the clinical routine, immunoassays are currently used for the absolute quantification of such biomarkers, with the major limitation that only one molecule can be targeted per assay. The aim of our study was to test a mass spectrometry based absolute quantification method for the verification of plasma protein sets which might serve as reliable biomarker panels for clinical practice. Methods Six EDTA plasma samples were analyzed after tryptic digestion using a high throughput data independent acquisition nano-LC Q-TOF UDMSE proteomics approach. Synthetic Escherichia coli standard peptides were spiked into each sample for the absolute quantification. Data analysis was performed using ProgenesisQI v2.0 software (Waters Corporation). Results Our method ensured absolute quantification of 242 non-redundant plasma proteins in a single-run analysis. The dynamic range covered was 10^5. 86% of the proteins were classical plasma proteins. The overall median coefficient of variation was 0.36, while a set of 63 proteins was found to be highly stable. Absolute protein concentrations correlated strongly with values reviewed in the literature. Conclusions Nano-LC Q-TOF UDMSE proteomic analysis can be used for a simple and rapid determination of absolute amounts of plasma proteins. A large number of plasma proteins could be analyzed, while a wide dynamic range was covered with a low coefficient of variation at the protein level. The method proved to be a reliable tool for the quantification of protein panels for biomarker verification in clinical practice. PMID:29151793
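
    A minimal sketch of the per-protein coefficient-of-variation summary reported above; the 6x242 concentration matrix and the stability cut-off are illustrative assumptions.

        import numpy as np

        # Illustrative absolute concentrations: rows are the six plasma samples,
        # columns are quantified proteins.
        rng = np.random.default_rng(8)
        conc = rng.lognormal(mean=2.0, sigma=1.0, size=(6, 242))

        # Coefficient of variation per protein across samples, and the median CV
        # used as the overall reproducibility figure.
        cv = conc.std(axis=0, ddof=1) / conc.mean(axis=0)
        print(f"median CV: {np.median(cv):.2f}")

        # Count proteins below a hypothetical stability cut-off.
        highly_stable = int(np.sum(cv < 0.2))
        print(f"highly stable proteins: {highly_stable}")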

  16. Linked independent component analysis for multimodal data fusion.

    PubMed

    Groves, Adrian R; Beckmann, Christian F; Smith, Steve M; Woolrich, Mark W

    2011-02-01

    In recent years, neuroimaging studies have increasingly been acquiring multiple modalities of data and searching for task- or disease-related changes in each modality separately. A major challenge in analysis is to find systematic approaches for fusing these differing data types together to automatically find patterns of related changes across multiple modalities, when they exist. Independent Component Analysis (ICA) is a popular unsupervised learning method that can be used to find the modes of variation in neuroimaging data across a group of subjects. When multimodal data is acquired for the subjects, ICA is typically performed separately on each modality, leading to incompatible decompositions across modalities. Using a modular Bayesian framework, we develop a novel "Linked ICA" model for simultaneously modelling and discovering common features across multiple modalities, which can potentially have completely different units, signal- and contrast-to-noise ratios, voxel counts, spatial smoothnesses and intensity distributions. Furthermore, this general model can be configured to allow tensor ICA or spatially-concatenated ICA decompositions, or a combination of both at the same time. Linked ICA automatically determines the optimal weighting of each modality, and also can detect single-modality structured components when present. This is a fully probabilistic approach, implemented using Variational Bayes. We evaluate the method on simulated multimodal data sets, as well as on a real data set of Alzheimer's patients and age-matched controls that combines two very different types of structural MRI data: morphological data (grey matter density) and diffusion data (fractional anisotropy, mean diffusivity, and tensor mode). Copyright © 2010 Elsevier Inc. All rights reserved.

  17. Ozone reference models for the middle atmosphere (new CIRA)

    NASA Technical Reports Server (NTRS)

    Keating, G. M.; Pitts, M. C.; Young, D. F.

    1989-01-01

    Models of ozone vertical structure were generated based on multiple data sets from satellites. The very good absolute accuracy of the individual data sets allowed the data to be directly combined to generate these models. The data used for generation of these models are from some of the most recent satellite measurements over the period 1978 to 1983. A discussion is provided of validation and error analyses of these data sets. Also, inconsistencies in data sets brought about by temporal variations or other factors are indicated. The models cover the pressure range from 20 to 0.003 mb (25 to 90 km). The models for pressures less than 0.5 mb represent only the day side and are only provisional since there was limited longitudinal coverage at these levels. The models start near 25 km in accord with previous COSPAR International Reference Atmosphere (CIRA) models. Models are also provided of ozone mixing ratio as a function of height. The monthly standard deviation and interannual variations relative to zonal means are also provided. In addition to the models of monthly latitudinal variations in vertical structure based on satellite measurements, monthly models of total column ozone and its characteristic variability as a function of latitude based on four years of Nimbus 7 measurements, models of the relationship between vertical structure and total column ozone, and a midlatitude annual mean model are incorporated in this set of ozone reference atmospheres. Various systematic variations are discussed, including the annual, semiannual, and quasi-biennial oscillations; diurnal and longitudinal variations; and the response to solar activity.

  18. The time variability of Jupiter's synchrotron radiation

    NASA Astrophysics Data System (ADS)

    Bolton, Scott Jay

    1991-02-01

    The time variability of the Jovian synchrotron emission is investigated by analyzing radio observations of Jupiter at decimetric wavelengths. The observations are composed of two distinct sets of measurements addressing both short term (days to weeks) and long term (months to years) variability. The study of long term variations utilizes a set of measurements made several times each month with the NASA Deep Space Network (DSN) antennas operating at 2295 MHz (13.1 cm). The DSN data set, covering 1971 through 1985, is compared with a set of measurements of the solar wind from a number of Earth orbiting spacecraft. The analysis indicates a maximum correlation between the synchrotron emission and the solar wind ram pressure with a two year time lag. Physical mechanisms affecting the synchrotron emission are discussed with an emphasis on radial diffusion. Calculations are performed that suggest the correlation is consistent with inward adiabatic diffusion of solar wind particles driven by Brice's model of ionospheric neutral wind convection (Brice 1972). The implication is that the solar wind could be a source of particles for Jupiter's radiation belts. The investigation of short term variability focuses on a three year Jupiter observing program using the University of California's Hat Creek radio telescope operating at 1400 MHz (21 cm). Measurements were made every two days during the months surrounding opposition. Results from the three year program suggest short term variability near the 10-20 percent level but should be considered inconclusive due to scheduling and observational limitations. A discussion of magnetospheric processes on short term timescales identifies wave-particle interactions as a candidate source. Further analysis finds that the short term variations could be related to whistler mode wave-particle interactions in the radiation belts associated with atmospheric lightning on Jupiter. However, theoretical calculations on wave-particle interactions imply that further thought is required as to whether whistler mode waves can interact with the synchrotron-emitting electrons.

  19. Dissociable contribution of prefrontal and striatal dopaminergic genes to learning in economic games

    PubMed Central

    Set, Eric; Saez, Ignacio; Zhu, Lusha; Houser, Daniel E.; Myung, Noah; Zhong, Songfa; Ebstein, Richard P.; Chew, Soo Hong; Hsu, Ming

    2014-01-01

    Game theory describes strategic interactions where success of players’ actions depends on those of coplayers. In humans, substantial progress has been made at the neural level in characterizing the dopaminergic and frontostriatal mechanisms mediating such behavior. Here we combined computational modeling of strategic learning with a pathway approach to characterize association of strategic behavior with variations in the dopamine pathway. Specifically, using gene-set analysis, we systematically examined contribution of different dopamine genes to variation in a multistrategy competitive game captured by (i) the degree players anticipate and respond to actions of others (belief learning) and (ii) the speed with which such adaptations take place (learning rate). We found that variation in genes that primarily regulate prefrontal dopamine clearance—catechol-O-methyl transferase (COMT) and two isoforms of monoamine oxidase—modulated degree of belief learning across individuals. In contrast, we did not find significant association for other genes in the dopamine pathway. Furthermore, variation in genes that primarily regulate striatal dopamine function—dopamine transporter and D2 receptors—was significantly associated with the learning rate. We found that this was also the case with COMT, but not for other dopaminergic genes. Together, these findings highlight dissociable roles of frontostriatal systems in strategic learning and support the notion that genetic variation, organized along specific pathways, forms an important source of variation in complex phenotypes such as strategic behavior. PMID:24979760

  20. Dissociable contribution of prefrontal and striatal dopaminergic genes to learning in economic games.

    PubMed

    Set, Eric; Saez, Ignacio; Zhu, Lusha; Houser, Daniel E; Myung, Noah; Zhong, Songfa; Ebstein, Richard P; Chew, Soo Hong; Hsu, Ming

    2014-07-01

    Game theory describes strategic interactions where success of players' actions depends on those of coplayers. In humans, substantial progress has been made at the neural level in characterizing the dopaminergic and frontostriatal mechanisms mediating such behavior. Here we combined computational modeling of strategic learning with a pathway approach to characterize association of strategic behavior with variations in the dopamine pathway. Specifically, using gene-set analysis, we systematically examined contribution of different dopamine genes to variation in a multistrategy competitive game captured by (i) the degree players anticipate and respond to actions of others (belief learning) and (ii) the speed with which such adaptations take place (learning rate). We found that variation in genes that primarily regulate prefrontal dopamine clearance--catechol-O-methyl transferase (COMT) and two isoforms of monoamine oxidase--modulated degree of belief learning across individuals. In contrast, we did not find significant association for other genes in the dopamine pathway. Furthermore, variation in genes that primarily regulate striatal dopamine function--dopamine transporter and D2 receptors--was significantly associated with the learning rate. We found that this was also the case with COMT, but not for other dopaminergic genes. Together, these findings highlight dissociable roles of frontostriatal systems in strategic learning and support the notion that genetic variation, organized along specific pathways, forms an important source of variation in complex phenotypes such as strategic behavior.

  1. A Perfect Match Genomic Landscape Provides a Unified Framework for the Precise Detection of Variation in Natural and Synthetic Haploid Genomes.

    PubMed

    Palacios-Flores, Kim; García-Sotelo, Jair; Castillo, Alejandra; Uribe, Carina; Aguilar, Luis; Morales, Lucía; Gómez-Romero, Laura; Reyes, José; Garciarubio, Alejandro; Boege, Margareta; Dávila, Guillermo

    2018-04-01

    We present a conceptually simple, sensitive, precise, and essentially nonstatistical solution for the analysis of genome variation in haploid organisms. The generation of a Perfect Match Genomic Landscape (PMGL), which computes intergenome identity with single nucleotide resolution, reveals signatures of variation wherever a query genome differs from a reference genome. Such signatures encode the precise location of different types of variants, including single nucleotide variants, deletions, insertions, and amplifications, effectively introducing the concept of a general signature of variation. The precise nature of variants is then resolved through the generation of targeted alignments between specific sets of sequence reads and known regions of the reference genome. Thus, the perfect match logic decouples the identification of the location of variants from the characterization of their nature, providing a unified framework for the detection of genome variation. We assessed the performance of the PMGL strategy via simulation experiments. We determined the variation profiles of natural genomes and of a synthetic chromosome, both in the context of haploid yeast strains. Our approach uncovered variants that have previously escaped detection. Moreover, our strategy is ideally suited for further refining high-quality reference genomes. The source codes for the automated PMGL pipeline have been deposited in a public repository. Copyright © 2018 by the Genetics Society of America.
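
    A toy sketch of the perfect-match idea: computing, at each reference position, how many read k-mers match the reference exactly, so that dips in the resulting landscape flag candidate signatures of variation. The k-mer indexing scheme, the tiny reference and the error-free reads are illustrative simplifications, not the published PMGL pipeline.

        from collections import defaultdict

        def perfect_match_landscape(reference, reads, k):
            """Count, for every reference position, the read k-mers that match
            the reference perfectly at that position."""
            kmer_counts = defaultdict(int)
            for read in reads:
                for i in range(len(read) - k + 1):
                    kmer_counts[read[i:i + k]] += 1

            landscape = []
            for i in range(len(reference) - k + 1):
                landscape.append(kmer_counts.get(reference[i:i + k], 0))
            return landscape

        reference = "ACGTACGTTAGCACGTACGT"
        reads = ["ACGTACGT", "GTTAGCAC", "TAGCACGT"]   # illustrative error-free reads
        print(perfect_match_landscape(reference, reads, k=8))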

  2. WINDII atmospheric wave airglow imaging

    NASA Technical Reports Server (NTRS)

    Armstrong, W. T.; Hoppe, U.-P.; Solheim, B. H.; Shepherd, G. G.

    1996-01-01

    Preliminary WINDII nighttime airglow wave-imaging data in the UARS rolldown attitude have been analyzed with the goal of surveying gravity waves near the upper boundary of the middle atmosphere. Wave analysis is performed on O2(0,0) emissions from a selected 1° x 1° oblique view of the airglow layer at approximately 95 km altitude, which has no direct earth background and only an atmospheric background that is optically thick for the O2(0,0) emission. From a small data set, orbital imaging of atmospheric wave structures is demonstrated, with indication of large variations in wave activity across land and sea. Comparison ground-based imagery is discussed with respect to similarity of wave variations across land/sea boundaries and future orbital mosaic image construction.

  3. Translation of Nutritional Genomics into Nutrition Practice: The Next Step.

    PubMed

    Murgia, Chiara; Adamski, Melissa M

    2017-04-06

    Genetics is an important piece of every individual's health puzzle. The completion of the Human Genome Project sequence has deeply changed the research of life sciences including nutrition. The analysis of the genome is already part of clinical care in oncology, pharmacology, infectious disease, and rare and undiagnosed diseases. The implications of genetic variations in shaping individual nutritional requirements have been recognised and conclusively proven, yet routine use of genetic information in nutrition and dietetics practice is still far from being implemented. This article sets out the path that needs to be taken to build a framework to translate gene-nutrient interaction studies into best-practice guidelines, providing tools that health professionals can use to understand whether genetic variation affects nutritional requirements in their daily clinical practice.

  4. Translation of Nutritional Genomics into Nutrition Practice: The Next Step

    PubMed Central

    Murgia, Chiara; Adamski, Melissa M.

    2017-01-01

    Genetics is an important piece of every individual's health puzzle. The completion of the Human Genome Project sequence has deeply changed the research of life sciences including nutrition. The analysis of the genome is already part of clinical care in oncology, pharmacology, infectious disease, and rare and undiagnosed diseases. The implications of genetic variations in shaping individual nutritional requirements have been recognised and conclusively proven, yet routine use of genetic information in nutrition and dietetics practice is still far from being implemented. This article sets out the path that needs to be taken to build a framework to translate gene–nutrient interaction studies into best-practice guidelines, providing tools that health professionals can use to understand whether genetic variation affects nutritional requirements in their daily clinical practice. PMID:28383492

  5. Spatiotemporal analysis of Quaternary normal faults in the Northern Rocky Mountains, USA

    NASA Astrophysics Data System (ADS)

    Davarpanah, A.; Babaie, H. A.; Reed, P.

    2010-12-01

    The mid-Tertiary Basin-and-Range extensional tectonic event developed most of the normal faults that bound the ranges in the northern Rocky Mountains within Montana, Wyoming, and Idaho. The interaction of the thermally induced stress field of the Yellowstone hot spot with the existing Basin-and-Range fault blocks, during the last 15 my, has produced a new, spatially and temporally variable system of normal faults in these areas. The orientation and spatial distribution of the traces of these hot-spot-induced normal faults, relative to earlier Basin-and-Range faults, have significant implications for the effect of the temporally varying and spatially propagating thermal dome on the growth of new hot-spot-related normal faults and the reactivation of existing Basin-and-Range faults. Digitally enhanced Landsat 7 Enhanced Thematic Mapper Plus (ETM+) and Landsat 4 and 5 Thematic Mapper (TM) bands, with a spatial resolution of 30 m, combined with analytical GIS and geological techniques, helped in determining and analyzing the lineaments and traces of the Quaternary, thermally-induced normal faults in the study area. Applying the color composite (CC) image enhancement technique, the combination of bands 3, 2 and 1 of the ETM+ and TM images was chosen as the best statistical choice to create a color composite for lineament identification. The spatiotemporal analysis of the Quaternary normal faults produces significant information on the structural style, timing, spatial variation, spatial density, and frequency of the faults. The seismic Quaternary normal faults in the whole study area are divided, based on their age, into four specific sets, which from oldest to youngest include: Quaternary (>1.6 Ma), middle and late Quaternary (>750 ka), latest Quaternary (>15 ka), and the last 150 years. A density map for the Quaternary faults reveals that the most active faults are near the current Yellowstone National Park (YNP) area, where most of the seismically active faults of the past 1.6 my are located. The GIS-based autocorrelation method, applied to the trace orientation, length, frequency, and spatial distribution of each age-defined fault set, revealed spatial homogeneity within each specific set. The results of the Moran's I and Geary's C methods show no spatial autocorrelation between the trend of the fault traces and their location. Our results suggest that while lineaments of similar age define a clustered pattern in each domain, the overall distribution pattern of lineaments with different ages appears to be non-uniform (random). The directional distribution analysis reveals a distinct range of variation for fault traces of different ages (i.e., some displaying elliptical behavior). Among the Quaternary normal fault sets, the youngest lineament set (i.e., the last 150 years) defines the greatest ellipticity (eccentricity) and the least variation in lineament distribution. The frequency rose diagram for the entire set of Quaternary normal faults shows four major modes (around 360°, 330°, 300°, and 270°) and two minor modes (around 235° and 205°).
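
    A minimal sketch of a global Moran's I computation of the kind referenced above, using inverse-distance spatial weights over fault-trace locations; the random coordinates and azimuths (which should yield an I near its null expectation of -1/(n-1)) are illustrative.

        import numpy as np

        def morans_i(values, weights):
            """Global Moran's I for observations (e.g. fault-trace azimuths)
            given a spatial weights matrix with a zero diagonal."""
            x = values - values.mean()
            num = (weights * np.outer(x, x)).sum()
            den = (x ** 2).sum()
            return (len(values) / weights.sum()) * (num / den)

        # Illustrative fault traces: random locations and azimuths.
        rng = np.random.default_rng(9)
        n = 200
        coords = rng.uniform(0, 100, size=(n, 2))
        azimuths = rng.uniform(200, 360, size=n)

        # Inverse-distance spatial weights, zero on the diagonal.
        dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
        weights = np.zeros_like(dist)
        nonzero = dist > 0
        weights[nonzero] = 1.0 / dist[nonzero]

        print(f"Moran's I = {morans_i(azimuths, weights):.4f}")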

  6. Epigenetic Variability in the Genetically Uniform Forest Tree Species Pinus pinea L

    PubMed Central

    Sáez-Laguna, Enrique; Guevara, María-Ángeles; Díaz, Luis-Manuel; Sánchez-Gómez, David; Collada, Carmen; Aranda, Ismael; Cervera, María-Teresa

    2014-01-01

    There is an increasing interest in understanding the role of epigenetic variability in forest species and how it may contribute to their rapid adaptation to changing environments. In this study we have conducted a genome-wide analysis of the cytosine methylation pattern in Pinus pinea, a species characterized by very low levels of genetic variation and a remarkable degree of phenotypic plasticity. DNA methylation profiles of different vegetatively propagated trees from representative natural Spanish populations of P. pinea were analyzed with the Methylation Sensitive Amplified Polymorphism (MSAP) technique. A high degree of cytosine methylation was detected (64.36% of all scored DNA fragments). Furthermore, high levels of epigenetic variation were observed among the studied individuals. This high epigenetic variation found in P. pinea contrasted with the lack of genetic variation based on Amplified Fragment Length Polymorphism (AFLP) data. In this manner, variable epigenetic markers clearly discriminate individuals and differentiate two well-represented populations, while the AFLP markers, given the lack of genetic variation they reveal, fail to differentiate at either the individual or the population level. In addition, the use of different replicated trees allowed the identification of common polymorphic methylation-sensitive MSAP markers among replicates of a given propagated tree. This set of MSAP markers allowed discrimination of 70% of the analyzed trees. PMID:25084460

  7. Epigenetic variability in the genetically uniform forest tree species Pinus pinea L.

    PubMed

    Sáez-Laguna, Enrique; Guevara, María-Ángeles; Díaz, Luis-Manuel; Sánchez-Gómez, David; Collada, Carmen; Aranda, Ismael; Cervera, María-Teresa

    2014-01-01

    There is an increasing interest in understanding the role of epigenetic variability in forest species and how it may contribute to their rapid adaptation to changing environments. In this study we have conducted a genome-wide analysis of the cytosine methylation pattern in Pinus pinea, a species characterized by very low levels of genetic variation and a remarkable degree of phenotypic plasticity. DNA methylation profiles of different vegetatively propagated trees from representative natural Spanish populations of P. pinea were analyzed with the Methylation Sensitive Amplified Polymorphism (MSAP) technique. A high degree of cytosine methylation was detected (64.36% of all scored DNA fragments). Furthermore, high levels of epigenetic variation were observed among the studied individuals. This high epigenetic variation found in P. pinea contrasted with the lack of genetic variation based on Amplified Fragment Length Polymorphism (AFLP) data. In this manner, variable epigenetic markers clearly discriminate individuals and differentiate two well-represented populations, while the AFLP markers, given the lack of genetic variation they reveal, fail to differentiate at either the individual or the population level. In addition, the use of different replicated trees allowed the identification of common polymorphic methylation-sensitive MSAP markers among replicates of a given propagated tree. This set of MSAP markers allowed discrimination of 70% of the analyzed trees.

  8. The NANOGrav Nine-Year Data Set: Measurement and Analysis of Variations in Dispersion Measures

    NASA Technical Reports Server (NTRS)

    Jones, M. L.; McLaughlin, M. A.; Lam, M. T.; Cordes, J. M.; Levin, L.; Chatterjee, S.; Arzoumanian, Z.; Crowter, K.; Demorest, P. B.; Dolch, T.

    2017-01-01

    We analyze dispersion measure (DM) variations of 37 millisecond pulsars in the nine-year North American Nanohertz Observatory for Gravitational Waves (NANOGrav) data release and constrain the sources of these variations. DM variations can result from a changing distance between Earth and the pulsar, inhomogeneities in the interstellar medium, and solar effects. Variations are significant for nearly all pulsars, with characteristic timescales comparable to or even shorter than the average spacing between observations. Five pulsars have periodic annual variations, 14 pulsars have monotonically increasing or decreasing trends, and 14 pulsars show both effects. Of the four pulsars with linear trends that have line-of-sight velocity measurements, three are consistent with a changing distance and require an overdensity of free electrons local to the pulsar. Several pulsars show correlations between DM excesses and lines of sight that pass close to the Sun. Mapping of the DM variations as a function of the pulsar trajectory can identify localized interstellar medium features and, in one case, yields an upper limit of 4 au on the size of the dispersing region. Four pulsars show roughly Kolmogorov structure functions (SFs), and another four show SFs less steep than Kolmogorov. One pulsar has too large an uncertainty to allow comparisons. We discuss explanations for apparent departures from a Kolmogorov-like spectrum, and we show that the presence of other trends and localized features or gradients in the interstellar medium is the most likely cause.
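
    A structure function of the kind discussed above can be computed directly from an irregularly sampled DM time series. The sketch below bins squared DM differences by time lag and fits a logarithmic slope for comparison with the Kolmogorov expectation of 5/3; the sampling pattern and DM values are synthetic stand-ins, not NANOGrav data.

        import numpy as np

        def dm_structure_function(t, dm, n_bins=12):
            """Binned second-order structure function D(tau) = <[DM(t+tau) - DM(t)]^2>
            for irregularly sampled DM measurements (t in days, DM in pc cm^-3)."""
            t, dm = np.asarray(t, float), np.asarray(dm, float)
            i, j = np.triu_indices(len(t), k=1)
            lags = np.abs(t[j] - t[i])
            diffs2 = (dm[j] - dm[i]) ** 2
            edges = np.logspace(np.log10(lags.min()), np.log10(lags.max()), n_bins + 1)
            idx = np.clip(np.digitize(lags, edges) - 1, 0, n_bins - 1)
            tau = np.array([lags[idx == k].mean() for k in range(n_bins) if np.any(idx == k)])
            sf = np.array([diffs2[idx == k].mean() for k in range(n_bins) if np.any(idx == k)])
            return tau, sf

        # Synthetic roughly monthly sampling over ~9 years: a weak linear trend plus noise
        rng = np.random.default_rng(2)
        t = np.sort(rng.uniform(0, 9 * 365.25, 120))
        dm = 1e-3 * (t / 365.25) + rng.normal(0, 5e-4, t.size)   # hypothetical DM series
        tau, sf = dm_structure_function(t, dm)

        # Fit a power law; a Kolmogorov medium predicts a logarithmic slope near 5/3
        slope = np.polyfit(np.log10(tau), np.log10(sf), 1)[0]
        print("structure-function slope:", round(slope, 2), "(Kolmogorov: 5/3 ≈ 1.67)")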

  9. Variation in assessment and standard setting practices across UK undergraduate medicine and the need for a benchmark

    PubMed Central

    2015-01-01

    Objectives The principal aim of this study is to provide an account of variation in UK undergraduate medical assessment styles and corresponding standard setting approaches with a view to highlighting the importance of a UK national licensing exam in recognizing a common standard. Methods Using a secure online survey system, response data were collected during the period 13 - 30 January 2014 from selected specialists in medical education assessment, who served as representatives for their respective medical schools. Results Assessment styles and corresponding choices of standard setting methods vary markedly across UK medical schools. While there is considerable consensus on the application of compensatory approaches, individual schools display their own nuances through use of hybrid assessment and standard setting styles, uptake of less popular standard setting techniques and divided views on norm referencing. Conclusions The extent of variation in assessment and standard setting practices across UK medical schools validates the concern that there is a lack of evidence that UK medical students achieve a common standard on graduation. A national licensing exam is therefore a viable option for benchmarking the performance of all UK undergraduate medical students. PMID:26520472

  10. Bottomside sinusoidal irregularities in the equatorial F region. II - Cross-correlation and spectral analysis

    NASA Technical Reports Server (NTRS)

    Cragin, B. L.; Hanson, W. B.; Mcclure, J. P.; Valladares, C. E.

    1985-01-01

    Equatorial bottomside sinusoidal (BSS) irregularities have been studied by applying techniques of cross-correlation and spectral analysis to the Atmosphere Explorer data set. The phase of the cross-correlations of the plasma number density is discussed and the two drift velocity components observed using the retarding potential analyzer and ion drift meter on the satellite are discussed. Morphology is addressed, presenting the geographical distributions of the occurrence of BSS events for the equinoxes and solstices. Physical processes including the ion Larmor flux, interhemispheric plasma flows, and variations in the lower F region Pedersen conductivity are invoked to explain the findings.
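
    The core of the cross-correlation analysis described above is a normalized lagged correlation between the plasma number density and a drift-velocity component, from which a phase can be read off. The sketch below illustrates that step on synthetic, evenly sampled series; the sample spacing, wave frequency, and noise level are arbitrary choices, not Atmosphere Explorer values.

        import numpy as np

        def normalized_xcorr(a, b):
            """Normalized cross-correlation of two equally sampled series;
            returns lags (in samples) and correlation coefficients."""
            a = (a - a.mean()) / a.std()
            b = (b - b.mean()) / b.std()
            n = len(a)
            c = np.correlate(a, b, mode="full") / n
            lags = np.arange(-n + 1, n)
            return lags, c

        # Synthetic example: a sinusoidal density fluctuation and a drift component
        # a quarter wavelength (90 degrees) out of phase, plus noise.
        n, dt, f0 = 512, 0.25, 0.05                  # hypothetical sampling (s) and wave frequency (Hz)
        rng = np.random.default_rng(3)
        t = np.arange(n) * dt
        ni = np.sin(2 * np.pi * f0 * t) + 0.2 * rng.normal(size=n)
        vz = np.sin(2 * np.pi * f0 * t - np.pi / 2) + 0.2 * rng.normal(size=n)

        lags, c = normalized_xcorr(ni, vz)
        peak = lags[np.argmax(np.abs(c))]
        phase_deg = 360 * f0 * peak * dt             # lag converted to phase of the f0 wave
        print("peak-correlation lag:", peak * dt, "s  -> phase ≈", round(phase_deg), "deg")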

  11. Reliability of laser Doppler flowmetry curve reading for measurement of toe and ankle pressures: intra- and inter-observer variation.

    PubMed

    Høyer, C; Paludan, J P D; Pavar, S; Biurrun Manresa, J A; Petersen, L J

    2014-03-01

    To assess the intra- and inter-observer variation in laser Doppler flowmetry curve reading for measurement of toe and ankle pressures. A prospective single-blinded diagnostic accuracy study was conducted on 200 patients with known or suspected peripheral arterial disease (PAD), with a total of 760 curve sets produced. The first curve reading for this study was performed by laboratory technologists blinded to clinical clues and previous readings at least 3 months after the primary data sampling. The pressure curves were later reassessed following another period of at least 3 months. Observer agreement in diagnostic classification according to TASC-II criteria was quantified using Cohen's kappa. Reliability was quantified using intra-class correlation coefficients, coefficients of variance, and Bland-Altman analysis. The overall agreement in diagnostic classification (PAD/not PAD) was 173/200 (87%) for intra-observer (κ = .858) and 175/200 (88%) for inter-observer data (κ = .787). Reliability analysis confirmed excellent correlation for both intra- and inter-observer data (ICC all ≥.931). The coefficients of variance ranged from 2.27% to 6.44% for intra-observer and from 2.39% to 8.42% for inter-observer data. Subgroup analysis showed lower observer variation for the reading of toe pressures in patients with diabetes and/or chronic kidney disease than in patients not diagnosed with these conditions. Bland-Altman plots showed higher variation in toe pressure readings than in ankle pressure readings. This study shows substantial intra- and inter-observer agreement in diagnostic classification and in the reading of absolute pressures when using laboratory technologists as observers. The study emphasises that observer variation in curve reading is an important factor in the overall reproducibility of the method. Our data suggest that diabetes and chronic kidney disease have an influence on toe pressure reproducibility. Copyright © 2013 European Society for Vascular Surgery. Published by Elsevier Ltd. All rights reserved.
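
    The agreement statistics quoted above (Cohen's kappa for the diagnostic classification, bias and limits of agreement for the pressures) can be reproduced in a few lines. The sketch below applies them to hypothetical paired toe-pressure readings; the 60 mmHg dichotomisation threshold is purely illustrative and is not the study's TASC-II rule.

        import numpy as np

        def cohens_kappa(r1, r2):
            """Cohen's kappa for two sets of categorical calls (e.g., PAD / not PAD)."""
            r1, r2 = np.asarray(r1), np.asarray(r2)
            cats = np.union1d(r1, r2)
            p_o = np.mean(r1 == r2)
            p_e = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in cats)
            return (p_o - p_e) / (1 - p_e)

        def bland_altman(x1, x2):
            """Bias and 95% limits of agreement for paired pressure readings (mmHg)."""
            d = np.asarray(x1, float) - np.asarray(x2, float)
            bias, sd = d.mean(), d.std(ddof=1)
            return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

        # Hypothetical toe-pressure readings of the same curves on two occasions
        rng = np.random.default_rng(5)
        true_tp = rng.uniform(30, 120, 200)
        read1 = true_tp + rng.normal(0, 4, 200)
        read2 = true_tp + rng.normal(0, 4, 200)

        # Dichotomised at a purely illustrative 60 mmHg cutoff (not the study's criterion)
        print("kappa:", round(cohens_kappa(read1 < 60, read2 < 60), 3))
        print("bias and limits of agreement:", bland_altman(read1, read2))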

  12. Applying risk adjusted cost-effectiveness (RAC-E) analysis to hospitals: estimating the costs and consequences of variation in clinical practice.

    PubMed

    Karnon, Jonathan; Caffrey, Orla; Pham, Clarabelle; Grieve, Richard; Ben-Tovim, David; Hakendorf, Paul; Crotty, Maria

    2013-06-01

    Cost-effectiveness analysis is well established for pharmaceuticals and medical technologies but not for evaluating variations in clinical practice. This paper describes a novel methodology, risk adjusted cost-effectiveness (RAC-E), that facilitates the comparative evaluation of applied clinical practice processes. In this application, risk adjustment is undertaken with a multivariate matching algorithm that balances the baseline characteristics of patients attending different settings (e.g., hospitals). Linked, routinely collected data are used to analyse patient-level costs and outcomes over a 2-year period, as well as to extrapolate costs and survival over patient lifetimes. The study reports the relative cost-effectiveness of alternative forms of clinical practice, including a full representation of the statistical uncertainty around the mean estimates. The methodology is illustrated by a case study that evaluates the relative cost-effectiveness of services for patients presenting with acute chest pain across the four main public hospitals in South Australia. The evaluation finds that services provided at two hospitals were dominated and that, of the remaining services, the more effective hospital gained life years at a low mean additional cost and had an 80% probability of being the most cost-effective hospital at realistic cost-effectiveness thresholds. Potential determinants of the estimated variation in costs and effects were identified, although more detailed analyses to identify specific areas of variation in clinical practice are required to inform improvements at the less cost-effective institutions. Copyright © 2012 John Wiley & Sons, Ltd.
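
    The probabilistic statement above (an 80% probability of being the most cost-effective hospital at realistic thresholds) is the kind of quantity obtained from a cost-effectiveness acceptability calculation over simulated or bootstrapped replicates. The sketch below shows that final step only, on invented cost and life-year replicates; it does not implement the RAC-E risk-adjustment (matching) algorithm itself, and the threshold and hospital figures are arbitrary.

        import numpy as np

        # Hypothetical simulated (e.g., bootstrap) replicates of mean cost and mean
        # life-years gained per patient for four hospitals; not the study's data.
        rng = np.random.default_rng(6)
        n_sims, hospitals = 5000, ["A", "B", "C", "D"]
        costs = rng.normal([9000, 9500, 8800, 10200], 400, size=(n_sims, 4))
        lys   = rng.normal([2.10, 2.25, 2.05, 2.12], 0.05, size=(n_sims, 4))

        threshold = 30000                      # illustrative willingness-to-pay per life-year
        nmb = threshold * lys - costs          # net monetary benefit per replicate
        best = nmb.argmax(axis=1)              # most cost-effective hospital in each replicate

        for k, name in enumerate(hospitals):
            print(f"P(hospital {name} most cost-effective) = {np.mean(best == k):.2f}")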

  13. Diversity in global gene expression and morphology across a watercress (Nasturtium officinale R. Br.) germplasm collection: first steps to breeding

    PubMed Central

    Payne, Adrienne C.; Clarkson, Graham J.J.; Rothwell, Steve; Taylor, Gail

    2015-01-01

    Watercress (Nasturtium officinale R. Br.) is a nutrient-intense, leafy crop that is consumed raw or in soups across the globe, but for which no genomic resources or breeding programme currently exists. Promising morphological, biochemical and functional genomic variation was identified for the first time in a newly established watercress germplasm collection, consisting of 48 watercress accessions sourced from contrasting global locations. Stem length, stem diameter and anti-oxidant (AO) potential varied across the accessions. This variation was used to identify three contrasting, extreme accessions for further analysis. Variation in global gene expression was investigated using an Affymetrix Arabidopsis ATH1 microarray gene chip, comparing the commercial control (C), an accession selected for a dwarf phenotype with high AO potential (dwarfAO, called ‘Boldrewood’) and one with high AO potential alone. A set of transcripts significantly differentially expressed between these three accessions was identified, including transcripts involved in the regulation of growth and development and in secondary metabolism. In particular, when differential gene expression was compared between C and dwarfAO, the dwarfAO was characterised by increased expression of genes associated with glucosinolates, which are known precursors of phenethyl isothiocyanate, linked to the anti-carcinogenic effects well documented in watercress. This study provides the first analysis of natural variation across the watercress genome and has identified important underpinning information for future breeding for enhanced anti-carcinogenic properties and morphology traits in this nutrient-intense crop. PMID:26504575
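
    The transcript-level comparison described above amounts to a per-gene test between accessions with multiple-testing control. The sketch below illustrates that idea with a simple two-sample t-test and Benjamini-Hochberg correction on a hypothetical log-expression matrix; it is not the authors' Affymetrix analysis pipeline, and the gene counts and effect sizes are invented.

        import numpy as np
        from scipy import stats

        # Hypothetical log2 expression matrix: genes x arrays, 3 replicates per accession
        rng = np.random.default_rng(7)
        n_genes = 2000
        control = rng.normal(8, 1, size=(n_genes, 3))
        dwarf_ao = rng.normal(8, 1, size=(n_genes, 3))
        dwarf_ao[:50] += 2                      # spike in 50 up-regulated genes

        t, p = stats.ttest_ind(dwarf_ao, control, axis=1)

        # Benjamini-Hochberg FDR adjustment
        order = np.argsort(p)
        ranked = p[order] * n_genes / np.arange(1, n_genes + 1)
        q_sorted = np.minimum(np.minimum.accumulate(ranked[::-1])[::-1], 1.0)
        q_values = np.empty_like(q_sorted)
        q_values[order] = q_sorted
        print("genes called differential at FDR < 0.05:", int((q_values < 0.05).sum()))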

  14. A clustering approach to segmenting users of internet-based risk calculators.

    PubMed

    Harle, C A; Downs, J S; Padman, R

    2011-01-01

    Risk calculators are widely available Internet applications that deliver quantitative health risk estimates to consumers. Although these tools are known to have varying effects on risk perceptions, little is known about who will be more likely to accept objective risk estimates. To identify clusters of online health consumers that help explain variation in individual improvement in risk perceptions from web-based quantitative disease risk information. A secondary analysis was performed on data collected in a field experiment that measured people's pre-diabetes risk perceptions before and after visiting a realistic health promotion website that provided quantitative risk information. K-means clustering was performed on numerous candidate variable sets, and the different segmentations were evaluated based on between-cluster variation in risk perception improvement. Variation in responses to risk information was best explained by clustering on pre-intervention absolute pre-diabetes risk perceptions and an objective estimate of personal risk. Members of a high-risk overestimator cluster showed large improvements in their risk perceptions, but clusters of both moderate-risk and high-risk underestimators were much more muted in improving their optimistically biased perceptions. Cluster analysis provided a unique approach for segmenting health consumers and predicting their acceptance of quantitative disease risk information. These clusters suggest that health consumers were very responsive to good news but tended not to incorporate bad news into their self-perceptions. These findings help to quantify variation among online health consumers and may inform the targeted marketing of, and improvements to, risk communication tools on the Internet.
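
    The segmentation step described above, K-means on pre-intervention risk perceptions and an objective risk estimate followed by a comparison of risk-perception improvement across clusters, can be sketched as follows. All variables here are synthetic placeholders for the field-experiment measures.

        import numpy as np
        from sklearn.cluster import KMeans

        # Hypothetical data: pre-intervention perceived risk, an objective risk estimate,
        # and post-intervention improvement in risk perception (all illustrative).
        rng = np.random.default_rng(8)
        n = 300
        objective = rng.uniform(0.05, 0.60, n)
        perceived = np.clip(objective + rng.normal(0, 0.15, n), 0, 1)
        improvement = (objective - perceived) * rng.uniform(0.3, 0.9, n)

        X = np.column_stack([perceived, objective])
        labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

        # Compare mean improvement in risk perception across the segments
        for k in range(3):
            seg = labels == k
            print(f"cluster {k}: n={seg.sum():3d}, "
                  f"mean perceived={perceived[seg].mean():.2f}, "
                  f"mean objective={objective[seg].mean():.2f}, "
                  f"mean improvement={improvement[seg].mean():+.3f}")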

  15. Highly efficient perturbative + variational strategy based on orthogonal valence bond theory for the evaluation of magnetic coupling constants. Application to the trinuclear Cu(ii) site of multicopper oxidases.

    PubMed

    Tenti, Lorenzo; Maynau, Daniel; Angeli, Celestino; Calzado, Carmen J

    2016-07-21

    A new strategy based on an orthogonal valence-bond analysis of the wave function combined with intermediate Hamiltonian theory has been applied to the evaluation of the magnetic coupling constants in two antiferromagnetic (AF) systems. This approach provides both a quantitative estimate of the J value and a detailed analysis of the main physical mechanisms controlling the coupling, using a combined perturbative + variational scheme. The procedure requires a selection of the dominant excitations to be treated variationally. Two methods have been employed: a brute-force selection, using a logic similar to that of the CIPSI approach, or entanglement measures, which identify the most interacting orbitals in the system. Once a reduced set of excitations (about 300 determinants) is established, the interaction matrix is dressed at second order of perturbation by the remaining excitations of the CI space. The diagonalization of the dressed matrix provides J values in good agreement with experimental ones, at very low cost. This approach demonstrates the key role of d → d* excitations in the quantitative description of the magnetic coupling, as well as the importance of using an extended active space, including the bridging ligand orbitals, for the binuclear model of the intermediates of multicopper oxidases. The method is a promising tool for dealing with complex systems containing several active centers, as an alternative to both pure variational and DFT approaches.
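
    The combined perturbative + variational scheme rests on a generic idea: a small set of determinants is treated variationally while the remaining configuration space dresses that block at second order before diagonalization. The toy sketch below illustrates only that dressing step on a random model Hamiltonian; it is not the orthogonal valence-bond or intermediate-Hamiltonian machinery of the paper, and the matrix dimensions, couplings, and reference energy are arbitrary choices.

        import numpy as np

        rng = np.random.default_rng(9)

        # Toy symmetric CI-like matrix: a small "model space" (first m determinants)
        # weakly coupled to a large "outer space".
        n, m = 400, 6
        H = rng.normal(0, 0.02, size=(n, n))
        H = (H + H.T) / 2
        np.fill_diagonal(H, np.concatenate([rng.uniform(-1.0, -0.8, m),
                                            rng.uniform(1.0, 4.0, n - m)]))

        S = np.arange(m)                     # model-space indices (treated variationally)
        Q = np.arange(m, n)                  # outer-space indices (treated perturbatively)
        E0 = H[S, S].min()                   # simple zeroth-order reference energy

        # Second-order dressing: fold the outer space into the model-space block
        Heff = H[np.ix_(S, S)].copy()
        for k in Q:
            Heff += np.outer(H[S, k], H[k, S]) / (E0 - H[k, k])

        dressed = np.linalg.eigvalsh(Heff)
        exact = np.linalg.eigvalsh(H)[:m]
        print("lowest dressed eigenvalues:", dressed[:2].round(4))
        print("lowest exact eigenvalues:  ", exact[:2].round(4))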

  16. Industrial entrepreneurial network: Structural and functional analysis

    NASA Astrophysics Data System (ADS)

    Medvedeva, M. A.; Davletbaev, R. H.; Berg, D. B.; Nazarova, J. J.; Parusheva, S. S.

    2016-12-01

    The structure and functioning of two model industrial entrepreneurial networks are investigated in the present paper. One of these networks forms during the implementation of an integrated project and consists of eight agents that interact with each other and with the external environment. The other is obtained from the municipal economy and is based on a set of 12 real business entities. Analysis of the networks is carried out on the basis of a matrix of mutual payments aggregated over a certain time period. The matrix is created by the methods of experimental economics. Social Network Analysis (SNA) methods and instruments were used in the present research. A set of basic structural characteristics was investigated: quantitative parameters such as density, diameter, clustering coefficient, and different kinds of centrality. These were compared with random Bernoulli graphs of the corresponding size and density. The observed differences between the structure of the random and entrepreneurial networks are explained by the peculiarities of agent functioning in a production network. Separately, closed exchange circuits (cyclically closed contours of the graph) forming an autopoietic (self-replicating) network pattern were identified. The purpose of the functional analysis was to identify the contribution of the autopoietic network pattern to the gross product of the network. This contribution was found to exceed 20%, a value that supports the use of a complementary currency to stimulate the economic activity of the network agents.
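
    The structural measures listed above (density, clustering, centrality, comparison with Bernoulli random graphs, and closed exchange circuits) can be sketched with standard network tools. The payment matrix below is invented for illustration, and the "share of turnover on closed circuits" is only a rough proxy for the autopoietic contribution discussed in the paper.

        import numpy as np
        import networkx as nx

        # Hypothetical aggregated payment matrix between 8 agents (row pays column);
        # the real matrices in the study come from experimental-economics records.
        rng = np.random.default_rng(10)
        P = rng.integers(1, 50, size=(8, 8)) * (rng.random((8, 8)) < 0.4)
        np.fill_diagonal(P, 0)

        G = nx.from_numpy_array(P, create_using=nx.DiGraph)

        print("density:             ", round(nx.density(G), 3))
        print("clustering (undir.): ", round(nx.average_clustering(G.to_undirected()), 3))
        print("in-degree centrality:", {v: round(c, 2)
                                        for v, c in nx.in_degree_centrality(G).items()})

        # Closed exchange circuits (cyclically closed contours of the payment graph)
        cycles = list(nx.simple_cycles(G))
        cycle_edges = {(c[i], c[(i + 1) % len(c)]) for c in cycles for i in range(len(c))}
        in_cycles = sum(P[u, v] for u, v in cycle_edges)
        print("share of turnover on closed circuits:", round(in_cycles / P.sum(), 2))

        # Benchmark: Bernoulli (Erdos-Renyi) digraph of the same size and density
        R = nx.gnp_random_graph(8, nx.density(G), seed=0, directed=True)
        print("random-graph density:", round(nx.density(R), 3))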

  17. Evaluation of the Modern Era Retrospective-Analysis for Research and Applications (MERRA) Global Water and Energy Budgets

    NASA Technical Reports Server (NTRS)

    Bosilovich, Michael G.; Robertson, F. R.; Chen, J.

    2010-01-01

    The Modern Era Retrospective-analysis for Research and Applications (MERRA) reanalysis has completed 27 years of data and will soon be caught up to the present. Here we present an evaluation of the years currently available, including comparisons with the existing long reanalyses (ERA40, JRA25, and NCEP I and II) as well as with global data sets for the water and energy cycle. Time series show that the MERRA budgets can change with some of the variations in observing systems, but the magnitude of the energy imbalance in the system is improved with more observations. We will present all terms of the budgets in MERRA, including the time rates of change and the analysis increments (the tendency due to the analysis of observations).

  18. Hybrid least squares multivariate spectral analysis methods

    DOEpatents

    Haaland, David M.

    2004-03-23

    A set of hybrid least squares multivariate spectral analysis methods in which spectral shapes of components or effects not present in the original calibration step are added in a subsequent prediction or calibration step to improve the accuracy of the estimation of the amount of the original components in the sampled mixture. The hybrid method herein means a combination of an initial calibration step with subsequent analysis by an inverse multivariate analysis method. A spectral shape herein normally means the spectral shape of a non-calibrated chemical component in the sample mixture, but it can also mean the spectral shape of other sources of spectral variation, including temperature drift, shifts between spectrometers, spectrometer drift, etc. The shape can be continuous, discontinuous, or even discrete points illustrative of the particular effect.
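
    The core idea, augmenting a classical least squares (CLS) model with the spectral shape of a component that was absent from calibration before the prediction step, can be illustrated in a few lines. The sketch below is a toy CLS example with Gaussian bands, not the patented hybrid procedure, and all spectra, concentrations, and noise levels are invented.

        import numpy as np

        rng = np.random.default_rng(11)
        wavelengths = np.linspace(1000, 2500, 300)

        def band(center, width):
            return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

        # Two calibrated components and one interferent absent from calibration
        k1, k2 = band(1200, 40), band(1900, 60)
        interferent = band(1550, 80)

        # Calibration: estimate pure-component spectra from mixtures of known concentration
        C_cal = rng.uniform(0, 1, size=(20, 2))
        A_cal = C_cal @ np.vstack([k1, k2]) + rng.normal(0, 0.002, (20, 300))
        K_hat = np.linalg.lstsq(C_cal, A_cal, rcond=None)[0]          # estimated k1, k2

        # Prediction sample contains the un-modelled interferent
        a_new = 0.3 * k1 + 0.6 * k2 + 0.5 * interferent + rng.normal(0, 0.002, 300)

        # Plain CLS prediction (biased by the interferent) vs. the hybrid idea:
        # append the interferent's spectral shape to the model before predicting.
        c_plain = np.linalg.lstsq(K_hat.T, a_new, rcond=None)[0]
        K_aug = np.vstack([K_hat, interferent])
        c_hybrid = np.linalg.lstsq(K_aug.T, a_new, rcond=None)[0]

        print("plain CLS estimate of components 1, 2:", c_plain.round(3))
        print("augmented estimate (true 0.3, 0.6):   ", c_hybrid[:2].round(3))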

  19. Polytopic vector analysis in igneous petrology: Application to lunar petrogenesis

    NASA Technical Reports Server (NTRS)

    Shervais, John W.; Ehrlich, R.

    1993-01-01

    Lunar samples represent a heterogeneous assemblage of rocks with complex inter-relationships that are difficult to decipher using standard petrogenetic approaches. These inter-relationships reflect several distinct petrogenetic trends as well as thermomechanical mixing of distinct components. Additional complications arise from the unequal quality of chemical analyses and from the fact that many samples (e.g., breccia clasts) are too small to be representative of the system from which they derived. Polytopic vector analysis (PVA) is a multivariate procedure used as a tool for exploratory data analysis. PVA allows the analyst to classify samples and clarifies relationships among heterogeneous samples with complex petrogenetic histories. It differs from orthogonal factor analysis in that it uses non-orthogonal multivariate sample vectors to extract sample endmember compositions. The output from a Q-mode (sample-based) factor analysis is the initial step in PVA. The Q-mode analysis, using criteria established by Miesch and by Klovan and Miesch, is used to determine the number of endmembers in the data system. The second step involves determination of endmembers and mixing proportions, with all output expressed in the same geochemical variables as the input. The composition of the endmembers is derived by analysis of the variability of the data set. Endmembers need not be present in the data set, nor is it necessary for their composition to be known a priori. Any set of endmembers defines a 'polytope' or classification figure (a triangle for a three-component system, a tetrahedron for a four-component system, a 'five-tope' in four dimensions for a five-component system, and so on).
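
    PVA itself is a specific procedure; as a simplified stand-in, the sketch below uses a PCA variance screen to suggest the number of endmembers and non-negative matrix factorization (NMF) to estimate endmember-like compositions and mixing proportions from a hypothetical geochemical data set. It illustrates the unmixing idea only and is not the Q-mode/PVA algorithm referenced above.

        import numpy as np
        from sklearn.decomposition import PCA, NMF

        # Hypothetical geochemical data: 60 samples as mixtures of 3 unknown endmembers
        rng = np.random.default_rng(12)
        true_em = rng.uniform(0.1, 10.0, size=(3, 8))          # 8 oxides / elements
        mix = rng.dirichlet(np.ones(3), size=60)                # mixing proportions sum to 1
        X = mix @ true_em + rng.normal(0, 0.05, (60, 8)).clip(-0.09)

        # Step 1 (Q-mode-like): inspect explained variance to choose the number of endmembers
        pca = PCA().fit(X)
        print("cumulative variance:", pca.explained_variance_ratio_.cumsum().round(3)[:5])

        # Step 2: estimate endmember compositions and mixing proportions (NMF stand-in)
        nmf = NMF(n_components=3, init="nndsvda", max_iter=2000, random_state=0)
        W = nmf.fit_transform(X)                                # raw mixing coefficients
        H = nmf.components_                                     # endmember-like compositions
        proportions = W / W.sum(axis=1, keepdims=True)          # normalise rows to sum to 1
        print("estimated endmembers:\n", H.round(2))
        print("first sample's mixing proportions:", proportions[0].round(2))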

  20. Piezoelectric Lead Zirconate Titanate (PZT) Ring Shaped Contour-Mode MEMS Resonators

    NASA Astrophysics Data System (ADS)

    Kasambe, P. V.; Asgaonkar, V. V.; Bangera, A. D.; Lokre, A. S.; Rathod, S. S.; Bhoir, D. V.

    2018-02-01

    Flexibility in setting the fundamental frequency of a resonator independently of its motional resistance is one of the desired criteria in micro-electromechanical systems (MEMS) resonator design. Ring-shaped piezoelectric contour-mode MEMS resonators satisfy this design criterion better than rectangular-plate MEMS resonators. Ring-shaped contour-mode piezoelectric resonators also have the advantage that their fundamental frequency is defined by in-plane dimensions, but they show a variation of fundamental frequency with platinum (Pt) electrode thickness, expressed as a change in the ratio fNEW/fO. This paper presents the effects of variation in geometrical parameters and of a change in piezoelectric material on the resonant frequencies and electrical parameters of platinum-piezoelectric-aluminium ring-shaped contour-mode MEMS resonators. The proposed structure with lead zirconate titanate (PZT) as the piezoelectric material showed minimal change in fundamental resonant frequency due to Pt thickness variation. This structure was also found to exhibit an extremely low motional resistance of 0.03 Ω, compared with the 31-35 Ω range obtained when using AlN as the piezoelectric material. CoventorWare 10, a finite element method (FEM) analysis and design tool for MEMS devices, is used for the design, simulation, and corresponding analysis of the resonators.
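
    A rough sense of why a PZT stack may be less sensitive to Pt thickness than an AlN stack can be had from the common first-order width-extensional estimate f0 ≈ (1/2W)·sqrt(E_eq/ρ_eq) with thickness-weighted equivalent stack properties. The sketch below uses nominal textbook-like material constants and an invented geometry; it is only a back-of-envelope check, not the FEM model of the paper, and the specific thicknesses, widths, and constants are assumptions.

        import numpy as np

        # Nominal material constants (illustrative values, not taken from the paper)
        E =   {"Al": 70e9,  "Pt": 168e9, "AlN": 330e9, "PZT": 63e9}    # Young's modulus, Pa
        rho = {"Al": 2700,  "Pt": 21450, "AlN": 3260,  "PZT": 7600}    # density, kg/m^3

        def contour_mode_f0(width_um, layers):
            """Rough width-extensional contour-mode estimate f0 ~ (1/2W) sqrt(E_eq/rho_eq),
            with thickness-weighted equivalent stack properties (a crude 1-D approximation)."""
            t = np.array([th for _, th in layers])
            e = np.array([E[m] for m, _ in layers])
            r = np.array([rho[m] for m, _ in layers])
            E_eq = (e * t).sum() / t.sum()
            rho_eq = (r * t).sum() / t.sum()
            return np.sqrt(E_eq / rho_eq) / (2 * width_um * 1e-6)

        width = 20.0                                    # hypothetical ring width, micrometres
        for piezo in ("AlN", "PZT"):
            f_ref = contour_mode_f0(width, [("Pt", 0.10), (piezo, 1.0), ("Al", 0.10)])
            f_new = contour_mode_f0(width, [("Pt", 0.20), (piezo, 1.0), ("Al", 0.10)])
            print(f"{piezo}: f0 ≈ {f_ref / 1e6:.1f} MHz, "
                  f"fNEW/fO with thicker Pt ≈ {f_new / f_ref:.3f}")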
