NASA Technical Reports Server (NTRS)
Gatski, T. B.
1979-01-01
The sound due to the large-scale (wavelike) structure in an infinite free turbulent shear flow is examined. Specifically, a computational study of a plane shear layer is presented which accounts, by way of a triple decomposition of the flow field variables, for three distinct component scales of motion (mean, wave, turbulent), and from which the sound due to the large-scale wavelike structure can be isolated in the acoustic field by a simple phase average. The computational approach has allowed the identification of a specific noise production mechanism, viz., the wave-induced stress, and has indicated the effect of coherent-structure amplitude and growth and decay characteristics on noise levels produced in the acoustic far field.
Liu, Ming-Qi; Zeng, Wen-Feng; Fang, Pan; Cao, Wei-Qian; Liu, Chao; Yan, Guo-Quan; Zhang, Yang; Peng, Chao; Wu, Jian-Qiang; Zhang, Xiao-Jin; Tu, Hui-Jun; Chi, Hao; Sun, Rui-Xiang; Cao, Yong; Dong, Meng-Qiu; Jiang, Bi-Yun; Huang, Jiang-Ming; Shen, Hua-Li; Wong, Catherine C L; He, Si-Min; Yang, Peng-Yuan
2017-09-05
The precise and large-scale identification of intact glycopeptides is a critical step in glycoproteomics. Owing to the complexity of glycosylation, the overall throughput, data quality and accessibility of intact glycopeptide identification currently lag behind those of routine proteomic analyses. Here, we propose a workflow for the precise high-throughput identification of intact N-glycopeptides at the proteome scale using stepped-energy fragmentation and a dedicated search engine. pGlyco 2.0 conducts comprehensive quality control, including false discovery rate evaluation at all three levels of matches (glycans, peptides and glycopeptides), improving the accuracy of intact glycopeptide identification. The N-glycoproteome of samples metabolically labeled with 15N/13C was analyzed quantitatively and used to validate the glycopeptide identifications, providing a novel benchmark pipeline for comparing different search engines. Finally, we report a large-scale glycoproteome dataset consisting of 10,009 distinct site-specific N-glycans on 1988 glycosylation sites from 955 glycoproteins in five mouse tissues. Protein glycosylation is a heterogeneous post-translational modification that generates great proteomic diversity and is difficult to analyze. Here the authors describe pGlyco 2.0, a workflow for the precise one-step identification of intact N-glycopeptides at the proteome scale.
Hierarchical Learning of Tree Classifiers for Large-Scale Plant Species Identification.
Fan, Jianping; Zhou, Ning; Peng, Jinye; Gao, Ling
2015-11-01
In this paper, a hierarchical multi-task structural learning algorithm is developed to support large-scale plant species identification: a visual tree is constructed to organize large numbers of plant species in a coarse-to-fine fashion and to determine the inter-related learning tasks automatically. Each parent node on the visual tree contains a set of sibling coarse-grained categories of plant species or sibling fine-grained plant species, and a multi-task structural learning algorithm is developed to train their inter-related classifiers jointly, enhancing their discrimination power. The inter-level relationship constraint, i.e., that a plant image must first be assigned to the correct parent node (high-level non-leaf node) before it can be assigned to the most relevant child node (low-level non-leaf node or leaf node) on the visual tree, is formally defined and leveraged to learn more discriminative tree classifiers. Our experimental results demonstrate the effectiveness of our hierarchical multi-task structural learning algorithm in training more discriminative tree classifiers for large-scale plant species identification.
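The coarse-to-fine assignment described in this abstract can be sketched as a two-stage classifier. In the sketch below, a simple nearest-centroid rule stands in for the paper's jointly trained multi-task classifiers; the two-level tree structure (parent node first, then child node) is the point of the example, and all names and data are invented for illustration.

```python
# Illustrative two-stage (coarse-to-fine) classification over a
# shallow "visual tree": assign to a parent category first, then to
# a specific class within that category. Nearest-centroid stands in
# for the paper's multi-task classifiers.
import math

def dist(u, v):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def nearest(x, centroids):
    """Key of the centroid closest to feature vector x."""
    return min(centroids, key=lambda k: dist(x, centroids[k]))

def classify(features, group_centroids, class_centroids):
    """group_centroids: parent category -> centroid.
    class_centroids: parent category -> {class -> centroid}.
    Returns (parent category, class) for the input features."""
    group = nearest(features, group_centroids)            # coarse stage
    label = nearest(features, class_centroids[group])     # fine stage
    return group, label
```

A query is only compared against classes under its assigned parent node, which is what makes tree classifiers attractive at large scale: the number of fine-grained comparisons grows with the branching factor, not with the total number of species.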
Competitive code-based fast palmprint identification using a set of cover trees
NASA Astrophysics Data System (ADS)
Yue, Feng; Zuo, Wangmeng; Zhang, David; Wang, Kuanquan
2009-06-01
A palmprint identification system recognizes a query palmprint image by searching for its nearest neighbor from among all the templates in a database. When applied on a large-scale identification system, it is often necessary to speed up the nearest-neighbor searching process. We use competitive code, which has very fast feature extraction and matching speed, for palmprint identification. To speed up the identification process, we extend the cover tree method and propose to use a set of cover trees to facilitate the fast and accurate nearest-neighbor searching. We can use the cover tree method because, as we show, the angular distance used in competitive code can be decomposed into a set of metrics. Using the Hong Kong PolyU palmprint database (version 2) and a large-scale palmprint database, our experimental results show that the proposed method searches for nearest neighbors faster than brute force searching.
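The enabling observation in this abstract is that the matching distance is (decomposable into) a metric, which is what cover-tree-style search requires: a metric obeys the triangle inequality, so candidates can be discarded without computing their distance to the query. A minimal sketch of that pruning idea follows; it uses a generic Hamming distance on code strings rather than the paper's competitive-code angular distance, and all data are illustrative.

```python
# Triangle-inequality pruning in a metric space, the property that
# makes cover-tree-style nearest-neighbor search possible. This is a
# one-pivot sketch, not the paper's set-of-cover-trees implementation.

def hamming(a, b):
    """A simple metric on equal-length code strings."""
    return sum(x != y for x, y in zip(a, b))

def pruned_nearest(query, templates, pivot):
    """Nearest template to the query, skipping candidates that the
    triangle inequality proves cannot win: since
    |d(q, p) - d(t, p)| <= d(q, t), a candidate whose lower bound
    already exceeds the current best is skipped without computing
    its true distance to the query."""
    dq = hamming(query, pivot)
    best, best_d = None, float("inf")
    for t in templates:
        lower = abs(dq - hamming(t, pivot))   # cheap lower bound
        if lower >= best_d:
            continue                          # pruned
        d = hamming(query, t)
        if d < best_d:
            best, best_d = t, d
    return best, best_d
```

In a real identification system the pivot distances for all templates would be precomputed offline, so the pruning test costs one subtraction per template instead of a full feature comparison.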
Michael K. Young; Kevin S. McKelvey; Kristine L. Pilgrim; Michael K. Schwartz
2013-01-01
There is growing interest in broad-scale biodiversity assessments that can serve as benchmarks for identifying ecological change. Genetic tools have been used for such assessments for decades, but spatial sampling considerations have largely been ignored. Here, we demonstrate how intensive sampling efforts across a large geographical scale can influence identification...
Identification and measurement of shrub type vegetation on large scale aerial photography
NASA Technical Reports Server (NTRS)
Driscoll, R. S.
1970-01-01
Important range-shrub species were identified at acceptable levels of accuracy on large-scale 70 mm color and color infrared aerial photographs. Identification of individual shrubs was significantly higher, however, on color infrared. Photoscales smaller than 1:2400 had limited value except for mature individuals of relatively tall species, and then only if crown margins did not overlap and sharp contrast was evident between the species and background. Larger scale photos were required for low-growing species in dense stands. The crown cover for individual species was estimated from the aerial photos either with a measuring magnifier or a projected-scale micrometer. These crown cover measurements provide techniques for earth-resource analyses when used in conjunction with space and high-altitude remotely procured photos.
Sadygov, Rovshan G; Cociorva, Daniel; Yates, John R
2004-12-01
Database searching is an essential element of large-scale proteomics. Because these methods are widely used, it is important to understand the rationale of the algorithms. Most algorithms are based on concepts first developed in SEQUEST and PeptideSearch. Four basic approaches are used to determine a match between a spectrum and sequence: descriptive, interpretative, stochastic and probability-based matching. We review the basic concepts used by most search algorithms, the computational modeling of peptide identification and current challenges and limitations of this approach for protein identification.
1980-10-01
Development; Problem Identification and Assessment for Aquatic Plant Management; Natural Succession of Aquatic Plants; Large-Scale Operations Management Test...of Insects and Pathogens for Control of Waterhyacinth in Louisiana; Large-Scale Operations Management Test to Evaluate Prevention Methodology for...Control of Eurasian Watermilfoil in Washington; Large-Scale Operations Management Test Using the White Amur at Lake Conway, Florida; and Aquatic Plant Control Activities in the Panama Canal Zone.
Incorporation of DNA barcoding into a large-scale biomonitoring program: opportunities and pitfalls
Taxonomic identification of benthic macroinvertebrates is critical to protocols used to assess the biological integrity of aquatic ecosystems. The time, expense, and inherent error rate of species-level morphological identifications has necessitated use of genus- or family-level ...
Target-decoy Based False Discovery Rate Estimation for Large-scale Metabolite Identification.
Wang, Xusheng; Jones, Drew R; Shaw, Timothy I; Cho, Ji-Hoon; Wang, Yuanyuan; Tan, Haiyan; Xie, Boer; Zhou, Suiping; Li, Yuxin; Peng, Junmin
2018-05-23
Metabolite identification is a crucial step in mass spectrometry (MS)-based metabolomics. However, it is still challenging to assess the confidence of assigned metabolites. In this study, we report a novel method for estimating the false discovery rate (FDR) of metabolite assignment with a target-decoy strategy, in which the decoys are generated by violating the octet rule of chemistry through the addition of small odd numbers of hydrogen atoms. The target-decoy strategy was integrated into JUMPm, an automated metabolite identification pipeline for large-scale MS analysis, and was also evaluated with two other metabolomics tools, mzMatch and mzMine 2. The reliability of the FDR calculation was examined using false datasets simulated by altering MS1 or MS2 spectra. Finally, we used the JUMPm pipeline coupled with the target-decoy strategy to process unlabeled and stable-isotope-labeled metabolomic datasets. The results demonstrate that the target-decoy strategy is a simple and effective method for evaluating the confidence of high-throughput metabolite identification.
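The core of any target-decoy strategy is the same accounting step: matches to decoys estimate how many target matches are false, so the FDR at a score threshold is approximated by the decoy/target ratio above that threshold. A hedged sketch of that calculation follows; the scoring and the decoy construction are placeholders, not JUMPm's.

```python
# Target-decoy FDR accounting: decoy hits above a score threshold
# estimate the number of false target hits, so FDR ~ #decoys/#targets.
# Scores and the decoy-generation step are illustrative placeholders.

def estimate_fdr(matches, threshold):
    """matches: list of (score, is_decoy) pairs.
    Estimated FDR among matches scoring >= threshold."""
    targets = sum(1 for s, d in matches if s >= threshold and not d)
    decoys = sum(1 for s, d in matches if s >= threshold and d)
    if targets == 0:
        return 0.0
    return decoys / targets

def threshold_at_fdr(matches, alpha=0.01):
    """Lowest score threshold whose estimated FDR is <= alpha,
    or None if no threshold achieves it."""
    for t in sorted({s for s, _ in matches}):
        if estimate_fdr(matches, t) <= alpha:
            return t
    return None
```

The abstract's three-level evaluation in pGlyco 2.0 (glycan, peptide, glycopeptide) amounts to running this accounting separately on each level's target and decoy match populations.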
Lyons, Eli; Sheridan, Paul; Tremmel, Georg; Miyano, Satoru; Sugano, Sumio
2017-10-24
High-throughput screens allow for the identification of specific biomolecules with characteristics of interest. In barcoded screens, DNA barcodes are linked to target biomolecules in a manner allowing the target molecules making up a library to be identified by sequencing the DNA barcodes using Next Generation Sequencing. To be useful in experimental settings, the DNA barcodes in a library must satisfy certain constraints related to GC content, homopolymer length, Hamming distance, and blacklisted subsequences. Here we report a novel framework to quickly generate large-scale libraries of DNA barcodes for use in high-throughput screens. We show that our framework dramatically reduces the computation time required to generate large-scale DNA barcode libraries, compared with a naïve approach to DNA barcode library generation. As a proof of concept, we demonstrate that our framework is able to generate a library consisting of one million DNA barcodes for use in a fragment antibody phage display screening experiment. We also report generating a general-purpose one billion DNA barcode library, the largest such library yet reported in the literature. Our results demonstrate the value of our novel large-scale DNA barcode library generation framework for use in high-throughput screening applications.
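The four constraints named in the abstract (GC content, homopolymer length, Hamming distance, blacklisted subsequences) are straightforward to state as a validity check. A sketch is below; the particular bounds (40-60% GC, maximum run of 2, minimum distance 3, the blacklist entries) are illustrative assumptions, not the values used by the authors' framework, whose contribution is generating valid barcodes fast rather than merely checking them.

```python
# Barcode validity check against the four constraint families named
# in the abstract. All numeric bounds and the blacklist are
# illustrative assumptions.

def max_homopolymer(seq):
    """Length of the longest run of a single repeated base."""
    longest = run = 1
    for prev, cur in zip(seq, seq[1:]):
        run = run + 1 if cur == prev else 1
        longest = max(longest, run)
    return longest

def hamming(a, b):
    """Number of positions at which equal-length strings differ."""
    return sum(x != y for x, y in zip(a, b))

def is_valid_barcode(seq, accepted, gc_bounds=(0.4, 0.6),
                     max_run=2, min_dist=3,
                     blacklist=("GGGG", "AAAA")):
    gc = (seq.count("G") + seq.count("C")) / len(seq)
    if not gc_bounds[0] <= gc <= gc_bounds[1]:
        return False
    if max_homopolymer(seq) > max_run:
        return False
    if any(b in seq for b in blacklist):
        return False
    # reject barcodes too close to any already-accepted barcode
    return all(hamming(seq, a) >= min_dist for a in accepted)
```

Note that the Hamming-distance constraint is the expensive one at scale: checked naively it is quadratic in library size, which is exactly where a smarter generation framework pays off.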
Khan, Sammyh S; Hopkins, Nick; Tewari, Shruti; Srinivasan, Narayanan; Reicher, Stephen David; Ozakinci, Gozde
2014-01-01
Identifying with a group can contribute to a sense of well-being. The mechanisms involved are diverse: social identification with a group can impact individuals' beliefs about issues such as their connections with others, the availability of social support, the meaningfulness of existence, and the continuity of their identity. Yet, there seems to be a common theme to these mechanisms: identification with a group encourages the belief that one can cope with the stressors one faces (which is associated with better well-being). Our research investigated the relationship between identification, beliefs about coping, and well-being in a survey (N = 792) administered in rural North India. Using structural equation modelling, we found that social identification as a Hindu had positive and indirect associations with three measures of well-being through the belief that one can cope with everyday stressors. We also found residual associations between participants' social identification as a Hindu and two measures of well-being in which higher identification was associated with poorer well-being. We discuss these findings and their implications for understanding the relationship between social identification (especially with large-scale group memberships) and well-being. We also discuss the application of social psychological theory developed in the urban West to rural North India. © 2014 The Authors. European Journal of Social Psychology published by John Wiley & Sons, Ltd. PMID:26160989
Wang, Jian; Anania, Veronica G.; Knott, Jeff; Rush, John; Lill, Jennie R.; Bourne, Philip E.; Bandeira, Nuno
2014-01-01
The combination of chemical cross-linking and mass spectrometry has recently been shown to constitute a powerful tool for studying protein–protein interactions and elucidating the structure of large protein complexes. However, computational methods for interpreting the complex MS/MS spectra from linked peptides are still in their infancy, making the high-throughput application of this approach largely impractical. Because of the lack of large annotated datasets, most current approaches do not capture the specific fragmentation patterns of linked peptides and therefore are not optimal for the identification of cross-linked peptides. Here we propose a generic approach to address this problem and demonstrate it using disulfide-bridged peptide libraries to (i) efficiently generate large mass spectral reference data for linked peptides at a low cost and (ii) automatically train an algorithm that can efficiently and accurately identify linked peptides from MS/MS spectra. We show that using this approach we were able to identify thousands of MS/MS spectra from disulfide-bridged peptides through comparison with proteome-scale sequence databases and significantly improve the sensitivity of cross-linked peptide identification. This allowed us to identify 60% more direct pairwise interactions between the protein subunits in the 20S proteasome complex than existing tools on cross-linking studies of the proteasome complexes. The basic framework of this approach and the MS/MS reference dataset generated should be valuable resources for the future development of new tools for the identification of linked peptides. PMID:24493012
NASA Astrophysics Data System (ADS)
Zhuang, Wei; Mountrakis, Giorgos
2014-09-01
Large footprint waveform LiDAR sensors have been widely used for numerous airborne studies. Ground peak identification in a large footprint waveform is a significant bottleneck in exploring full usage of the waveform datasets. In the current study, an accurate and computationally efficient algorithm was developed for ground peak identification, called the Filtering and Clustering Algorithm (FICA). The method was evaluated on Land, Vegetation, and Ice Sensor (LVIS) waveform datasets acquired over central NY. FICA incorporates a set of multi-scale second derivative filters and a k-means clustering algorithm in order to avoid detecting false ground peaks. FICA was tested in five different land cover types (deciduous trees, coniferous trees, shrub, grass and developed area) and showed more accurate results when compared to existing algorithms. More specifically, compared with Gaussian decomposition (GD), the RMSE of ground peak identification by FICA was 2.82 m (versus 5.29 m for GD) in deciduous plots, 3.25 m (4.57 m) in coniferous plots, 2.63 m (2.83 m) in shrub plots, 0.82 m (0.93 m) in grass plots, and 0.70 m (0.51 m) in plots of developed areas. FICA performance was also relatively consistent under various slope and canopy coverage (CC) conditions. In addition, FICA showed better computational efficiency compared to existing methods. FICA's major computational and accuracy advantage is a result of the adopted multi-scale signal processing procedures that concentrate on local portions of the signal, as opposed to Gaussian decomposition, which uses a curve-fitting strategy applied to the entire signal. The FICA algorithm is a good candidate for large-scale implementation on future space-borne waveform LiDAR sensors.
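The basic ingredient FICA builds on, a second-derivative filter that responds sharply at waveform peaks, can be sketched simply. The following is an illustrative single-scale simplification, not the published algorithm: it locates peaks via the discrete second difference and takes the last significant peak in time as the ground return (the ground being the surface farthest from an airborne sensor). Thresholds and data are invented.

```python
# Single-scale sketch of second-derivative peak detection on a
# waveform, with the ground return taken as the last significant
# peak. FICA itself combines multiple filter scales with k-means
# clustering to suppress false peaks; this sketch omits both.

def second_derivative(w):
    """Discrete second difference; strongly negative at sharp peaks."""
    return [w[i - 1] - 2 * w[i] + w[i + 1] for i in range(1, len(w) - 1)]

def find_peaks(w, min_curvature=1.0):
    """Indices of local maxima with sufficiently sharp curvature."""
    d2 = second_derivative(w)
    return [i + 1 for i, v in enumerate(d2)
            if -v >= min_curvature
            and w[i + 1] >= w[i] and w[i + 1] >= w[i + 2]]

def ground_peak(w, min_curvature=1.0):
    """Last detected peak in time, interpreted as the ground return."""
    peaks = find_peaks(w, min_curvature)
    return peaks[-1] if peaks else None
```

Running the filter at several widths and clustering the responses, as the abstract describes, is what distinguishes a true ground peak from understory returns that a single-scale detector like this one would also flag.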
Making Sense of Minority Student Identification in Special Education: School Context Matters
ERIC Educational Resources Information Center
Talbott, Elizabeth; Fleming, Jane; Karabatsos, George; Dobria, Lidia
2011-01-01
Since the inception of special education, researchers have identified higher proportions of minority students with disabilities than expected. Yet, relatively few studies have considered the contributions of the school context on a large scale to the identification of students with mental retardation (MR), emotional disturbance (ED), and learning…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pankov, A. A., E-mail: pankov@ictp.it; Serenkova, I. A., E-mail: inna.serenkova@cern.ch; Tsytrinov, A. V., E-mail: tsytrin@gstu.by
2015-06-15
Prospects of discovering and identifying effects of extra spatial dimensions in dilepton and diphoton production at the Large Hadron Collider (LHC) are studied. Such effects may be revealed by the characteristic behavior of the invariant-mass distributions of dileptons and diphotons, and their identification can be performed on the basis of an analysis of their angular distributions. The discovery and identification reaches are estimated for the scale parameter M_S of the Kaluza-Klein gravitational towers, which can be determined in experiments devoted to measuring the dilepton and diphoton channels at the LHC.
USDA-ARS?s Scientific Manuscript database
Long noncoding RNAs (lncRNAs) have been recognized in recent years as key regulators of diverse cellular processes. Genome-wide large-scale projects have uncovered thousands of lncRNAs in many model organisms. Large intergenic noncoding RNAs (lincRNAs) are lncRNAs that are transcribed from intergeni...
The large-scale distribution of galaxies
NASA Technical Reports Server (NTRS)
Geller, Margaret J.
1989-01-01
The spatial distribution of galaxies in the universe is characterized on the basis of the six completed strips of the Harvard-Smithsonian Center for Astrophysics redshift-survey extension. The design of the survey is briefly reviewed, and the results are presented graphically. Vast low-density voids similar to the void in Bootes are found, almost completely surrounded by thin sheets of galaxies. Also discussed are the implications of the results for the survey sampling problem, the two-point correlation function of the galaxy distribution, the possibility of detecting large-scale coherent flows, theoretical models of large-scale structure, and the identification of groups and clusters of galaxies.
Identification of Phosphorylated Proteins on a Global Scale.
Iliuk, Anton
2018-05-31
Liquid chromatography (LC) coupled with tandem mass spectrometry (MS/MS) has enabled researchers to analyze complex biological samples with unprecedented depth. It facilitates the identification and quantification of modifications within thousands of proteins in a single large-scale proteomic experiment. Analysis of phosphorylation, one of the most common and important post-translational modifications, has particularly benefited from such progress in the field. Here, detailed protocols are provided for a few well-regarded, common sample preparation methods for an effective phosphoproteomic experiment. © 2018 by John Wiley & Sons, Inc.
Sugahara, Daisuke; Kaji, Hiroyuki; Sugihara, Kazushi; Asano, Masahide; Narimatsu, Hisashi
2012-01-01
Model organisms containing a deletion or mutation in a glycosyltransferase gene exhibit various physiological abnormalities, suggesting that specific glycan motifs on certain proteins play important roles in vivo. Identification of the target proteins of glycosyltransferase isozymes is the key to understanding the roles of glycans. Here, we demonstrated the proteome-scale identification of the target proteins specific for a glycosyltransferase isozyme, β1,4-galactosyltransferase-I (β4GalT-I). Although β4GalT-I is the most characterized glycosyltransferase, its distinctive contribution to β1,4-galactosylation has hardly been described so far. We identified a large number of candidates for the target proteins specific to β4GalT-I by comparative analysis of β4GalT-I-deleted and wild-type mice using the LC/MS-based technique with isotope-coded glycosylation site-specific tagging (IGOT) of lectin-captured N-glycopeptides. Our approach to identifying the target proteins on a proteome scale reveals common features and trends in the target proteins, which facilitate understanding of the mechanism that controls assembly of a particular glycan motif on specific proteins. PMID:23002422
Detection and classification of ash dieback on large-scale color aerial photographs
Ralph J. Croxton
1966-01-01
Aerial color photographs were taken at two scales over ash stands in New York State that were infected with ash dieback. Three photo interpreters then attempted to distinguish ash trees from other hardwoods and classify their disease condition. The scale of 1:7,920 was too small to permit accurate identification, but accuracy at the scale 1:1,584 was fair (60 to 70...
Grant Development for Large Scale Research Proposals: An Overview and Case Study
ERIC Educational Resources Information Center
Goodman, Ira S.
2011-01-01
With some NIH pay lines running at or below the 10th percentile, and funding becoming scarce for large science grants, new approaches are necessary to secure large interdisciplinary grant awards. The UCSD Moores Cancer Center has developed a team approach, starting with the identification of a competitive opportunity and progressing to the…
Genetics of Resistant Hypertension: the Missing Heritability and Opportunities.
Teixeira, Samantha K; Pereira, Alexandre C; Krieger, Jose E
2018-05-19
Blood pressure regulation in humans has long been known to be a genetically determined trait. The identification of causal genetic modulators for this trait has thus far been unfulfilling. Despite the recent advances of genome-wide genetic studies, loci associated with hypertension or blood pressure still explain a very low percentage of the overall variation of blood pressure in the general population. This has precluded the translation of discoveries in the genetics of human hypertension to clinical use. Here, we propose the combined use of resistant hypertension as a trait for mapping genetic determinants in humans and the integration of new large-scale technologies to approach, in model systems, the multidimensional nature of the problem. New large-scale efforts in the genetic and genomic arenas are paving the way for an increased and granular understanding of the genetic determinants of hypertension. New technologies for whole-genome sequencing and large-scale forward genetic screens can help prioritize genes and gene pathways for downstream characterization and large-scale population studies, and guided pharmacological design can be used to drive discoveries to translational application through better risk stratification and new therapeutic approaches. Although significant challenges remain in the mapping and identification of genetic determinants of hypertension, new large-scale technological approaches have been proposed to surpass some of the shortcomings that have limited progress in the area for the last three decades. The incorporation of these technologies into hypertension research may significantly help in the understanding of inter-individual blood pressure variation and the deployment of new phenotyping and treatment approaches for the condition.
Watanabe, Shinya; Ito, Teruyo; Morimoto, Yuh; Takeuchi, Fumihiko; Hiramatsu, Keiichi
2007-04-01
Large-scale chromosomal inversions (455 to 535 kbp) or deletions (266 to 320 kbp) were found to accompany spontaneous loss of beta-lactam resistance during drug-free passage of the multiresistant Staphylococcus haemolyticus clinical strain JCSC1435. Identification and sequencing of the rearranged chromosomal loci revealed that ISSha1 of S. haemolyticus is responsible for the chromosome rearrangements.
A global traveling wave on Venus
NASA Technical Reports Server (NTRS)
Smith, Michael D.; Gierasch, Peter J.; Schinder, Paul J.
1993-01-01
The dominant large-scale pattern in the clouds of Venus has been described as a 'Y' or 'Psi' and tentatively identified by earlier workers as a Kelvin wave. A detailed calculation of linear wave modes in the Venus atmosphere verifies this identification. Cloud feedback by infrared heating fluctuations is a plausible excitation mechanism. Modulation of the large-scale pattern by the wave is a possible explanation for the Y. Momentum transfer by the wave could contribute to sustaining the general circulation.
Seman, Ali; Sapawi, Azizian Mohd; Salleh, Mohd Zaki
2015-06-01
Y-chromosome short tandem repeats (Y-STRs) are genetic markers with practical applications in human identification. However, where mass identification is required (e.g., in the aftermath of disasters with significant fatalities), the efficiency of the process could be improved with new statistical approaches. Clustering applications are relatively new tools for large-scale comparative genotyping, and the k-Approximate Modal Haplotype (k-AMH), an efficient algorithm for clustering large-scale Y-STR data, represents a promising method for developing these tools. In this study we improved the k-AMH and produced three new algorithms: the Nk-AMH I (including a new initial cluster center selection), the Nk-AMH II (including a new dominant weighting value), and the Nk-AMH III (combining I and II). The Nk-AMH III was the superior algorithm, with mean clustering accuracy that increased in four out of six datasets and remained at 100% in the other two. Additionally, the Nk-AMH III achieved a 2% higher overall mean clustering accuracy score than the k-AMH, as well as optimal accuracy for all datasets (0.84-1.00). With inclusion of the two new methods, the Nk-AMH III produced an optimal solution for clustering Y-STR data; thus, the algorithm has potential for further development towards fully automatic clustering of any large-scale genotypic data.
Parameter identification of civil engineering structures
NASA Technical Reports Server (NTRS)
Juang, J. N.; Sun, C. T.
1980-01-01
This paper concerns the development of an identification method for determining structural parameter variations in systems subjected to extended exposure to the environment. The concept of structural identifiability of a large-scale structural system in the absence of damping is presented. Three criteria are established, indicating that a large number of system parameters (the coefficient parameters of the differential equations) can be identified with a few actuators and sensors. An eight-bay, fifteen-story frame structure is used as an example. A simple model is employed for analyzing the dynamic response of the frame structure.
ERIC Educational Resources Information Center
Perfect, Timothy J.; Weber, Nathan
2012-01-01
Explorations of memory accuracy control normally contrast forced-report with free-report performance across a set of items and show a trade-off between memory quantity and accuracy. However, this memory control framework has not been tested with lineup identifications that may involve rejection of all alternatives. A large-scale (N = 439) lineup…
Groups of galaxies in the Center for Astrophysics redshift survey
NASA Technical Reports Server (NTRS)
Ramella, Massimo; Geller, Margaret J.; Huchra, John P.
1989-01-01
By applying the Huchra and Geller (1982) objective group identification algorithm to the Center for Astrophysics' redshift survey, a catalog of 128 groups with three or more members is extracted, and 92 of these are used as a statistical sample. A comparison of the distribution of group centers with the distribution of all galaxies in the survey indicates qualitatively that groups trace the large-scale structure of the region. The physical properties of groups may be related to the details of large-scale structure, and it is concluded that differences among group catalogs may be due to the properties of large-scale structures and their location relative to the survey limits.
Endara, María-José; Coley, Phyllis D; Wiggins, Natasha L; Forrister, Dale L; Younkin, Gordon C; Nicholls, James A; Pennington, R Toby; Dexter, Kyle G; Kidner, Catherine A; Stone, Graham N; Kursar, Thomas A
2018-04-01
The need for species identification and taxonomic discovery has led to the development of innovative technologies for large-scale plant identification. DNA barcoding has been useful, but fails to distinguish among many species in species-rich plant genera, particularly in tropical regions. Here, we show that chemical fingerprinting, or 'chemocoding', has great potential for plant identification in challenging tropical biomes. Using untargeted metabolomics in combination with multivariate analysis, we constructed species-level fingerprints, which we define as chemocoding. We evaluated the utility of chemocoding with species that were defined morphologically and subject to next-generation DNA sequencing in the diverse and recently radiated neotropical genus Inga (Leguminosae), both at single study sites and across broad geographic scales. Our results show that chemocoding is a robust method for distinguishing morphologically similar species at a single site and for identifying widespread species across continental-scale ranges. Given that species are the fundamental unit of analysis for conservation and biodiversity research, the development of accurate identification methods is essential. We suggest that chemocoding will be a valuable additional source of data for a quick identification of plants, especially for groups where other methods fall short. © 2018 The Authors. New Phytologist © 2018 New Phytologist Trust.
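At its core, fingerprint-based identification of the kind this abstract calls chemocoding reduces to comparing a query sample's feature vector against reference fingerprints and reporting the best match. The sketch below uses cosine similarity as the comparison; feature names and values are invented for illustration, and the authors' actual pipeline builds fingerprints from untargeted metabolomics with multivariate analysis rather than this toy matching step.

```python
# Toy fingerprint matching: each species is a vector of metabolite-
# feature intensities; a query is assigned to the species whose
# reference fingerprint has the highest cosine similarity. All data
# are illustrative.
import math

def cosine(u, v):
    """Cosine similarity between two non-zero vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def identify(query, references):
    """references: dict mapping species name -> fingerprint vector.
    Returns the best-matching species name."""
    return max(references, key=lambda sp: cosine(query, references[sp]))
```

Cosine similarity compares the shape of the intensity profile rather than its magnitude, which is convenient when sample amounts vary between extractions; other multivariate distances would slot into `identify` the same way.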
Hsiung, Chang; Pederson, Christopher G.; Zou, Peng; Smith, Valton; von Gunten, Marc; O’Brien, Nada A.
2016-01-01
Near-infrared spectroscopy, as a rapid and non-destructive analytical technique, offers great advantages for pharmaceutical raw material identification (RMID) in fulfilling the quality and safety requirements of the pharmaceutical industry. In this study, we demonstrated the use of portable miniature near-infrared (MicroNIR) spectrometers for NIR-based pharmaceutical RMID and solved two challenges in this area, model transferability and large-scale classification, with the aid of support vector machine (SVM) modeling. We used a set of 19 pharmaceutical compounds including various active pharmaceutical ingredients (APIs) and excipients, together with six MicroNIR spectrometers, to test model transferability. For the test of large-scale classification, we used another set of 253 pharmaceutical compounds comprising both chemically and physically different APIs and excipients. We compared SVM with conventional chemometric modeling techniques, including soft independent modeling of class analogy, partial least squares discriminant analysis, linear discriminant analysis, and quadratic discriminant analysis. Support vector machine modeling using a linear kernel, especially when combined with a hierarchical scheme, exhibited excellent performance in both model transferability and large-scale classification. Hence, ultra-compact, portable and robust MicroNIR spectrometers coupled with SVM modeling can make on-site and in situ pharmaceutical RMID for large-volume applications highly achievable. PMID:27029624
USDA-ARS?s Scientific Manuscript database
Glycosylation is a common post-translational modification of plant proteins that impacts a large number of important biological processes. Nevertheless, the impacts of differential site occupancy and the nature of specific glycoforms are obscure. Historically, characterization of glycoproteins has b...
Aerial photo guide to New England forest cover types
Rachel Riemann Hershey; William A. Befort
1995-01-01
Presents color infrared photos in stereo pairs for the identification of New England forest cover types. Depicts range maps, ecological relations, and range of composition for each forest cover type described. The guide is designed to assist the needs of interpreters of medium to large-scale color infrared aerial photography.
Resources for Functional Genomics Studies in Drosophila melanogaster
Mohr, Stephanie E.; Hu, Yanhui; Kim, Kevin; Housden, Benjamin E.; Perrimon, Norbert
2014-01-01
Drosophila melanogaster has become a system of choice for functional genomic studies. Many resources, including online databases and software tools, are now available to support design or identification of relevant fly stocks and reagents or analysis and mining of existing functional genomic, transcriptomic, proteomic, etc. datasets. These include large community collections of fly stocks and plasmid clones, “meta” information sites like FlyBase and FlyMine, and an increasing number of more specialized reagents, databases, and online tools. Here, we introduce key resources useful to plan large-scale functional genomics studies in Drosophila and to analyze, integrate, and mine the results of those studies in ways that facilitate identification of highest-confidence results and generation of new hypotheses. We also discuss ways in which existing resources can be used and might be improved and suggest a few areas of future development that would further support large- and small-scale studies in Drosophila and facilitate use of Drosophila information by the research community more generally. PMID:24653003
Musical expertise is related to altered functional connectivity during audiovisual integration
Paraskevopoulos, Evangelos; Kraneburg, Anja; Herholz, Sibylle Cornelia; Bamidis, Panagiotis D.; Pantev, Christo
2015-01-01
The present study investigated the cortical large-scale functional network underpinning audiovisual integration via magnetoencephalographic recordings. The reorganization of this network related to long-term musical training was investigated by comparing musicians to nonmusicians. Connectivity was calculated on the basis of the estimated mutual information of the sources’ activity, and the corresponding networks were statistically compared. Nonmusicians’ results indicated that the cortical network associated with audiovisual integration supports visuospatial processing and attentional shifting, whereas a sparser network, related to spatial awareness, supports the identification of audiovisual incongruences. In contrast, musicians’ results showed enhanced connectivity in regions related to the identification of auditory pattern violations. Hence, nonmusicians rely on the processing of visual cues for the integration of audiovisual information, whereas musicians rely mostly on the corresponding auditory information. The large-scale cortical network underpinning multisensory integration is reorganized due to expertise in a cognitive domain that largely involves audiovisual integration, indicating long-term training-related neuroplasticity. PMID:26371305
Lo, Yu-Chen; Senese, Silvia; Li, Chien-Ming; Hu, Qiyang; Huang, Yong; Damoiseaux, Robert; Torres, Jorge Z.
2015-01-01
Target identification is one of the most critical steps following cell-based phenotypic chemical screens aimed at identifying compounds with potential uses in cell biology and for developing novel disease therapies. Current in silico target identification methods, including chemical similarity database searches, are limited to single or sequential ligand analysis that have limited capabilities for accurate deconvolution of a large number of compounds with diverse chemical structures. Here, we present CSNAP (Chemical Similarity Network Analysis Pulldown), a new computational target identification method that utilizes chemical similarity networks for large-scale chemotype (consensus chemical pattern) recognition and drug target profiling. Our benchmark study showed that CSNAP can achieve an overall higher accuracy (>80%) of target prediction with respect to representative chemotypes in large (>200) compound sets, in comparison to the SEA approach (60–70%). Additionally, CSNAP is capable of integrating with biological knowledge-based databases (Uniprot, GO) and high-throughput biology platforms (proteomic, genetic, etc) for system-wise drug target validation. To demonstrate the utility of the CSNAP approach, we combined CSNAP's target prediction with experimental ligand evaluation to identify the major mitotic targets of hit compounds from a cell-based chemical screen and we highlight novel compounds targeting microtubules, an important cancer therapeutic target. The CSNAP method is freely available and can be accessed from the CSNAP web server (http://services.mbi.ucla.edu/CSNAP/). PMID:25826798
Wolters, Mark A; Dean, C B
2017-01-01
Remote sensing images from Earth-orbiting satellites are a potentially rich data source for monitoring and cataloguing atmospheric health hazards that cover large geographic regions. A method is proposed for classifying such images into hazard and nonhazard regions using the autologistic regression model, which may be viewed as a spatial extension of logistic regression. The method includes a novel and simple approach to parameter estimation that makes it well suited to handling the large and high-dimensional datasets arising from satellite-borne instruments. The methodology is demonstrated on both simulated images and a real application to the identification of forest fire smoke.
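To make the "spatial extension of logistic regression" concrete, the toy sketch below applies the autologistic idea to a tiny synthetic image: each pixel's log-odds combine its own intensity covariate with the current labels of its 4-neighbors, and labels are updated iteratively (ICM-style). The coefficients are assumed for illustration, not estimated as in the paper.

```python
# Toy autologistic classification sketch. Invented: the 5x5 image, the
# coefficients (beta0, beta1, eta), and the fixed number of sweeps.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def autologistic_labels(image, beta0, beta1, eta, n_sweeps=5):
    """ICM-style relabeling: repeatedly set each pixel to its most
    probable class given its intensity and current neighbor labels."""
    h, w = len(image), len(image[0])
    labels = [[1 if image[r][c] > 0.5 else 0 for c in range(w)]
              for r in range(h)]
    for _ in range(n_sweeps):
        for r in range(h):
            for c in range(w):
                nbrs = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
                s = sum(labels[rr][cc] - 0.5
                        for rr, cc in nbrs if 0 <= rr < h and 0 <= cc < w)
                p = sigmoid(beta0 + beta1 * image[r][c] + eta * s)
                labels[r][c] = 1 if p > 0.5 else 0
    return labels

# A 5x5 "image": bright hazard block with one noisy dark pixel inside.
img = [[0.9 if (1 <= r <= 3 and 1 <= c <= 3) else 0.1 for c in range(5)]
       for r in range(5)]
img[2][2] = 0.2  # noise inside the bright region
out = autologistic_labels(img, beta0=-3.0, beta1=4.0, eta=1.5)
print(out[2][2])
```

The spatial term eta pulls the noisy interior pixel toward its neighbors' hazard label, which plain (non-spatial) logistic regression on intensity alone would not do.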
Searching for the elusive gift: advances in talent identification in sport.
Mann, David L; Dehghansai, Nima; Baker, Joseph
2017-08-01
The incentives for sport organizations to identify talented athletes from a young age continue to grow, yet effective talent identification remains a challenging task. This opinion paper examines recent advances in talent identification, focusing in particular on the emergence of new approaches that may offer promise to identify talent (e.g., small-sided games, genetic testing, and advanced statistical analyses). We appraise new multi-disciplinary and large-scale population studies of talent identification, provide a consideration of the most recent psychological predictors of performance, examine the emergence of new approaches that strive to diminish biases in talent identification, and look at the rise in interest in talent identification in Paralympic sport. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.
Miri, Andrew; Daie, Kayvon; Burdine, Rebecca D.; Aksay, Emre
2011-01-01
The advent of methods for optical imaging of large-scale neural activity at cellular resolution in behaving animals presents the problem of identifying behavior-encoding cells within the resulting image time series. Rapid and precise identification of cells with particular neural encoding would facilitate targeted activity measurements and perturbations useful in characterizing the operating principles of neural circuits. Here we report a regression-based approach to semiautomatically identify neurons that is based on the correlation of fluorescence time series with quantitative measurements of behavior. The approach is illustrated with a novel preparation allowing synchronous eye tracking and two-photon laser scanning fluorescence imaging of calcium changes in populations of hindbrain neurons during spontaneous eye movement in the larval zebrafish. Putative velocity-to-position oculomotor integrator neurons were identified that showed a broad spatial distribution and diversity of encoding. Optical identification of integrator neurons was confirmed with targeted loose-patch electrical recording and laser ablation. The general regression-based approach we demonstrate should be widely applicable to calcium imaging time series in behaving animals. PMID:21084686
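A minimal sketch of the regression-based idea: score each cell by the correlation of its fluorescence time series with a behavioral regressor (here a synthetic eye-position trace) and keep cells above a threshold. The traces, the regressor, and the 0.8 cutoff are all invented for illustration.

```python
# Correlate fluorescence traces with a behavioral signal to flag
# behavior-encoding cells. Invented: both traces, the eye-position
# regressor, and the correlation threshold.
import math

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Behavioral regressor: eye position over 8 time points.
eye_pos = [0, 1, 2, 3, 3, 2, 1, 0]

# Fluorescence traces: cell_A tracks eye position, cell_B does not.
traces = {
    "cell_A": [0.1, 1.2, 2.1, 2.9, 3.1, 1.8, 1.1, 0.2],
    "cell_B": [1.0, 0.9, 1.1, 1.0, 0.8, 1.2, 1.0, 0.9],
}

encoding_cells = [name for name, tr in traces.items()
                  if abs(pearson(tr, eye_pos)) > 0.8]
print(encoding_cells)
```

The paper's approach is richer (regression against quantitative behavioral measurements rather than a single correlation), but the selection principle is the same.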
Volkov, A V; Kolkutin, V V; Klevno, V A; Shkol'nikov, B V; Kornienko, I V
2008-01-01
Managerial experience is described that was gained during the large-scale victim identification work following the mass casualties of the Tu-154M and Airbus A310 passenger plane crashes. The authors emphasize the necessity of setting up a specialized agency of constant readiness, meeting modern requirements, for implementing a system of measures for the identification of individuals. This agency must incorporate relevant departments of the Ministries of Health, Defense, and Emergency Situations, as well as investigative authorities and other organizations.
Large Scale Single Nucleotide Polymorphism Study of PD Susceptibility
2005-03-01
identification of eight genetic loci in the familial PD, the results of intensive investigations of polymorphisms in dozens of genes related to sporadic, late... (1) investigate the association between classical, sporadic PD and 2386 SNPs in 23 genes implicated in the pathogenesis of PD; (2) construct... addition, experiences derived from this study may be applied in other complex disorders for the identification of susceptibility genes, as well as in genome
Kilpatrick, David R; Yang, Chen-Fu; Ching, Karen; Vincent, Annelet; Iber, Jane; Campagnoli, Ray; Mandelbaum, Mark; De, Lina; Yang, Su-Ju; Nix, Allan; Kew, Olen M
2009-06-01
We have adapted our previously described poliovirus diagnostic reverse transcription-PCR (RT-PCR) assays to a real-time RT-PCR (rRT-PCR) format. Our highly specific assays and rRT-PCR reagents are designed for use in the WHO Global Polio Laboratory Network for rapid and large-scale identification of poliovirus field isolates.
ERIC Educational Resources Information Center
Kurup, Anitha; Maithreyi, R.
2012-01-01
Large-scale sequential research developments for identification and measurement of giftedness have received ample attention in the West, whereas India's response to this has largely been lukewarm. The wide variation in parents' abilities to provide enriched environments to nurture their children's potential makes it imperative for India to develop…
Advancing the large-scale CCS database for metabolomics and lipidomics at the machine-learning era.
Zhou, Zhiwei; Tu, Jia; Zhu, Zheng-Jiang
2018-02-01
Metabolomics and lipidomics aim to comprehensively measure the dynamic changes of all metabolites and lipids that are present in biological systems. The use of ion mobility-mass spectrometry (IM-MS) for metabolomics and lipidomics has facilitated the separation and the identification of metabolites and lipids in complex biological samples. The collision cross-section (CCS) value derived from IM-MS is a valuable physiochemical property for the unambiguous identification of metabolites and lipids. However, CCS values obtained from experimental measurement and computational modeling are of limited availability, which significantly restricts the application of IM-MS. In this review, we will discuss the recently developed machine-learning based prediction approach, which can efficiently generate precise CCS databases at large scale. We will also highlight the applications of CCS databases to support metabolomics and lipidomics. Copyright © 2017 Elsevier Ltd. All rights reserved.
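The machine-learning prediction idea can be shown in miniature: fit a simple model mapping a molecular property to CCS on measured compounds, then predict CCS for unmeasured ones. Real predictors use many molecular descriptors and nonlinear models; the linear fit, the m/z-to-CCS pairs, and the single lipid-class assumption below are all illustrative.

```python
# Toy CCS prediction: least-squares line from m/z to CCS within one
# assumed compound class. Invented: all (m/z, CCS) training pairs.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Invented training pairs (m/z, CCS in squared angstroms).
mz = [400.0, 500.0, 600.0, 700.0]
ccs = [210.0, 230.0, 250.0, 270.0]
slope, intercept = fit_line(mz, ccs)
predicted = slope * 550.0 + intercept  # predict an unmeasured species
print(round(predicted, 1))
```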
Exhaustive identification of steady state cycles in large stoichiometric networks
Wright, Jeremiah; Wagner, Andreas
2008-01-01
Background: Identifying cyclic pathways in chemical reaction networks is important, because such cycles may indicate in silico violation of energy conservation, or the existence of feedback in vivo. Unfortunately, our ability to identify cycles in stoichiometric networks, such as signal transduction and genome-scale metabolic networks, has been hampered by the computational complexity of the methods currently used. Results: We describe a new algorithm for the identification of cycles in stoichiometric networks, and we compare its performance to two others by exhaustively identifying the cycles contained in the genome-scale metabolic networks of H. pylori, M. barkeri, E. coli, and S. cerevisiae. Our algorithm can substantially decrease both the execution time and maximum memory usage in comparison to the two previous algorithms. Conclusion: The algorithm we describe improves our ability to study large, real-world, biochemical reaction networks, although additional methodological improvements are desirable. PMID:18616835
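To illustrate what cycle identification means here, the sketch below exhaustively enumerates simple directed cycles in a tiny made-up metabolite graph by depth-first search. Genome-scale networks require far more efficient algorithms (the point of the paper); this is only the naive baseline.

```python
# Naive exhaustive enumeration of simple directed cycles by DFS.
# Invented: the four-node example network. Each cycle is reported once,
# canonically rotated to start at its smallest node.
def simple_cycles(graph):
    cycles = set()

    def dfs(start, node, path):
        for nxt in graph.get(node, []):
            if nxt == start:
                i = path.index(min(path))          # canonical rotation
                cycles.add(tuple(path[i:] + path[:i]))
            elif nxt not in path and nxt > start:  # avoid duplicates
                dfs(start, nxt, path + [nxt])

    for v in graph:
        dfs(v, v, [v])
    return cycles

# A -> B -> C -> A plus a dead-end branch C -> D.
net = {"A": ["B"], "B": ["C"], "C": ["A", "D"], "D": []}
print(simple_cycles(net))
```

Restricting the search to successors larger than the start node ensures each cycle is discovered exactly once, from its smallest member.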
Computational Issues in Damping Identification for Large Scale Problems
NASA Technical Reports Server (NTRS)
Pilkey, Deborah L.; Roe, Kevin P.; Inman, Daniel J.
1997-01-01
Two damping identification methods are tested for efficiency in large-scale applications. One is an iterative routine, and the other a least squares method. Numerical simulations have been performed on multiple degree-of-freedom models to test the effectiveness of the algorithm and the usefulness of parallel computation for the problems. High Performance Fortran is used to parallelize the algorithm. Tests were performed using the IBM-SP2 at NASA Ames Research Center. The least squares method tested incurs high communication costs, which reduces the benefit of high performance computing. This method's memory requirement grows at a very rapid rate, meaning that larger problems can quickly exceed available computer memory. The iterative method's memory requirement grows at a much slower pace and is able to handle problems with 500+ degrees of freedom on a single processor. This method benefits from parallelization, and significant speedup can be seen for problems of 100+ degrees of freedom.
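A one-degree-of-freedom illustration of least-squares damping identification: given sampled acceleration, velocity, displacement, and force with mass and stiffness assumed known, solve m*a + c*v + k*x = f for the damping coefficient c in a least-squares sense. The signals and parameter values below are synthetic, and the real methods in the report operate on full damping matrices, not a scalar.

```python
# Scalar least-squares damping identification sketch. Invented: all
# signals and the true (m, c, k) values; noise-free for clarity.
def identify_damping(m, k, accel, vel, disp, force):
    # The residual r(t) = f - m*a - k*x should equal c*v; the
    # least-squares estimate is c = sum(v * r) / sum(v * v).
    num = sum(v * (f - m * a - k * x)
              for v, f, a, x in zip(vel, force, accel, disp))
    den = sum(v * v for v in vel)
    return num / den

m_true, c_true, k_true = 2.0, 0.5, 8.0
accel = [0.0, 1.0, -1.0, 0.5]
vel = [1.0, 0.5, -0.5, 1.5]
disp = [0.1, 0.2, 0.0, -0.1]
force = [m_true * a + c_true * v + k_true * x
         for a, v, x in zip(accel, vel, disp)]
c_est = identify_damping(m_true, k_true, accel, vel, disp, force)
print(round(c_est, 6))
```

With many degrees of freedom the same normal-equations structure grows quadratically in the model size, which is the memory-growth problem the abstract describes for the least squares method.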
Tools for phospho- and glycoproteomics of plasma membranes.
Wiśniewski, Jacek R
2011-07-01
Analysis of plasma membrane proteins and their posttranslational modifications is considered important for identification of disease markers and targets for drug treatment. Due to their insolubility in water, plasma membrane proteins were for a long time difficult to study using mass spectrometry. Recent technological developments in sample preparation, together with important improvements in mass spectrometric analysis, have facilitated analysis of these proteins and their posttranslational modifications. Now, large-scale proteomic analyses allow identification of thousands of membrane proteins from minute amounts of sample. Optimized protocols for affinity enrichment of phosphorylated and glycosylated peptides have set new dimensions in the depth of characterization of these posttranslational modifications of plasma membrane proteins. Here, I summarize recent advances in proteomic technology for the characterization of cell surface proteins and their modifications. The focus is on approaches allowing large-scale mapping rather than analytical methods suited to studying individual proteins or non-complex mixtures.
Accurate population genetic measurements require cryptic species identification in corals
NASA Astrophysics Data System (ADS)
Sheets, Elizabeth A.; Warner, Patricia A.; Palumbi, Stephen R.
2018-06-01
Correct identification of closely related species is important for reliable measures of gene flow. Incorrectly lumping individuals of different species together has been shown to over- or underestimate population differentiation, but examples highlighting when these different results are observed in empirical datasets are rare. Using 199 single nucleotide polymorphisms, we assigned 768 individuals in the Acropora hyacinthus and A. cytherea morphospecies complexes to each of eight previously identified cryptic genetic species and measured intraspecific genetic differentiation across three geographic scales (within reefs, among reefs within an archipelago, and among Pacific archipelagos). We then compared these calculations to estimated genetic differentiation at each scale with all cryptic genetic species mixed as if we could not tell them apart. At the reef scale, correct genetic species identification yielded lower F ST estimates and fewer significant comparisons than when species were mixed, raising estimates of short-scale gene flow. In contrast, correct genetic species identification at large spatial scales yielded higher F ST measurements than mixed-species comparisons, lowering estimates of long-term gene flow among archipelagos. A meta-analysis of published population genetic studies in corals found similar results: F ST estimates at small spatial scales were lower and significance was found less often in studies that controlled for cryptic species. Our results and these prior datasets controlling for cryptic species suggest that genetic differentiation among local reefs may be lower than what has generally been reported in the literature. Not properly controlling for cryptic species structure can bias population genetic analyses in different directions across spatial scales, and this has important implications for conservation strategies that rely on these estimates.
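A worked sketch of the FST effect described above, using Wright's FST = (HT - HS) / HT for one biallelic SNP in two populations, where HS is the mean within-population expected heterozygosity and HT the heterozygosity of the pooled population. The allele frequencies are invented, chosen only to show how lumping two genetically distinct (cryptic) species inflates apparent differentiation relative to a within-species contrast.

```python
# Wright's FST for one biallelic locus in two populations.
# Invented: all allele frequencies; the contrast is illustrative only.
def fst(p1, p2):
    h1 = 2 * p1 * (1 - p1)      # within-pop expected heterozygosity
    h2 = 2 * p2 * (1 - p2)
    hs = (h1 + h2) / 2.0        # mean within-population heterozygosity
    pbar = (p1 + p2) / 2.0      # pooled allele frequency
    ht = 2 * pbar * (1 - pbar)  # pooled (total) heterozygosity
    return (ht - hs) / ht

within_species = fst(0.50, 0.55)  # two reefs of one genetic species
mixed_species = fst(0.10, 0.90)   # lumping two cryptic species
print(round(within_species, 4), round(mixed_species, 4))
```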
NASA Technical Reports Server (NTRS)
Pelletier, R. E.; Hudnall, W. H.
1987-01-01
The use of Space Shuttle Large Format Camera (LFC) color, IR/color, and B&W images in large-scale soil mapping is discussed and illustrated with sample photographs from STS 41-6 (October 1984). Consideration is given to the characteristics of the film types used; the photographic scales available; geometric and stereoscopic factors; and image interpretation and classification for soil-type mapping (detecting both sharp and gradual boundaries), soil parent material topographic and hydrologic assessment, natural-resources inventory, crop-type identification, and stress analysis. It is suggested that LFC photography can play an important role, filling the gap between aerial and satellite remote sensing.
Suchard, Marc A; Zorych, Ivan; Simpson, Shawn E; Schuemie, Martijn J; Ryan, Patrick B; Madigan, David
2013-10-01
The self-controlled case series (SCCS) offers potential as a statistical method for risk identification involving medical products from large-scale observational healthcare data. However, analytic design choices remain in encoding the longitudinal health records into the SCCS framework, and its risk identification performance across real-world databases is unknown. To evaluate the performance of SCCS and its design choices as a tool for risk identification in observational healthcare data, we examined the risk identification performance of SCCS across five design choices using 399 drug-health outcome pairs in five real observational databases (four administrative claims and one electronic health records). In these databases, the pairs involve 165 positive controls and 234 negative controls. We also consider several synthetic databases with known relative risks between drug-outcome pairs. We evaluate risk identification performance through estimating the area under the receiver-operator characteristics curve (AUC) and bias and coverage probability in the synthetic examples. The SCCS achieves strong predictive performance. Twelve of the twenty health outcome-database scenarios return AUCs >0.75 across all drugs. Including all adverse events instead of just the first per patient and applying a multivariate adjustment for concomitant drug use are the most important design choices. However, the SCCS as applied here returns relative risk point-estimates biased towards the null value of 1 with low coverage probability. The SCCS recently extended to apply a multivariate adjustment for concomitant drug use offers promise as a statistical tool for risk identification in large-scale observational healthcare databases. Poor estimator calibration dampens enthusiasm, but ongoing work should correct this shortcoming.
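The evaluation metric used above has a simple interpretation worth making concrete: the AUC is the probability that a randomly chosen positive control (a true drug-outcome association) receives a higher risk score than a randomly chosen negative control. The scores below are fabricated to show the computation only.

```python
# Rank-based AUC over positive and negative control scores.
# Invented: all effect-estimate scores.
def auc(pos_scores, neg_scores):
    wins = sum((p > n) + 0.5 * (p == n)  # ties count half
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

positives = [2.1, 1.8, 1.5, 1.2]  # estimated effects for true signals
negatives = [1.0, 0.9, 1.3, 0.7]  # estimated effects for null pairs
print(auc(positives, negatives))
```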
Mitchell, Joshua M.; Fan, Teresa W.-M.; Lane, Andrew N.; Moseley, Hunter N. B.
2014-01-01
Large-scale identification of metabolites is key to elucidating and modeling metabolism at the systems level. Advances in metabolomics technologies, particularly ultra-high resolution mass spectrometry (MS) enable comprehensive and rapid analysis of metabolites. However, a significant barrier to meaningful data interpretation is the identification of a wide range of metabolites including unknowns and the determination of their role(s) in various metabolic networks. Chemoselective (CS) probes to tag metabolite functional groups combined with high mass accuracy provide additional structural constraints for metabolite identification and quantification. We have developed a novel algorithm, Chemically Aware Substructure Search (CASS) that efficiently detects functional groups within existing metabolite databases, allowing for combined molecular formula and functional group (from CS tagging) queries to aid in metabolite identification without a priori knowledge. Analysis of the isomeric compounds in both Human Metabolome Database (HMDB) and KEGG Ligand demonstrated a high percentage of isomeric molecular formulae (43 and 28%, respectively), indicating the necessity for techniques such as CS-tagging. Furthermore, these two databases have only moderate overlap in molecular formulae. Thus, it is prudent to use multiple databases in metabolite assignment, since each major metabolite database represents different portions of metabolism within the biosphere. In silico analysis of various CS-tagging strategies under different conditions for adduct formation demonstrate that combined FT-MS derived molecular formulae and CS-tagging can uniquely identify up to 71% of KEGG and 37% of the combined KEGG/HMDB database vs. 41 and 17%, respectively without adduct formation. This difference between database isomer disambiguation highlights the strength of CS-tagging for non-lipid metabolite identification. 
However, unique identification of complex lipids still needs additional information. PMID:25120557
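The combined formula-plus-functional-group query at the heart of the CS-tagging argument can be shown with a toy database: a molecular formula alone is ambiguous across isomers, while adding the functional-group constraint recovered from a chemoselective tag narrows the match. The mini-database below is invented and lists only the tag-relevant group per compound; it is not CASS's actual data model.

```python
# Toy combined formula + functional-group lookup. Invented: the
# database entries and the simplification to one group per compound.
db = [
    {"name": "glucose",  "formula": "C6H12O6", "groups": {"aldehyde"}},
    {"name": "fructose", "formula": "C6H12O6", "groups": {"ketone"}},
    {"name": "inositol", "formula": "C6H12O6", "groups": {"hydroxyl"}},
    {"name": "citrate",  "formula": "C6H8O7",  "groups": {"carboxyl"}},
]

def query(formula, required_group=None):
    hits = [m for m in db if m["formula"] == formula]
    if required_group is not None:
        hits = [m for m in hits if required_group in m["groups"]]
    return [m["name"] for m in hits]

print(len(query("C6H12O6")))         # formula alone: three isomers
print(query("C6H12O6", "aldehyde"))  # tag constraint: unique hit
```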
Zhang, Yaoyang; Xu, Tao; Shan, Bing; Hart, Jonathan; Aslanian, Aaron; Han, Xuemei; Zong, Nobel; Li, Haomin; Choi, Howard; Wang, Dong; Acharya, Lipi; Du, Lisa; Vogt, Peter K; Ping, Peipei; Yates, John R
2015-11-03
Shotgun proteomics generates valuable information from large-scale and target protein characterizations, including protein expression, protein quantification, protein post-translational modifications (PTMs), protein localization, and protein-protein interactions. Typically, peptides derived from proteolytic digestion, rather than intact proteins, are analyzed by mass spectrometers because peptides are more readily separated, ionized and fragmented. The amino acid sequences of peptides can be interpreted by matching the observed tandem mass spectra to theoretical spectra derived from a protein sequence database. Identified peptides serve as surrogates for their proteins and are often used to establish what proteins were present in the original mixture and to quantify protein abundance. Two major issues exist for assigning peptides to their originating protein. The first issue is maintaining a desired false discovery rate (FDR) when comparing or combining multiple large datasets generated by shotgun analysis, and the second issue is properly assigning peptides to proteins when homologous proteins are present in the database. Herein we demonstrate a new computational tool, ProteinInferencer, which can be used for protein inference with both small- and large-scale data sets to produce a well-controlled protein FDR. In addition, ProteinInferencer introduces confidence scoring for individual proteins, which makes protein identifications evaluable. This article is part of a Special Issue entitled: Computational Proteomics. Copyright © 2015. Published by Elsevier B.V.
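A minimal sketch of how a protein-level FDR is typically controlled with a target-decoy strategy: rank proteins by score and keep the deepest prefix of the ranking whose estimated FDR (decoy count over target count) stays under a threshold. The scores and the 20% threshold below are fabricated for illustration; this is the generic technique, not ProteinInferencer's specific scoring model.

```python
# Generic target-decoy FDR filtering sketch. Invented: all protein
# scores and the (deliberately loose) FDR threshold.
def fdr_filter(proteins, max_fdr=0.01):
    """proteins: list of (name, score, is_decoy); higher score = better.
    Returns accepted target proteins at the chosen FDR."""
    ranked = sorted(proteins, key=lambda p: -p[1])
    accepted, targets, decoys = [], 0, 0
    for name, score, is_decoy in ranked:
        if is_decoy:
            decoys += 1
        else:
            targets += 1
        # Keep the deepest cut whose estimated FDR is acceptable.
        if targets and decoys / targets <= max_fdr:
            accepted = [p[0] for p in ranked[: targets + decoys]
                        if not p[2]]
    return accepted

hits = [("P1", 9.1, False), ("P2", 8.7, False), ("DECOY_1", 8.5, True),
        ("P3", 7.9, False), ("P4", 7.0, False)]
accepted = fdr_filter(hits, max_fdr=0.2)
print(accepted)
```

Here the list is cut just above the first decoy, since admitting it would push the estimated FDR past the threshold.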
A plea for a global natural history collection - online
USDA-ARS?s Scientific Manuscript database
Species are the currency of comparative biology: scientists from many biological disciplines, including community ecology, conservation biology, pest management, and biological control rely on scientifically sound, objective species data. However, large-scale species identifications are often not fe...
MIPHENO: Data normalization for high throughput metabolic analysis.
High throughput methodologies such as microarrays, mass spectrometry and plate-based small molecule screens are increasingly used to facilitate discoveries from gene function to drug candidate identification. These large-scale experiments are typically carried out over the course...
Sultan, Mohammad M; Kiss, Gert; Shukla, Diwakar; Pande, Vijay S
2014-12-09
Given the large number of crystal structures and NMR ensembles that have been solved to date, classical molecular dynamics (MD) simulations have become powerful tools in the atomistic study of the kinetics and thermodynamics of biomolecular systems on ever increasing time scales. By virtue of the high-dimensional conformational state space that is explored, the interpretation of large-scale simulations faces difficulties not unlike those in the big data community. We address this challenge by introducing a method called clustering based feature selection (CB-FS) that employs a posterior analysis approach. It combines supervised machine learning (SML) and feature selection with Markov state models to automatically identify the relevant degrees of freedom that separate conformational states. We highlight the utility of the method in the evaluation of large-scale simulations and show that it can be used for the rapid and automated identification of relevant order parameters involved in the functional transitions of two exemplary cell-signaling proteins central to human disease states.
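The core of the feature-selection step can be illustrated in miniature: given conformations labeled by Markov-state cluster, rank each degree of freedom by a simple class-separation score (difference of class means over pooled spread). CB-FS itself uses supervised machine learning models; the score, the two-state data, and the feature meanings below are invented simplifications.

```python
# Toy class-separation ranking of candidate order parameters.
# Invented: the samples, labels, and the Fisher-like score.
def separation_scores(samples, labels):
    """samples: list of feature vectors; labels: 0/1 state assignment."""
    n_feat = len(samples[0])
    scores = []
    for j in range(n_feat):
        a = [s[j] for s, l in zip(samples, labels) if l == 0]
        b = [s[j] for s, l in zip(samples, labels) if l == 1]
        ma, mb = sum(a) / len(a), sum(b) / len(b)
        va = sum((x - ma) ** 2 for x in a) / len(a)
        vb = sum((x - mb) ** 2 for x in b) / len(b)
        scores.append(abs(ma - mb) / ((va + vb) ** 0.5 + 1e-12))
    return scores

# Feature 0: a distance that switches between states (a relevant order
# parameter). Feature 1: uninformative noise.
X = [[1.0, 0.3], [1.1, 0.9], [0.9, 0.5],   # state 0 conformations
     [3.0, 0.4], [3.1, 0.8], [2.9, 0.6]]   # state 1 conformations
y = [0, 0, 0, 1, 1, 1]
scores = separation_scores(X, y)
best = max(range(len(scores)), key=scores.__getitem__)
print(best)
```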
K. Bruce Jones; Anne C. Neale; Timothy G. Wade; James D. Wickham; Chad L. Cross; Curtis M. Edmonds; Thomas R. Loveland; Maliha S. Nash; Kurt H. Riitters; Elizabeth R. Smith
2001-01-01
Spatially explicit identification of changes in ecological conditions over large areas is key to targeting and prioitizing areas for environmental protection and restoration by managers at watershed, basin, and regional scales. A critical limitation to this point has been the development of methods to conduct such broad-scale assessments. Field-based methods have...
Research on large-scale wind farm modeling
NASA Astrophysics Data System (ADS)
Ma, Longfei; Zhang, Baoqun; Gong, Cheng; Jiao, Ran; Shi, Rui; Chi, Zhongjun; Ding, Yifeng
2017-01-01
Due to the intermittent and fluctuating nature of wind energy, a large-scale wind farm connected to the grid affects the power system in ways that differ from traditional power plants. It is therefore necessary to establish an effective wind farm model to simulate and analyze the influence wind farms have on the grid, as well as the transient characteristics of the wind turbines when the grid is at fault. An effective wind turbine generator (WTG) model must be established first. As the doubly-fed VSCF wind turbine has become the mainstream wind turbine type, this article first reviews the research progress on doubly-fed VSCF wind turbines and then describes the detailed process of building the model. It then surveys common wind farm modeling methods and points out the problems encountered. Because WAMS is widely used in the power system, online parameter identification of the wind farm model based on the output characteristics of the wind farm becomes possible; the article focuses on interpreting this new idea of identification-based modeling of large wind farms, which can be realized by two concrete methods.
The PREP pipeline: standardized preprocessing for large-scale EEG analysis.
Bigdely-Shamlo, Nima; Mullen, Tim; Kothe, Christian; Su, Kyung-Min; Robbins, Kay A
2015-01-01
The technology to collect brain imaging and physiological measures has become portable and ubiquitous, opening the possibility of large-scale analysis of real-world human imaging. By its nature, such data is large and complex, making automated processing essential. This paper shows how lack of attention to the very early stages of an EEG preprocessing pipeline can reduce the signal-to-noise ratio and introduce unwanted artifacts into the data, particularly for computations done in single precision. We demonstrate that ordinary average referencing improves the signal-to-noise ratio, but that noisy channels can contaminate the results. We also show that identification of noisy channels depends on the reference and examine the complex interaction of filtering, noisy channel identification, and referencing. We introduce a multi-stage robust referencing scheme to deal with the noisy channel-reference interaction. We propose a standardized early-stage EEG processing pipeline (PREP) and discuss the application of the pipeline to more than 600 EEG datasets. The pipeline includes an automatically generated report for each dataset processed. Users can download the PREP pipeline as a freely available MATLAB library from http://eegstudy.org/prepcode.
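The noisy-channel/reference interaction described above can be sketched as follows: estimate a robust average reference by excluding channels whose variability is an outlier (robust z-score of per-channel standard deviation), then subtract that reference from every channel. The data, the flagging criterion, and the threshold below are illustrative assumptions, not the PREP pipeline's actual defaults.

```python
# Toy robust average referencing with noisy-channel detection.
# Invented: the EEG snippets, the MAD-based criterion, and z_thresh.
import statistics

def robust_average_reference(data, z_thresh=3.0):
    """data: dict channel -> list of samples.
    Returns (re-referenced data, channels flagged as noisy)."""
    stds = {ch: statistics.pstdev(x) for ch, x in data.items()}
    med = statistics.median(stds.values())
    mad = statistics.median(abs(s - med) for s in stds.values()) or 1e-12
    noisy = [ch for ch, s in stds.items()
             if (s - med) / (1.4826 * mad) > z_thresh]
    clean = [ch for ch in data if ch not in noisy]
    n = len(data[next(iter(data))])
    ref = [sum(data[ch][t] for ch in clean) / len(clean)
           for t in range(n)]
    rereferenced = {ch: [v - r for v, r in zip(x, ref)]
                    for ch, x in data.items()}
    return rereferenced, noisy

eeg = {
    "Cz": [1.0, -1.0, 1.0, -1.0],
    "Pz": [0.8, -0.9, 1.1, -1.0],
    "Fz": [1.1, -1.1, 0.9, -0.8],
    "T7": [50.0, -48.0, 52.0, -51.0],  # grossly noisy channel
}
_, flagged = robust_average_reference(eeg)
print(flagged)
```

Excluding T7 before averaging keeps its artifacts out of the reference, which is the contamination effect an ordinary average reference would suffer.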
NASA Technical Reports Server (NTRS)
Gradwohl, Ben-Ami
1991-01-01
The universe may have undergone a superfluid-like phase during its evolution, resulting from the injection of nontopological charge into the spontaneously broken vacuum. In the presence of vortices this charge is identified with angular momentum. This leads to turbulent domains on the scale of the correlation length. By restoring the symmetry at low temperatures, the vortices dissociate and push the charges to the boundaries of these domains. The model can be scaled (phenomenologically) to very low energies; it can be incorporated in a late-time phase transition and form large-scale structure in the boundary layers of the correlation volumes. The novel feature of the model lies in the fact that the dark matter is endowed with coherent motion. The possibility of identifying this flow around superfluid vortices with the observed large-scale bulk motion is discussed. If this identification is possible, then the definite prediction can be made that a more extended map of peculiar velocities would have to reveal large-scale circulations in the flow pattern.
Computer aided manual validation of mass spectrometry-based proteomic data.
Curran, Timothy G; Bryson, Bryan D; Reigelhaupt, Michael; Johnson, Hannah; White, Forest M
2013-06-15
Advances in mass spectrometry-based proteomic technologies have increased the speed of analysis and the depth provided by a single analysis. Computational tools to evaluate the accuracy of peptide identifications from these high-throughput analyses have not kept pace with technological advances; currently the most common quality evaluation methods are based on statistical analysis of the likelihood of false positive identifications in large-scale data sets. While helpful, these calculations do not consider the accuracy of each identification, thus creating a precarious situation for biologists relying on the data to inform experimental design. Manual validation is the gold standard approach to confirm accuracy of database identifications, but is extremely time-intensive. To palliate the increasing time required to manually validate large proteomic datasets, we provide computer aided manual validation software (CAMV) to expedite the process. Relevant spectra are collected, catalogued, and pre-labeled, allowing users to efficiently judge the quality of each identification and summarize applicable quantitative information. CAMV significantly reduces the burden associated with manual validation and will hopefully encourage broader adoption of manual validation in mass spectrometry-based proteomics. Copyright © 2013 Elsevier Inc. All rights reserved.
Banana production systems: identification of alternative systems for more sustainable production.
Bellamy, Angelina Sanderson
2013-04-01
Large-scale monoculture production systems dependent on synthetic fertilizers and pesticides increase yields, but are costly and have deleterious impacts on human health and the environment. This research investigates variations in banana production practices in Costa Rica to identify alternative systems that combine high productivity and profitability with reduced reliance on agrochemicals. Farm workers were observed during daily production activities; 39 banana producers and 8 extension workers/researchers were interviewed; and a review of field experiments conducted by the National Banana Corporation between 1997 and 2002 was made. Correspondence analysis showed that there is no structured variation in large-scale banana producers' practices, but two other banana production systems were identified: a small-scale organic system and a small-scale conventional coffee-banana intercropped system. Field-scale research may reveal ways that these practices can be scaled up to achieve a productive and profitable system producing high-quality export bananas with fewer or no pesticides.
A new way to protect privacy in large-scale genome-wide association studies.
Kamm, Liina; Bogdanov, Dan; Laur, Sven; Vilo, Jaak
2013-04-01
Increased availability of various genotyping techniques has initiated a race for finding genetic markers that can be used in diagnostics and personalized medicine. Although many genetic risk factors are known, key causes of common diseases with complex heritage patterns are still unknown. Identification of such complex traits requires a targeted study over a large collection of data. Ideally, such studies bring together data from many biobanks. However, data aggregation on such a large scale raises many privacy issues. We show how to conduct such studies without violating privacy of individual donors and without leaking the data to third parties. The presented solution has provable security guarantees. Supplementary data are available at Bioinformatics online.
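The kind of privacy-preserving aggregation the abstract describes can be illustrated with additive secret sharing, a standard building block of secure multi-party computation. This is a minimal sketch, not the paper's actual protocol; the field size, party count, and per-biobank counts are invented for illustration.

```python
import random

PRIME = 2**61 - 1  # assumed field size for the shares

def share(value, n_parties):
    """Split an integer into n additive shares modulo PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    """Recombine shares; only the sum is ever revealed."""
    return sum(shares) % PRIME

# Each biobank secret-shares its local minor-allele count; no single
# party ever sees an individual biobank's contribution in the clear.
local_counts = [120, 95, 210]                 # hypothetical per-biobank counts
per_party = [share(c, 3) for c in local_counts]
# Party j holds one share from every biobank and sums them locally.
aggregate_shares = [sum(p[j] for p in per_party) % PRIME for j in range(3)]
total = reconstruct(aggregate_shares)          # the aggregate count, 425
```

Only the aggregate statistic is reconstructed, which is the essential idea behind running association tests over pooled biobank data without leaking individual donors' genotypes.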
Identification of differentially methylated sites with weak methylation effect
USDA-ARS?s Scientific Manuscript database
DNA methylation is an epigenetic alteration crucial for regulating stress responses. Identifying large-scale DNA methylation at single nucleotide resolution is made possible by whole genome bisulfite sequencing. An essential task following the generation of bisulfite sequencing data is to detect dif...
Iris indexing based on local intensity order pattern
NASA Astrophysics Data System (ADS)
Emerich, Simina; Malutan, Raul; Crisan, Septimiu; Lefkovits, Laszlo
2017-03-01
In recent years, iris biometric systems have increased in popularity and have proven capable of handling large-scale databases. The main advantages of these systems are accuracy and reliability. Proper classification of iris patterns is expected to reduce matching time in huge databases. This paper presents an iris indexing technique based on the Local Intensity Order Pattern. The performance of the present approach is evaluated on the UPOL database and compared with other recent systems designed for iris indexing. The results illustrate the potential of the proposed method for large-scale iris identification.
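The intensity-order idea behind such descriptors can be sketched as follows: the rank order of a pixel's neighbour intensities is mapped to a histogram bin, and the histogram forms the index feature. This is a toy three-neighbour version; the published LIOP descriptor adds rotation-invariant sampling and weighting that are omitted here.

```python
from itertools import permutations

# Map each possible rank ordering of N neighbour intensities to a bin.
N = 3
PERM_INDEX = {p: i for i, p in enumerate(permutations(range(N)))}

def intensity_order_pattern(neighbours):
    """Bin index of the ascending rank order of the neighbour intensities."""
    order = tuple(sorted(range(N), key=lambda i: neighbours[i]))
    return PERM_INDEX[order]

def liop_histogram(neighbour_samples):
    """Accumulate order patterns from sampled neighbour triples."""
    hist = [0] * len(PERM_INDEX)
    for nb in neighbour_samples:
        hist[intensity_order_pattern(nb)] += 1
    return hist

# Hypothetical neighbour triples sampled around three pixels of an iris patch.
h = liop_histogram([(10, 40, 25), (3, 2, 1), (5, 6, 7)])
```

Because only relative intensity order matters, the descriptor is invariant to monotonic illumination changes, which is what makes it attractive for indexing.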
Fast Open-World Person Re-Identification.
Zhu, Xiatian; Wu, Botong; Huang, Dongcheng; Zheng, Wei-Shi
2018-05-01
Existing person re-identification (re-id) methods typically assume that: 1) any probe person is guaranteed to appear in the gallery target population during deployment (i.e., closed-world) and 2) the probe set contains only a limited number of people (i.e., small search scale). Both assumptions are artificial and are breached in real-world applications, since the probe population in target people search can be extremely vast in practice due to the ambiguity of the probe search space boundary. It is therefore unrealistic to assume that every probe person is a target person, and a large-scale search over person images is inherently demanded. In this paper, we introduce a new person re-id search setting, called large-scale open-world (LSOW) re-id, characterized by a huge set of probe images and an open person population in search, and thus closer to practical deployments. Under LSOW, the under-studied problem of person re-id efficiency is essential in addition to the commonly studied problem of re-id accuracy. We therefore develop a novel fast person re-id method, called Cross-view Identity Correlation and vErification (X-ICE) hashing, for joint learning of cross-view identity representation binarisation and discrimination in a unified manner. Extensive comparative experiments on three large-scale benchmarks have been conducted to validate the superiority and advantages of the proposed X-ICE method over a wide range of state-of-the-art hashing models, person re-id methods, and their combinations.
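The retrieval side of such hashing methods can be sketched as sign-thresholded binary codes ranked by Hamming distance, which is what makes large-scale search fast. The thresholds and four-bit codes below are toy values, not the learned X-ICE objective.

```python
def hamming(a, b):
    """Hamming distance between two integer-packed bit codes."""
    return bin(a ^ b).count("1")

def binarise(features, thresholds):
    """Threshold real-valued identity features into a compact bit code."""
    code = 0
    for f, t in zip(features, thresholds):
        code = (code << 1) | (1 if f > t else 0)
    return code

# Hypothetical 4-dimensional identity features for two gallery people.
gallery = {"id_a": binarise([0.9, 0.1, 0.7, 0.3], [0.5] * 4),
           "id_b": binarise([0.2, 0.8, 0.1, 0.9], [0.5] * 4)}
probe = binarise([0.8, 0.2, 0.6, 0.4], [0.5] * 4)
# Ranking by Hamming distance costs one XOR + popcount per gallery entry.
best = min(gallery, key=lambda k: hamming(probe, gallery[k]))
```

In an open-world setting, a distance threshold on the best match would additionally decide whether the probe is a target person at all, rather than always returning the nearest gallery identity.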
Lim, Hansaim; Poleksic, Aleksandar; Yao, Yuan; Tong, Hanghang; He, Di; Zhuang, Luke; Meng, Patrick; Xie, Lei
2016-10-01
Target-based screening is one of the major approaches in drug discovery. Besides the intended target, unexpected drug off-target interactions often occur, and many of them have not been recognized and characterized. The off-target interactions can be responsible for either therapeutic or side effects. Thus, identifying the genome-wide off-targets of lead compounds or existing drugs will be critical for designing effective and safe drugs, and will provide new opportunities for drug repurposing. Although many computational methods have been developed to predict drug-target interactions, they are either less accurate than the one proposed here or computationally too intensive, thereby limiting their capability for large-scale off-target identification. In addition, the performance of most machine learning based algorithms has mainly been evaluated on predicting off-target interactions within the same gene family for hundreds of chemicals. It is not clear how these algorithms perform in detecting off-targets across gene families on a proteome scale. Here, we present a fast and accurate off-target prediction method, REMAP, which is based on a dual regularized one-class collaborative filtering algorithm, to explore continuous chemical space, protein space, and their interactome on a large scale. When tested on a reliable, extensive, and cross-gene-family benchmark, REMAP outperforms the state-of-the-art methods. Furthermore, REMAP is highly scalable: it can screen a dataset of 200 thousand chemicals against 20 thousand proteins within 2 hours. Using the reconstructed genome-wide target profile as the fingerprint of a chemical compound, we predicted that seven FDA-approved drugs can be repurposed as novel anti-cancer therapies. The anti-cancer activity of six of them is supported by experimental evidence.
Thus, REMAP is a valuable addition to the existing in silico toolbox for drug target identification, drug repurposing, phenotypic screening, and side effect prediction. The software and benchmark are available at https://github.com/hansaimlim/REMAP.
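The core of a one-class collaborative filtering recommender can be sketched as weighted low-rank matrix factorization, where unobserved chemical-protein pairs are treated as weak negatives rather than true zeros. This is a minimal sketch: REMAP's dual regularization additionally uses chemical- and protein-similarity graphs, which are replaced here by plain L2 penalties, and the tiny matrix and hyperparameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
# Observed chemical-protein interaction matrix (1 = known interaction).
R = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
# One-class weighting: unobserved pairs get low weight, not hard zeros.
W = np.where(R > 0, 1.0, 0.1)
k, lam, lr = 2, 0.1, 0.05
U = rng.normal(scale=0.1, size=(R.shape[0], k))   # chemical factors
V = rng.normal(scale=0.1, size=(R.shape[1], k))   # protein factors

for _ in range(500):
    E = W * (U @ V.T - R)             # weighted residual
    U -= lr * (E @ V + lam * U)       # gradient step with L2 regularisation
    V -= lr * (E.T @ U + lam * V)

scores = U @ V.T                       # predicted off-target interaction scores
```

After fitting, high scores on unobserved entries are the candidate off-targets; the low weight on unobserved pairs is what lets the model propose new interactions instead of memorizing zeros.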
Bi, Fukun; Chen, Jing; Zhuang, Yin; Bian, Mingming; Zhang, Qingjun
2017-01-01
With the rapid development of optical remote sensing satellites, ship detection and identification based on large-scale remote sensing images has become a significant maritime research topic. Compared with traditional ocean-going vessel detection, inshore ship detection has received increasing attention in harbor dynamic surveillance and maritime management. However, because the harbor environment is complex and the gray-level information and texture features of docked ships are nearly indistinguishable from those of their connected dock regions, most popular detection methods are limited in calculation efficiency and detection accuracy. In this paper, a novel hierarchical method that combines an efficient candidate scanning strategy and an accurate candidate identification mixture model is presented for inshore ship detection in complex harbor areas. First, in the candidate region extraction phase, an omnidirectional intersected two-dimension scanning (OITDS) strategy is designed to rapidly extract candidate regions from the land-water segmented images. In the candidate region identification phase, a decision mixture model (DMM) is proposed to identify real ships from candidate objects. Specifically, to improve robustness to the diversity of ships, a deformable part model (DPM) is employed to train a key-part sub-model and a whole-ship sub-model. Furthermore, to improve identification accuracy, a surrounding correlation context sub-model is built. Finally, to increase the accuracy of candidate region identification, these three sub-models are integrated into the proposed DMM. Experiments were performed on numerous large-scale harbor remote sensing images, and the results showed that the proposed method has high detection accuracy and rapid computational efficiency. PMID:28640236
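The final integration step can be illustrated as a weighted fusion of the three sub-model scores, with a threshold deciding whether a candidate region is a real ship. The weights, threshold, and candidate scores below are hypothetical, not the trained DMM parameters.

```python
def dmm_score(part_score, whole_score, context_score, weights=(0.4, 0.4, 0.2)):
    """Fuse key-part, whole-ship and surrounding-context sub-model scores."""
    w_part, w_whole, w_ctx = weights
    return w_part * part_score + w_whole * whole_score + w_ctx * context_score

def identify(candidates, threshold=0.5):
    """Keep candidate regions whose fused score clears the decision threshold."""
    return [c["name"] for c in candidates
            if dmm_score(c["part"], c["whole"], c["context"]) >= threshold]

# Hypothetical sub-model scores for two candidate regions from the scan phase.
candidates = [
    {"name": "docked_ship", "part": 0.9, "whole": 0.8, "context": 0.7},
    {"name": "dock_edge",   "part": 0.3, "whole": 0.2, "context": 0.4},
]
ships = identify(candidates)
```

The context term is what separates a docked ship from a visually similar dock edge: the latter lacks the water-side surroundings that the context sub-model rewards.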
Identification and Screening of Carcass Pretreatment ...
Technical Fact Sheet. Managing the treatment and disposal of large numbers of animal carcasses following a foreign animal disease (FAD) outbreak is a challenging endeavor. Pretreatment of the infectious carcasses might facilitate their disposal by simplifying transportation, reducing the pathogen load in the carcasses, or isolating the pathogen from the environment to minimize the spread of any pathogens. This brief summarizes information contained in the U.S. Environmental Protection Agency (EPA) report (EPA/600/R-15/053) entitled Identification and Screening of Infectious Carcass Pretreatment Alternatives. This brief describes how each of eleven pretreatment methods can be used prior to, and in conjunction with, six commonly used large-scale carcass disposal options.
ERIC Educational Resources Information Center
Wolf, Walter A., Ed.
1978-01-01
Reported here are brief descriptions of a common grading and scaling formula for large multi-section courses, an ion exchange amino acid separation and thin layer chromatography identification experiment, a conservation of energy demonstration, a catalyst for synthesizing esters from fatty acids, and an inexpensive method for preparing platinum…
NASA Astrophysics Data System (ADS)
Peng, Yu; Wang, Qinghui; Fan, Min
2017-11-01
When assessing re-vegetation project performance and optimizing land management, identification of the key ecological factors inducing vegetation degradation has crucial implications. Rainfall, temperature, elevation, slope, aspect, land use type, and human disturbance are ecological factors affecting vegetation status. However, at different spatial scales, the key factors may vary. Using Helin County, Inner Mongolia, China as the study site and combining remote sensing image interpretation, field surveying, and mathematical methods, this study assesses the key ecological factors affecting vegetation degradation at different spatial scales in a semi-arid agro-pastoral ecotone. It indicates that the key factors differ across spatial scales. Elevation, rainfall, and temperature are identified as crucial for all spatial extents. Elevation, rainfall and human disturbance are key factors for small-scale quadrats of 300 m × 300 m and 600 m × 600 m; temperature and land use type are key factors for a medium-scale quadrat of 1 km × 1 km; and rainfall, temperature, and land use are key factors for large-scale quadrats of 2 km × 2 km and 5 km × 5 km. For this region, human disturbance is not the key factor for vegetation degradation across spatial scales. It is necessary to consider spatial scale when identifying the key factors determining vegetation characteristics. Eco-restoration programs at various spatial scales should identify the key influencing factors according to their scales so as to take effective measures. The new understanding obtained in this study may help to explore the forces driving vegetation degradation in degraded regions around the world.
Identification of MAPK Substrates Using Quantitative Phosphoproteomics.
Zhang, Tong; Schneider, Jacqueline D; Zhu, Ning; Chen, Sixue
2017-01-01
Activation of mitogen-activated protein kinases (MAPKs) under diverse biotic and abiotic factors and identification of an array of downstream MAPK target proteins are hot topics in plant signal transduction. Through interactions with a plethora of substrate proteins, MAPK cascades regulate many physiological processes in the course of plant growth, development, and response to environmental factors. Identification and quantification of potential MAPK substrates are essential, but have been technically challenging. With the recent advancement in phosphoproteomics, here we describe a method that couples metal dioxide for phosphopeptide enrichment with tandem mass tags (TMT) mass spectrometry (MS) for large-scale MAPK substrate identification and quantification. We have applied this method to a transient expression system carrying a wild type (WT) and a constitutive active (CA) version of a MAPK. This combination of genetically engineered MAPKs and phosphoproteomics provides a high-throughput, unbiased analysis of MAPK-triggered phosphorylation changes on the proteome scale. Therefore, it is a robust method for identifying potential MAPK substrates and should be applicable in the study of other kinase cascades in plants as well as in other organisms.
Online video game addiction: identification of addicted adolescent gamers.
Van Rooij, Antonius J; Schoenmakers, Tim M; Vermulst, Ad A; Van den Eijnden, Regina J J M; Van de Mheen, Dike
2011-01-01
To provide empirical data-driven identification of a group of addicted online gamers. Repeated cross-sectional survey study, comprising a longitudinal cohort, conducted in 2008 and 2009. Secondary schools in the Netherlands. Two large samples of Dutch schoolchildren (aged 13-16 years). Compulsive internet use scale, weekly hours of online gaming and psychosocial variables. This study confirms the existence of a small group of addicted online gamers (3%), representing about 1.5% of all children aged 13-16 years in the Netherlands. Although these gamers report addiction-like problems, relationships with decreased psychosocial health were less evident. The identification of a small group of addicted online gamers supports efforts to develop and validate questionnaire scales aimed at measuring the phenomenon of online video game addiction. The findings contribute to the discussion on the inclusion of non-substance addictions in the proposed unified concept of 'Addiction and Related Disorders' for the DSM-V by providing indirect identification and validation of a group of suspected online video game addicts. © 2010 The Authors, Addiction © 2010 Society for the Study of Addiction.
Bio-inspired digital signal processing for fast radionuclide mixture identification
NASA Astrophysics Data System (ADS)
Thevenin, M.; Bichler, O.; Thiam, C.; Bobin, C.; Lourenço, V.
2015-05-01
Countries are equipping their public transportation infrastructure with fixed radiation portals and detectors to detect radiological threats. Current work usually focuses on neutron detection, which could be useless in the case of a dirty bomb that does not use fissile material. Another approach, monitoring gamma dose-rate variation, is a good indicator of the presence of a radionuclide. However, some legitimate products emit large quantities of natural gamma rays, and the environment also emits gamma rays naturally; these can lead to false detections. Moreover, such radioactivity could be used to hide a threat such as material for a dirty bomb. Consequently, radionuclide identification is required, and it is traditionally performed by gamma spectrometry using the unique spectral signature of each radionuclide. These approaches require high-resolution detectors, sufficient integration time to accumulate statistics, and large computing capacity for data analysis. High-resolution detectors are fragile and costly, making them poor candidates for large-scale homeland security applications. Plastic scintillator and NaI detectors fit such applications, but their resolution makes identification difficult, especially for radionuclide mixtures. This paper proposes an original signal processing strategy based on artificial spiking neural networks to enable fast radionuclide identification at low count rates and for mixtures. It presents results obtained for different challenging mixtures of radionuclides using a NaI scintillator. The results show that correct identification is achieved with fewer than one hundred counts and that no false identifications are reported, enabling quick identification of a moving threat in public transportation. Further work will focus on plastic scintillators.
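For contrast with the spiking-network approach, a conventional low-count identification baseline is maximum-likelihood template matching under Poisson counting statistics. The five-channel spectral templates below are invented for illustration; real templates would come from calibrated detector response functions.

```python
import math

# Hypothetical normalised channel templates for two radionuclides.
TEMPLATES = {
    "Cs-137": [0.05, 0.10, 0.60, 0.20, 0.05],
    "Co-60":  [0.05, 0.15, 0.10, 0.35, 0.35],
}

def log_likelihood(counts, template, total):
    """Poisson log-likelihood of observed channel counts given a template."""
    ll = 0.0
    for n, p in zip(counts, template):
        mu = max(total * p, 1e-9)          # expected counts in this channel
        ll += n * math.log(mu) - mu - math.lgamma(n + 1)
    return ll

def identify(counts):
    """Pick the radionuclide whose template best explains the sparse spectrum."""
    total = sum(counts)
    return max(TEMPLATES, key=lambda r: log_likelihood(counts, TEMPLATES[r], total))

# Only ~40 counts, yet the photopeak channel already favours Cs-137.
source = identify([2, 4, 24, 8, 2])
```

Because the likelihood is evaluated channel by channel, the decision remains well defined even with a few dozen counts, which is the regime the paper targets.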
Molecular Identification of XY Sex-Reversed Female and YY Male Channel Catfish
USDA-ARS?s Scientific Manuscript database
Production of channel catfish leads U.S. aquaculture, and monosex culture may provide higher production efficiencies. Determination of phenotypic sex is labor intensive and not practical for large scale culture. Catfish have an X-Y sex determination system with monomorphic sex chromosomes. Hormonal...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rodrigues, Davi C.; Piattella, Oliver F.; Chauvineau, Bertrand, E-mail: davi.rodrigues@cosmo-ufes.org, E-mail: Bertrand.Chauvineau@oca.eu, E-mail: oliver.piattella@pq.cnpq.br
We show that Renormalization Group extensions of the Einstein-Hilbert action for large scale physics are not, in general, a particular case of standard Scalar-Tensor (ST) gravity. We present a new class of ST actions, in which the potential is not necessarily fixed at the action level, and show that this extended ST theory formally contains the Renormalization Group case. We also propose here a Renormalization Group scale setting identification that is explicitly covariant and valid for arbitrary relativistic fluids.
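For orientation, a Brans-Dicke-type scalar-tensor action with the potential left unfixed at the action level, of the general kind the abstract refers to, can be written as (a generic sketch; the paper's extended action may differ in detail):

```latex
S \;=\; \frac{1}{16\pi}\int d^4x\,\sqrt{-g}\,
\left[\Phi R \;-\; \frac{\omega(\Phi)}{\Phi}\,\nabla_\mu\Phi\,\nabla^\mu\Phi
\;-\; 2\Lambda(\Phi)\right] \;+\; S_m\!\left[g_{\mu\nu},\psi\right]
```

Roughly speaking, in the Renormalization Group case the scalar plays the role of the running inverse Newton constant, and the covariant scale-setting identification fixes the argument of the running couplings from the relativistic fluid variables rather than at the level of the action.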
The PREP pipeline: standardized preprocessing for large-scale EEG analysis
Bigdely-Shamlo, Nima; Mullen, Tim; Kothe, Christian; Su, Kyung-Min; Robbins, Kay A.
2015-01-01
The technology to collect brain imaging and physiological measures has become portable and ubiquitous, opening the possibility of large-scale analysis of real-world human imaging. By its nature, such data is large and complex, making automated processing essential. This paper shows how lack of attention to the very early stages of an EEG preprocessing pipeline can reduce the signal-to-noise ratio and introduce unwanted artifacts into the data, particularly for computations done in single precision. We demonstrate that ordinary average referencing improves the signal-to-noise ratio, but that noisy channels can contaminate the results. We also show that identification of noisy channels depends on the reference and examine the complex interaction of filtering, noisy channel identification, and referencing. We introduce a multi-stage robust referencing scheme to deal with the noisy channel-reference interaction. We propose a standardized early-stage EEG processing pipeline (PREP) and discuss the application of the pipeline to more than 600 EEG datasets. The pipeline includes an automatically generated report for each dataset processed. Users can download the PREP pipeline as a freely available MATLAB library from http://eegstudy.org/prepcode. PMID:26150785
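The multi-stage robust referencing idea can be sketched as an iterate-until-stable loop: estimate an average reference from the currently good channels, flag channels that deviate wildly from it, and repeat. This is a simplified sketch with an invented MAD-based criterion; PREP's actual noisy-channel detection combines several additional measures (correlation, RANSAC prediction, high-frequency noise).

```python
import numpy as np

def robust_reference(data, z_thresh=5.0, n_iter=4):
    """Iteratively build an average reference that excludes noisy channels."""
    noisy = set()
    for _ in range(n_iter):
        good = [c for c in range(data.shape[0]) if c not in noisy]
        ref = data[good].mean(axis=0)          # reference from good channels only
        # Robust z-score of each channel's deviation from the current reference.
        dev = np.std(data - ref, axis=1)
        med = np.median(dev[good])
        mad = np.median(np.abs(dev[good] - med)) * 1.4826 + 1e-12
        noisy = {c for c in range(data.shape[0])
                 if (dev[c] - med) / mad > z_thresh}
    return data - ref, noisy

rng = np.random.default_rng(1)
eeg = rng.normal(size=(8, 1000))               # 8 channels x 1000 samples
eeg[3] += rng.normal(scale=20.0, size=1000)    # inject one very noisy channel
referenced, bad = robust_reference(eeg)
```

The point of the iteration is exactly the noisy channel-reference interaction the abstract describes: a bad channel contaminates the ordinary average reference, which in turn masks the detection of that same channel unless the two steps are alternated.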
Role of optometry school in single day large scale school vision testing
Anuradha, N; Ramani, Krishnakumar
2015-01-01
Background: School vision testing aims at the identification and management of refractive errors. Large-scale school vision testing using conventional methods is time-consuming and demands a lot of chair time from eye care professionals. A new strategy involving a school of optometry in single-day large-scale school vision testing is discussed. Aim: The aim was to describe a new approach for performing vision testing of school children on a large scale in a single day. Materials and Methods: A single-day vision testing strategy was implemented wherein 123 members (20 teams comprising optometry students, each headed by an optometrist) conducted vision testing for children in 51 schools. School vision testing included basic vision screening, refraction, frame measurements, frame choice and referrals for other ocular problems. Results: A total of 12448 children were screened, among whom 420 (3.37%) were identified to have refractive errors; 28 (1.26%) children belonged to the primary, 163 (9.80%) to the middle, 129 (4.67%) to the secondary and 100 (1.73%) to the higher secondary levels of education, respectively. 265 (2.12%) children were referred for further evaluation. Conclusion: Single-day large-scale school vision testing can be adopted by schools of optometry to reach a higher number of children within a short span. PMID:25709271
De-Identification in Learning Analytics
ERIC Educational Resources Information Center
Khalila, Mohammad; Ebner, Martin
2016-01-01
Learning analytics has reserved its position as an important field in the educational sector. However, the large-scale collection, processing, and analysis of data has steered the wheel beyond the borders to face an abundance of ethical breaches and constraints. Revealing learners' personal information and attitudes, as well as their activities,…
USDA-ARS?s Scientific Manuscript database
A large-scale challenge experiment using type 2 porcine reproductive and respiratory virus (PRRSV) provided new insights into the pathophysiology of reproductive PRRS in third-trimester pregnant gilts. Deep phenotyping enabled identification of maternal and fetal factors predictive of PRRS severity ...
Spatially explicit identification of status and changes in ecological conditions over large, regional areas is key to targeting and prioritizing areas for potential further study and environmental protection and restoration. A critical limitation to this point has been our abili...
Partial Identification of Treatment Effects: Applications to Generalizability
ERIC Educational Resources Information Center
Chan, Wendy
2016-01-01
Results from large-scale evaluation studies form the foundation of evidence-based policy. The randomized experiment is often considered the gold standard among study designs because the causal impact of a treatment or intervention can be assessed without threats of confounding from external variables. Policy-makers have become increasingly…
Spatially explicit identification of changes in ecological conditions over large areas is key to targeting and prioritizing areas for environmental protection and restoration by managers at watershed, basin, and regional scales. A critical limitation to this point has bee...
A Glance at Microsatellite Motifs from 454 Sequencing Reads of Watermelon Genomic DNA
USDA-ARS?s Scientific Manuscript database
A single 454 (Life Sciences Sequencing Technology) run of Charleston Gray watermelon (Citrullus lanatus var. lanatus) genomic DNA was performed and sequence data were assembled. A large scale identification of simple sequence repeat (SSR) was performed and SSR sequence data were used for the develo...
Spatially explicit identification of changes in ecological conditions over large areas is key to targeting and prioritizing areas for environmental protection and restoration by managers at watershed, basin, and regional scales. A critical limitation to this point has been the d...
Parallel Clustering Algorithm for Large-Scale Biological Data Sets
Wang, Minchao; Zhang, Wu; Ding, Wang; Dai, Dongbo; Zhang, Huiran; Xie, Hao; Chen, Luonan; Guo, Yike; Xie, Jiang
2014-01-01
Background: The recent explosion of biological data brings a great challenge for traditional clustering algorithms. With the increasing scale of data sets, much larger memory and longer runtime are required for cluster identification problems. The affinity propagation algorithm outperforms many other classical clustering algorithms and is widely applied in biological research. However, its time and space complexity become a great bottleneck when handling large-scale data sets. Moreover, the similarity matrix, whose construction takes a long runtime, is required before running the affinity propagation algorithm, since the algorithm clusters data sets based on the similarities between data pairs. Methods: Two types of parallel architectures are proposed in this paper to accelerate the similarity matrix construction procedure and the affinity propagation algorithm. The memory-shared architecture is used to construct the similarity matrix, and the distributed system is taken for the affinity propagation algorithm, because of its large memory size and great computing capacity. An appropriate way of data partition and reduction is designed in our method, in order to minimize the global communication cost among processes. Results: A speedup of 100 is gained with 128 cores. The runtime is reduced from several hours to a few seconds, which indicates that the parallel algorithm is capable of handling large-scale data sets effectively. The parallel affinity propagation also achieves a good performance when clustering large-scale gene data (microarray) and detecting families in large protein superfamilies. PMID:24705246
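The row-block data partition used to parallelize similarity matrix construction can be sketched as follows. This is a thread-based toy version: the paper uses a memory-shared architecture for this step and a distributed system for affinity propagation itself, and the negative squared Euclidean distance is the similarity conventionally fed to affinity propagation.

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def similarity_block(args):
    """Negative squared Euclidean distances for one block of rows."""
    X, rows = args
    return rows, -((X[rows, None, :] - X[None, :, :]) ** 2).sum(axis=2)

def parallel_similarity(X, n_workers=4):
    """Assemble the full similarity matrix from independently computed row blocks."""
    n = len(X)
    blocks = np.array_split(np.arange(n), n_workers)   # row-wise data partition
    S = np.empty((n, n))
    with ThreadPoolExecutor(max_workers=n_workers) as ex:
        for rows, block in ex.map(similarity_block, [(X, b) for b in blocks]):
            S[rows] = block
    return S

X = np.random.default_rng(2).normal(size=(40, 5))      # 40 points, 5 features
S = parallel_similarity(X)
```

Because each row block depends only on the shared data matrix, the blocks need no communication until the final assembly, which is what makes this step embarrassingly parallel on a shared-memory machine.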
Level-set techniques for facies identification in reservoir modeling
NASA Astrophysics Data System (ADS)
Iglesias, Marco A.; McLaughlin, Dennis
2011-03-01
In this paper we investigate the application of level-set techniques for facies identification in reservoir models. The identification of facies is a geometrical inverse ill-posed problem that we formulate in terms of shape optimization. The goal is to find a region (a geologic facies) that minimizes the misfit between predicted and measured data from an oil-water reservoir. In order to address the shape optimization problem, we present a novel application of the level-set iterative framework developed by Burger (2002 Interfaces Free Bound. 5 301-29; 2004 Inverse Problems 20 259-82) for inverse obstacle problems. The optimization is constrained by (the reservoir model) a nonlinear large-scale system of PDEs that describes the reservoir dynamics. We reformulate this reservoir model in a weak (integral) form whose shape derivative can be formally computed from standard results of shape calculus. At each iteration of the scheme, the current estimate of the shape derivative is utilized to define a velocity in the level-set equation. The proper selection of this velocity ensures that the new shape decreases the cost functional. We present results of facies identification where the velocity is computed with the gradient-based (GB) approach of Burger (2002) and the Levenberg-Marquardt (LM) technique of Burger (2004). While an adjoint formulation allows the straightforward application of the GB approach, the LM technique requires the computation of the large-scale Karush-Kuhn-Tucker system that arises at each iteration of the scheme. We efficiently solve this system by means of the representer method. We present some synthetic experiments to show and compare the capabilities and limitations of the proposed implementations of level-set techniques for the identification of geologic facies.
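The basic level-set evolution step the scheme relies on can be sketched as follows. Central differences are used for brevity (practical implementations use upwind schemes), and the constant velocity is a stand-in for the one derived from the shape derivative of the data misfit at each iteration.

```python
import numpy as np

def evolve_level_set(phi, velocity, dt, steps):
    """Advance the level-set equation phi_t + V |grad phi| = 0."""
    for _ in range(steps):
        gy, gx = np.gradient(phi)
        phi = phi - dt * velocity * np.sqrt(gx**2 + gy**2)
    return phi

# Signed-distance function of a circular facies of radius 8 in a 32x32 grid.
y, x = np.mgrid[0:32, 0:32]
phi0 = np.sqrt((x - 16.0)**2 + (y - 16.0)**2) - 8.0
# A positive outward velocity expands the region {phi < 0} (the facies).
phi1 = evolve_level_set(phi0, velocity=1.0, dt=0.2, steps=5)
```

The facies is the region where phi is negative, so the shape never needs an explicit parameterization: topology changes (facies merging or splitting) are handled automatically by the evolving zero contour.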
Identification and Characterization of Genomic Amplifications in Ovarian Serous Carcinoma
2009-07-01
oncogenes, Rsf1 and Notch3, which were up-regulated in both genomic DNA and transcript levels in ovarian cancer. In a large-scale FISH analysis, Rsf1...associated with worse disease outcome, suggesting that Rsf1 could be potentially used as a prognostic marker in the future (Appendix #1). For the...over-expressed in a recurrent carcinoma. Although the follow-up study in a larger-scale sample size did not demonstrate clear amplification in NAC1
NAGAMINE, Kanetada
2016-01-01
Cosmic-ray muons (CRM) arriving from the sky at the surface of the earth can now be used for radiography to explore the inner structure of large-scale objects and landforms ranging in thickness from meters to kilometers, such as volcanic mountains, blast furnaces, nuclear reactors, etc. At the same time, by using muons produced by compact accelerators (CAM), advanced radiography can be realized for objects with a thickness in the sub-millimeter to meter range, with additional exploration capabilities such as element identification and bio-chemical analysis. In the present report, the principles, methods and specific research examples of CRM transmission radiography are summarized, after which the principles, methods and perspectives of future CAM radiography are described. PMID:27725469
Mesoscale Dynamical Regimes in the Midlatitudes
NASA Astrophysics Data System (ADS)
Craig, G. C.; Selz, T.
2018-01-01
The atmospheric mesoscales are characterized by a complex variety of meteorological phenomena that defy simple classification. Here a full space-time spectral analysis is carried out, based on a 7 day convection-permitting simulation of springtime midlatitude weather on a large domain. The kinetic energy is largest at synoptic scales, and on the mesoscale it is largely confined to an "advective band" where space and time scales are related by a constant of proportionality which corresponds to a velocity scale of about 10 m s⁻¹. Computing the relative magnitude of different terms in the governing equations allows the identification of five dynamical regimes. These are tentatively identified as quasi-geostrophic flow, propagating gravity waves, stationary gravity waves related to orography, acoustic modes, and a weak temperature gradient regime, where vertical motions are forced by diabatic heating.
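The "advective band" relation described above can be made concrete: if space and time scales are linked by a velocity scale U, a structure of horizontal extent L has a characteristic timescale T = L/U. A minimal sketch, with U taken as the quoted 10 m/s and purely illustrative length scales:

```python
# Illustrative only: the advective-band relation T = L / U from the
# abstract, with the quoted velocity scale U ~ 10 m/s. Length scales are
# examples, not values from the study.
U = 10.0  # m/s, the constant of proportionality between space and time scales

def advective_timescale_hours(length_km, u=U):
    """Timescale (hours) of a structure of horizontal scale length_km (km)
    advected at velocity u (m/s)."""
    return length_km * 1000.0 / u / 3600.0

for L in (10, 100, 1000):  # mesoscale through synoptic-scale examples
    print(f"L = {L:4d} km  ->  T ~ {advective_timescale_hours(L):.1f} h")
```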
2009-01-01
Background Sequence identification of ESTs from non-model species offers distinct challenges, particularly when these species have duplicated genomes and when they are phylogenetically distant from sequenced model organisms. For the common carp, an environmental model of aquacultural interest, large numbers of ESTs remained unidentified using BLAST sequence alignment. We have used the expression profiles from large-scale microarray experiments to suggest gene identities. Results Expression profiles from ~700 cDNA microarrays describing responses of 7 major tissues to multiple environmental stressors were used to define a co-expression landscape. This was based on the Pearson correlation coefficient relating each gene with all other genes, from which a network description provided clusters of highly correlated genes as 'mountains'. We show that these contain genes with known identities and genes with unknown identities, and that the correlation constitutes evidence of identity in the latter. This procedure has suggested identities for 522 of 2701 unknown carp EST sequences. We also discriminate several common carp genes and gene isoforms that were not discriminated by BLAST sequence alignment alone. Precision in identification was substantially improved by use of data from multiple tissues and treatments. Conclusion The detailed analysis of co-expression landscapes is a sensitive technique for suggesting an identity for the large number of BLAST-unidentified cDNAs generated in EST projects. It is capable of detecting even subtle changes in expression profiles, and thereby of distinguishing genes with a common BLAST identity into different identities. It benefits from the use of multiple treatments or contrasts, and from large-scale microarray data. PMID:19939286
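As a rough sketch of the co-expression idea (not the authors' pipeline), an unannotated EST can be assigned a tentative identity from the annotated gene whose expression profile it correlates with most strongly; all gene names and profiles below are invented toy data:

```python
# Minimal sketch, not the study's method: suggest an identity for an
# unannotated EST by Pearson-correlating its expression profile against
# profiles of annotated genes. All profiles here are made-up toy values.
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length profiles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

annotated = {  # gene -> toy expression profile across tissues/treatments
    "hsp70": [1.0, 5.2, 4.8, 1.1, 0.9],
    "actb":  [3.0, 3.1, 2.9, 3.0, 3.2],
}
unknown_est = [0.9, 5.0, 5.1, 1.0, 1.1]

best = max(annotated, key=lambda g: pearson(unknown_est, annotated[g]))
print("suggested identity:", best)
```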
Motivation: As cancer genomics initiatives move toward comprehensive identification of genetic alterations in cancer, attention is now turning to understanding how interactions among these genes lead to the acquisition of tumor hallmarks. Emerging pharmacological and clinical data suggest a highly promising role of cancer-specific protein-protein interactions (PPIs) as druggable cancer targets. However, large-scale experimental identification of cancer-related PPIs remains challenging, and currently available resources to explore oncogenic PPI networks are limited.
AutoCNet: A Python library for sparse multi-image correspondence identification for planetary data
NASA Astrophysics Data System (ADS)
Laura, Jason; Rodriguez, Kelvin; Paquette, Adam C.; Dunn, Evin
2018-01-01
In this work we describe the AutoCNet library, written in Python, to support the application of computer vision techniques for n-image correspondence identification in remotely sensed planetary images and subsequent bundle adjustment. The library is designed to support exploratory data analysis, algorithm and processing pipeline development, and application at scale in High Performance Computing (HPC) environments for processing large data sets and generating foundational data products. We also present a brief case study illustrating high level usage for the Apollo 15 Metric camera.
Structure identification methods for atomistic simulations of crystalline materials
Stukowski, Alexander
2012-05-28
Here, we discuss existing and new computational analysis techniques to classify local atomic arrangements in large-scale atomistic computer simulations of crystalline solids. This article includes a performance comparison of typical analysis algorithms such as common neighbor analysis (CNA), centrosymmetry analysis, bond angle analysis, bond order analysis and Voronoi analysis. In addition we propose a simple extension to the CNA method that makes it suitable for multi-phase systems. Finally, we introduce a new structure identification algorithm, the neighbor distance analysis, which is designed to identify atomic structure units in grain boundaries.
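The common first step of the analysis techniques compared above is building a neighbor list within a cutoff radius; classifying atoms by coordination number is the simplest such structure measure. The toy simple-cubic lattice below is an illustration only, not any of the article's algorithms:

```python
# Toy illustration (not the article's CNA/bond-order/Voronoi algorithms):
# build a cutoff neighbor list on a small simple-cubic lattice and classify
# atoms by coordination number (interior simple-cubic atoms have 6 neighbors).
from itertools import product

a = 1.0  # lattice constant of the toy crystal
atoms = [(x * a, y * a, z * a) for x, y, z in product(range(4), repeat=3)]

def neighbors(i, cutoff=1.1 * a):
    """Indices of atoms within `cutoff` of atom i (excluding i itself)."""
    xi, yi, zi = atoms[i]
    return [j for j, (x, y, z) in enumerate(atoms)
            if j != i
            and (x - xi) ** 2 + (y - yi) ** 2 + (z - zi) ** 2 <= cutoff ** 2]

interior = atoms.index((1.0, 1.0, 1.0))
corner = atoms.index((0.0, 0.0, 0.0))
print("interior coordination:", len(neighbors(interior)))
print("corner coordination:  ", len(neighbors(corner)))
```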
bigSCale: an analytical framework for big-scale single-cell data.
Iacono, Giovanni; Mereu, Elisabetta; Guillaumet-Adkins, Amy; Corominas, Roser; Cuscó, Ivon; Rodríguez-Esteban, Gustavo; Gut, Marta; Pérez-Jurado, Luis Alberto; Gut, Ivo; Heyn, Holger
2018-06-01
Single-cell RNA sequencing (scRNA-seq) has significantly deepened our insights into complex tissues, with the latest techniques capable of processing tens of thousands of cells simultaneously. Analyzing increasing numbers of cells, however, generates extremely large data sets, extending processing time and challenging computing resources. Current scRNA-seq analysis tools are not designed to interrogate large data sets and often lack sensitivity to identify marker genes. With bigSCale, we provide a scalable analytical framework to analyze millions of cells, which addresses the challenges associated with large data sets. To handle the noise and sparsity of scRNA-seq data, bigSCale uses large sample sizes to estimate an accurate numerical model of noise. The framework further includes modules for differential expression analysis, cell clustering, and marker identification. A directed convolution strategy allows processing of extremely large data sets, while preserving transcript information from individual cells. We evaluated the performance of bigSCale using both a biological model of aberrant gene expression in patient-derived neuronal progenitor cells and simulated data sets, which underlines the speed and accuracy in differential expression analysis. To test its applicability for large data sets, we applied bigSCale to assess 1.3 million cells from the mouse developing forebrain. Its directed down-sampling strategy accumulates information from single cells into index cell transcriptomes, thereby defining cellular clusters with improved resolution. Accordingly, index cell clusters identified rare populations, such as reelin (Reln)-positive Cajal-Retzius neurons, for which we report previously unrecognized heterogeneity associated with distinct differentiation stages, spatial organization, and cellular function. Together, bigSCale presents a solution to address future challenges of large single-cell data sets.
© 2018 Iacono et al.; Published by Cold Spring Harbor Laboratory Press.
USDA-ARS?s Scientific Manuscript database
Apple trees, either abandoned or cared for, are common on the North American landscape. These trees can live for decades, and therefore represent a record of large- and small-scale agricultural practices through time. Here, we assessed the genetic diversity and identity of 330 unknown apple trees in...
Identification of Preschool Children with Emotional Problems.
ERIC Educational Resources Information Center
Stern, Carolyn; And Others
A large-scale study was designed to assess the extent of emotional disturbance among Head Start children and to provide a consistent basis for selection if therapeutic intervention were indicated. The study's aim was to avoid the problem of shifting baselines by individual teachers for determining the degree to which their children were departing…
Design for a Study of American Youth.
ERIC Educational Resources Information Center
Flanagan, John C.; And Others
Project TALENT is a large-scale, long-range educational research effort aimed at developing methods for the identification, development, and utilization of human talents, which has involved some 440,000 students in 1,353 public, private, and parochial secondary schools in all parts of the country. Data collected through teacher-administered tests,…
ERIC Educational Resources Information Center
Wladis, Claire; Offenholley, Kathleen; George, Michael
2014-01-01
This study hypothesizes that course passing rates in remedial mathematics classes can be improved through early identification of at-risk students using a department-wide midterm, followed by a mandated set of online intervention assignments incorporating immediate and elaborate feedback for all students identified as "at-risk" by their…
ERIC Educational Resources Information Center
Richards, Dean D., IV
2017-01-01
Recruitment and retention of high-quality educators remains problematic throughout our public school systems. This is particularly so for teachers of minority-identifications and in high-poverty, high-minority urban schools and districts. Recent research concerning teacher longevity has typically focused on large-scale investigations of factors of…
Program Development: Identification and Formulation of Desirable Educational Goals.
ERIC Educational Resources Information Center
Goodlad, John I.
In this speech, the author suggests that the success of public schools depends heavily on commitment to and large-scale agreement on educational goals. He examines the difficulty in creating rational programs to carry out specific behavioral goals and the more remote ends usually stated for educational systems. The author then discusses the…
ERIC Educational Resources Information Center
Tarr, James E.; Ross, Daniel J.; McNaught, Melissa D.; Chavez, Oscar; Grouws, Douglas A.; Reys, Robert E.; Sears, Ruthmae; Taylan, R. Didem
2010-01-01
The Comparing Options in Secondary Mathematics: Investigating Curriculum (COSMIC) project is a longitudinal study of student learning from two types of mathematics curricula: integrated and subject-specific. Previous large-scale research studies such as the National Assessment of Educational Progress (NAEP) indicate that numerous variables are…
PepArML: A Meta-Search Peptide Identification Platform
Edwards, Nathan J.
2014-01-01
The PepArML meta-search peptide identification platform provides a unified search interface to seven search engines; a robust cluster, grid, and cloud computing scheduler for large-scale searches; and an unsupervised, model-free, machine-learning-based result combiner, which selects the best peptide identification for each spectrum, estimates false-discovery rates, and outputs pepXML format identifications. The meta-search platform supports Mascot; X!Tandem with native, k-score, and s-score scoring; OMSSA; MyriMatch; and InsPecT with MS-GF spectral probability scores — reformatting spectral data and constructing search configurations for each search engine on the fly. The combiner selects the best peptide identification for each spectrum based on search engine results and features that model enzymatic digestion, retention time, precursor isotope clusters, mass accuracy, and proteotypic peptide properties, requiring no prior knowledge of feature utility or weighting. The PepArML meta-search peptide identification platform often identifies 2–3 times more spectra than individual search engines at 10% FDR. PMID:25663956
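The meta-search idea, though not PepArML's actual machine-learning combiner, can be sketched as grouping candidate peptide-spectrum matches per spectrum and keeping the best-scoring one; the engine names match the abstract, but the records and scores below are invented:

```python
# Hedged sketch of the meta-search concept only: PepArML's real combiner is
# an unsupervised ML model over many features. Here we simply group candidate
# peptide-spectrum matches (PSMs) per spectrum and keep the highest-scoring
# one (higher score assumed better). All records are toy values.
from collections import defaultdict

psms = [  # (spectrum_id, engine, peptide, score) -- invented examples
    ("s1", "Mascot",    "PEPTIDER", 0.91),
    ("s1", "OMSSA",     "PEPTIDEK", 0.47),
    ("s2", "MyriMatch", "ELVISK",   0.88),
    ("s2", "Mascot",    "ELVISK",   0.93),
]

by_spectrum = defaultdict(list)
for spec, engine, pep, score in psms:
    by_spectrum[spec].append((score, pep, engine))

# Keep the best candidate per spectrum (max by score).
best = {spec: max(cands) for spec, cands in by_spectrum.items()}
for spec, (score, pep, engine) in sorted(best.items()):
    print(spec, pep, f"({engine}, score={score})")
```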
Quality Assessments of Long-Term Quantitative Proteomic Analysis of Breast Cancer Xenograft Tissues
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Jian-Ying; Chen, Lijun; Zhang, Bai
The identification of protein biomarkers requires large-scale analysis of human specimens to achieve statistical significance. In this study, we evaluated the long-term reproducibility of an iTRAQ (isobaric tags for relative and absolute quantification) based quantitative proteomics strategy using one channel for universal normalization across all samples. A total of 307 liquid chromatography tandem mass spectrometric (LC-MS/MS) analyses were completed, generating 107 one-dimensional (1D) LC-MS/MS datasets and 8 offline two-dimensional (2D) LC-MS/MS datasets (25 fractions for each set) for human-in-mouse breast cancer xenograft tissues representative of basal and luminal subtypes. Such large-scale studies require the implementation of robust metrics to assess the contributions of technical and biological variability in the qualitative and quantitative data. Accordingly, we developed a quantification confidence score based on the quality of each peptide-spectrum match (PSM) to remove quantification outliers from each analysis. After combining confidence score filtering and statistical analysis, reproducible protein identification and quantitative results were achieved from LC-MS/MS datasets collected over a 16 month period.
Systems identification and the adaptive management of waterfowl in the United States
Williams, B.K.; Nichols, J.D.
2001-01-01
Waterfowl management is one of the more visible conservation success stories in the United States. It is authorized and supported by appropriate legislative authorities, based on large-scale monitoring programs, and widely accepted by the public. The process is one of only a limited number of large-scale examples of effective collaboration between research and management, integrating scientific information with management in a coherent framework for regulatory decision-making. However, harvest management continues to face some serious technical problems, many of which focus on sequential identification of the resource system in a context of optimal decision-making. The objective of this paper is to provide a theoretical foundation for adaptive harvest management, the approach currently in use in the United States for regulatory decision-making. We lay out the legal and institutional framework for adaptive harvest management and provide a formal description of regulatory decision-making in terms of adaptive optimization. We discuss some technical and institutional challenges in applying adaptive harvest management and focus specifically on methods of estimating resource states for linear resource systems.
Multi-color electron microscopy by element-guided identification of cells, organelles and molecules.
Scotuzzi, Marijke; Kuipers, Jeroen; Wensveen, Dasha I; de Boer, Pascal; Hagen, Kees C W; Hoogenboom, Jacob P; Giepmans, Ben N G
2017-04-07
Cellular complexity is unraveled at nanometer resolution using electron microscopy (EM), but interpretation of macromolecular functionality is hampered by the difficulty in interpreting grey-scale images and the unidentified molecular content. We perform large-scale EM on mammalian tissue complemented with energy-dispersive X-ray analysis (EDX) to allow EM-data analysis based on elemental composition. Endogenous elements, labels (gold and cadmium-based nanoparticles) as well as stains are analyzed at ultrastructural resolution. This provides a wide palette of colors to paint the traditional grey-scale EM images for composition-based interpretation. Our proof-of-principle application of EM-EDX reveals that endocrine and exocrine vesicles exist in single cells in Islets of Langerhans. This highlights how elemental mapping reveals unbiased, biomedically relevant information. Broad application of EM-EDX will further allow experimental analysis on large-scale tissue using endogenous elements, multiple stains, and multiple markers and thus brings nanometer-scale 'color-EM' as a promising tool to unravel molecular (de)regulation in biomedicine.
NASA Technical Reports Server (NTRS)
Riley, Peter
2000-01-01
This investigation is concerned with the large-scale evolution and topology of coronal mass ejections (CMEs) in the solar wind. During this reporting period we have focused on several aspects of CME properties, their identification and their evolution in the solar wind. The work included both analysis of Ulysses and ACE observations as well as fluid and magnetohydrodynamic simulations. In addition, we analyzed a series of "density holes" observed in the solar wind, that bear many similarities with CMEs. Finally, this work was communicated to the scientific community at three meetings and has led to three scientific papers that are in various stages of review.
NASA Astrophysics Data System (ADS)
Gloe, Thomas; Borowka, Karsten; Winkler, Antje
2010-01-01
The analysis of lateral chromatic aberration forms another ingredient for a well-equipped toolbox of an image forensic investigator. Previous work proposed its application to forgery detection [1] and image source identification [2]. This paper takes a closer look at the current state-of-the-art method for analysing lateral chromatic aberration and presents a new approach to estimate lateral chromatic aberration in a runtime-efficient way. Employing a set of 11 different camera models comprising 43 devices, the characteristic of lateral chromatic aberration is investigated on a large scale. The reported results point to general difficulties that have to be considered in real-world investigations.
Identification of Stevioside Using Tissue Culture-Derived Stevia (Stevia rebaudiana) Leaves
Karim, Md. Ziaul; Uesugi, Daisuke; Nakayama, Noriyuki; Hossain, M. Monzur; Ishihara, Kohji; Hamada, Hiroki
2015-01-01
Stevioside is a natural sweetener from the Stevia leaf that is 300 times sweeter than sugar. It helps to reduce blood sugar levels dramatically and thus can benefit diabetic people. Tissue culture is a promising modern technology that can be used for large-scale, disease-free stevia production throughout the year. We successfully produced stevia plants through in vitro culture for identification of stevioside in this experiment. The present study describes a potential method for identification of stevioside from tissue culture-derived stevia leaves. Stevioside in the sample was identified using HPLC by measuring the retention time. The stevioside content of the leaf samples was found to be 9.6%. This identification method can be used for commercial production and industrialization of stevia through in vitro culture across the world. PMID:28008268
Mapping ecosystem services for land use planning, the case of Central Kalimantan.
Sumarga, Elham; Hein, Lars
2014-07-01
Indonesia is subject to rapid land use change. One of the main causes of land conversion is the rapid expansion of the oil palm sector. Land use change involves a progressive loss of forest cover, with major impacts on biodiversity and global CO2 emissions. Ecosystem services have been proposed as a concept that would facilitate the identification of sustainable land management options; however, the scale of land conversion and its spatial diversity pose particular challenges in Indonesia. The objective of this paper is to analyze how ecosystem services can be mapped at the provincial scale, focusing on Central Kalimantan, and to examine how ecosystem services maps can be used for land use planning. Central Kalimantan is subject to rapid deforestation, including the loss of peatland forests, and the province still lacks a comprehensive land use plan. We examine how seven key ecosystem services can be mapped and modeled at the provincial scale, using a variety of models, and how large-scale ecosystem services maps can support the identification of options for sustainable expansion of palm oil production.
Isolation and characterizations of oxalate-binding proteins in the kidney
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roop-ngam, Piyachat; Chaiyarit, Sakdithep; Pongsakul, Nutkridta
Highlights: • The first large-scale characterizations of oxalate-binding kidney proteins. • The recently developed oxalate-conjugated EAH Sepharose 4B beads were applied. • 38 forms of 26 unique oxalate-binding kidney proteins were identified. • 25/26 (96%) of identified proteins had the 'L-x(3,5)-R-x(2)-[AGILPV]' domain. -- Abstract: Oxalate-binding proteins are thought to serve as potential modulators of kidney stone formation. However, only a few oxalate-binding proteins have been identified in previous studies. Our present study therefore aimed for large-scale identification of oxalate-binding proteins in porcine kidney using an oxalate-affinity column containing oxalate-conjugated EAH Sepharose 4B beads for purification, followed by two-dimensional gel electrophoresis (2-DE) to resolve the recovered proteins. Comparing with those obtained from the control column containing uncoupled EAH-Sepharose 4B (to subtract the background of non-specific binding), a total of 38 protein spots were defined as oxalate-binding proteins. These protein spots were successfully identified by quadrupole time-of-flight mass spectrometry (MS) and/or tandem MS (MS/MS) as 26 unique proteins, including several nuclear proteins, mitochondrial proteins, oxidative stress regulatory proteins, metabolic enzymes and others. Identification of the oxalate-binding domain using the PRATT tool revealed 'L-x(3,5)-R-x(2)-[AGILPV]' as a functional domain responsible for oxalate binding in 25 of 26 (96%) unique identified proteins. We report herein, for the first time, large-scale identification and characterizations of oxalate-binding proteins in the kidney. The presence of a positively charged arginine residue in the middle of this functional domain suggested its significance for binding to the negatively charged oxalate. These data will enhance future stone research, particularly on stone modulators.
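The PROSITE-style motif 'L-x(3,5)-R-x(2)-[AGILPV]' reported above translates directly into a regular expression ('x' is any residue, the parenthesized numbers give repeat counts, and the brackets list allowed residues). A small sketch with invented sequences:

```python
# The reported motif 'L-x(3,5)-R-x(2)-[AGILPV]' as a regular expression.
# The sequences below are invented examples, not proteins from the study.
import re

motif = re.compile(r"L.{3,5}R.{2}[AGILPV]")

for seq in ("MKLAAAARTTG", "MKLAARTTG"):  # toy sequences
    hit = motif.search(seq)
    print(seq, "->", hit.group() if hit else "no match")
```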
Galisson, Frederic; Mahrouche, Louiza; Courcelles, Mathieu; Bonneil, Eric; Meloche, Sylvain; Chelbi-Alix, Mounira K.; Thibault, Pierre
2011-01-01
The small ubiquitin-related modifiers (SUMOs) are a small group of proteins that are reversibly attached to protein substrates to modify their functions. The large-scale identification of protein SUMOylation and its modification sites in mammalian cells represents a significant challenge because of the relatively small number of in vivo substrates and the dynamic nature of this modification. We report here a novel proteomics approach to selectively enrich and identify SUMO conjugates from human cells. We stably expressed different SUMO paralogs in HEK293 cells, each containing a His6 tag and a strategically located tryptic cleavage site at the C terminus to facilitate the recovery and identification of SUMOylated peptides by affinity enrichment and mass spectrometry. Tryptic peptides with short SUMO remnants offer significant advantages in large-scale SUMOylome experiments, including the generation of paralog-specific fragment ions following CID and ETD activation, and the identification of modified peptides using conventional database search engines such as Mascot. We identified 205 unique protein substrates together with 17 precise SUMOylation sites present in 12 SUMO protein conjugates, including three new sites (Lys-380, Lys-400, and Lys-497) on the protein promyelocytic leukemia. Label-free quantitative proteomics analyses on purified nuclear extracts from untreated and arsenic trioxide-treated cells revealed that all identified SUMOylated sites of promyelocytic leukemia were differentially SUMOylated upon stimulation. PMID:21098080
Tracking down hyper-boosted top quarks
Larkoski, Andrew J.; Maltoni, Fabio; Selvaggi, Michele
2015-06-05
The identification of hadronically decaying heavy states, such as vector bosons, the Higgs, or the top quark, produced with large transverse boosts has been and will continue to be a central focus of the jet physics program at the Large Hadron Collider (LHC). At a future hadron collider working at an order-of-magnitude larger energy than the LHC, these heavy states would be easily produced with transverse boosts of several TeV. At these energies, their decay products will be separated by angular scales comparable to individual calorimeter cells, making the current jet substructure identification techniques for hadronic decay modes not directly employable. In addition, at the high energy and luminosity projected at a future hadron collider, there will be numerous sources for contamination including initial- and final-state radiation, underlying event, or pile-up which must be mitigated. We propose a simple strategy to tag such "hyper-boosted" objects that defines jets with radii that scale inversely proportional to their transverse boost and combines the standard calorimetric information with charged track-based observables. By means of a fast detector simulation, we apply it to top quark identification and demonstrate that our method efficiently discriminates hadronically decaying top quarks from light QCD jets up to transverse boosts of 20 TeV. Lastly, our results open the way to tagging heavy objects with energies in the multi-TeV range at present and future hadron colliders.
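The inverse scaling of jet radius with transverse boost follows from kinematics: the decay products of a heavy particle of mass m and transverse momentum pT subtend an angle of roughly 2m/pT. A sketch with illustrative numbers (natural units; this is the kinematic motivation, not the paper's tagger):

```python
# Illustrative numbers only: a boosted two-body decay has an opening angle
# of order dR ~ 2 m / pT, which motivates jet radii that shrink inversely
# with the transverse boost, as the abstract proposes.
M_TOP = 173.0  # GeV, approximate top-quark mass

def decay_cone(pt_gev, mass_gev=M_TOP):
    """Characteristic angular size of the decay products of a particle with
    the given mass and transverse momentum (natural units)."""
    return 2.0 * mass_gev / pt_gev

for pt in (1000, 5000, 20000):  # transverse boosts from LHC to FCC scales
    print(f"pT = {pt:6d} GeV -> dR ~ {decay_cone(pt):.4f}")
```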
Hu, Yuanan; Cheng, Hefa
2013-04-16
As heavy metals occur naturally in soils at measurable concentrations and their natural background contents have significant spatial variations, identification and apportionment of heavy metal pollution sources across large-scale regions is a challenging task. Stochastic models, including the recently developed conditional inference tree (CIT) and the finite mixture distribution model (FMDM), were applied to identify the sources of heavy metals found in the surface soils of the Pearl River Delta (PRD), China, and to apportion the contributions from natural background and human activities. Regression trees were successfully developed for the concentrations of Cd, Cu, Zn, Pb, Cr, Ni, As, and Hg in 227 soil samples from a region of over 7.2 × 10⁴ km² based on seven specific predictors relevant to the source and behavior of heavy metals: land use, soil type, soil organic carbon content, population density, gross domestic product per capita, and the lengths and classes of the roads surrounding the sampling sites. The CIT and FMDM results consistently indicate that Cd, Zn, Cu, Pb, and Cr in the surface soils of the PRD were contributed largely by anthropogenic sources, whereas As, Ni, and Hg in the surface soils mostly originated from the soil parent materials.
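The FMDM approach can be illustrated, under strong simplifying assumptions, by fitting a two-component one-dimensional Gaussian mixture with expectation-maximization and reading the low-mean component as background and the high-mean component as anthropogenically influenced. The data and values below are invented, not the study's model or measurements:

```python
# Hedged sketch of the finite-mixture idea: a two-component 1D Gaussian
# mixture fitted by EM to toy "soil concentrations". Not the study's FMDM.
import random
from math import exp, pi, sqrt

random.seed(0)
# Toy data: background sites around 20, polluted sites around 60 (arbitrary units).
data = [random.gauss(20, 3) for _ in range(150)] + \
       [random.gauss(60, 8) for _ in range(50)]

def normal_pdf(x, mu, sigma):
    return exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * sqrt(2 * pi))

# Initialize the two components at the data extremes, then iterate EM.
mu = [min(data), max(data)]
sigma = [10.0, 10.0]
w = [0.5, 0.5]
for _ in range(50):
    # E-step: responsibility of each component for each point.
    resp = []
    for x in data:
        p = [w[k] * normal_pdf(x, mu[k], sigma[k]) for k in range(2)]
        s = sum(p)
        resp.append([pk / s for pk in p])
    # M-step: update weights, means, and standard deviations.
    for k in range(2):
        nk = sum(r[k] for r in resp)
        w[k] = nk / len(data)
        mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
        sigma[k] = sqrt(sum(r[k] * (x - mu[k]) ** 2
                            for r, x in zip(resp, data)) / nk)

print(f"background ~ {min(mu):.1f}, anthropogenic ~ {max(mu):.1f}")
```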
Single-shot stand-off chemical identification of powders using random Raman lasing
Hokr, Brett H.; Bixler, Joel N.; Noojin, Gary D.; Thomas, Robert J.; Rockwell, Benjamin A.; Yakovlev, Vladislav V.; Scully, Marlan O.
2014-01-01
We consider the task of identifying explosives, hazardous chemicals, and biological materials from a safe distance. Much of the prior work on stand-off spectroscopy using light has been devoted to generating a backward-propagating beam of light that can be used to drive further spectroscopic processes. The discovery of random lasing and, more recently, random Raman lasing provides a mechanism for remotely generating copious amounts of chemically specific Raman-scattered light. The bright nature of random Raman lasing renders directionality unnecessary, allowing for the detection and identification of chemicals from large distances in real time. In this article, the single-shot remote identification of chemicals at kilometer-scale distances is experimentally demonstrated using random Raman lasing. PMID:25114231
Materials identification using a small-scale pixellated x-ray diffraction system
NASA Astrophysics Data System (ADS)
O'Flynn, D.; Crews, C.; Drakos, I.; Christodoulou, C.; Wilson, M. D.; Veale, M. C.; Seller, P.; Speller, R. D.
2016-05-01
A transmission x-ray diffraction system has been developed using a pixellated, energy-resolving detector (HEXITEC) and a small-scale, mains operated x-ray source (Amptek Mini-X). HEXITEC enables diffraction to be measured without the requirement of incident spectrum filtration, or collimation of the scatter from the sample, preserving a large proportion of the useful signal compared with other diffraction techniques. Due to this efficiency, sufficient molecular information for material identification can be obtained within 5 s despite the relatively low x-ray source power. Diffraction data are presented from caffeine, hexamine, paracetamol, plastic explosives and narcotics. The capability to determine molecular information from aspirin tablets inside their packaging is demonstrated. Material selectivity and the potential for a sample classification model is shown with principal component analysis, through which each different material can be clearly resolved.
NASA Astrophysics Data System (ADS)
Wanders, N.; Bierkens, M. F. P.; de Jong, S. M.; de Roo, A.; Karssenberg, D.
2014-08-01
Large-scale hydrological models are nowadays mostly calibrated using observed discharge. As a result, a large part of the hydrological system, in particular the unsaturated zone, remains uncalibrated. Soil moisture observations from satellites have the potential to fill this gap. Here we evaluate the added value of remotely sensed soil moisture in calibration of large-scale hydrological models by addressing two research questions: (1) Which parameters of hydrological models can be identified by calibration with remotely sensed soil moisture? (2) Does calibration with remotely sensed soil moisture lead to an improved calibration of hydrological models compared to calibration based only on discharge observations, such that this leads to improved simulations of soil moisture content and discharge? A dual state and parameter Ensemble Kalman Filter is used to calibrate the hydrological model LISFLOOD for the Upper Danube. Calibration is done using discharge and remotely sensed soil moisture acquired by AMSR-E, SMOS, and ASCAT. Calibration with discharge data improves the estimation of groundwater and routing parameters. Calibration with only remotely sensed soil moisture results in an accurate identification of parameters related to land-surface processes. For the Upper Danube upstream area up to 40,000 km2, calibration on both discharge and soil moisture results in a reduction by 10-30% in the RMSE for discharge simulations, compared to calibration on discharge alone. The conclusion is that remotely sensed soil moisture holds potential for calibration of hydrological models, leading to a better simulation of soil moisture content throughout the catchment and a better simulation of discharge in upstream areas. This article was corrected on 15 SEP 2014. See the end of the full text for details.
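A minimal sketch of the dual state-parameter Ensemble Kalman Filter idea, on a hypothetical one-bucket soil-moisture model rather than LISFLOOD (all values synthetic):

```python
import numpy as np

# Toy bucket model: s' = s + p - k*s, with unknown recession parameter k.
# Both the state s and the parameter k are updated from noisy
# "satellite" soil-moisture observations (dual state-parameter EnKF).
rng = np.random.default_rng(1)
true_k, obs_err = 0.3, 0.01
n_ens = 200

s = np.full(n_ens, 0.5)                      # state ensemble
k = rng.uniform(0.4, 0.9, n_ens)             # parameter ensemble (biased prior)
s_true = 0.5

for t in range(50):
    p = 0.1 * (1 + np.sin(t / 3.0))          # synthetic rainfall forcing
    s_true = s_true + p - true_k * s_true
    s = s + p - k * s                        # ensemble forecast
    y = s_true + rng.normal(0.0, obs_err)    # noisy observation

    # analysis step: ensemble Kalman gains with perturbed observations,
    # parameter update first (using the prior state), then state update
    d = y + rng.normal(0.0, obs_err, n_ens) - s
    var_s = np.var(s, ddof=1) + obs_err ** 2
    k = k + (np.cov(k, s)[0, 1] / var_s) * d
    s = s + (np.var(s, ddof=1) / var_s) * d

print(k.mean())
```

Because higher k produces drier states, the ensemble k–s covariance is negative, and the innovations pull the biased parameter prior toward its true value.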
NASA Astrophysics Data System (ADS)
Mizukami, N.; Clark, M. P.; Newman, A. J.; Wood, A.; Gutmann, E. D.
2017-12-01
Estimating spatially distributed model parameters is a grand challenge for large-domain hydrologic modeling, especially in the context of hydrologic model applications such as streamflow forecasting. Multi-scale Parameter Regionalization (MPR) is a promising technique that accounts for the effects of fine-scale geophysical attributes (e.g., soil texture, land cover, topography, climate) on model parameters and for nonlinear scaling effects. MPR computes model parameters with transfer functions (TFs) that relate geophysical attributes to model parameters at the native input data resolution and then scales them with scaling functions to the spatial resolution of the model implementation. One of the biggest challenges in the use of MPR is the identification of TFs for each model parameter: both their functional forms and their geophysical predictors. TFs used to estimate the parameters of hydrologic models have typically been taken from previous studies or derived in an ad hoc, heuristic manner, potentially failing to exploit the full information content of the geophysical attributes for optimal parameter identification. Thus, it is necessary to first uncover the relationships among geophysical attributes, model parameters, and hydrologic processes (i.e., hydrologic signatures) to gain insight into which geophysical attributes are related to model parameters, and to what extent. We perform multivariate statistical analysis on a large-sample catchment data set including various geophysical attributes as well as constrained VIC model parameters at 671 unimpaired basins over the CONUS. We first calibrate the VIC model at each catchment to obtain constrained parameter sets. Additionally, parameter sets sampled during the calibration process are used for sensitivity analysis with various hydrologic signatures as objectives, to understand the relationships among geophysical attributes, parameters, and hydrologic processes.
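The MPR chain of transfer function plus scaling function can be sketched as follows; the TF form, coefficients, and attribute values are illustrative assumptions, not those of any published study:

```python
import numpy as np

# Hypothetical MPR sketch: a transfer function (TF) maps a fine-scale
# geophysical attribute (sand fraction) to a model parameter (hydraulic
# conductivity), which is then upscaled to the model grid.
def transfer_fn(sand_frac, a=0.5, b=2.0):
    # assumed log-linear TF; a and b are the global TF coefficients
    return a * np.exp(b * sand_frac)

fine_sand = np.array([[0.1, 0.2, 0.3, 0.4],
                      [0.2, 0.3, 0.4, 0.5]])   # 2 coarse cells x 4 fine cells
k_fine = transfer_fn(fine_sand)

# scaling function: harmonic mean, a common choice for conductivity
def upscale_harmonic(block):
    return block.size / np.sum(1.0 / block)

# one coarse model cell per row of four fine cells
k_coarse = np.array([upscale_harmonic(row) for row in k_fine])
print(k_coarse.shape)
```

Because only the two TF coefficients are calibrated (not every grid cell), the parameter field stays spatially coherent at any model resolution.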
Shen, Bingquan; Zhang, Wanjun; Shi, Zhaomei; Tian, Fang; Deng, Yulin; Sun, Changqing; Wang, Guangshun; Qin, Weijie; Qian, Xiaohong
2017-07-01
O-GlcNAcylation is a dynamic form of O-linked glycosylation of nucleocytoplasmic and mitochondrial proteins. It serves as a major nutrient sensor that regulates numerous biological processes, including transcriptional regulation, cell metabolism, cellular signaling, and protein degradation. Dysregulation of cellular O-GlcNAcylation levels contributes to the etiologies of many diseases such as diabetes, neurodegenerative disease and cancer. However, deeper insight into the biological mechanism of O-GlcNAcylation is hampered by its extremely low stoichiometry and the lack of efficient enrichment approaches for large-scale identification by mass spectrometry. Herein, we developed a novel strategy for the global identification of O-GlcNAc proteins and peptides using selective enzymatic deglycosylation, HILIC enrichment and mass spectrometry analysis. Standard O-GlcNAc peptides can be efficiently enriched even in the presence of 500-fold more abundant non-O-GlcNAc peptides and identified by mass spectrometry with low-nanogram detection sensitivity. This strategy achieved the first large-scale enrichment and characterization of O-GlcNAc proteins and peptides in human urine. A total of 474 O-GlcNAc peptides corresponding to 457 O-GlcNAc proteins were identified by mass spectrometry analysis, at least three times more than obtained by commonly used enrichment methods. A large number of previously unreported O-GlcNAc proteins related to the cell cycle, biological regulation, and metabolic and developmental processes were found in our data. These results demonstrate that this novel strategy is highly efficient for the global enrichment and identification of O-GlcNAc peptides. The data provide new insights into the biological function of O-GlcNAcylation in human urine, which is correlated with the physiological states and pathological changes of the human body, and therefore indicate the potential of this strategy for biomarker discovery from human urine.
Copyright © 2017. Published by Elsevier B.V.
ERIC Educational Resources Information Center
Pluess, Michael; Assary, Elham; Lionetti, Francesca; Lester, Kathryn J.; Krapohl, Eva; Aron, Elaine N.; Aron, Arthur
2018-01-01
A large number of studies document that children differ in the degree they are shaped by their developmental context with some being more sensitive to environmental influences than others. Multiple theories suggest that "Environmental Sensitivity" is a common trait predicting the response to negative as well as positive exposures.…
ERIC Educational Resources Information Center
Schnebele, Emily K.
2013-01-01
Flooding is the most frequently occurring natural hazard on Earth; with catastrophic, large scale floods causing immense damage to people, property, and the environment. Over the past 20 years, remote sensing has become the standard technique for flood identification because of its ability to offer synoptic coverage. Unfortunately, remote sensing…
The UAB Informatics Institute and 2016 CEGS N-GRID de-identification shared task challenge.
Bui, Duy Duc An; Wyatt, Mathew; Cimino, James J
2017-11-01
Clinical narratives (the text notes found in patients' medical records) are important information sources for secondary use in research. However, in order to protect patient privacy, they must be de-identified prior to use. Manual de-identification is considered the gold-standard approach but is tedious, expensive, slow, and impractical for use with large-scale clinical data. Automated or semi-automated de-identification using computer algorithms is a potentially promising alternative. The Informatics Institute of the University of Alabama at Birmingham is applying de-identification to clinical data drawn from the UAB hospital's electronic medical records system before releasing them for research. To gain experience developing our own automatic de-identification tool, we participated in the de-identification regular track of the shared task challenge organized by the Centers of Excellence in Genomic Science (CEGS) Neuropsychiatric Genome-Scale and RDoC Individualized Domains (N-GRID). We focused on the popular and successful methods from previous challenges: rule-based, dictionary-matching, and machine-learning approaches. We also explored new techniques, such as disambiguation rules and term-ambiguity measurement, and used a multi-pass sieve framework at a micro level. For the challenge's primary measure (strict entity), our submissions achieved competitive results (f-measures: 87.3%, 87.1%, and 86.7%). For our preferred measure (binary token HIPAA), our submissions achieved superior results (f-measures: 93.7%, 93.6%, and 93%). With these encouraging results, we gained the confidence to improve the tool and use it for the real de-identification task at the UAB Informatics Institute. Copyright © 2017 Elsevier Inc. All rights reserved.
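A toy illustration of the rule-based and dictionary-matching approaches mentioned above (the patterns and name list are hypothetical, not the UAB tool's rules):

```python
import re

# Minimal rule-based de-identification sketch: regex rules for
# structured identifiers, plus a dictionary pass for known names.
RULES = [
    (re.compile(r"\b\d{2}/\d{2}/\d{4}\b"), "[DATE]"),
    (re.compile(r"\b\d{3}-\d{3}-\d{4}\b"), "[PHONE]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
]
NAME_DICT = {"smith", "jones"}               # hypothetical name lexicon

def deidentify(text):
    for pattern, tag in RULES:
        text = pattern.sub(tag, text)
    # dictionary-matching pass for patient names
    tokens = [("[NAME]" if t.lower().strip(".,") in NAME_DICT else t)
              for t in text.split(" ")]
    return " ".join(tokens)

note = "Mr. Smith seen on 03/14/2016, call 555-123-4567."
print(deidentify(note))   # → Mr. [NAME] seen on [DATE], call [PHONE].
```

Real systems layer many such sieves and add machine-learned taggers; ambiguity handling (e.g., "May" as month vs. name) is where the disambiguation rules mentioned above come in.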
Body identification, biometrics and medicine: ethical and social considerations.
Mordini, Emilio; Ottolini, Corinna
2007-01-01
Identity is important when it is weak. This apparent paradox is the core of the current debate on identity. Traditionally, verification of identity has been based upon authentication of attributed and biographical characteristics. Following small-scale societies and large-scale industrial societies, globalization represents the third period of personal identification. The human body lies at the heart of all strategies for identity management. The tension between the human body and personal identity is critical in the health care sector, which is second only to the financial sector in terms of the number of biometric users. Many hospitals and healthcare organizations are in the process of deploying biometric security architectures. Secure identification is critical in the health care system: to control logical access to centralized archives of digitized patient data, to limit physical access to buildings and hospital wards, and to authenticate medical and social support personnel. There is also an increasing need to identify patients with a high degree of certainty. Finally, there is the risk that biometric authentication devices may inadvertently reveal health information. All these issues require careful ethical and political scrutiny.
Approaches to advance scientific understanding of macrosystems ecology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Levy, Ofir; Ball, Becky; Bond-Lamberty, Benjamin
Macrosystem ecological studies inherently investigate processes that interact across multiple spatial and temporal scales, requiring intensive sampling and massive amounts of data from diverse sources to incorporate complex cross-scale and hierarchical interactions. Inherent challenges associated with these characteristics include high computational demands, data standardization and assimilation, identification of important processes and scales without prior knowledge, and the need for large, cross-disciplinary research teams that conduct long-term studies. Therefore, macrosystem ecology studies must utilize a unique set of approaches that are capable of encompassing these methodological characteristics and associated challenges. Several case studies demonstrate innovative methods used in current macrosystem ecology studies.
Jimena: efficient computing and system state identification for genetic regulatory networks.
Karl, Stefan; Dandekar, Thomas
2013-10-11
Boolean networks capture the switching behavior of many naturally occurring regulatory networks. For semi-quantitative modeling, interpolation between ON and OFF states is necessary. The high-degree polynomial interpolation of Boolean genetic regulatory networks (GRNs) in cellular processes such as apoptosis or proliferation allows for the modeling of a wider range of node interactions than continuous activator-inhibitor models, but suffers from scaling problems for networks containing nodes with more than ~10 inputs. Many GRNs from the literature or from new gene expression experiments exceed those limitations, so a new approach was developed. (i) As part of our new GRN simulation framework Jimena, we introduce and set up Boolean-tree-based data structures; (ii) corresponding algorithms greatly expedite the calculation of the polynomial interpolation in almost all cases, thereby expanding the range of networks that can be simulated by this model in reasonable time. (iii) Stable states of discrete models are efficiently counted and identified using binary decision diagrams. As an application example, we show how system states can now be sampled efficiently in small- to large-scale hormone disease networks (Arabidopsis thaliana development and immunity, the pathogen Pseudomonas syringae, and modulation by cytokinins and plant hormones). Jimena simulates currently available GRNs about 10-100 times faster than the previous implementation of the polynomial interpolation model, and even greater gains are achieved for large scale-free networks. This speed-up also facilitates a much more thorough sampling of continuous state spaces, which may lead to the identification of new stable states. Mutants of large networks can be constructed and analyzed very quickly, enabling new insights into network robustness and behavior.
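For the discrete case, stable states are fixed points of the Boolean update map; on a toy three-gene network (hypothetical wiring, and exhaustive enumeration rather than Jimena's binary decision diagrams) this looks like:

```python
from itertools import product

# Tiny Boolean GRN sketch: each rule maps the current state to a
# gene's next value (C represses A; A activates B; B activates C).
rules = {
    "A": lambda s: s["A"] and not s["C"],
    "B": lambda s: s["A"],
    "C": lambda s: s["B"],
}

def step(state):
    # synchronous update of all genes
    return {g: int(f(state)) for g, f in rules.items()}

# a stable state (point attractor) is a fixed point of the update map
stable = [s for bits in product([0, 1], repeat=3)
          for s in [dict(zip("ABC", bits))]
          if step(s) == s]
print(stable)
```

Exhaustive enumeration is exponential in the number of genes, which is exactly why the binary decision diagrams mentioned above matter for large networks.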
Electro-thermal battery model identification for automotive applications
NASA Astrophysics Data System (ADS)
Hu, Y.; Yurkovich, S.; Guezennec, Y.; Yurkovich, B. J.
This paper describes a procedure for identifying an electro-thermal model of lithium-ion batteries used in automotive applications. The dynamic model structure adopted is based on an equivalent circuit model whose parameters are scheduled on state-of-charge, temperature, and current direction. Linear spline functions are used as the functional form for the parametric dependence. The model identified in this way is valid over a large range of temperatures and states-of-charge, so that the resulting model can be used for automotive applications such as on-board estimation of state-of-charge and state-of-health. The model coefficients are identified using a multiple-step, genetic-algorithm-based optimization procedure designed for large-scale optimization problems. The validity of the procedure is demonstrated experimentally for an A123 lithium iron phosphate battery.
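A minimal sketch of such an equivalent-circuit model with spline-scheduled parameters (all knot values and RC constants are assumed, not identified A123 parameters):

```python
import numpy as np

# First-order equivalent circuit: v = OCV(soc) - i*R0(soc) - v_rc,
# with R0 and OCV scheduled on state-of-charge via linear splines
# (np.interp plays the role of the linear spline functions).
soc_knots = np.array([0.0, 0.5, 1.0])
r0_knots = np.array([0.015, 0.010, 0.008])    # ohm, assumed
ocv_knots = np.array([3.0, 3.3, 3.6])         # volt, assumed

def simulate(current, dt=1.0, capacity_ah=2.3, r1=0.01, c1=2000.0):
    soc, v_rc, volts = 1.0, 0.0, []
    for i in current:
        soc -= i * dt / (capacity_ah * 3600)            # coulomb counting
        v_rc += dt * (i / c1 - v_rc / (r1 * c1))        # RC polarization branch
        r0 = np.interp(soc, soc_knots, r0_knots)
        ocv = np.interp(soc, soc_knots, ocv_knots)
        volts.append(ocv - i * r0 - v_rc)
    return soc, np.array(volts)

soc, v = simulate(np.full(60, 2.3))   # 1C discharge for 60 s
print(soc, v[0] > v[-1])
```

Identification would then fit the knot values (and RC constants) so that simulated terminal voltage matches measured charge/discharge data, e.g. with a genetic algorithm as in the paper.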
Weis, Karen L; Lederman, Regina P; Walker, Katherine C; Chan, Wenyaw
To determine the efficacy of the Mentors Offering Maternal Support (MOMS) program to reduce pregnancy-specific anxiety and depression and build self-esteem and resilience in military women. Randomized controlled trial with repeated measures. Large military community in Texas. Pregnant women (N = 246) in a military sample defined as active duty or spouse of military personnel. Participants were randomized in the first trimester to the MOMS program or normal prenatal care. Participants attended eight 1-hour sessions every other week during the first, second, and third trimesters of pregnancy. Pregnancy-specific anxiety, depression, self-esteem, and resilience were measured in each trimester. Linear mixed models were used to compare the two-group difference in slope for prenatal anxiety, depression, self-esteem, and resilience. The Prenatal Self-Evaluation Questionnaire was used to measure perinatal anxiety. Rates of prenatal anxiety on the Identification With a Motherhood Role (p = .049) scale and the Preparation for Labor (p = .017) scale were significantly reduced for participants in MOMS. Nulliparous participants showed significantly lower anxiety on the Acceptance of Pregnancy scale and significantly greater anxiety on the Preparation for Labor scale. Single participants had significantly greater anxiety on the Well-Being of Self and Baby in Labor scale, and participants with deployed husbands had significantly greater anxiety on the Identification With a Motherhood Role scale. Participation in the MOMS program reduced pregnancy-specific prenatal anxiety for the dimensions of Identification With a Motherhood Role and Preparation for Labor. Both dimensions of anxiety were previously found to be significantly associated with preterm birth and low birth weight. Military leaders have recognized the urgent need to support military families. Copyright © 2017 AWHONN, the Association of Women's Health, Obstetric and Neonatal Nurses. Published by Elsevier Inc. 
All rights reserved.
Large-Scale Biomonitoring of Remote and Threatened Ecosystems via High-Throughput Sequencing
Gibson, Joel F.; Shokralla, Shadi; Curry, Colin; Baird, Donald J.; Monk, Wendy A.; King, Ian; Hajibabaei, Mehrdad
2015-01-01
Biodiversity metrics are critical for the assessment and monitoring of ecosystems threatened by anthropogenic stressors. Existing sorting and identification methods are too expensive and labour-intensive to be scaled up to meet management needs. Alternatively, a high-throughput DNA sequencing approach could be used to determine biodiversity metrics from bulk environmental samples collected as part of a large-scale biomonitoring program. Here we show that both morphological and DNA sequence-based analyses are suitable for recovery of individual taxonomic richness, estimation of proportional abundance, and calculation of biodiversity metrics using a set of 24 benthic samples collected in the Peace-Athabasca Delta region of Canada. The high-throughput sequencing approach was able to recover all metrics with a higher degree of taxonomic resolution than morphological analysis. The reduced cost and increased capacity of DNA sequence-based approaches will finally allow environmental monitoring programs to operate at the geographical and temporal scale required by industrial and regulatory end-users. PMID:26488407
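For instance, once taxon abundances are recovered, either morphologically or from sequence reads, a biodiversity metric such as Shannon diversity is a short computation (toy counts below):

```python
import math

# Shannon diversity H = -sum(p_i * ln p_i) over taxon proportions p_i;
# the taxon names and counts here are illustrative, not study data.
def shannon(counts):
    total = sum(counts.values())
    return -sum((n / total) * math.log(n / total)
                for n in counts.values() if n > 0)

site_a = {"Baetis": 40, "Chironomus": 35, "Hydropsyche": 25}   # even community
site_b = {"Baetis": 95, "Chironomus": 3, "Hydropsyche": 2}     # dominated community
print(shannon(site_a) > shannon(site_b))
```

The same formula applies whether the counts come from sorted specimens or from read-based proportional abundances, which is what makes the two pipelines directly comparable.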
DOE Office of Scientific and Technical Information (OSTI.GOV)
Catfish Genome Consortium; Wang, Shaolin; Peatman, Eric
2010-03-23
Background: Through the Community Sequencing Program, a catfish EST sequencing project was carried out through a collaboration between the catfish research community and the Department of Energy's Joint Genome Institute. Prior to this project, only a limited EST resource from catfish was available for the purpose of SNP identification. Results: A total of 438,321 quality ESTs were generated from 8 channel catfish (Ictalurus punctatus) and 4 blue catfish (Ictalurus furcatus) libraries, bringing the number of catfish ESTs to nearly 500,000. Assembly of all catfish ESTs resulted in 45,306 contigs and 66,272 singletons. Over 35% of the unique sequences had significant similarities to known genes, allowing the identification of 14,776 unique genes in catfish. Over 300,000 putative SNPs have been identified, of which approximately 48,000 are high-quality SNPs identified from contigs with at least four sequences and a minor allele present in at least two sequences in the contig. The EST resource should be valuable for identification of microsatellites, genome annotation, large-scale expression analysis, and comparative genome analysis. Conclusions: This project generated a large EST resource for catfish that captured the majority of the catfish transcriptome. The parallel analysis of ESTs from two closely related Ictalurid catfishes should also provide a powerful means for the evaluation of ancient and recent gene duplications, and for the development of high-density microarrays in catfish. The inter- and intra-specific SNPs identified from the assembly of all catfish ESTs will greatly benefit the catfish introgression breeding program and whole-genome association studies.
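The high-quality SNP filter described above (at least four sequences per contig, minor allele in at least two) can be sketched on a toy alignment:

```python
from collections import Counter

# Call a SNP at a contig position only if coverage >= min_depth and
# the minor allele is observed in >= min_minor sequences.
def call_snps(aligned, min_depth=4, min_minor=2):
    snps = []
    for pos in range(len(aligned[0])):
        bases = Counter(read[pos] for read in aligned if read[pos] != "-")
        if sum(bases.values()) >= min_depth and len(bases) > 1:
            minor = sorted(bases.values())[-2]   # second-most-common allele count
            if minor >= min_minor:
                snps.append(pos)
    return snps

# toy 5-sequence contig alignment; position 1 carries C x3 / T x2
contig = ["ACGTA",
          "ACGTA",
          "ATGTA",
          "ATGTA",
          "ACGTA"]
print(call_snps(contig))
```

Requiring the minor allele in two independent sequences is what filters out singleton sequencing errors, at the cost of missing rare true variants in shallow contigs.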
Sun, Yangbo; Chen, Long; Huang, Bisheng; Chen, Keli
2017-07-01
As a mineral, the traditional Chinese medicine calamine has a shape similar to that of many other minerals, and investigations of commercially available calamine samples have shown that many fake and inferior calamine goods are sold on the market. The conventional identification method for calamine is complicated; given the large number of calamine samples, a rapid identification method is therefore needed. To establish a qualitative model using near-infrared (NIR) spectroscopy for rapid identification of various calamine samples, large quantities of calamine samples, including crude products, counterfeits and processed products, were collected and correctly identified using physicochemical and powder X-ray diffraction methods. The NIR spectroscopy method was used to analyze these samples by combining the multi-reference correlation coefficient (MRCC) method and the error back-propagation artificial neural network algorithm (BP-ANN), so as to realize the qualitative identification of calamine samples. The accuracy rate of the model based on the NIR and MRCC methods was 85%; in addition, the model, which took multiple factors into consideration, can be used to identify crude calamine products, counterfeits and processed products. Furthermore, by inputting the correlation coefficients of multiple references as the spectral feature data of samples into a BP-ANN, a qualitative identification model was established whose accuracy rate increased to 95%. The MRCC method can thus serve as an NIR-based feature-extraction step in BP-ANN modeling.
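A minimal sketch of correlation-based spectral matching in the spirit of the MRCC step (synthetic spectra; the real method uses multiple references per class and feeds the coefficients to a BP-ANN):

```python
import numpy as np

# Classify a query NIR spectrum by its Pearson correlation with
# reference spectra; class names and band shapes are hypothetical.
rng = np.random.default_rng(2)
x = np.linspace(0, 1, 100)
refs = {
    "calamine": np.exp(-((x - 0.3) ** 2) / 0.01),
    "counterfeit": np.exp(-((x - 0.7) ** 2) / 0.01),
}

def classify(spectrum):
    corr = {name: np.corrcoef(spectrum, ref)[0, 1]
            for name, ref in refs.items()}
    return max(corr, key=corr.get), corr

# a noisy measurement of a genuine sample
query = refs["calamine"] + 0.05 * rng.standard_normal(x.size)
label, corr = classify(query)
print(label)
```

In the full method the vector of correlation coefficients against all references becomes the input feature vector for the neural network, rather than the final decision itself.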
Maguire, Elizabeth M; Bokhour, Barbara G; Wagner, Todd H; Asch, Steven M; Gifford, Allen L; Gallagher, Thomas H; Durfee, Janet M; Martinello, Richard A; Elwy, A Rani
2016-11-11
Many healthcare organizations, including the Veterans Health Administration (VA), have developed disclosure policies for large-scale adverse events. This study evaluated VA's national large-scale disclosure policy and identified gaps and successes in its implementation. Semi-structured qualitative interviews were conducted with leaders, hospital employees, and patients at nine sites to elicit their perceptions of recent large-scale adverse event notifications and the national disclosure policy. Data were coded using the constructs of the Consolidated Framework for Implementation Research (CFIR). We conducted 97 interviews. Insights included how to handle the communication of large-scale disclosures through multiple levels of a large healthcare organization and how to manage ongoing communications about the event with employees. Of the 5 CFIR constructs and 26 sub-constructs assessed, seven were prominent in interviews. Leaders and employees specifically mentioned key problem areas involving 1) networks and communications during disclosure, 2) organizational culture, 3) engagement of external change agents during disclosure, and 4) a need for reflecting on and evaluating the policy implementation and the disclosure itself. Patients shared 5) preferences for personal outreach by phone in place of the current use of certified letters. All interviewees discussed 6) issues with execution and 7) costs of the disclosure. The CFIR analysis reveals key problem areas that need to be addressed during disclosure: timely communication patterns throughout the organization, establishing a supportive culture prior to implementation, using patient-approved, effective communication strategies during disclosures, providing follow-up support for employees and patients, and sharing lessons learned.
Zhong, Taiyang; Chen, Dongmei; Zhang, Xiuying
2016-11-09
Identification of the sources of soil mercury (Hg) on the provincial scale is helpful for enacting effective policies to prevent further contamination and to take reclamation measures. The natural and anthropogenic sources of Hg in Chinese farmland soil, and their contributions, were identified based on a decision tree method. The results showed that the concentrations of Hg in parent materials were most strongly associated with the general spatial distribution pattern of Hg concentration on a provincial scale. The decision tree analysis achieved a total accuracy of 89.70% in simulating the influence of human activities on the additions of Hg in farmland soil. Human activities, for example the production of coke, application of fertilizers, discharge of wastewater, discharge of solid waste, and the production of non-ferrous metals, were the main external sources of the large amount of Hg in the farmland soil. PMID:27834884
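A hand-rolled decision-stump sketch of source attribution in this spirit (all thresholds and variables are hypothetical, not those learned in the study):

```python
# Attribute an elevated soil-Hg reading to a source class via simple
# threshold rules, mimicking the shape of a learned decision tree.
def classify_source(sample):
    if sample["parent_hg"] > 0.15:            # mg/kg, assumed threshold
        return "parent material"
    if sample["coke_output"] > 1000:          # t/yr, assumed threshold
        return "coke production"
    if sample["fertilizer_rate"] > 300:       # kg/ha, assumed threshold
        return "fertilizer application"
    return "background"

samples = [
    {"parent_hg": 0.20, "coke_output": 0, "fertilizer_rate": 0},
    {"parent_hg": 0.05, "coke_output": 5000, "fertilizer_rate": 100},
    {"parent_hg": 0.05, "coke_output": 10, "fertilizer_rate": 400},
]
print([classify_source(s) for s in samples])
```

A real decision-tree learner chooses both the split variables and thresholds from labeled data; the point here is only the interpretable if-then structure that makes such source attribution auditable.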
Uncovering Implicit Assumptions: A Large-Scale Study on Students' Mental Models of Diffusion
ERIC Educational Resources Information Center
Stains, Marilyne; Sevian, Hannah
2015-01-01
Students' mental models of diffusion in a gas phase solution were studied through the use of the Structure and Motion of Matter (SAMM) survey. This survey permits identification of categories of ways students think about the structure of the gaseous solute and solvent, the origin of motion of gas particles, and trajectories of solute particles in…
From drug to protein: using yeast genetics for high-throughput target discovery.
Armour, Christopher D; Lum, Pek Yee
2005-02-01
The budding yeast Saccharomyces cerevisiae has long been an effective eukaryotic model system for understanding basic cellular processes. The genetic tractability and ease of manipulation in the laboratory make yeast well suited for large-scale chemical and genetic screens. Several recent studies describing the use of yeast genetics for high-throughput drug target identification are discussed in this review.
ERIC Educational Resources Information Center
Südkamp, Anna; Pohl, Steffi; Weinert, Sabine
2015-01-01
Including students with special educational needs in learning (SEN-L) is a challenge for large-scale assessments. In order to draw inferences with respect to students with SEN-L and to compare their scores to students in general education, one needs to assure that the measurement model is reliable and that the same construct is measured for…
Tsugawa, Hiroshi; Ohta, Erika; Izumi, Yoshihiro; Ogiwara, Atsushi; Yukihira, Daichi; Bamba, Takeshi; Fukusaki, Eiichiro; Arita, Masanori
2014-01-01
Based on theoretically calculated comprehensive lipid libraries, in lipidomics as many as 1000 multiple reaction monitoring (MRM) transitions can be monitored for each single run. On the other hand, lipid analysis from each MRM chromatogram requires tremendous manual effort to identify and quantify lipid species. Isotopic peaks differing by up to a few atomic masses further complicate analysis. To accelerate the identification and quantification process we developed novel software, MRM-DIFF, for the differential analysis of large-scale MRM assays. It supports a correlation optimized warping (COW) algorithm to align MRM chromatograms and utilizes quality control (QC) sample datasets to automatically adjust the alignment parameters. Moreover, user-defined reference libraries that include the molecular formula, retention time, and MRM transition can be used to identify target lipids and to correct peak abundances by considering isotopic peaks. Here, we demonstrate the software pipeline and introduce key points for MRM-based lipidomics research to reduce the mis-identification and overestimation of lipid profiles. The MRM-DIFF program, example data set and tutorials are downloadable at the "Standalone software" section of the PRIMe (Platform for RIKEN Metabolomics, http://prime.psc.riken.jp/) database website. PMID:25688256
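COW itself aligns chromatograms piecewise; a simpler global cross-correlation shift conveys the retention-time alignment idea on synthetic traces:

```python
import numpy as np

# Align a shifted MRM trace to a QC reference by finding the lag that
# maximizes their cross-correlation (toy Gaussian peaks, assumed data).
x = np.arange(300)
reference = np.exp(-0.5 * ((x - 150) / 5.0) ** 2)
sample = np.exp(-0.5 * ((x - 163) / 5.0) ** 2)   # shifted by +13 scans

corr = np.correlate(sample, reference, mode="full")
shift = corr.argmax() - (len(reference) - 1)     # estimated lag in scans
aligned = np.roll(sample, -shift)                # undo the retention shift
print(shift)
```

A single global lag cannot fix nonlinear retention drift, which is why COW warps the time axis segment by segment against the QC reference instead.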
New Genes and New Insights from Old Genes: Update on Alzheimer Disease
Ringman, John M.; Coppola, Giovanni
2013-01-01
Purpose of Review: This article discusses the current status of knowledge regarding the genetic basis of Alzheimer disease (AD) with a focus on clinically relevant aspects. Recent Findings: The genetic architecture of AD is complex, as it includes multiple susceptibility genes and likely nongenetic factors. Rare but highly penetrant autosomal dominant mutations explain a small minority of the cases but have allowed tremendous advances in understanding disease pathogenesis. The identification of a strong genetic risk factor, APOE, reshaped the field and introduced the notion of genetic risk for AD. More recently, large-scale genome-wide association studies are adding to the picture a number of common variants with very small effect sizes. Large-scale resequencing studies are expected to identify additional risk factors, including rare susceptibility variants and structural variation. Summary: Genetic assessment is currently of limited utility in clinical practice because of the low frequency (Mendelian mutations) or small effect size (common risk factors) of the currently known susceptibility genes. However, genetic studies are identifying with confidence a number of novel risk genes, and this will further our understanding of disease biology and possibly the identification of therapeutic targets. PMID:23558482
Sharan, Malvika; Förstner, Konrad U; Eulalio, Ana; Vogel, Jörg
2017-06-20
RNA-binding proteins (RBPs) have been established as core components of several post-transcriptional gene regulation mechanisms. Experimental techniques such as cross-linking and co-immunoprecipitation have enabled the large-scale identification of RBPs, RNA-binding domains (RBDs) and their regulatory roles in eukaryotic species such as human and yeast. In contrast, our knowledge of the number and potential diversity of RBPs in bacteria is poorer, owing to the technical challenges associated with existing global screening approaches. We introduce APRICOT, a computational pipeline for the sequence-based identification and characterization of proteins using RBDs known from experimental studies. The pipeline identifies functional motifs in protein sequences using position-specific scoring matrices and Hidden Markov Models of the functional domains and statistically scores them based on a series of sequence-based features. Subsequently, APRICOT identifies putative RBPs and characterizes them by several biological properties. Here we demonstrate the application and adaptability of the pipeline on large-scale protein sets, including the bacterial proteome of Escherichia coli. APRICOT showed better performance on various datasets than other existing tools for the sequence-based prediction of RBPs, achieving an average sensitivity and specificity of 0.90 and 0.91, respectively. The command-line tool and its documentation are available at https://pypi.python.org/pypi/bio-apricot. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
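The PSSM-scanning step can be sketched as follows; the matrix, alphabet, and threshold are toy assumptions, not APRICOT's domain models:

```python
# Score every window of a protein sequence against a position-specific
# scoring matrix (PSSM) and report windows above a threshold.
PSSM = [  # hypothetical 3-column motif over a reduced alphabet
    {"R": 2.0, "K": 1.5, "G": -1.0, "A": -1.0},
    {"G": 2.0, "A": 0.5, "R": -1.0, "K": -1.0},
    {"R": 1.5, "K": 2.0, "G": -1.0, "A": -1.0},
]

def scan(seq, threshold=3.0):
    hits = []
    for i in range(len(seq) - len(PSSM) + 1):
        # sum per-position scores; unseen residues get a default penalty
        score = sum(col.get(seq[i + j], -2.0)
                    for j, col in enumerate(PSSM))
        if score >= threshold:
            hits.append((i, round(score, 1)))
    return hits

print(scan("AARGKAAG"))
```

Pipelines like the one described above combine many such matrices (and HMMs) per domain family and then score candidate hits with additional sequence-based features.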
Caught you: threats to confidentiality due to the public release of large-scale genetic data sets
2010-01-01
Background Large-scale genetic data sets are frequently shared with other research groups and even released on the Internet to allow for secondary analysis. Study participants are usually not informed about such data sharing because data sets are assumed to be anonymous after stripping off personal identifiers. Discussion The assumption of anonymity of genetic data sets, however, is tenuous because genetic data are intrinsically self-identifying. Two types of re-identification are possible: the "Netflix" type and the "profiling" type. The "Netflix" type needs another small genetic data set, usually with less than 100 SNPs but including a personal identifier. This second data set might originate from another clinical examination, a study of leftover samples or forensic testing. When merged to the primary, unidentified set it will re-identify all samples of that individual. Even with no second data set at hand, a "profiling" strategy can be developed to extract as much information as possible from a sample collection. Starting with the identification of ethnic subgroups along with predictions of body characteristics and diseases, the asthma kids case as a real-life example is used to illustrate that approach. Summary Depending on the degree of supplemental information, there is a good chance that at least a few individuals can be identified from an anonymized data set. Any re-identification, however, may potentially harm study participants because it will release individual genetic disease risks to the public. PMID:21190545
Caught you: threats to confidentiality due to the public release of large-scale genetic data sets.
Wjst, Matthias
2010-12-29
Large-scale genetic data sets are frequently shared with other research groups and even released on the Internet to allow for secondary analysis. Study participants are usually not informed about such data sharing because data sets are assumed to be anonymous after stripping off personal identifiers. The assumption of anonymity of genetic data sets, however, is tenuous because genetic data are intrinsically self-identifying. Two types of re-identification are possible: the "Netflix" type and the "profiling" type. The "Netflix" type needs another small genetic data set, usually with less than 100 SNPs but including a personal identifier. This second data set might originate from another clinical examination, a study of leftover samples or forensic testing. When merged to the primary, unidentified set it will re-identify all samples of that individual. Even with no second data set at hand, a "profiling" strategy can be developed to extract as much information as possible from a sample collection. Starting with the identification of ethnic subgroups along with predictions of body characteristics and diseases, the asthma kids case as a real-life example is used to illustrate that approach. Depending on the degree of supplemental information, there is a good chance that at least a few individuals can be identified from an anonymized data set. Any re-identification, however, may potentially harm study participants because it will release individual genetic disease risks to the public.
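The "Netflix-type" linkage described above can be sketched in a few lines; the cohort size, genotype coding and SNP counts below are invented for illustration, but they show why roughly 30 independent SNPs are enough to single out one participant in an "anonymized" data set:

```python
import random

random.seed(42)

# Hypothetical anonymized study data: 10,000 participants genotyped at
# 100 SNPs (genotypes coded 0/1/2 = copies of the minor allele).
N_PEOPLE, N_SNPS = 10_000, 100
database = [[random.randint(0, 2) for _ in range(N_SNPS)]
            for _ in range(N_PEOPLE)]

# A second, identified data set: only 30 of those SNPs for one person
# (e.g. from a clinical test, leftover sample or forensic typing).
target_row = 1234
probe_snps = random.sample(range(N_SNPS), 30)
probe = {snp: database[target_row][snp] for snp in probe_snps}

# Linkage: find every database row consistent with the small probe set.
matches = [i for i, row in enumerate(database)
           if all(row[snp] == g for snp, g in probe.items())]
n_matches = len(matches)
```

The chance that an unrelated row matches all 30 SNPs is on the order of (1/3)^30, so even in a cohort of 10,000 the probe almost surely re-identifies exactly the target individual.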
NASA Astrophysics Data System (ADS)
Takasaki, Koichi
This paper presents a program for the multidisciplinary optimization and identification problem of the nonlinear model of large aerospace vehicle structures. The program constructs the global matrix of the dynamic system in the time direction by the p-version finite element method (pFEM), and the basic matrix for each pFEM node in the time direction is described by a sparse matrix similarly to the static finite element problem. The algorithm used by the program does not require the Hessian matrix of the objective function and so has low memory requirements. It also has a relatively low computational cost, and is suited to parallel computation. The program was integrated as a solver module of the multidisciplinary analysis system CUMuLOUS (Computational Utility for Multidisciplinary Large scale Optimization of Undense System) which is under development by the Aerospace Research and Development Directorate (ARD) of the Japan Aerospace Exploration Agency (JAXA).
Large-scale recording of neuronal ensembles.
Buzsáki, György
2004-05-01
How does the brain orchestrate perceptions, thoughts and actions from the spiking activity of its neurons? Early single-neuron recording research treated spike pattern variability as noise that needed to be averaged out to reveal the brain's representation of invariant input. Another view is that variability of spikes is centrally coordinated and that this brain-generated ensemble pattern in cortical structures is itself a potential source of cognition. Large-scale recordings from neuronal ensembles now offer the opportunity to test these competing theoretical frameworks. Currently, wire and micro-machined silicon electrode arrays can record from large numbers of neurons and monitor local neural circuits at work. Achieving the full potential of massively parallel neuronal recordings, however, will require further development of the neuron-electrode interface, automated and efficient spike-sorting algorithms for effective isolation and identification of single neurons, and new mathematical insights for the analysis of network properties.
Multi-scale Material Parameter Identification Using LS-DYNA® and LS-OPT®
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stander, Nielen; Basudhar, Anirban; Basu, Ushnish
2015-09-14
Ever-tightening regulations on fuel economy, and the likely future regulation of carbon emissions, demand persistent innovation in vehicle design to reduce vehicle mass. Classical methods for computational mass reduction include sizing, shape and topology optimization. One of the few remaining options for weight reduction can be found in materials engineering and material design optimization. Apart from considering different types of materials, by adding material diversity and composite materials, an appealing option in automotive design is to engineer steel alloys for the purpose of reducing plate thickness while retaining sufficient strength and ductility required for durability and safety. A project to develop computational material models for advanced high strength steel is currently being executed under the auspices of the United States Automotive Materials Partnership (USAMP), funded by the US Department of Energy. Under this program, new Third Generation Advanced High Strength Steels (3GAHSS) are being designed, tested and integrated with the remaining design variables of a benchmark vehicle finite element model. The objectives of the project are to integrate atomistic, microstructural, forming and performance models to create an integrated computational materials engineering (ICME) toolkit for 3GAHSS. The mechanical properties of Advanced High Strength Steels (AHSS) are controlled by many factors, including phase composition and distribution in the overall microstructure; volume fraction, size and morphology of phase constituents; and stability of the metastable retained austenite phase. The complex phase transformation and deformation mechanisms in these steels make the well-established traditional techniques obsolete, and a multi-scale, microstructure-based modeling approach following the ICME strategy was therefore chosen in this project.
Multi-scale modeling as a major area of research and development is an outgrowth of the Comprehensive Test Ban Treaty of 1996, which banned surface testing of nuclear devices [1]. This had the effect that experimental work was reduced from large-scale tests to multi-scale experiments that provide material models with validation at different length scales. In the subsequent years, industry realized that multi-scale modeling and simulation-based design were transferable to the design optimization of any structural system. Horstemeyer [1] lists a number of advantages of multi-scale modeling, among them the reduction of product development time by alleviating costly trial-and-error iterations, as well as the reduction of product costs through innovations in material, product and process designs. Multi-scale modeling can reduce the number of costly large-scale experiments and can increase product quality by providing more accurate predictions. Research tends to be focused on each particular length scale, which enhances accuracy in the long term. This paper serves as an introduction to the LS-OPT and LS-DYNA methodology for multi-scale modeling. It mainly focuses on an approach to integrate material identification using material models of different length scales. As an example, a multi-scale material identification strategy, consisting of a Crystal Plasticity (CP) material model and a homogenized State Variable (SV) model, is discussed, and the parameter identification of the individual material models of different length scales is demonstrated. The paper concludes with thoughts on integrating the multi-scale methodology into the overall vehicle design.
hEIDI: An Intuitive Application Tool To Organize and Treat Large-Scale Proteomics Data.
Hesse, Anne-Marie; Dupierris, Véronique; Adam, Claire; Court, Magali; Barthe, Damien; Emadali, Anouk; Masselon, Christophe; Ferro, Myriam; Bruley, Christophe
2016-10-07
Advances in high-throughput proteomics have led to a rapid increase in the number, size, and complexity of the associated data sets. Managing and extracting reliable information from such large series of data sets require the use of dedicated software organized in a consistent pipeline to reduce, validate, exploit, and ultimately export data. The compilation of multiple mass-spectrometry-based identification and quantification results obtained in the context of a large-scale project represents a real challenge for developers of bioinformatics solutions. In response to this challenge, we developed a dedicated software suite called hEIDI to manage and combine both identifications and semiquantitative data related to multiple LC-MS/MS analyses. This paper describes how, through a user-friendly interface, hEIDI can be used to compile analyses and retrieve lists of nonredundant protein groups. Moreover, hEIDI allows direct comparison of series of analyses, on the basis of protein groups, while ensuring consistent protein inference and also computing spectral counts. hEIDI ensures that validated results are compliant with MIAPE guidelines as all information related to samples and results is stored in appropriate databases. Thanks to the database structure, validated results generated within hEIDI can be easily exported in the PRIDE XML format for subsequent publication. hEIDI can be downloaded from http://biodev.extra.cea.fr/docs/heidi .
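hEIDI's spectral counting can be illustrated with a minimal sketch; the PSM identifiers and protein-group names below are invented, and hEIDI's real data model lives in its databases rather than in Python lists:

```python
from collections import Counter

# Toy peptide-spectrum matches (PSMs): (spectrum_id, protein_group).
psms = [
    ("sp001", "GroupA"), ("sp002", "GroupA"), ("sp003", "GroupB"),
    ("sp004", "GroupA"), ("sp005", "GroupB"), ("sp006", "GroupC"),
]

# Spectral count = number of PSMs assigned to each protein group,
# a simple semiquantitative proxy for protein abundance.
spectral_counts = Counter(group for _, group in psms)
```

Comparing analyses on the basis of protein groups, as the paper describes, then amounts to comparing such count vectors across runs under a consistent protein-inference scheme.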
Amouroux, P; Crochard, D; Germain, J-F; Correa, M; Ampuero, J; Groussier, G; Kreiter, P; Malausa, T; Zaviezo, T
2017-05-17
Scale insects (Sternorrhyncha: Coccoidea) are one of the most invasive and agriculturally damaging insect groups. Their management and the development of new control methods are currently jeopardized by the scarcity of identification data, in particular in regions where no large survey coupling morphological and DNA analyses has been performed. In this study, we sampled 116 populations of armored scales (Hemiptera: Diaspididae) and 112 populations of soft scales (Hemiptera: Coccidae) in Chile, over a latitudinal gradient ranging from 18°S to 41°S, on fruit crops, ornamental plants and trees. We sequenced the COI and 28S genes in each population. In total, 19 Diaspididae species and 11 Coccidae species were identified morphologically. From the 63 COI haplotypes and the 54 28S haplotypes uncovered, and using several DNA data analysis methods (Automatic Barcode Gap Discovery, K2P distance, NJ trees), up to 36 genetic clusters were detected. Morphological and DNA data were congruent, except for three species (Aspidiotus nerii, Hemiberlesia rapax and Coccus hesperidum) in which DNA data revealed highly differentiated lineages. More than 50% of the haplotypes obtained had no high-scoring matches with any of the sequences in the GenBank database. This study provides 63 COI and 54 28S barcode sequences for the identification of Coccoidea from Chile.
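The K2P (Kimura two-parameter) distance used in the barcoding analysis has a closed form from the transition proportion P and transversion proportion Q; a minimal sketch (the two toy sequences are illustrative, not from the study):

```python
import math

# Transitions are purine<->purine or pyrimidine<->pyrimidine changes.
TRANSITIONS = {("A", "G"), ("G", "A"), ("C", "T"), ("T", "C")}

def k2p_distance(seq1, seq2):
    """Kimura two-parameter distance between two aligned DNA sequences.

    With P = proportion of transition differences and Q = proportion
    of transversion differences:
        d = -0.5 * ln((1 - 2P - Q) * sqrt(1 - 2Q))
    """
    assert len(seq1) == len(seq2), "sequences must be aligned"
    n = len(seq1)
    p = sum((a, b) in TRANSITIONS for a, b in zip(seq1, seq2)) / n
    q = sum(a != b and (a, b) not in TRANSITIONS
            for a, b in zip(seq1, seq2)) / n
    return -0.5 * math.log((1 - 2 * p - q) * math.sqrt(1 - 2 * q))

# One transition (A->G) in ten aligned sites: P = 0.1, Q = 0.
d = k2p_distance("AAAAAAAAAA", "AAAAAAAAGA")
```

Pairwise matrices of such distances are what NJ tree building and barcode-gap methods like ABGD operate on.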
Test of the CLAS12 RICH large-scale prototype in the direct proximity focusing configuration
Anefalos Pereira, S.; Baltzell, N.; Barion, L.; ...
2016-02-11
A large area ring-imaging Cherenkov detector has been designed to provide clean hadron identification capability in the momentum range from 3 GeV/c up to 8 GeV/c for the CLAS12 experiments at the upgraded 12 GeV continuous electron beam accelerator facility of Jefferson Laboratory. The adopted solution foresees a novel hybrid optics design based on an aerogel radiator, composite mirrors, and highly packed and highly segmented photon detectors. Cherenkov light will either be imaged directly (forward tracks) or after two mirror reflections (large-angle tracks). We report here the results of tests of a large-scale prototype of the RICH detector, performed with the hadron beam of the CERN T9 experimental hall in the direct detection configuration. The tests demonstrated that the proposed design provides the required pion-to-kaon rejection factor of 1:500 in the whole momentum range.
A Framework for Spatial Interaction Analysis Based on Large-Scale Mobile Phone Data
Li, Weifeng; Cheng, Xiaoyun; Guo, Gaohua
2014-01-01
The overall understanding of spatial interaction and exact knowledge of its dynamic evolution are required in urban planning and transportation planning. This study aimed to analyze spatial interaction based on large-scale mobile phone data. The newly arisen mass dataset required a new methodology compatible with its peculiar characteristics. A three-stage framework was proposed in this paper, including data preprocessing, critical activity identification, and spatial interaction measurement. The proposed framework introduced frequent pattern mining and measured spatial interaction by the obtained associations. A case study of three communities in Shanghai was carried out as verification of the proposed method and a demonstration of its practical application. The spatial interaction patterns and the representative features proved the rationality of the proposed framework. PMID:25435865
miRNAFold: a web server for fast miRNA precursor prediction in genomes.
Tav, Christophe; Tempel, Sébastien; Poligny, Laurent; Tahi, Fariza
2016-07-08
Computational methods are required for the prediction of non-coding RNAs (ncRNAs), which are involved in many biological processes, especially at the post-transcriptional level. Among these ncRNAs, miRNAs have been widely studied, and biologists need efficient and fast tools for their identification. In particular, ab initio methods are usually required when predicting novel miRNAs. Here we present a web server dedicated to the large-scale identification of miRNA precursors in genomes. It is based on an algorithm called miRNAFold that predicts miRNA hairpin structures quickly and with high sensitivity. miRNAFold is implemented as a web server with an intuitive and user-friendly interface, as well as a standalone version. The web server is freely available at: http://EvryRNA.ibisc.univ-evry.fr/miRNAFold. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
ERIC Educational Resources Information Center
Benediktsson, Michael Owen
2010-01-01
What role do the media play in the identification and construction of white-collar crimes? Few studies have examined media coverage of corporate deviance. This study investigates news coverage of six large-scale accounting scandals that broke in 2001 and 2002. Using a variety of empirical methods to analyze the 51 largest U.S. newspapers, the…
Vullo, Carlos M; Romero, Magdalena; Catelli, Laura; Šakić, Mustafa; Saragoni, Victor G; Jimenez Pleguezuelos, María Jose; Romanini, Carola; Anjos Porto, Maria João; Puente Prieto, Jorge; Bofarull Castro, Alicia; Hernandez, Alexis; Farfán, María José; Prieto, Victoria; Alvarez, David; Penacino, Gustavo; Zabalza, Santiago; Hernández Bolaños, Alejandro; Miguel Manterola, Irati; Prieto, Lourdes; Parsons, Thomas
2016-03-01
The GHEP-ISFG Working Group has recognized the importance of assisting DNA laboratories to gain expertise in handling DVI or missing persons identification (MPI) projects, which involve large-scale genetic profile comparisons. Eleven laboratories participated in a DNA matching exercise to identify victims from a hypothetical conflict with 193 missing persons. The post mortem database comprised 87 skeletal remains profiles from a secondary mass grave displaying a minimal number of 58 individuals with evidence of commingling. The reference database was represented by 286 family reference profiles with diverse pedigrees. The goal of the exercise was to correctly discover re-associations and family matches. The results of direct matching for commingled remains re-associations were correct and fully concordant among all laboratories. However, the kinship analysis for missing persons identifications showed variable results among the participants. There was a group of laboratories with correct, concordant results, but nearly half of the others showed discrepant results, exhibiting likelihood ratio differences of several orders of magnitude in some cases. Three main errors were detected: (a) some laboratories did not use the complete reference family genetic data to report the match with the remains, (b) the identity and/or non-identity hypotheses were sometimes wrongly expressed in the likelihood ratio calculations, and (c) many laboratories did not properly evaluate the prior odds for the event. The results suggest that large-scale profile comparisons for DVI or MPI is a challenge for forensic genetics laboratories and the statistical treatment of DNA matching and the Bayesian framework should be better standardized among laboratories. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
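Errors (b) and (c) concern how the likelihood ratio is formulated. As a minimal sketch of a single-locus paternity index for a mother-child-alleged-father trio (the allele labels and frequency are invented; real DVI/MPI casework must also handle mutations, ambiguous paternal alleles and population substructure):

```python
def paternity_index(child, mother, father, allele_freqs):
    """PI = P(child genotype | alleged father) / P(child genotype | random man).

    Simplified to the common case of one unambiguous paternal obligate
    allele at a single locus.
    """
    # Paternal obligate allele: the child allele not explained by the mother.
    paternal = [a for a in child if a not in mother]
    assert len(paternal) == 1, "sketch handles the unambiguous case only"
    a = paternal[0]
    # Alleged father transmits allele `a` with prob 0.5 (heterozygous)
    # or 1.0 (homozygous); a random man with its population frequency.
    transmit = father.count(a) / 2
    return transmit / allele_freqs[a]

# Child 12/14, mother 12/15 -> paternal allele is 14; father 14/16.
lr = paternity_index(child=(12, 14), mother=(12, 15),
                     father=(14, 16), allele_freqs={14: 0.1})
```

The full case LR is the product of such single-locus ratios, and posterior odds of identity then require multiplying by properly chosen prior odds, the step that error (c) refers to.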
Vilar, Santiago; Hripcsak, George
2016-01-01
Drug-target identification is crucial to discovering novel applications for existing drugs and provides more insight into mechanisms of biological action, such as adverse drug effects (ADEs). Computational methods, along with the integration of current big data sources, provide a useful framework for drug-target and drug-adverse effect discovery. In this article, we propose a method based on the integration of 3D chemical similarity, target and adverse effect data to generate a drug-target-adverse effect predictor, along with a simple leveraging system to improve identification of drug-targets and drug-adverse effects. In the first step, we generated a system for multiple drug-target identification based on the application of 3D drug similarity to a large target dataset extracted from ChEMBL. Next, we developed a target-adverse effect predictor combining targets from ChEMBL with phenotypic information provided by the SIDER data source. Both modules were linked to generate a final predictor that establishes hypotheses about new drug-target-adverse effect candidates. Additionally, we showed that leveraging drug-target candidates with phenotypic data is very useful to improve the identification of drug-targets. The integration of phenotypic data into drug-target candidates yielded up to twofold precision improvement. In the opposite direction, leveraging drug-phenotype candidates with target data also yielded a significant enhancement in the performance. The modeling described in the current study is simple and efficient and has applications at large scale in drug repurposing and drug safety through the identification of mechanism of action of biological effects.
Lee, Yi-Hsuan; von Davier, Alina A
2013-07-01
Maintaining a stable score scale over time is critical for all standardized educational assessments. Traditional quality control tools and approaches for assessing scale drift either require special equating designs, or may be too time-consuming to be considered on a regular basis with an operational test that has a short time window between an administration and its score reporting. Thus, the traditional methods are not sufficient to catch unusual testing outcomes in a timely manner. This paper presents a new approach for score monitoring and assessment of scale drift. It involves quality control charts, model-based approaches, and time series techniques to accommodate the following needs of monitoring scale scores: continuous monitoring, adjustment of customary variations, identification of abrupt shifts, and assessment of autocorrelation. Performance of the methodologies is evaluated using manipulated data based on real responses from 71 administrations of a large-scale high-stakes language assessment.
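One standard quality control chart for detecting abrupt shifts of the kind the paper targets is the tabular CUSUM; a minimal one-sided sketch (the target mean, reference value k, decision interval h and score series are illustrative, not the paper's operational settings):

```python
def cusum_upper(series, mu0, k, h):
    """One-sided (upper) tabular CUSUM.

    C_t = max(0, C_{t-1} + x_t - mu0 - k); signal when C_t > h.
    Returns the index of the first alarm, or None if no alarm.
    """
    c = 0.0
    for t, x in enumerate(series):
        c = max(0.0, c + x - mu0 - k)
        if c > h:
            return t
    return None

# Stable (centered) scale scores, then an abrupt upward shift of one unit:
scores = [0.0] * 10 + [1.0] * 12
alarm_at = cusum_upper(scores, mu0=0.0, k=0.5, h=4.0)
```

After the shift at index 10, the statistic grows by 0.5 per administration and crosses h = 4.0 at index 18, illustrating the trade-off between the decision interval and detection delay that any continuous monitoring scheme must tune.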
NASA Astrophysics Data System (ADS)
Wang, Ke; Testi, Leonardo; Burkert, Andreas; Walmsley, C. Malcolm; Beuther, Henrik; Henning, Thomas
2016-09-01
Large-scale gaseous filaments with lengths up to the order of 100 pc are on the upper end of the filamentary hierarchy of the Galactic interstellar medium (ISM). Their association with the Galactic structure and their role in Galactic star formation are of great interest from both an observational and a theoretical point of view. Previous "by-eye" searches, combined together, have started to uncover the Galactic distribution of large filaments, yet inherent bias and small sample sizes prevent conclusive statistical results from being drawn. Here, we present (1) a new, automated method for identifying large-scale velocity-coherent dense filaments, and (2) the first statistics and the Galactic distribution of these filaments. We use a customized minimum spanning tree algorithm to identify filaments by connecting voxels in position-position-velocity space, using the Bolocam Galactic Plane Survey spectroscopic catalog. In the range 7.5° ≤ l ≤ 194°, we have identified 54 large-scale filaments and derived mass (~10^3–10^5 M⊙), length (10–276 pc), linear mass density (54–8625 M⊙ pc⁻¹), aspect ratio, linearity, velocity gradient, temperature, fragmentation, Galactic location, and orientation angle. The filaments concentrate along major spiral arms. They are widely distributed across the Galactic disk, with 50% located within ±20 pc of the Galactic mid-plane and 27% running in the centers of spiral arms. On the order of 1% of the molecular ISM is confined in large filaments. Massive star formation is more favorable in large filaments than elsewhere. This is the first comprehensive catalog of large filaments that can be useful for a quantitative comparison with spiral structures and numerical simulations.
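The core idea of grouping voxels in position-position-velocity (PPV) space with a minimum spanning tree can be sketched compactly: cutting MST edges above a length threshold gives the same components as single-linkage clustering with that threshold. The coordinates, velocity scaling and cut value below are invented for illustration, not the paper's calibrated parameters:

```python
def ppv_distance(p, q, v_scale=1.0):
    """Euclidean distance in (l, b, v) with a tunable velocity weight."""
    dl, db, dv = p[0] - q[0], p[1] - q[1], (p[2] - q[2]) * v_scale
    return (dl * dl + db * db + dv * dv) ** 0.5

def mst_clusters(voxels, cut):
    """Kruskal-style union-find; keeping only edges shorter than `cut`
    yields exactly the components of the MST after removing longer edges."""
    parent = list(range(len(voxels)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    edges = sorted((ppv_distance(voxels[i], voxels[j]), i, j)
                   for i in range(len(voxels))
                   for j in range(i + 1, len(voxels)))
    for d, i, j in edges:
        if d < cut:
            parent[find(i)] = find(j)
    return len({find(i) for i in range(len(voxels))})

# Two velocity-coherent chains, well separated in velocity:
voxels = [(0, 0, 10.0), (1, 0, 10.2), (2, 0, 10.4),
          (0, 1, 35.0), (1, 1, 35.1), (2, 1, 35.3)]
n_filaments = mst_clusters(voxels, cut=2.0)
```

The velocity-coherence requirement enters through the velocity term of the distance: voxels close on the sky but far apart in velocity are never joined.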
Kasam, Vinod; Salzemann, Jean; Botha, Marli; Dacosta, Ana; Degliesposti, Gianluca; Isea, Raul; Kim, Doman; Maass, Astrid; Kenyon, Colin; Rastelli, Giulio; Hofmann-Apitius, Martin; Breton, Vincent
2009-05-01
Despite continuous efforts of the international community to reduce the impact of malaria on developing countries, no significant progress has been made in recent years, and the discovery of new drugs is more than ever needed. Out of the many proteins involved in the metabolic activities of the Plasmodium parasite, some are promising targets for rational drug discovery. Recent years have witnessed the emergence of grids, which are highly distributed computing infrastructures particularly well fitted for embarrassingly parallel computations like docking. In 2005, a first attempt at using grids for large-scale virtual screening focused on plasmepsins and ended in the identification of previously unknown scaffolds, which were confirmed in vitro to be active plasmepsin inhibitors. Following this success, a second deployment took place in the fall of 2006, focusing on one well-known target, dihydrofolate reductase (DHFR), and on a new promising one, glutathione-S-transferase. In silico drug design, especially vHTS, is a widely accepted technology in lead identification and lead optimization. This approach therefore builds upon the progress made in computational chemistry, to achieve more accurate in silico docking, and in information technology, to design and operate large-scale grid infrastructures. On the computational side, a sustained infrastructure has been developed: docking at large scale, using different strategies in result analysis, storing the results on the fly in MySQL databases, and applying molecular dynamics refinement as well as MM-PBSA and MM-GBSA rescoring. The modeling results obtained are very promising. Based on the modeling results, in vitro assays are underway for all the targets against which screening was performed.
The current paper describes the rational drug discovery activity at large scale, especially molecular docking using FlexX software on computational grids in finding hits against three different targets (PfGST, PfDHFR, PvDHFR (wild type and mutant forms) implicated in malaria. Grid-enabled virtual screening approach is proposed to produce focus compound libraries for other biological targets relevant to fight the infectious diseases of the developing world.
Implementation of the Agitated Behavior Scale in the Electronic Health Record.
Wilson, Helen John; Dasgupta, Kritis; Michael, Kathleen
The purpose of the study was to implement an Agitated Behavior Scale through an electronic health record and to evaluate the usability of the scale in a brain injury unit at a rehabilitation hospital. A quality improvement project was conducted in the brain injury unit at a large rehabilitation hospital with registered nurses as participants, using convenience sampling. The project consisted of three phases and included education, implementation of the scale in the electronic health record, and administration of the survey questionnaire, which utilized the System Usability Scale. The Agitated Behavior Scale was found to be usable, and there was 92.2% compliance with the use of the electronic Agitated Behavior Scale. The Agitated Behavior Scale was effectively implemented in the electronic health record and was found to be usable in the assessment of agitation. Utilization of the scale through the electronic health record on a daily basis will allow for early identification of agitation in patients with traumatic brain injury and enable prompt interventions to manage agitation.
Estimating the reliability of eyewitness identifications from police lineups
Wixted, John T.; Mickes, Laura; Dunn, John C.; Clark, Steven E.; Wells, William
2016-01-01
Laboratory-based mock crime studies have often been interpreted to mean that (i) eyewitness confidence in an identification made from a lineup is a weak indicator of accuracy and (ii) sequential lineups are diagnostically superior to traditional simultaneous lineups. Largely as a result, juries are increasingly encouraged to disregard eyewitness confidence, and up to 30% of law enforcement agencies in the United States have adopted the sequential procedure. We conducted a field study of actual eyewitnesses who were assigned to simultaneous or sequential photo lineups in the Houston Police Department over a 1-y period. Identifications were made using a three-point confidence scale, and a signal detection model was used to analyze and interpret the results. Our findings suggest that (i) confidence in an eyewitness identification from a fair lineup is a highly reliable indicator of accuracy and (ii) if there is any difference in diagnostic accuracy between the two lineup formats, it likely favors the simultaneous procedure. PMID:26699467
Estimating the reliability of eyewitness identifications from police lineups.
Wixted, John T; Mickes, Laura; Dunn, John C; Clark, Steven E; Wells, William
2016-01-12
Laboratory-based mock crime studies have often been interpreted to mean that (i) eyewitness confidence in an identification made from a lineup is a weak indicator of accuracy and (ii) sequential lineups are diagnostically superior to traditional simultaneous lineups. Largely as a result, juries are increasingly encouraged to disregard eyewitness confidence, and up to 30% of law enforcement agencies in the United States have adopted the sequential procedure. We conducted a field study of actual eyewitnesses who were assigned to simultaneous or sequential photo lineups in the Houston Police Department over a 1-y period. Identifications were made using a three-point confidence scale, and a signal detection model was used to analyze and interpret the results. Our findings suggest that (i) confidence in an eyewitness identification from a fair lineup is a highly reliable indicator of accuracy and (ii) if there is any difference in diagnostic accuracy between the two lineup formats, it likely favors the simultaneous procedure.
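The signal detection analysis behind such lineup comparisons typically summarizes discriminability as d′ = z(H) − z(F), the separation between the hit-rate and false-alarm-rate quantiles of a standard normal. A minimal sketch (the rates below are chosen for illustration, not the Houston field data):

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Signal-detection sensitivity: d' = z(H) - z(F)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# Rates chosen so that z(H) = +1 and z(F) = -1, giving d' = 2.
h = NormalDist().cdf(1.0)    # ~0.8413
f = NormalDist().cdf(-1.0)   # ~0.1587
sensitivity = d_prime(h, f)
```

Comparing lineup formats on d′ (or on full ROC curves) rather than on raw identification rates is what allows the diagnostic-accuracy conclusion the abstract draws.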
Smith, B. Eugene; Johnston, Mark K.; Lücking, Robert
2016-01-01
Accuracy of taxonomic identifications is crucial to data quality in online repositories of species occurrence data, such as the Global Biodiversity Information Facility (GBIF), which have accumulated several hundred million records over the past 15 years. These data serve as the basis for large-scale analyses of macroecological and biogeographic patterns and to document environmental changes over time. However, taxonomic identifications are often unreliable, especially for non-vascular plants and fungi including lichens, which may lack critical revisions of voucher specimens. Due to the scale of the problem, restudy of millions of collections is unrealistic and other strategies are needed. Here we propose to use verified, georeferenced occurrence data of a given species to apply predictive niche modeling that can then be used to evaluate unverified occurrences of that species. Selecting the charismatic lichen fungus, Usnea longissima, as a case study, we used georeferenced occurrence records based on sequenced specimens to model its predicted niche. Our results suggest that the target species is largely restricted to a narrow range of boreal and temperate forest in the Northern Hemisphere and that occurrence records in GBIF from tropical regions and the Southern Hemisphere do not represent this taxon, a prediction tested by comparison with taxonomic revisions of Usnea for these regions. As a novel approach, we employed Principal Component Analysis on the environmental grid data used for predictive modeling to visualize potential ecogeographical barriers for the target species; we found that tropical regions form a strong barrier, explaining why potential niches in the Southern Hemisphere were not colonized by Usnea longissima but instead by morphologically similar species.
This approach is an example of how data from two of the most important biodiversity repositories, GenBank and GBIF, can be effectively combined to remotely address the problem of inaccuracy of taxonomic identifications in occurrence data repositories and to provide a filtering mechanism which can considerably reduce the number of voucher specimens that need critical revision, in this case from 4,672 to about 100. PMID:26967999
Smith, B Eugene; Johnston, Mark K; Lücking, Robert
2016-01-01
Accuracy of taxonomic identifications is crucial to data quality in online repositories of species occurrence data, such as the Global Biodiversity Information Facility (GBIF), which have accumulated several hundred million records over the past 15 years. These data serve as the basis for large-scale analyses of macroecological and biogeographic patterns and to document environmental changes over time. However, taxonomic identifications are often unreliable, especially for non-vascular plants and fungi including lichens, which may lack critical revisions of voucher specimens. Due to the scale of the problem, restudy of millions of collections is unrealistic and other strategies are needed. Here we propose to use verified, georeferenced occurrence data of a given species to apply predictive niche modeling that can then be used to evaluate unverified occurrences of that species. Selecting the charismatic lichen fungus, Usnea longissima, as a case study, we used georeferenced occurrence records based on sequenced specimens to model its predicted niche. Our results suggest that the target species is largely restricted to a narrow range of boreal and temperate forest in the Northern Hemisphere and that occurrence records in GBIF from tropical regions and the Southern Hemisphere do not represent this taxon, a prediction tested by comparison with taxonomic revisions of Usnea for these regions. As a novel approach, we employed Principal Component Analysis on the environmental grid data used for predictive modeling to visualize potential ecogeographical barriers for the target species; we found that tropical regions form a strong barrier, explaining why potential niches in the Southern Hemisphere were not colonized by Usnea longissima but instead by morphologically similar species.
This approach is an example of how data from two of the most important biodiversity repositories, GenBank and GBIF, can be effectively combined to remotely address the problem of inaccuracy of taxonomic identifications in occurrence data repositories and to provide a filtering mechanism which can considerably reduce the number of voucher specimens that need critical revision, in this case from 4,672 to about 100. PMID:26967999
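The filtering idea above, scoring unverified GBIF records against a niche fitted to sequence-verified occurrences, can be sketched in miniature. The study used predictive niche modeling over environmental grids; the simple percentile-envelope test, variable values and thresholds below are hypothetical stand-ins for illustration, not the authors' implementation:

```python
import numpy as np

def envelope_filter(verified_env, unverified_env, lo=2.5, hi=97.5):
    """Flag unverified occurrence records whose environmental values fall
    outside the percentile envelope of verified (sequence-backed) records.
    Rows are records, columns are environmental variables (e.g. climate grids).
    Returns a boolean mask: True = plausible, False = needs critical revision."""
    lower = np.percentile(verified_env, lo, axis=0)
    upper = np.percentile(verified_env, hi, axis=0)
    return np.all((unverified_env >= lower) & (unverified_env <= upper), axis=1)

# Verified records cluster in a narrow boreal/temperate climatic range...
verified = np.array([[5.0, 1200.0], [6.0, 1100.0], [4.0, 1300.0], [5.5, 1250.0]])
# ...while the second unverified record comes from a tropical setting.
unverified = np.array([[5.2, 1180.0], [24.0, 2600.0]])
mask = envelope_filter(verified, unverified)
print(mask)  # first record plausible, second flagged for revision
```

Records falling outside the envelope are the ones routed to critical revision of voucher specimens, which is how the candidate set shrinks from thousands to about a hundred.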
Model and Data Reduction for Control, Identification and Compressed Sensing
NASA Astrophysics Data System (ADS)
Kramer, Boris
This dissertation focuses on problems in design, optimization and control of complex, large-scale dynamical systems from different viewpoints. The goal is to develop new algorithms and methods that solve real problems more efficiently, together with providing mathematical insight into the success of those methods. There are three main contributions in this dissertation. In Chapter 3, we provide a new method to solve large-scale algebraic Riccati equations (AREs), which arise in optimal control, filtering and model reduction. We present a projection-based algorithm utilizing proper orthogonal decomposition, which is demonstrated to produce highly accurate solutions at low rank. The method is parallelizable, easy to implement for practitioners, and is a first step towards a matrix-free approach to solve AREs. Numerical examples for n ≥ 10^6 unknowns are presented. In Chapter 4, we develop a system identification method which is motivated by tangential interpolation. This addresses the challenge of fitting linear time-invariant systems to input-output responses of complex dynamics, where the number of inputs and outputs is relatively large. The method reduces the computational burden imposed by a full singular value decomposition by carefully choosing directions on which to project the impulse response prior to assembly of the Hankel matrix. The identification and model reduction step follows from the eigensystem realization algorithm. We present three numerical examples: a mass-spring-damper system, a heat transfer problem, and a fluid dynamics system. We obtain error bounds and stability results for this method. Chapter 5 deals with control and observation design for parameter-dependent dynamical systems. We address this by using local parametric reduced-order models, which can be used online.
Data available from simulations of the system at various configurations (parameters, boundary conditions) are used to extract a sparse basis to represent the dynamics (via dynamic mode decomposition). Subsequently, a new, compressed-sensing-based classification algorithm is developed which incorporates the extracted dynamic information into the sensing basis. We show that this augmented classification basis makes the method more robust to noise and results in superior identification of the correct parameter. Numerical examples include a Navier-Stokes flow as well as a Boussinesq flow application.
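The Hankel-matrix identification step described for Chapter 4 builds on the eigensystem realization algorithm (ERA). Below is a minimal single-input, single-output ERA sketch; the dissertation's contribution of tangential projections for many inputs and outputs is not reproduced here, and the test system is hypothetical:

```python
import numpy as np

def era(h, r):
    """Minimal SISO eigensystem realization algorithm (ERA).
    h: impulse-response (Markov) sequence h[k] = C A^k B; r: model order."""
    n = (len(h) - 1) // 2
    H0 = np.array([[h[i + j] for j in range(n)] for i in range(n)])
    H1 = np.array([[h[i + j + 1] for j in range(n)] for i in range(n)])
    U, s, Vt = np.linalg.svd(H0)
    U, s, Vt = U[:, :r], s[:r], Vt[:r, :]   # truncate to rank r
    S_inv = np.diag(s ** -0.5)
    S_sq = np.diag(s ** 0.5)
    A = S_inv @ U.T @ H1 @ Vt.T @ S_inv     # reduced system matrix
    B = (S_sq @ Vt)[:, 0]                   # reduced input map
    C = (U @ S_sq)[0, :]                    # reduced output map
    return A, B, C

# Impulse response of a stable first-order system: h[k] = 0.9**k
h = [0.9 ** k for k in range(21)]
A, B, C = era(h, r=1)
print(np.linalg.eigvals(A))  # recovers the pole at 0.9
```

For a rank-1 Hankel matrix the realization recovers the system pole essentially exactly; for noisy or higher-order data, the singular value spectrum guides the choice of model order r.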
Parmain, G; Bouget, C; Müller, J; Horak, J; Gossner, M M; Lachat, T; Isacsson, G
2015-02-01
Monitoring saproxylic beetle diversity, though challenging, can help identify relevant conservation sites or key drivers of forest biodiversity, and assess the impact of forestry practices on biodiversity. Unfortunately, monitoring species assemblages is costly, mainly due to the time spent on identification. Excluding families which are rich in specimens and species but are difficult to identify is a frequent procedure used in ecological entomology to reduce the identification cost. The Staphylinidae (rove beetle) family is both one of the most frequently excluded and one of the most species-rich saproxylic beetle families. Using a large-scale beetle and environmental dataset from 238 beech stands across Europe, we evaluated the effects of staphylinid exclusion on results in ecological forest studies. Simplified staphylinid-excluded assemblages were found to be relevant surrogates for whole assemblages. The species richness and composition of saproxylic beetle assemblages both with and without staphylinids responded congruently to landscape, climatic and stand gradients, even when the assemblages included a high proportion of staphylinid species. At both local and regional scales, the species richness as well as the species composition of staphylinid-included and staphylinid-excluded assemblages were highly positively correlated. Rankings of sites according to their biodiversity level, whether Staphylinidae were included in or excluded from species richness, were also congruent. Our results indicate that species assemblages omitting staphylinids can serve as efficient surrogates for complete assemblages in large-scale biodiversity monitoring studies.
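The congruence test at the heart of this study, comparing site rankings with and without Staphylinidae, amounts to a rank correlation between two richness vectors. A toy illustration with hypothetical counts (not the paper's data):

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical species-richness counts at five beech stands.
rich_excl_staph = np.array([3, 7, 5, 10, 2])   # staphylinids excluded
rich_staph      = np.array([1, 2, 2, 3, 0])    # staphylinids only
rich_full = rich_excl_staph + rich_staph       # whole assemblage

# Rank correlation between site rankings with and without the family.
rho, _ = spearmanr(rich_full, rich_excl_staph)
print(rho)  # perfect rank agreement in this toy example
```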
Development of a New Marker System for Identification of Spirodela polyrhiza and Landoltia punctata
Feng, Bo; Fang, Yang; Xu, Zhibin; Xiang, Chao; Zhou, Chunhong; Jiang, Fei; Wang, Tao
2017-01-01
Lemnaceae (commonly called duckweed) is an aquatic plant family ideal for quantitative analysis in plant sciences. Several species of this family represent the smallest and fastest-growing flowering plants. Different ecotypes of the same species vary in their biochemical and physiological properties, so selecting desirable ecotypes of a species is very important. Here, we developed a simple and rapid molecular identification system for Spirodela polyrhiza and Landoltia punctata based on sequence polymorphisms. First, several primer pairs were designed, and three markers suitable for identification were selected. After PCR amplification, DNA fragments (the combination of three PCR products) in different duckweeds were detected using capillary electrophoresis. The high-resolution capillary electrophoresis results showed high concordance with the sequencing results, and combining PCR products containing several DNA fragments greatly improved the identification frequency. These results indicate that this method is not only suitable for interspecies identification but also ideal for distinguishing ecotypes within a species. Meanwhile, 11 haplotypes were found in both the S. polyrhiza and L. punctata ecotypes. The results suggest that this marker system is useful for large-scale identification of duckweed and for screening desirable ecotypes to broaden duckweed utilization. PMID:28168191
Logue, Mark W; Amstadter, Ananda B; Baker, Dewleen G; Duncan, Laramie; Koenen, Karestan C; Liberzon, Israel; Miller, Mark W; Morey, Rajendra A; Nievergelt, Caroline M; Ressler, Kerry J; Smith, Alicia K; Smoller, Jordan W; Stein, Murray B; Sumner, Jennifer A; Uddin, Monica
2015-01-01
The development of posttraumatic stress disorder (PTSD) is influenced by genetic factors. Although there have been some replicated candidates, the identification of risk variants for PTSD has lagged behind genetic research of other psychiatric disorders such as schizophrenia, autism, and bipolar disorder. Psychiatric genetics has moved beyond examination of specific candidate genes in favor of the genome-wide association study (GWAS) strategy of very large numbers of samples, which allows for the discovery of previously unsuspected genes and molecular pathways. The successes of genetic studies of schizophrenia and bipolar disorder have been aided by the formation of a large-scale GWAS consortium: the Psychiatric Genomics Consortium (PGC). In contrast, only a handful of GWAS of PTSD have appeared in the literature to date. Here we describe the formation of a group dedicated to large-scale study of PTSD genetics: the PGC-PTSD. The PGC-PTSD faces challenges related to the contingency on trauma exposure and the large degree of ancestral genetic diversity within and across participating studies. Using the PGC analysis pipeline supplemented by analyses tailored to address these challenges, we anticipate that our first large-scale GWAS of PTSD will comprise over 10 000 cases and 30 000 trauma-exposed controls. Following in the footsteps of our PGC forerunners, this collaboration—of a scope that is unprecedented in the field of traumatic stress—will lead the search for replicable genetic associations and new insights into the biological underpinnings of PTSD. PMID:25904361
Two Scales for the Measurement of Mexican-American Identity.
ERIC Educational Resources Information Center
Teske, Raymond, Jr.; Nelson, Bardin H.
This paper discusses the development of scales to measure Mexican Americans' identification with their population. The scales measure (1) identification with the Mexican American population using attitudinal items (Identity Scale) and (2) interaction behavior with the Mexican American population (Interaction Scale). The sample consisted of all…
Jin, Feng Jie; Takahashi, Tadashi; Machida, Masayuki; Koyama, Yasuji
2009-09-01
We previously developed two methods (loop-out and replacement-type recombination) for generating large-scale chromosomal deletions that can be applied to more effective chromosomal engineering in Aspergillus oryzae. In this study, the replacement-type method is used to systematically delete large chromosomal DNA segments to identify essential and nonessential regions in chromosome 7 (2.93 Mb), which is the smallest A. oryzae chromosome and contains a large number of nonsyntenic blocks. We constructed 12 mutants harboring deletions that spanned 16- to 150-kb segments of chromosome 7 and scored phenotypic changes in the resulting mutants. Among the deletion mutants, strains designated Delta5 and Delta7 displayed clear phenotypic changes involving growth and conidiation. In particular, the Delta5 mutant exhibited vigorous growth and conidiation, potentially beneficial characteristics for certain industrial applications. Further deletion analysis allowed identification of the AO090011000215 gene as the gene responsible for the Delta5 mutant phenotype. The AO090011000215 gene was predicted to encode a helix-loop-helix binding protein belonging to the bHLH family of transcription factors. These results illustrate the potential of the approach for identifying novel functional genes.
VHSIC Electronics and the Cost of Air Force Avionics in the 1990s
1990-11-01
circuit. LRM Line replaceable module. LRU Line replaceable unit. LSI Large-scale integration. LSTTL Low-power Schottky Transistor-to-Transistor Logic...displays, communications/navigation/identification, electronic combat equipment, dispensers, and computers. These CERs, which statistically relate the...some of the reliability numbers, and adding the F-15 and F-16 to obtain the data sample shown in Table 6. Both suite costs and reliability statistics
Demir, E; Babur, O; Dogrusoz, U; Gursoy, A; Nisanci, G; Cetin-Atalay, R; Ozturk, M
2002-07-01
Availability of the sequences of entire genomes shifts scientific interest towards large-scale identification of genome function, as in genome studies. In the near future, data about cellular processes at the molecular level will accumulate at an accelerating rate as a result of proteomics studies. In this regard, it is essential to develop tools for storing, integrating, accessing, and analyzing these data effectively. We define an ontology for a comprehensive representation of cellular events. The ontology presented here enables integration of fragmented or incomplete pathway information and supports manipulation and incorporation of the stored data, as well as multiple levels of abstraction. Based on this ontology, we present the architecture of an integrated environment named Patika (Pathway Analysis Tool for Integration and Knowledge Acquisition). Patika is composed of a server-side, scalable, object-oriented database and client-side editors that provide an integrated, multi-user environment for visualizing and manipulating networks of cellular events. This tool features automated pathway layout, functional computation support, advanced querying and a user-friendly graphical interface. We expect that Patika will be a valuable tool for rapid knowledge acquisition, interpretation of large-scale microarray-generated data, disease gene identification, and drug development. A prototype of Patika is available upon request from the authors.
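As a rough illustration of what an ontology of cellular events enables, consider representing entity states as nodes and state transitions as edges, then querying downstream consequences. This is a hypothetical toy in the spirit of the description above, far simpler than Patika's actual model (which adds compartments, abstraction levels and more); all names are invented:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class State:
    entity: str          # e.g. a protein
    modifier: str = ""   # e.g. "phos" for phosphorylated

@dataclass
class Transition:
    inputs: tuple
    outputs: tuple
    effectors: tuple = ()  # entities that enable the transition

# A two-step fragment: a kinase phosphorylates P; phospho-P enters the nucleus.
t1 = Transition(inputs=(State("P"),), outputs=(State("P", "phos"),),
                effectors=(State("Kinase"),))
t2 = Transition(inputs=(State("P", "phos"),),
                outputs=(State("P", "phos+nuclear"),))

def downstream(start, transitions):
    """All states reachable from `start` by following transitions."""
    seen, frontier = set(), {start}
    while frontier:
        s = frontier.pop()
        seen.add(s)
        for t in transitions:
            if s in t.inputs:
                frontier.update(o for o in t.outputs if o not in seen)
    return seen

print(downstream(State("P"), [t1, t2]))
```

Even this toy supports the kind of query (what does a perturbation of P affect?) that motivates storing pathways as structured, integrable graphs rather than as free text.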
The opportunities and challenges of large-scale molecular approaches to songbird neurobiology
Mello, C.V.; Clayton, D.F.
2014-01-01
High-throughput methods for analyzing genome structure and function are having a large impact in songbird neurobiology. Methods include genome sequencing and annotation, comparative genomics, DNA microarrays and transcriptomics, and the development of a brain atlas of gene expression. Key emerging findings include the identification of complex transcriptional programs active during singing, the robust brain expression of non-coding RNAs, evidence of profound variations in gene expression across brain regions, and the identification of molecular specializations within song production and learning circuits. Current challenges include the statistical analysis of large datasets, effective genome curation, the efficient localization of gene expression changes to specific neuronal circuits and cells, and the dissection of behavioral and environmental factors that influence brain gene expression. The field requires efficient methods for comparisons with organisms like the chicken, which offer important anatomical, functional and behavioral contrasts. As sequencing costs plummet, opportunities emerge for comparative approaches that may help reveal evolutionary transitions contributing to vocal learning, social behavior and other properties that make songbirds such compelling research subjects. PMID:25280907
Continental-scale patterns of canopy tree composition and function across Amazonia.
ter Steege, Hans; Pitman, Nigel C A; Phillips, Oliver L; Chave, Jerome; Sabatier, Daniel; Duque, Alvaro; Molino, Jean-François; Prévost, Marie-Françoise; Spichiger, Rodolphe; Castellanos, Hernán; von Hildebrand, Patricio; Vásquez, Rodolfo
2006-09-28
The world's greatest terrestrial stores of biodiversity and carbon are found in the forests of northern South America, where large-scale biogeographic patterns and processes have recently begun to be described. Seven of the nine countries with territory in the Amazon basin and the Guiana shield have carried out large-scale forest inventories, but such massive data sets have been little exploited by tropical plant ecologists. Although forest inventories often lack the species-level identifications favoured by tropical plant ecologists, their consistency of measurement and vast spatial coverage make them ideally suited for numerical analyses at large scales, and a valuable resource to describe the still poorly understood spatial variation of biomass, diversity, community composition and forest functioning across the South American tropics. Here, using the seven forest inventories complemented with trait and inventory data collected elsewhere, we identify two dominant gradients in tree composition and function across the Amazon, one paralleling a major gradient in soil fertility and the other paralleling a gradient in dry season length. The data set also indicates that the dominance of Fabaceae in the Guiana shield is not necessarily the result of root adaptations to poor soils (nodulation or ectomycorrhizal associations) but perhaps also the result of their remarkably high seed mass there as a potential adaptation to low rates of disturbance.
Formulating a subgrid-scale breakup model for microbubble generation from interfacial collisions
NASA Astrophysics Data System (ADS)
Chan, Wai Hong Ronald; Mirjalili, Shahab; Urzay, Javier; Mani, Ali; Moin, Parviz
2017-11-01
Multiphase flows often involve impact events that engender important effects like the generation of a myriad of tiny bubbles that are subsequently transported in large liquid bodies. These impact events are created by large-scale phenomena like breaking waves on ocean surfaces, and often involve the relative approach of liquid surfaces. This relative motion generates continuously shrinking length scales as the entrapped gas layer thins and eventually breaks up into microbubbles. The treatment of this disparity in length scales is computationally challenging. In this presentation, a framework is presented for a subgrid-scale (SGS) model aimed at capturing the process of microbubble generation. This work sets up the components in an overarching volume-of-fluid (VoF) toolset and investigates the analytical foundations of an SGS model for describing the breakup of a thin air film trapped between two approaching water bodies in a physical regime corresponding to Mesler entrainment. Constituents of the SGS model, such as the identification of impact events, the accurate computation of the local characteristic curvature in a VoF-based architecture, and the treatment of the air-layer breakup, are discussed and illustrated in simplified scenarios. Supported by Office of Naval Research (ONR)/A*STAR (Singapore).
A Scalable Approach for Protein False Discovery Rate Estimation in Large Proteomic Data Sets.
Savitski, Mikhail M; Wilhelm, Mathias; Hahne, Hannes; Kuster, Bernhard; Bantscheff, Marcus
2015-09-01
Calculating the number of confidently identified proteins and estimating the false discovery rate (FDR) is a challenge when analyzing very large proteomic data sets such as entire human proteomes. Biological and technical heterogeneity in proteomic experiments further adds to the challenge, and there are strong differences of opinion regarding the conceptual validity of a protein FDR and no consensus regarding the methodology for protein FDR determination. There are also limitations inherent to the widely used classic target-decoy strategy that become particularly apparent when analyzing very large data sets and that lead to a strong over-representation of decoy identifications. In this study, we investigated the merits of the classic, as well as a novel, target-decoy-based protein FDR estimation approach, taking advantage of a heterogeneous data collection comprising ∼19,000 LC-MS/MS runs deposited in ProteomicsDB (https://www.proteomicsdb.org). The "picked" protein FDR approach treats target and decoy sequences of the same protein as a pair rather than as individual entities and chooses either the target or the decoy sequence depending on which receives the highest score. We investigated the performance of this approach in combination with q-value-based peptide scoring to normalize sample-, instrument-, and search-engine-specific differences. The "picked" target-decoy strategy performed best when protein scoring was based on the best peptide q-value for each protein, yielding a stable number of true-positive protein identifications over a wide range of q-value thresholds. We show that this simple and unbiased strategy eliminates a conceptual issue in the commonly used "classic" protein FDR approach that causes overprediction of false-positive protein identifications in large data sets.
The approach scales from small to very large data sets without losing performance, consistently increases the number of true-positive protein identifications and is readily implemented in proteomics analysis software. © 2015 by The American Society for Biochemistry and Molecular Biology, Inc.
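The "picked" strategy described above is simple to state in code: pair each target with its decoy counterpart, keep only the higher-scoring member of each pair, and count decoys among the survivors. The sketch below assumes a "DECOY_" naming convention and best-peptide protein scores, both illustrative choices rather than details taken from the paper:

```python
def picked_protein_fdr(scores):
    """'Picked' target-decoy protein FDR: for each target/decoy pair keep only
    the higher-scoring member, then count decoys among the survivors.
    scores: dict protein -> best peptide score; decoys named 'DECOY_<protein>'."""
    survivors = {}
    for prot, s in scores.items():
        base = prot[len("DECOY_"):] if prot.startswith("DECOY_") else prot
        if base not in survivors or s > survivors[base][1]:
            survivors[base] = (prot, s)   # keep the winner of each pair
    picked = sorted(survivors.values(), key=lambda x: -x[1])
    results, decoys, targets = [], 0, 0
    for prot, s in picked:
        if prot.startswith("DECOY_"):
            decoys += 1
        else:
            targets += 1
        results.append((prot, s, decoys / max(targets, 1)))  # running FDR
    return results

scores = {"A": 0.9, "DECOY_A": 0.2, "B": 0.5,
          "DECOY_B": 0.7, "C": 0.8, "DECOY_C": 0.1}
for prot, s, fdr in picked_protein_fdr(scores):
    print(prot, s, round(fdr, 2))
```

Because every pair contributes at most one entry, decoys can no longer accumulate alongside their higher-scoring targets, which is the over-representation the classic approach suffers from in large data sets.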
[Adverse Effect Predictions Based on Computational Toxicology Techniques and Large-scale Databases].
Uesawa, Yoshihiro
2018-01-01
Understanding the features of chemical structures related to the adverse effects of drugs is useful for identifying potential adverse effects of new drugs. This can be based on the limited information available from post-marketing surveillance, assessment of the potential toxicities of metabolites and illegal drugs with unclear characteristics, screening of lead compounds at the drug discovery stage, and identification of leads for the discovery of new pharmacological mechanisms. This paper describes techniques used in computational toxicology to investigate the content of large-scale spontaneous report databases of adverse effects, illustrated with examples. Furthermore, volcano plotting, a new visualization method for clarifying the relationships between drugs and adverse effects via comprehensive analyses, is introduced. These analyses may produce a great amount of data that can be applied to drug repositioning.
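A volcano plot for adverse-effect data typically places an effect-size measure on one axis and statistical significance on the other. The sketch below uses the reporting odds ratio (ROR) with Fisher's exact test, a common disproportionality pairing; the counts and the choice of metrics are illustrative assumptions, not taken from the paper:

```python
import math
from scipy.stats import fisher_exact

def ror_and_p(a, b, c, d):
    """Reporting odds ratio and Fisher p for one drug-event 2x2 table:
    a = reports with drug & event,   b = drug & other events,
    c = other drugs & event,         d = other drugs & other events."""
    ror = (a * d) / (b * c)
    _, p = fisher_exact([[a, b], [c, d]])
    return ror, p

# Hypothetical counts from a spontaneous-report database.
ror, p = ror_and_p(30, 970, 200, 98800)
# One point of the volcano plot: x = log2 effect size, y = significance.
print(round(math.log2(ror), 2), p < 0.001)
```

Repeating this over every drug-event pair yields the cloud of points in which strongly disproportionate, highly significant signals stand out in the plot's upper corners.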
The Middle Miocene of the Fore-Carpathian Basin (Poland, Ukraine and Moldova)
NASA Astrophysics Data System (ADS)
Wysocka, Anna; Radwański, Andrzej; Górka, Marcin; Bąbel, Maciej; Radwańska, Urszula; Złotnik, Michał
2016-09-01
Studies of Miocene sediments in the Fore-Carpathian Basin, conducted by geologists from the University of Warsaw have provided new insights on the distribution of the facies infilling the basin, particularly in the forebulge and back-bulge zones. The origin of the large-scale sand bodies, evaporitic deposits and large-scale organic buildups is discussed, described and verified. These deposits originated in variable, shallow marine settings, differing in their water chemistry and the dynamics of sedimentary processes, and are unique with regard to the fossil assemblages they yield. Many years of taxonomic, biostratigraphic, palaeoecologic and ecotaphonomic investigations have resulted in the identification of the fossil assemblages of these sediments, their age, sedimentary settings and post-mortem conditions. Detailed studies were focused on corals, polychaetes, most classes of molluscs, crustaceans, echinoderms, and fishes.
Large-eddy simulation of turbulent flow with a surface-mounted two-dimensional obstacle
NASA Technical Reports Server (NTRS)
Yang, Kyung-Soo; Ferziger, Joel H.
1993-01-01
In this paper, we perform a large eddy simulation (LES) of turbulent flow in a channel containing a two-dimensional obstacle on one wall using a dynamic subgrid-scale model (DSGSM) at Re = 3210, based on bulk velocity above the obstacle and obstacle height; the wall layers are fully resolved. The low Re enables us to perform a DNS (Case 1) against which to validate the LES results. The LES with the DSGSM is designated Case 2. In addition, an LES with the conventional fixed model constant (Case 3) is conducted to allow identification of improvements due to the DSGSM. We also include LES at Re = 82,000 (Case 4) using conventional Smagorinsky subgrid-scale model and a wall-layer model. The results will be compared with the experiment of Dimaczek et al.
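For context, the dynamic subgrid-scale model differs from the conventional fixed-constant Smagorinsky model (Case 3) in computing the model coefficient from the resolved field. The standard formulation, stated here from the general LES literature rather than taken from this abstract, reads:

```latex
% Smagorinsky eddy viscosity with grid-filter width \Delta:
\nu_t = C\,\Delta^2 |\bar{S}|, \qquad
|\bar{S}| = \sqrt{2\,\bar{S}_{ij}\bar{S}_{ij}}
% Resolved (Leonard) stress from the Germano identity,
% with \hat{\;\cdot\;} denoting the test filter:
L_{ij} = \widehat{\bar{u}_i \bar{u}_j} - \hat{\bar{u}}_i \hat{\bar{u}}_j
% Lilly's least-squares solution for the dynamic coefficient:
M_{ij} = 2\Delta^2\,\widehat{|\bar{S}|\,\bar{S}_{ij}}
       - 2\hat{\Delta}^2 |\hat{\bar{S}}|\,\hat{\bar{S}}_{ij},
\qquad
C = \frac{\langle L_{ij} M_{ij} \rangle}{\langle M_{ij} M_{ij} \rangle}
```

Here the angle brackets denote the averaging used to stabilize the coefficient; the fixed-constant model of Case 3 simply prescribes C instead of computing it from the Germano identity.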
Exploration for fossil and nuclear fuels from orbital altitudes
NASA Technical Reports Server (NTRS)
Short, N. M.
1975-01-01
A review of satellite-based photographic (optical and infrared) and microwave exploration and large-area mapping of the earth's surface in the ERTS program. Synoptic cloud-free coverage of large areas has been achieved with planimetric vertical views of the earth's surface useful in compiling close-to-orthographic mosaics. Radar penetration of cloud cover and infrared penetration of forest cover have been successful to some extent. Geological applications include map editing (with corrections in scale and computer processing of images), landforms analysis, structural geology studies, lithological identification, and exploration for minerals and fuels. Limitations of the method are noted.
Olfactory Performance in a Large Sample of Early-Blind and Late-Blind Individuals.
Sorokowska, Agnieszka
2016-10-01
Previous examinations of olfactory sensitivity in blind people have produced contradictory findings. Thus, whether visual impairment is associated with increased olfactory abilities is unclear. In the present investigation, I aimed to resolve the existing questions via a relatively large-scale study comprising early-blind (N = 43), late-blind (N = 41) and sighted (N = 84) individuals matched in terms of gender and age. To compare the results with those of previous studies, I combined data from a free odor identification test, extensive psychophysical testing (Sniffin' Sticks test), and self-assessed olfactory performance. The analyses revealed no significant effects of sight on olfactory threshold, odor discrimination, cued identification, or free identification scores; neither was the performance of the early-blind and late-blind participants significantly different. Additionally, the self-assessed olfactory abilities of the blind people were no different from those of the sighted people. These results suggest that sensory compensation in the visually impaired is not pronounced with regard to olfactory abilities as measured by standardized smell tests. © The Author 2016. Published by Oxford University Press.
Application of RNAMlet to surface defect identification of steels
NASA Astrophysics Data System (ADS)
Xu, Ke; Xu, Yang; Zhou, Peng; Wang, Lei
2018-06-01
As the three main production lines of steels, continuous casting slabs, hot-rolled steel plates and cold-rolled steel strips have different surface appearances and are produced at different line speeds. Therefore, the algorithms for surface defect identification of the three steel products have different requirements for real-time performance and interference rejection, and existing algorithms cannot be adaptively applied to all three. A new method of adaptive multi-scale geometric analysis (MGA) named RNAMlet is proposed. The idea of RNAMlet comes from the non-symmetry anti-packing pattern representation model (NAM): the image is decomposed asymmetrically into a set of rectangular blocks according to gray-value changes of image pixels, and a two-dimensional Haar wavelet transform is then applied to all blocks. If the image background is complex, the number of blocks is large and more details of the image are utilized; if the image background is simple, the number of blocks is small and less computation time is needed. RNAMlet was tested with image samples of the three steel products and compared with three classical MGA methods, including Contourlet, Shearlet and Tetrolet. For image samples with complicated backgrounds, such as continuous casting slabs and hot-rolled steel plates, the defect identification rate obtained by RNAMlet was 1% higher than with the other three methods. For image samples with simple backgrounds, such as cold-rolled steel strips, the computation time of RNAMlet was one-tenth that of the other three MGA methods, while its defect identification rates remained higher than those of the other three methods.
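The adaptive decomposition idea, fewer rectangles over simple backgrounds and more over complex ones, can be sketched with a variance-driven recursive split. This toy is only inspired by the description above; the actual NAM-based packing and per-block Haar transform details of RNAMlet are not reproduced, and the threshold is an invented parameter:

```python
import numpy as np

def split_blocks(img, thresh=10.0, min_size=2):
    """Recursively split an image into rectangles until each block's gray-level
    spread falls below `thresh` (a crude stand-in for NAM-style asymmetric
    packing). A 2-D Haar transform would then be applied per block."""
    blocks = []
    def rec(r0, r1, c0, c1):
        blk = img[r0:r1, c0:c1]
        if (r1 - r0 <= min_size and c1 - c0 <= min_size) or blk.std() <= thresh:
            blocks.append((r0, r1, c0, c1))
            return
        if r1 - r0 >= c1 - c0:          # split the longer side
            mid = (r0 + r1) // 2
            rec(r0, mid, c0, c1); rec(mid, r1, c0, c1)
        else:
            mid = (c0 + c1) // 2
            rec(r0, r1, c0, mid); rec(r0, r1, mid, c1)
    rec(0, img.shape[0], 0, img.shape[1])
    return blocks

flat = np.full((8, 8), 128.0)                       # simple background
textured = np.zeros((8, 8)); textured[::2] = 255.0  # strong gray-level changes
print(len(split_blocks(flat)), len(split_blocks(textured)))
```

On the flat image a single block suffices, while the textured image is cut down to many small blocks; this is the mechanism behind spending little computation on simple cold-rolled strip images and more on complicated slab and plate images.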
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gritsenko, Marina A; Xu, Zhe; Liu, Tao; Smith, Richard D
2016-01-01
Comprehensive, quantitative information on abundances of proteins and their posttranslational modifications (PTMs) can potentially provide novel biological insights into disease pathogenesis and therapeutic intervention. Herein, we introduce a quantitative strategy utilizing isobaric stable isotope-labeling techniques combined with two-dimensional liquid chromatography-tandem mass spectrometry (2D-LC-MS/MS) for large-scale, deep quantitative proteome profiling of biological samples or clinical specimens such as tumor tissues. The workflow includes isobaric labeling of tryptic peptides for multiplexed and accurate quantitative analysis, basic reversed-phase LC fractionation and concatenation for reduced sample complexity, and nano-LC coupled to high resolution and high mass accuracy MS analysis for high confidence identification and quantification of proteins. This proteomic analysis strategy has been successfully applied for in-depth quantitative proteomic analysis of tumor samples and can also be used for integrated proteome and PTM characterization, as well as comprehensive quantitative proteomic analysis across samples from large clinical cohorts.
NASA Technical Reports Server (NTRS)
Thomas, Randy; Stueber, Thomas J.
2013-01-01
The System Identification (SysID) Rack is a real-time hardware-in-the-loop data acquisition (DAQ) and control instrument rack that was designed and built to support inlet testing in the NASA Glenn Research Center 10- by 10-Foot Supersonic Wind Tunnel. This instrument rack is used to support experiments on the Combined-Cycle Engine Large-Scale Inlet for Mode Transition Experiment (CCE-LIMX). The CCE-LIMX is a testbed for an integrated dual flow-path inlet configuration with the two flow paths in an over-and-under arrangement such that the high-speed flow path is located below the low-speed flow path. The CCE-LIMX includes multiple actuators that are designed to redirect airflow from one flow path to the other; this action is referred to as "inlet mode transition." Multiple phases of experiments have been planned to support research that investigates inlet mode transition: inlet characterization (Phase-1) and system identification (Phase-2). The SysID Rack hardware design met the following requirements to support Phase-1 and Phase-2 experiments: safely and effectively move multiple actuators individually or synchronously; sample and save effector control and position sensor feedback signals; automate control of actuator positioning based on a mode transition schedule; sample and save pressure sensor signals; and perform DAQ and control processes operating at 2.5 kHz. This document describes the hardware components used to build the SysID Rack, including their function, specifications, and system interface. Also provided are a SysID Rack effector signal list (signal flow); the system identification experiment setup; illustrations of a typical SysID Rack experiment; and a performance overview for the Phase-1 and Phase-2 experiments. The SysID Rack described in this document proved a useful tool for meeting the project objectives.
Lopes, Anne; Sacquin-Mora, Sophie; Dimitrova, Viktoriya; Laine, Elodie; Ponty, Yann; Carbone, Alessandra
2013-01-01
Large-scale analyses of protein-protein interactions based on coarse-grain molecular docking simulations and binding-site predictions from evolutionary sequence analysis are possible and realizable on hundreds of proteins with varied structures and interfaces. We demonstrated this on the 168 proteins of the Mintseris Benchmark 2.0. On the one hand, we evaluated the quality of the interaction signal and the contribution of docking information compared to evolutionary information, showing that combining the two improves partner identification. On the other hand, since protein interactions usually occur in crowded environments with several competing partners, we performed a thorough analysis of the interactions of proteins with true partners as well as with non-partners, to evaluate whether proteins in the environment, competing with the true partner, affect its identification. We found three populations of proteins: strongly competing, never competing, and interacting with different levels of strength. Populations and levels of strength are numerically characterized and provide a signature for the behavior of a protein in the crowded environment. We showed that partner identification, to some extent, does not depend on the competing partners present in the environment; that certain biochemical classes of proteins are intrinsically easier to analyze than others; and that small proteins are not more promiscuous than large ones. Our approach brings to light that knowledge of the binding site can be used to reduce the high computational cost of docking simulations with no loss in the quality of the results, demonstrating that coarse-grain docking can be applied to datasets of thousands of proteins. A comparison with all available large-scale analyses aimed at partner prediction is also presented.
We release the complete decoy set produced by the coarse-grain docking simulations of both true and false interacting partners, together with the evolutionary sequence analysis leading to the binding-site predictions. Download site: http://www.lgm.upmc.fr/CCDMintseris/ PMID:24339765
Adaptive Identification and Characterization of Polar Ionization Patches
NASA Technical Reports Server (NTRS)
Coley, W. R.; Heelis, R. A.
1995-01-01
Dynamics Explorer 2 (DE 2) spacecraft data are used to detect and characterize polar cap 'ionization patches', loosely defined as large-scale (greater than 100 km) regions where the F region plasma density is significantly enhanced (approximately greater than 100%) above the background level. These patches are generally believed to develop in or equatorward of the dayside cusp region and then drift in an antisunward direction over the polar cap. We have developed a flexible algorithm for the identification and characterization of these structures, as a function of scale size and density enhancement, using data from the retarding potential analyzer, the ion drift meter, and the Langmuir probe on board the DE 2 satellite. This algorithm was used to study the structure and evolution of ionization patches as they cross the polar cap. The results indicate that in the altitude region from 240 to 950 km, ion density enhancements greater than a factor of 3 above the background level are relatively rare. Further, the ionization patches show a preferred horizontal scale size of 300-400 km. There is a clear seasonal and universal time dependence in the occurrence frequency of patches, with a northern hemisphere maximum centered on the winter solstice and the 1200-2000 UT interval.
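A simplified version of such a patch-identification pass over an along-track density profile might look as follows; the thresholds, sampling assumptions and function name are illustrative, not the algorithm actually applied to the DE 2 data:

```python
def find_patches(density, sample_km, background, min_factor=2.0, min_size_km=100.0):
    """Flag contiguous along-track regions where plasma density exceeds the
    background by at least `min_factor` (a 100% enhancement is a factor of 2)
    and the region spans at least `min_size_km`.

    `density` is a list of samples spaced `sample_km` apart.
    Returns (start_index, end_index_exclusive) pairs.
    """
    patches = []
    start = None
    for i, n in enumerate(density):
        if n >= min_factor * background:
            if start is None:
                start = i  # patch candidate begins
        elif start is not None:
            if sample_km * (i - start) >= min_size_km:
                patches.append((start, i))
            start = None
    # close a patch that runs to the end of the track
    if start is not None and sample_km * (len(density) - start) >= min_size_km:
        patches.append((start, len(density)))
    return patches
```

Sweeping `min_factor` and `min_size_km` is what lets occurrence statistics be built as a function of enhancement and scale size.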
Tsugawa, Hiroshi; Arita, Masanori; Kanazawa, Mitsuhiro; Ogiwara, Atsushi; Bamba, Takeshi; Fukusaki, Eiichiro
2013-05-21
We developed a new software program, MRMPROBS, for widely targeted metabolomics using the large-scale multiple reaction monitoring (MRM) mode. This strategy has become increasingly popular for the simultaneous analysis of up to several hundred metabolites with high sensitivity, selectivity, and quantitative capability. However, the traditional practice of assessing measured metabolomics data without probabilistic criteria is not only time-consuming but often subjective and ad hoc. Our program overcomes these problems by detecting and identifying metabolites automatically, separating isomeric metabolites, and removing background noise using a probabilistic score defined as the odds ratio from an optimized multivariate logistic regression model. The software also provides a user-friendly graphical interface to curate and organize data matrices and to apply principal component analyses and statistical tests. As a demonstration, we conducted a widely targeted metabolome analysis (152 metabolites) of propagating Saccharomyces cerevisiae measured at 15 time points by gas and liquid chromatography coupled to triple quadrupole mass spectrometry. MRMPROBS is a useful and practical tool for the assessment of large-scale MRM data from any instrument or experimental condition.
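The probabilistic score described above, an odds ratio derived from a logistic regression model, can be illustrated with a minimal sketch; the feature vector, coefficients and function names here are hypothetical, not MRMPROBS internals:

```python
import math

def logistic_prob(features, coefficients, intercept):
    """Probability that a candidate peak group is a true metabolite,
    from a fitted multivariate logistic regression model."""
    z = intercept + sum(c * x for c, x in zip(coefficients, features))
    return 1.0 / (1.0 + math.exp(-z))

def odds_score(p):
    """Odds corresponding to probability p (the odds-ratio-style score
    used to rank peak groups and reject background noise)."""
    return p / (1.0 - p)
```

A peak group would then be accepted when its odds score clears a chosen threshold, replacing the subjective visual check.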
49 CFR 178.905 - Large Packaging identification codes.
Code of Federal Regulations, 2010 CFR
2010-10-01
Large Packaging code designations consist of: two numerals specified in paragraph (a) of this section; followed by...
Meng, Xianjing; Yin, Yilong; Yang, Gongping; Xi, Xiaoming
2013-07-18
Retinal identification based on the vascular patterns of the retina provides the most secure and accurate means of authentication among biometrics and has primarily been used in combination with access control systems at high-security facilities. Recently, there has been much interest in retina identification. As digital retina images always suffer from deformations, the Scale Invariant Feature Transform (SIFT), known for its distinctiveness and invariance to scale and rotation, has been introduced to retina-based identification. However, shortcomings such as difficult feature extraction and mismatching exist in SIFT-based identification. To solve these problems, a novel preprocessing method based on the Improved Circular Gabor Transform (ICGF) is proposed. After further processing by the iterated spatial anisotropic smoothing method, the number of uninformative SIFT keypoints is decreased dramatically. Tested on the VARIA database and eight simulated retina databases combining rotation and scaling, the developed method presents promising results and shows robustness to rotations and scale changes.
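The mismatching problem mentioned above concerns the descriptor-matching step; a common safeguard is Lowe's ratio test, which keeps a keypoint match only when its best candidate is clearly better than the runner-up. A minimal pure-Python sketch (illustrative, not the paper's code):

```python
def euclidean(d1, d2):
    """Euclidean distance between two descriptor vectors."""
    return sum((a - b) ** 2 for a, b in zip(d1, d2)) ** 0.5

def ratio_test_matches(desc_query, desc_ref, ratio=0.8):
    """Lowe's ratio test: accept a query keypoint only if its nearest
    reference descriptor is clearly closer than the second nearest.
    Returns (query_index, reference_index) pairs."""
    matches = []
    for qi, dq in enumerate(desc_query):
        dists = sorted((euclidean(dq, dr), ri) for ri, dr in enumerate(desc_ref))
        if len(dists) >= 2 and dists[0][0] < ratio * dists[1][0]:
            matches.append((qi, dists[0][1]))
    return matches
```

Reducing uninformative keypoints before this step, as the ICGF preprocessing does, shrinks both the search space and the pool of potential false matches.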
Assessing Bodily Preoccupations is sufficient: clinically effective screening for hypochondriasis.
Höfling, Volkmar; Weck, Florian
2013-12-01
Hypochondriasis is a persistent psychiatric disorder associated with increased utilisation of health care services. However, effective psychiatric consultation interventions and CBT treatments are available. In the present study, we provide evidence for clinically effective screening for hypochondriasis: the identification of patients with a high probability of suffering from the disorder by means of two brief standardised screening instruments, the Bodily Preoccupation (BP) scale with 3 items and the Whiteley-7 (WI-7) with 7 items. Both the BP scale and the WI-7 were examined in a sample of 228 participants (72 with hypochondriasis, 80 with anxiety disorders and 76 healthy controls) in a large psychotherapy outpatients' unit, applying the DSM-IV criteria. Cut-off values for the BP scale and the WI-7 were computed to identify patients with a high probability of suffering from hypochondriasis. Additionally, other self-report symptom severity scales were completed in order to examine discriminant and convergent validity. Data were collected from June 2010 to March 2013. The BP scale and the WI-7 discriminated significantly between patients with hypochondriasis and those with an anxiety disorder (d=2.42 and d=2.34, respectively). Cut-off values for these two screening scales could be provided, identifying patients with a high probability of suffering from hypochondriasis. To reduce costs, the BP scale or the WI-7 should be applied in medical or primary care settings to screen for patients with a high probability of hypochondriasis and to refer them for further assessment and effective treatment.
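The discrimination reported above is expressed as Cohen's d effect sizes (d=2.42 and d=2.34). For reference, a minimal sketch of the pooled-standard-deviation form of Cohen's d:

```python
import math

def cohens_d(group_a, group_b):
    """Cohen's d effect size between two independent groups,
    using the pooled sample standard deviation."""
    na, nb = len(group_a), len(group_b)
    ma = sum(group_a) / na
    mb = sum(group_b) / nb
    va = sum((x - ma) ** 2 for x in group_a) / (na - 1)  # sample variance
    vb = sum((x - mb) ** 2 for x in group_b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / pooled_sd
```

Values above 0.8 are conventionally considered large, so d > 2 indicates nearly non-overlapping score distributions between the two diagnostic groups.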
Identification of possible non-stationary effects in a new type of vortex furnace
NASA Astrophysics Data System (ADS)
Shadrin, Evgeniy Yu.; Anufriev, Igor S.; Papulov, Anatoly P.
2017-10-01
The article presents the results of an experimental study of pressure and velocity pulsations in the model of an improved vortex furnace with distributed air supply and vertically oriented nozzles of the secondary blast. Investigation of the aerodynamic characteristics of a swirling flow with different regime parameters was conducted in an isothermal laboratory model (at 1:25 scale) of the vortex furnace using a laser Doppler measuring system and a pressure pulsations analyzer. The obtained results have revealed a number of features of the flow structure, and the spectral analysis of pressure and velocity pulsations indicates the absence of large-scale unsteady vortical structures in the studied design.
NASA Technical Reports Server (NTRS)
Coberly, W. A.; Tubbs, J. D.; Odell, P. L.
1979-01-01
The overall success of large-scale crop inventories of agricultural regions using Landsat multispectral scanner data is highly dependent upon the labeling of training data by analyst/photointerpreters. The principal analyst tool in labeling training data is a false color infrared composite of Landsat bands 4, 5, and 7. In this paper, this color display is investigated and its influence upon classification errors is partially determined.
Large-scale sequencing efforts are uncovering the complexity of cancer genomes, which are composed of causal "driver" mutations that promote tumor progression along with many more pathologically neutral "passenger" events. The majority of mutations, both in known cancer drivers and uncharacterized genes, are generally of low occurrence, highlighting the need to functionally annotate the long tail of infrequent mutations present in heterogeneous cancers.
Large Scale Single Nucleotide Polymorphism Study of PD Susceptibility
2006-03-01
familial PD, the results of intensive investigations of polymorphisms in dozens of genes related to sporadic, late-onset, typical PD have not shown... association between classical, sporadic PD and 2386 SNPs in 23 genes implicated in the pathogenesis of PD; (2) construct haplotypes based on the SNP... derived from this study may be applied in other complex disorders for the identification of susceptibility genes, as well as in genome-wide SNP
Poland, Jesse A; Nelson, Rebecca J
2011-02-01
The agronomic importance of developing durably resistant cultivars has led to substantial research in the field of quantitative disease resistance (QDR) and, in particular, mapping quantitative trait loci (QTL) for disease resistance. The assessment of QDR is typically conducted by visual estimation of disease severity, which raises concern over the accuracy and precision of visual estimates. Although previous studies have examined the factors affecting the accuracy and precision of visual disease assessment in relation to the true value of disease severity, the impact of this variability on the identification of disease resistance QTL has not been assessed. In this study, the effects of rater variability and rating scales on mapping QTL for northern leaf blight resistance in maize were evaluated in a recombinant inbred line population grown under field conditions. The population of 191 lines was evaluated by 22 different raters using a direct percentage estimate, a 0-to-9 ordinal rating scale, or both. It was found that more experienced raters had higher precision and that using a direct percentage estimation of diseased leaf area produced higher precision than using an ordinal scale. QTL mapping was then conducted using the disease estimates from each rater using stepwise general linear model selection (GLM) and inclusive composite interval mapping (ICIM). For GLM, the same QTL were largely found across raters, though some QTL were only identified by a subset of raters. The magnitudes of estimated allele effects at identified QTL varied drastically, sometimes by as much as threefold. ICIM produced highly consistent results across raters and for the different rating scales in identifying the location of QTL. We conclude that, despite variability between raters, the identification of QTL was largely consistent among raters, particularly when using ICIM. 
However, care should be taken in estimating QTL allele effects, because this was highly variable and rater dependent.
DNA barcodes reveal microevolutionary signals in fire response trait in two legume genera
Bello, Abubakar; Daru, Barnabas H.; Stirton, Charles H.; Chimphango, Samson B. M.; van der Bank, Michelle; Maurin, Olivier; Muasya, A. Muthama
2015-01-01
Large-scale DNA barcoding provides a new technique for species identification and evaluation of relationships across various levels (populations and species) and may reveal fundamental processes in recently diverged species. Here, we analysed DNA sequence variation in the recently diverged legumes from the Psoraleeae (Fabaceae) occurring in the Cape Floristic Region (CFR) of southern Africa to test the utility of DNA barcodes in species identification and discrimination. We further explored the phylogenetic signal of a fire response trait (reseeding and resprouting) at the species and generic levels. We showed that Psoraleoid legumes of the CFR exhibit a barcoding gap, with the combined matK + rbcLa data set a better barcode than either single region. We found a high score (100%) of correct identification of individuals to their respective genera but a very low score (<50%) in identifying them to species. We found a considerable match (54%) between genetic species and morphologically delimited species. We also found that different lineages showed a weak but significant phylogenetic conservatism in their response to fire as reseeders or resprouters, with more clustering of resprouters than would be expected by chance. These novel microevolutionary patterns might be acting continuously over time to produce multi-scale regularities of biodiversity. This study provides the first insight into the DNA barcoding campaign of land plants in species identification and detection of phylogenetic signal in recently diverged lineages of the CFR. PMID:26507570
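The "barcoding gap" referred to above is the separation between within-species and between-species sequence distances. A minimal sketch of the criterion (illustrative only; real analyses work from full pairwise distance matrices):

```python
def barcoding_gap(intra_distances, inter_distances):
    """Width of the DNA barcoding gap: the smallest between-species
    distance minus the largest within-species distance. A positive value
    means a gap exists and the marker can discriminate the species."""
    return min(inter_distances) - max(intra_distances)
```

A combined marker such as matK + rbcLa can widen this gap relative to either region alone, which is the sense in which the combination is the better barcode.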
Crow, Megan; Paul, Anirban; Ballouz, Sara; Huang, Z Josh; Gillis, Jesse
2018-02-28
Single-cell RNA-sequencing (scRNA-seq) technology provides a new avenue to discover and characterize cell types; however, the experiment-specific technical biases and analytic variability inherent to current pipelines may undermine its replicability. Meta-analysis is further hampered by the use of ad hoc naming conventions. Here we demonstrate our replication framework, MetaNeighbor, that quantifies the degree to which cell types replicate across datasets, and enables rapid identification of clusters with high similarity. We first measure the replicability of neuronal identity, comparing results across eight technically and biologically diverse datasets to define best practices for more complex assessments. We then apply this to novel interneuron subtypes, finding that 24/45 subtypes have evidence of replication, which enables the identification of robust candidate marker genes. Across tasks we find that large sets of variably expressed genes can identify replicable cell types with high accuracy, suggesting a general route forward for large-scale evaluation of scRNA-seq data.
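MetaNeighbor scores cell-type replicability with AUROC values, which can be computed directly from the Mann-Whitney U statistic. A minimal sketch of that core quantity (not MetaNeighbor's actual code; its scores come from neighbor voting across datasets):

```python
def auroc(scores_pos, scores_neg):
    """AUROC via the Mann-Whitney U statistic: the probability that a
    cell of the target type scores above a cell of any other type
    (ties count half)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

An AUROC near 1 means cells of a putative type in one dataset are reliably recognized from another dataset's labels; near 0.5 means no evidence of replication.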
Large-scale identification of chemically induced mutations in Drosophila melanogaster
Haelterman, Nele A.; Jiang, Lichun; Li, Yumei; Bayat, Vafa; Sandoval, Hector; Ugur, Berrak; Tan, Kai Li; Zhang, Ke; Bei, Danqing; Xiong, Bo; Charng, Wu-Lin; Busby, Theodore; Jawaid, Adeel; David, Gabriela; Jaiswal, Manish; Venken, Koen J.T.; Yamamoto, Shinya
2014-01-01
Forward genetic screens using chemical mutagens have been successful in defining the function of thousands of genes in eukaryotic model organisms. The main drawback of this strategy is the time-consuming identification of the molecular lesions causative of the phenotypes of interest. With whole-genome sequencing (WGS), it is now possible to sequence hundreds of strains, but determining which mutations are causative among thousands of polymorphisms remains challenging. We have sequenced 394 mutant strains, generated in a chemical mutagenesis screen, for essential genes on the Drosophila X chromosome and describe strategies to reduce the number of candidate mutations from an average of ∼3500 to 35 single-nucleotide variants per chromosome. By combining WGS with a rough mapping method based on large duplications, we were able to map 274 (∼70%) mutations. We show that these mutations are causative, using small 80-kb duplications that rescue lethality. Hence, our findings demonstrate that combining rough mapping with WGS dramatically expands the toolkit necessary for assigning function to genes. PMID:25258387
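The candidate-reduction step above intersects WGS variant calls with the genomic interval implicated by duplication-based rough mapping. That bookkeeping can be sketched as follows (positions and names are hypothetical, not from the screen's pipeline):

```python
def filter_candidates(variant_positions, mapped_interval):
    """Keep only candidate variants whose chromosomal position falls
    inside the interval implicated by duplication mapping (e.g. the
    region covered by a duplication that rescues lethality)."""
    lo, hi = mapped_interval
    return [pos for pos in variant_positions if lo <= pos <= hi]
```

Chaining such interval filters with quality and strain-background filters is what reduces ~3500 variants per chromosome to a few dozen testable candidates.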
Cross-Identification of Astronomical Catalogs on Multiple GPUs
NASA Astrophysics Data System (ADS)
Lee, M. A.; Budavári, T.
2013-10-01
One of the most fundamental problems in observational astronomy is the cross-identification of sources. Observations are made in different wavelengths, at different times, and from different locations and instruments, resulting in a large set of independent observations. The scientific outcome is often limited by our ability to quickly perform meaningful associations between detections. The matching, however, is difficult scientifically, statistically, as well as computationally. The former two require detailed physical modeling and advanced probabilistic concepts; the latter is due to the large volumes of data and the problem's combinatorial nature. In order to tackle the computational challenge and to prepare for future surveys, whose measurements will grow exponentially in size past the scale of feasible CPU-based solutions, we developed a new implementation which addresses the issue by performing the associations on multiple Graphics Processing Units (GPUs). Our implementation utilizes up to 6 GPUs in combination with the Thrust library to achieve an over 40x speedup versus the previous best implementation running on a multi-CPU SQL Server.
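The pairing step that the GPUs parallelize can be illustrated on a CPU with a uniform-grid neighbor search, so each source only checks nearby cells instead of the whole second catalog. This planar sketch ignores spherical geometry and the probabilistic modeling discussed above, and all names are ours:

```python
import math
from collections import defaultdict

def cross_match(cat_a, cat_b, radius):
    """Match sources between two catalogs of (x, y) positions lying
    within `radius` of each other, using a uniform grid with cell size
    equal to the radius. Returns (index_a, index_b) pairs."""
    cell = radius
    grid = defaultdict(list)
    for j, (x, y) in enumerate(cat_b):
        grid[(int(x // cell), int(y // cell))].append(j)
    matches = []
    for i, (x, y) in enumerate(cat_a):
        cx, cy = int(x // cell), int(y // cell)
        # a match within `radius` must sit in this cell or one of its 8 neighbors
        for gx in (cx - 1, cx, cx + 1):
            for gy in (cy - 1, cy, cy + 1):
                for j in grid[(gx, gy)]:
                    bx, by = cat_b[j]
                    if math.hypot(x - bx, y - by) <= radius:
                        matches.append((i, j))
    return matches
```

The GPU version maps the per-source loop onto thousands of threads; the grid (or a sky-pixelization scheme) is what keeps the combinatorics tractable at survey scale.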
Large-scale time-lapse microscopy of Oct4 expression in human embryonic stem cell colonies.
Bhadriraju, Kiran; Halter, Michael; Amelot, Julien; Bajcsy, Peter; Chalfoun, Joe; Vandecreme, Antoine; Mallon, Barbara S; Park, Kye-Yoon; Sista, Subhash; Elliott, John T; Plant, Anne L
2016-07-01
Identification and quantification of the characteristics of stem cell preparations is critical for understanding stem cell biology and for the development and manufacturing of stem cell based therapies. We have developed image analysis and visualization software that allows effective use of time-lapse microscopy to provide spatial and dynamic information from large numbers of human embryonic stem cell colonies. To achieve statistically relevant sampling, we examined >680 colonies from 3 different preparations of cells over 5 days each, generating a total experimental dataset of 0.9 terabyte (TB). The 0.5 Giga-pixel images at each time point were represented by multi-resolution pyramids and visualized using the Deep Zoom JavaScript library, extended to support viewing Giga-pixel images over time and extracting data on individual colonies. We present a methodology that enables quantification of variations in nominally identical preparations and between colonies, correlation of colony characteristics with Oct4 expression, and identification of rare events.
A large-scale cryoelectronic system for biological sample banking
NASA Astrophysics Data System (ADS)
Shirley, Stephen G.; Durst, Christopher H. P.; Fuchs, Christian C.; Zimmermann, Heiko; Ihmig, Frank R.
2009-11-01
We describe a polymorphic electronic infrastructure for managing biological samples stored over liquid nitrogen. As part of this system we have developed new cryocontainers and carrier plates with attached Flash memory chips, so that a redundant and portable set of data travels with each sample. Our experimental investigations show that basic Flash operation and endurance are adequate for the application down to liquid nitrogen temperatures. This identification technology can provide the best sample identification, documentation and tracking, bringing added value to each sample. The first application of the system is in a worldwide collaborative research effort towards the production of an AIDS vaccine. The functionality and versatility of the system can lead to an essential optimization of sample and data exchange for global clinical studies.
Identification and Analysis of Antiviral Compounds Against Poliovirus.
Leyssen, Pieter; Franco, David; Tijsma, Aloys; Lacroix, Céline; De Palma, Armando; Neyts, Johan
2016-01-01
The Global Polio Eradication Initiative, launched in 1988, had as its goal the eradication of polio worldwide by the year 2000 through large-scale vaccination campaigns with the live attenuated oral PV vaccine (OPV) (Griffiths et al., Biologicals 34:73-74, 2006). Despite substantial progress, polio remains endemic in several countries and new imported cases are reported on a regular basis ( http://www.polioeradication.org/casecount.asp ). It was recognized by the poliovirus research community that developing antivirals against poliovirus would be invaluable in the post-OPV era. Here, we describe three methods essential for the identification of selective inhibitors of poliovirus replication and for determining their mode of action by time-of-drug-addition studies as well as by the isolation of compound-resistant poliovirus variants.
Iino, Ryota; Matsumoto, Yoshimi; Nishino, Kunihiko; Yamaguchi, Akihito; Noji, Hiroyuki
2013-01-01
Single-cell analysis is a powerful method to assess the heterogeneity among individual cells, enabling the identification of very rare cells with properties that differ from those of the majority. In this Methods Article, we describe the use of a large-scale femtoliter droplet array to enclose, isolate, and analyze individual bacterial cells. As a first example, we describe the single-cell detection of drug-tolerant persisters of Pseudomonas aeruginosa treated with the antibiotic carbenicillin. As a second example, this method was applied to the single-cell evaluation of drug efflux activity, which causes acquired antibiotic resistance of bacteria. The MexAB-OprM multidrug efflux pump system from Pseudomonas aeruginosa was expressed in Escherichia coli, and the effect of the inhibitor D13-9001 was assessed at the single-cell level.
Mother Nature versus human nature: public compliance with evacuation and quarantine.
Manuell, Mary-Elise; Cukor, Jeffrey
2011-04-01
Effectively controlling the spread of contagious illnesses has become a critical focus of disaster planning. It is likely that quarantine will be a key part of the overall public health strategy utilised during a pandemic, an act of bioterrorism or other emergencies involving contagious agents. While the United States lacks recent experience of large-scale quarantines, it has considerable accumulated experience of large-scale evacuations. Risk perception, life circumstance, work-related issues, and the opinions of influential family, friends and credible public spokespersons all play a role in determining compliance with an evacuation order. Although the comparison is not reported elsewhere to our knowledge, this review of the principal factors affecting compliance with evacuations demonstrates many similarities with those likely to occur during a quarantine. Accurate identification and understanding of barriers to compliance allows for improved planning to protect the public more effectively.
Ubiquitinated Proteome: Ready for Global?*
Shi, Yi; Xu, Ping; Qin, Jun
2011-01-01
Ubiquitin (Ub) is a small and highly conserved protein that can covalently modify protein substrates. Ubiquitination is one of the major post-translational modifications that regulate a broad spectrum of cellular functions. The advancement of mass spectrometers as well as the development of new affinity purification tools has greatly expedited proteome-wide analysis of several post-translational modifications (e.g. phosphorylation, glycosylation, and acetylation). In contrast, large-scale profiling of lysine ubiquitination remains a challenge. Most recently, new Ub affinity reagents such as Ub remnant antibody and tandem Ub binding domains have been developed, allowing for relatively large-scale detection of several hundred lysine ubiquitination events in human cells. Here we review different strategies for the identification of ubiquitination sites and discuss several issues associated with data analysis. We suggest that careful interpretation and orthogonal confirmation of MS spectra is necessary to minimize false positive assignments by automatic searching algorithms. PMID:21339389
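Once Ub-remnant ("GG")-bearing peptides are identified by MS, the modified lysines must be mapped back to protein coordinates, a bookkeeping step where mistakes also produce false site assignments. A minimal sketch of that mapping (names and conventions are ours, not from any specific pipeline):

```python
def map_ubiquitination_sites(protein_seq, peptide, modified_ks):
    """Map GG-remnant-bearing lysines, given as 0-based indices within
    the identified peptide, to 1-based positions in the protein sequence."""
    start = protein_seq.find(peptide)
    if start < 0:
        raise ValueError("peptide not found in protein")
    sites = []
    for k in modified_ks:
        if peptide[k] != "K":
            raise ValueError("modified residue is not a lysine")
        sites.append(start + k + 1)  # convert to 1-based protein coordinate
    return sites
```

A real pipeline would also handle peptides that map to multiple proteins or positions, one source of the ambiguity the review cautions about.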
MRMPROBS suite for metabolomics using large-scale MRM assays.
Tsugawa, Hiroshi; Kanazawa, Mitsuhiro; Ogiwara, Atsushi; Arita, Masanori
2014-08-15
We developed a new software environment for the metabolome analysis of large-scale multiple reaction monitoring (MRM) assays. It supports the data formats of four major mass spectrometer vendors and the mzML common data format. This program provides a processing pipeline from raw-format import to high-dimensional statistical analyses. The novel aspect is graphical user interface-based visualization to perform peak quantification, interpolate missing values, and normalize peaks interactively based on quality control samples. Together with the software platform, the MRM standard library of 301 metabolites with 775 transitions is also available, which contributes to reliable peak identification using retention time and ion abundances. MRMPROBS is available for Windows OS under the Creative Commons Attribution license at http://prime.psc.riken.jp.
Promoting R & D in photobiological hydrogen production utilizing mariculture-raised cyanobacteria.
Sakurai, Hidehiro; Masukawa, Hajime
2007-01-01
This review article explores the potential of using mariculture-raised cyanobacteria as solar energy converters of hydrogen (H2). The exploitation of the sea surface for large-scale renewable energy production and the reasons for selecting the economical, nitrogenase-based systems of cyanobacteria for H2 production are described in terms of societal benefits. Reports of cyanobacterial photobiological H2 production are summarized with respect to specific activity, efficiency of solar energy conversion, and maximum H2 concentration attainable. The need for further improvements in biological parameters such as low-light saturation properties, sustainability of H2 production, and so forth, and the means to overcome these difficulties through the identification of promising wild-type strains followed by optimization of the selected strains using genetic engineering are also discussed. Finally, a possible mechanism for the development of economical large-scale mariculture operations in conjunction with international cooperation and social acceptance is outlined.
Ji, Jun; Ling, Jeffrey; Jiang, Helen; Wen, Qiaojun; Whitin, John C; Tian, Lu; Cohen, Harvey J; Ling, Xuefeng B
2013-03-23
Mass spectrometry (MS) has evolved to become the primary high-throughput tool for proteomics-based biomarker discovery. Multiple challenges in protein MS data analysis remain: management of large-scale, complex data sets; MS peak identification and indexing; and high-dimensional peak differential analysis with concurrent statistical control of the false discovery rate (FDR). "Turnkey" solutions are needed for biomarker investigations to rapidly process MS data sets and identify statistically significant peaks for subsequent validation. Here we present an efficient and effective solution that gives experimental biologists easy access to "cloud" computing capabilities for analyzing MS data. The web portal can be accessed at http://transmed.stanford.edu/ssa/. The web application supports online upload and analysis of large-scale MS data through a simple user interface. This bioinformatic tool will facilitate the discovery of potential protein biomarkers using MS.
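The FDR control mentioned above is commonly done with the Benjamini-Hochberg step-up procedure; a minimal sketch follows, assuming that procedure (the portal's actual statistics pipeline may differ).

```python
# Hedged sketch: Benjamini-Hochberg FDR control over per-peak p-values,
# the standard way to pick "statistically significant peaks" from a
# high-dimensional differential analysis. Not the portal's actual code.

def benjamini_hochberg(pvalues, alpha=0.05):
    """Return indices of peaks declared significant at FDR level alpha."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])  # ascending p-values
    cutoff = -1
    for rank, idx in enumerate(order, start=1):
        # step-up rule: largest rank whose p-value clears alpha * rank / m
        if pvalues[idx] <= alpha * rank / m:
            cutoff = rank
    return sorted(order[:cutoff]) if cutoff > 0 else []
```

Peaks surviving the cut would then move on to the validation stage described in the abstract.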
Huang, Junfeng; Wang, Fangjun; Ye, Mingliang; Zou, Hanfa
2014-11-06
Comprehensive analysis of post-translational modifications (PTMs) on proteins at the proteome level is crucial to elucidating the regulatory mechanisms of various biological processes. In the past decades, thanks to the development of specific PTM enrichment techniques and efficient multidimensional liquid chromatography (LC) separation strategies, the identification of protein PTMs has made tremendous progress. A huge number of modification sites for some major protein PTMs have been identified by proteomics analysis. In this review, we first introduce recent progress in PTM enrichment methods for the analysis of several major PTMs, including phosphorylation, glycosylation, ubiquitination, acetylation, methylation, and oxidation/reduction status. We then briefly summarize the challenges of PTM enrichment. Finally, we introduce the fractionation and separation techniques for efficient separation of PTM peptides in large-scale PTM analysis. Copyright © 2014 Elsevier B.V. All rights reserved.
Listening to the Deep: live monitoring of ocean noise and cetacean acoustic signals.
André, M; van der Schaar, M; Zaugg, S; Houégnigan, L; Sánchez, A M; Castell, J V
2011-01-01
The development and broad use of passive acoustic monitoring techniques have the potential to help assess the large-scale influence of artificial noise on marine organisms and ecosystems. Deep-sea observatories can play a key role in understanding these recent acoustic changes. LIDO (Listening to the Deep Ocean Environment) is an international project enabling real-time, long-term monitoring of marine ambient noise as well as marine mammal sounds at cabled and standalone observatories. Here, we present the overall development of the project and the use of passive acoustic monitoring (PAM) techniques to provide the scientific community with real-time data at large spatial and temporal scales. Special attention is given to the extraction and identification of high-frequency cetacean echolocation signals, given the relevance of detecting target species, e.g. beaked whales, in mitigation processes, e.g. during military exercises. Copyright © 2011. Published by Elsevier Ltd.
Li, Qi-Gang; He, Yong-Han; Wu, Huan; Yang, Cui-Ping; Pu, Shao-Yan; Fan, Song-Qing; Jiang, Li-Ping; Shen, Qiu-Shuo; Wang, Xiao-Xiong; Chen, Xiao-Qiong; Yu, Qin; Li, Ying; Sun, Chang; Wang, Xiangting; Zhou, Jumin; Li, Hai-Peng; Chen, Yong-Bin; Kong, Qing-Peng
2017-01-01
Heterogeneity in transcriptional data hampers the identification of differentially expressed genes (DEGs) and the understanding of cancer, essentially because current methods rely on cross-sample normalization and/or distribution assumptions, both of which are sensitive to heterogeneous values. Here, we developed a new method, Cross-Value Association Analysis (CVAA), which overcomes this limitation and is more robust to heterogeneous data than the other methods. Applying CVAA to a complex pan-cancer dataset containing 5,540 transcriptomes uncovered numerous new DEGs and many previously rarely explored pathways/processes; some of them were validated, both in vitro and in vivo, to be crucial in tumorigenesis, e.g., alcohol metabolism (ADH1B), chromosome remodeling (NCAPH) and the complement system (Adipsin). Together, we present a sharper tool to navigate large-scale expression data and gain new mechanistic insights into tumorigenesis.
Kaewphan, Suwisa; Van Landeghem, Sofie; Ohta, Tomoko; Van de Peer, Yves; Ginter, Filip; Pyysalo, Sampo
2016-01-01
Motivation: The recognition and normalization of cell line names in text is an important task in biomedical text mining research, facilitating for instance the identification of synthetically lethal genes from the literature. While several tools have previously been developed to address cell line recognition, it is unclear whether available systems can perform sufficiently well in realistic and broad-coverage applications such as extracting synthetically lethal genes from the cancer literature. In this study, we revisit the cell line name recognition task, evaluating both available systems and newly introduced methods on various resources to obtain a reliable tagger not tied to any specific subdomain. In support of this task, we introduce two text collections manually annotated for cell line names: the broad-coverage corpus Gellus and CLL, a focused target domain corpus. Results: We find that the best performance is achieved using NERsuite, a machine learning system based on Conditional Random Fields, trained on the Gellus corpus and supported with a dictionary of cell line names. The system achieves an F-score of 88.46% on the test set of Gellus and 85.98% on the independently annotated CLL corpus. It was further applied at large scale to 24 302 102 unannotated articles, resulting in the identification of 5 181 342 cell line mentions, normalized to 11 755 unique cell line database identifiers. Availability and implementation: The manually annotated datasets, the cell line dictionary, derived corpora, NERsuite models and the results of the large-scale run on unannotated texts are available under open licenses at http://turkunlp.github.io/Cell-line-recognition/. Contact: sukaew@utu.fi PMID:26428294
NASA Technical Reports Server (NTRS)
Parada, N. D. J. (Principal Investigator); Moreira, M. A.; Assuncao, G. V.; Novaes, R. A.; Mendoza, A. A. B.; Bauer, C. A.; Ritter, I. T.; Barros, J. A. I.; Perez, J. E.; Thedy, J. L. O.
1983-01-01
The objective was to test the feasibility of applying MSS-LANDSAT data to irrigated rice crop identification and area evaluation within four rice-growing regions of Rio Grande do Sul state, in order to extend the methodology to the whole state. The applied methodology was visual interpretation of the following LANDSAT products: black-and-white imagery of channels 5 and 7 and color-infrared composite imagery, all at a scale of 1:250,000. For crop identification and evaluation, the multispectral criterion and seasonal variation were utilized. Based on the results, it was possible to conclude that: (1) the satellite data were efficient for crop area identification and evaluation; (2) the multispectral criterion, allied to seasonal variation, allowed rice crop areas to be distinguished from other crops; and (3) the large cloud-cover percentage in the satellite data made spectral monitoring of the rice crop impossible and, therefore, prevented defining the best dates for data acquisition for rice crop assessment.
Looking back on a decade of barcoding crustaceans
Raupach, Michael J.; Radulovici, Adriana E.
2015-01-01
Abstract Species identification is a pivotal component of large-scale biodiversity studies and conservation planning but remains a challenge for many taxa when morphological traits alone are used. Consequently, alternative identification methods based on molecular markers have been proposed. In this context, DNA barcoding has become a popular and accepted method for the identification of unknown animals across all life stages by comparison to a reference library. In this review we examine the progress of barcoding studies for the Crustacea using the Web of Science database from 2003 to 2014. All references were classified in terms of taxonomy covered, subject area (identification/library, genetic variability, species descriptions, phylogenetics, methods, pseudogenes/numts), habitat, geographical area, authors, journals, citations, and the use of the Barcode of Life Data Systems (BOLD). Our analysis revealed a total of 164 barcoding studies for crustaceans, with a preference for malacostracan crustaceans, in particular Decapoda, and for building reference libraries in order to identify organisms. So far, BOLD has not established itself as a popular informatics platform among carcinologists, although it offers many advantages for standardized data storage, analysis and publication. PMID:26798245
Energy scaling and reduction in controlling complex networks
Chen, Yu-Zhong; Wang, Le-Zhi; Wang, Wen-Xu; Lai, Ying-Cheng
2016-01-01
Recent work revealed that the energy required to control a complex network depends on the number of driving signals and that the energy distribution follows an algebraic scaling law. If one implements control using a small number of drivers, e.g. as determined by structural controllability theory, there is a high probability that the energy will diverge. We develop a physical theory to explain the scaling behaviour through identification of the fundamental structural elements, the longest control chains (LCCs), that dominate the control energy. Based on the LCCs, we articulate a strategy to drastically reduce the control energy, demonstrated in a large number of real-world networks. Owing to their structural nature, the LCCs may shed light on energy issues associated with control of nonlinear dynamical networks. PMID:27152220
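A hedged sketch of the LCC idea: on an acyclic network, the longest control chain reachable from a set of driver nodes can be found with a memoized depth-first search. The function name and the DAG restriction are illustrative assumptions; the paper's theory covers more general structures.

```python
# Hedged sketch: length of the longest control chain (LCC) starting from
# any driver node in a directed acyclic network. Illustrative only; the
# paper's structural analysis is more general than this DAG special case.

from functools import lru_cache

def longest_control_chain(edges, drivers, n):
    """Length in hops of the longest path from any driver.

    edges: list of (u, v) directed links over nodes 0..n-1.
    Assumes the network is acyclic so memoized recursion terminates.
    """
    adj = {u: [] for u in range(n)}
    for u, v in edges:
        adj[u].append(v)

    @lru_cache(maxsize=None)
    def depth(u):
        # number of nodes on the longest path starting at u
        return 1 + max((depth(v) for v in adj[u]), default=0)

    return max(depth(d) for d in drivers) - 1  # hops, not nodes
```

A longer chain from the drivers would, per the scaling argument above, dominate the required control energy.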
Large-scale DCMs for resting-state fMRI.
Razi, Adeel; Seghier, Mohamed L; Zhou, Yuan; McColgan, Peter; Zeidman, Peter; Park, Hae-Jeong; Sporns, Olaf; Rees, Geraint; Friston, Karl J
2017-01-01
This paper considers the identification of large directed graphs for resting-state brain networks based on biophysical models of distributed neuronal activity, that is, effective connectivity. This identification can be contrasted with functional connectivity methods based on symmetric correlations that are ubiquitous in resting-state functional MRI (fMRI). We use spectral dynamic causal modeling (DCM) to invert large graphs comprising dozens of nodes or regions. The ensuing graphs are directed and weighted, hence providing a neurobiologically plausible characterization of connectivity in terms of excitatory and inhibitory coupling. Furthermore, we show that the use of Bayesian model reduction to discover the most likely sparse graph (or model) from a parent (e.g., fully connected) graph eschews the arbitrary thresholding often applied to large symmetric (functional connectivity) graphs. Using empirical fMRI data, we show that spectral DCM furnishes connectivity estimates on large graphs that correlate strongly with the estimates provided by stochastic DCM. Furthermore, we increase the efficiency of model inversion using functional connectivity modes to place prior constraints on effective connectivity. In other words, we use a small number of modes to finesse the potentially redundant parameterization of large DCMs. We show that spectral DCM, with functional connectivity priors, is ideally suited for directed graph theoretic analyses of resting-state fMRI. We envision that directed graphs will prove useful in understanding the psychopathology and pathophysiology of neurodegenerative and neurodevelopmental disorders. We will demonstrate the utility of large directed graphs in clinical populations in subsequent reports, using the procedures described in this paper.
USDA-ARS?s Scientific Manuscript database
Proper identification of soft scales (Hemiptera:Coccidae) requires preparation of the specimen on a microscope slide. This training video provides visual instruction on how to prepare soft scale specimens on microscope slides for examination and identification. Steps ranging from collection, speci...
K-State Problem Identification Rating Scales for College Students
ERIC Educational Resources Information Center
Robertson, John M.; Benton, Stephen L.; Newton, Fred B.; Downey, Ronald G.; Marsh, Patricia A.; Benton, Sheryl A.; Tseng, Wen-Chih; Shin, Kang-Hyun
2006-01-01
The K-State Problem Identification Rating Scales, a new screening instrument for college counseling centers, gathers information about clients' presenting symptoms, functioning levels, and readiness to change. Three studies revealed 7 scales: Mood Difficulties, Learning Problems, Food Concerns, Interpersonal Conflicts, Career Uncertainties,…
Prior knowledge based mining functional modules from Yeast PPI networks with gene ontology
2010-01-01
Background In the literature, there are fruitful algorithmic approaches for identifying functional modules in protein-protein interaction (PPI) networks. Because large-scale interaction data are accumulating for multiple organisms while many interactions remain unrecorded in existing PPI databases, there is still an urgent need for novel computational techniques that can correctly and scalably analyze interaction data sets. Indeed, a number of large-scale biological data sets provide indirect evidence for protein-protein interaction relationships. Results The main aim of this paper is to present a prior-knowledge-based mining strategy to identify functional modules from PPI networks with the aid of Gene Ontology. A higher similarity value in Gene Ontology means that two gene products are more functionally related to each other, so it is better to group such gene products into one functional module. We study (i) how to encode the functional pairs into the existing PPI networks; and (ii) how to use these functional pairs as pairwise constraints to supervise existing functional module identification algorithms. A topology-based modularity metric and complex annotation in MIPS are used to evaluate the functional modules identified by these two approaches. Conclusions The experimental results on Yeast PPI networks and GO have shown that the prior-knowledge-based learning methods perform better than the existing algorithms. PMID:21172053
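The pairwise-constraint idea in (ii) can be sketched as a must-link grouping: gene-product pairs whose GO semantic similarity exceeds a threshold are merged into seed modules before (or alongside) a module identification algorithm runs. The threshold value and function names below are illustrative assumptions, not the paper's exact formulation.

```python
# Hedged sketch: turning high GO similarity into must-link constraints
# via union-find, pre-grouping gene products into seed modules.
# Threshold and names are illustrative assumptions.

class UnionFind:
    def __init__(self, items):
        self.parent = {x: x for x in items}
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[rb] = ra

def seed_modules(proteins, go_similarity, threshold=0.8):
    """Group proteins whose pairwise GO similarity meets the threshold.

    go_similarity: dict mapping (protein_a, protein_b) -> similarity in [0, 1].
    """
    uf = UnionFind(proteins)
    for (a, b), sim in go_similarity.items():
        if sim >= threshold:
            uf.union(a, b)
    groups = {}
    for p in proteins:
        groups.setdefault(uf.find(p), []).append(p)
    return sorted(sorted(g) for g in groups.values())
```

Such seed groups could then be kept intact while a topology-based algorithm partitions the remaining PPI network.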
USDA-ARS?s Scientific Manuscript database
Proper identification of armored scales (Hemiptera: Diaspididae) requires preparation of the specimen on a microscope slide. This training video provides visual instruction on how to prepare armored scales specimens on microscope slides for examination and identification. Steps ranging from collect...
Damage identification of a TLP floating wind turbine by meta-heuristic algorithms
NASA Astrophysics Data System (ADS)
Ettefagh, M. M.
2015-12-01
Damage identification of offshore floating wind turbines from vibration/dynamic signals is an important new research field in Structural Health Monitoring (SHM). In this paper a new damage identification method is proposed based on meta-heuristic algorithms using the dynamic response of the TLP (Tension-Leg Platform) floating wind turbine structure. Genetic Algorithms (GA), the Artificial Immune System (AIS), Particle Swarm Optimization (PSO), and the Artificial Bee Colony (ABC) algorithm are chosen for minimizing the objective function, defined appropriately for the damage identification purpose. In addition to studying the capability of the mentioned algorithms to correctly identify the damage, the effect of the response type on the identification results is studied. The results of the proposed damage identification are also investigated under possible structural uncertainties. Finally, to evaluate the proposed method under realistic conditions, a 1/100-scale experimental setup of a TLP Floating Wind Turbine (TLPFWT) was built in the laboratory and the proposed damage identification method was applied to the scaled turbine.
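A minimal genetic algorithm of the kind listed above can be sketched as follows, minimizing a toy objective that stands in for the damage-identification objective function (whose minimizer would encode, e.g., element stiffness reductions). Population size, operators and bounds are illustrative assumptions, not the paper's settings.

```python
# Hedged sketch: an elitist genetic algorithm minimizing an objective
# over a bounded parameter vector, as in meta-heuristic damage
# identification. All operator choices are illustrative assumptions.

import random

def genetic_minimise(objective, dim, pop=30, gens=100, seed=0):
    """Return the best parameter vector (values in [0, 1]) found."""
    rng = random.Random(seed)
    population = [[rng.random() for _ in range(dim)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=objective)          # best first
        elite = population[: pop // 2]          # elitist selection
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, dim) if dim > 1 else 0   # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(dim)                          # gaussian mutation
            child[i] = min(1.0, max(0.0, child[i] + rng.gauss(0, 0.1)))
            children.append(child)
        population = elite + children
    return min(population, key=objective)
```

In the damage-identification setting the objective would compare measured and model-predicted dynamic responses for a candidate damage vector.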
Miller, Douglass R.; Rung, Alessandra; Parikh, Grishma
2014-01-01
Abstract We provide a general overview of features and technical specifications of an online, interactive tool for the identification of scale insects of concern to the U.S.A. ports-of-entry. Full lists of terminal taxa included in the keys (of which there are four), a list of features used in them, and a discussion of the structure of the tool are provided. We also briefly discuss the advantages of interactive keys for the identification of potential scale insect pests. The interactive key is freely accessible on http://idtools.org/id/scales/index.php PMID:25152668
Laboratory Needs for Interstellar Ice Studies
NASA Astrophysics Data System (ADS)
Boogert, Abraham C. A.
2012-05-01
A large fraction of the molecules in dense interstellar and circumstellar environments is stored in icy grain mantles. The mantles are formed by a complex interplay between chemical and physical processes. Key questions on the accretion and desorption processes and the chemistry on the grain surfaces and within the icy mantles can only be answered by laboratory experiments. Recent infrared (2-30 micron) spectroscopic surveys of large samples of Young Stellar Objects (YSOs) and background stars tracing quiescent cloud material have shown that the ice band profiles and depths vary considerably as a function of environment. Using laboratory spectra in the identification process, it is clear that a rather complex mixture of simple species (CH3OH, CO2, H2O, CO) exists even in the quiescent cloud phase. Variations of the local physical conditions (CO freeze out) and time scales (CH3OH formation) appear to be key factors in the observed variations. Sublimation and thermal processing dominate as YSOs heat their environments. The identification of several ice absorption features is still disputed. I will outline laboratory work (e.g., on salts, PAHs, and aliphatic hydrocarbons) needed to further constrain the ice band identification as well as the thermal and chemical history of the carriers. Such experiments will also be essential to interpret future high spectral resolution SOFIA and JWST observations.
Xie, Xin-Ping; Xie, Yu-Feng; Wang, Hong-Qiang
2017-08-23
Large-scale accumulation of omics data poses a pressing challenge for integrative analysis of multiple data sets in bioinformatics. An open question in such integrative analysis is how to pinpoint consistent but subtle gene activity patterns across studies. Study heterogeneity needs to be addressed carefully for this goal. This paper proposes a regulation-probability-model-based meta-analysis, jGRP, for identifying differentially expressed genes (DEGs). The method integrates multiple transcriptomics data sets in a gene regulation space instead of a gene expression space, which makes it easy to capture and manage data heterogeneity across studies from different laboratories or platforms. Specifically, we transform gene expression profiles into a unified gene regulation profile across studies by mathematically defining two gene regulation events between two conditions and estimating their occurrence probabilities in a sample. Finally, a novel differential expression statistic is established based on the gene regulation profiles, realizing accurate and flexible identification of DEGs in gene regulation space. We evaluated the proposed method on simulation data and real-world cancer datasets and showed the effectiveness and efficiency of jGRP in identifying DEGs in the context of meta-analysis. Data heterogeneity largely influences the performance of meta-analysis for DEG identification, and existing meta-analysis methods were revealed to exhibit very different degrees of sensitivity to study heterogeneity. The proposed method, jGRP, can serve as a standalone tool owing to its unified framework and controllable way of dealing with study heterogeneity.
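A hedged sketch of the regulation-event idea: each study's expression values are recast as probabilities of an "up" and a "down" regulation event between conditions, and the resulting regulation profiles are pooled across studies. The pair-counting, Laplace-smoothed estimator below is an assumption for illustration, not the published probability model.

```python
# Hedged sketch: estimating per-gene regulation-event probabilities and
# pooling them across studies, in the spirit of analysis in a "gene
# regulation space". The estimator is an illustrative assumption.

def regulation_profile(case, control):
    """P(up) and P(down) over all case/control sample pairs, smoothed."""
    pairs = [(c, k) for c in case for k in control]
    up = sum(1 for c, k in pairs if c > k)
    down = sum(1 for c, k in pairs if c < k)
    n = len(pairs)
    # Laplace smoothing keeps probabilities away from 0 and 1
    return (up + 1) / (n + 2), (down + 1) / (n + 2)

def pooled_up_probability(studies):
    """Average P(up) across studies; each study is a (case, control) pair."""
    probs = [regulation_profile(case, control)[0] for case, control in studies]
    return sum(probs) / len(probs)
```

Because each study is reduced to regulation probabilities before pooling, platform-specific expression scales drop out, which is the heterogeneity-management idea the abstract describes.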
Basin-scale heterogeneity in Antarctic precipitation and its impact on surface mass variability
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fyke, Jeremy; Lenaerts, Jan T. M.; Wang, Hailong
2017-11-15
Annually averaged precipitation in the form of snow, the dominant term of the Antarctic Ice Sheet surface mass balance, displays large spatial and temporal variability. Here we present an analysis of spatial patterns of regional Antarctic precipitation variability and their impact on integrated Antarctic surface mass balance variability simulated as part of a preindustrial 1800-year global, fully coupled Community Earth System Model simulation. Correlation and composite analyses based on this output allow for a robust exploration of Antarctic precipitation variability. We identify statistically significant relationships between precipitation patterns across Antarctica that are corroborated by climate reanalyses, regional modeling and ice core records. These patterns are driven by variability in large-scale atmospheric moisture transport, which itself is characterized by decadal- to centennial-scale oscillations around the long-term mean. We suggest that this heterogeneity in Antarctic precipitation variability has a dampening effect on overall Antarctic surface mass balance variability, with implications for regulation of Antarctic-sourced sea level variability, detection of an emergent anthropogenic signal in Antarctic mass trends and identification of Antarctic mass loss accelerations.
A Survey of School Psychologists' Practices for Identifying Mentally Retarded Students.
ERIC Educational Resources Information Center
Wodrich, David L.; Barry, Christine T.
1991-01-01
Surveyed school psychologists regarding identification of mentally retarded students. The Wechsler scales were the most frequently used tests for deriving intelligence quotient scores, which together with adaptive behavior scale scores were rated as most influential in identification-placement decisions. The Vineland Adaptive Behavior Scales were…
Towards large-scale FAME-based bacterial species identification using machine learning techniques.
Slabbinck, Bram; De Baets, Bernard; Dawyndt, Peter; De Vos, Paul
2009-05-01
In the last decade, bacterial taxonomy witnessed a huge expansion. The swift pace of bacterial species (re-)definitions has a serious impact on the accuracy and completeness of first-line identification methods. Consequently, back-end identification libraries need to be synchronized with the List of Prokaryotic names with Standing in Nomenclature. In this study, we focus on bacterial fatty acid methyl ester (FAME) profiling as a broadly used first-line identification method. From the BAME@LMG database, we have selected FAME profiles of individual strains belonging to the genera Bacillus, Paenibacillus and Pseudomonas. Only those profiles resulting from standard growth conditions have been retained. The corresponding data set covers 74, 44 and 95 validly published bacterial species, respectively, represented by 961, 378 and 1673 standard FAME profiles. Through the application of machine learning techniques in a supervised strategy, different computational models have been built for genus and species identification. Three techniques have been considered: artificial neural networks, random forests and support vector machines. Nearly perfect identification has been achieved at genus level. Notwithstanding the known limited discriminative power of FAME analysis for species identification, the computational models achieved good species-level results for the three genera: for Bacillus, Paenibacillus and Pseudomonas, random forests yielded sensitivity values of 0.847, 0.901 and 0.708, respectively. The random forest models outperform those of the other machine learning techniques, and our machine learning approach also outperformed the Sherlock MIS (MIDI Inc., Newark, DE, USA). These results show that machine learning proves very useful for FAME-based bacterial species identification. Besides good bacterial identification at species level, speed and ease of taxonomic synchronization are major advantages of this computational species identification strategy.
Joutsijoki, Henry; Haponen, Markus; Rasku, Jyrki; Aalto-Setälä, Katriina; Juhola, Martti
2016-01-01
The focus of this research is on the automated identification of the quality of human induced pluripotent stem cell (iPSC) colony images. iPS cell technology is a contemporary method by which a patient's cells are reprogrammed back to stem cells and differentiated into any cell type wanted. iPS cell technology will be used in the future for patient-specific drug screening, disease modeling, and tissue repair, for instance. However, there are technical challenges before iPS cell technology can be used in practice, and one of them is quality control of growing iPSC colonies, which is currently done manually but is an unfeasible solution in large-scale cultures. The monitoring problem reduces to an image analysis and classification problem. In this paper, we tackle this problem using machine learning methods such as multiclass Support Vector Machines and several baseline methods, together with Scale-Invariant Feature Transform (SIFT) based features. We perform over 80 test arrangements and a thorough parameter value search. The best classification accuracy (62.4%) was obtained using a k-NN classifier, showing improved accuracy compared to earlier studies.
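A minimal k-nearest-neighbour classifier of the kind that gave the best accuracy here can be sketched in a few lines, operating on precomputed feature vectors (e.g. SIFT-derived descriptors reduced to fixed length). This is an illustrative stand-in, not the authors' implementation.

```python
# Hedged sketch: k-NN classification of colony feature vectors into
# quality labels. Feature extraction is assumed done elsewhere.

from collections import Counter
from math import dist

def knn_predict(train_X, train_y, x, k=3):
    """Majority label among the k training points nearest to x."""
    neighbours = sorted(zip(train_X, train_y), key=lambda p: dist(p[0], x))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]
```

In the study's setting, `train_X` would hold feature vectors of manually graded colony images and the returned label would be a quality class.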
Gram-scale purification of aconitine and identification of lappaconitine in Aconitum karacolicum.
Tarbe, M; de Pomyers, H; Mugnier, L; Bertin, D; Ibragimov, T; Gigmes, D; Mabrouk, K
2017-07-01
Aconitum karacolicum from northern Kyrgyzstan (Alatau area) contains about 0.8-1% aconitine as well as other aconite derivatives that have already been identified. In this paper, we compare several methods for the further purification of an Aconitum karacolicum extract initially containing 80% aconitine. Reverse-phase flash chromatography, reverse-phase semi-preparative HPLC, centrifugal partition chromatography (CPC) and recrystallization techniques were evaluated first for their efficiency in reaching the highest purity of aconitine (over 96%) and second for their applicability in a semi-industrial-scale purification process (in our case, 150 g of plant extract). Even though the CPC technique shows the highest purification yield (63%), recrystallization remains the method of choice to purify a large amount of aconitine as (i) it can easily be carried out under safe conditions and (ii) an aprotic solvent is used, avoiding aconitine degradation. Moreover, this study led us to the identification of lappaconitine in Aconitum karacolicum, a well-known alkaloid never before found in this Aconitum species. Copyright © 2017 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Muchero, Wellington; Labbe, Jessy L; Priya, Ranjan
2014-01-01
To date, Populus ranks among the few plant species with a complete genome sequence and other highly developed genomic resources. With the first genome sequence among all tree species, Populus has been adopted as a suitable model organism for genomic studies in trees. However, far from being just a model species, Populus is a key renewable economic resource that plays a significant role in providing raw materials for the biofuel and pulp and paper industries. Therefore, aside from leading frontiers of basic tree molecular biology and ecological research, Populus leads frontiers in addressing global economic challenges related to fuel and fiber production. The latter fact suggests that research aimed at improving the quality and quantity of Populus as a raw material will likely drive the pursuit of more targeted and deeper research in order to unlock the economic potential tied up in the molecular biology processes that shape this tree species. Advances in genome sequence-driven technologies, such as resequencing of individual genotypes, which in turn facilitates large-scale SNP discovery and identification of large-scale polymorphisms, are key determinants of future success in these initiatives. In this treatise we discuss the implications of genome sequence-enabled technologies for Populus genomic and genetic studies of complex and specialized traits.
NASA Astrophysics Data System (ADS)
Veitinger, Jochen; Purves, Ross Stuart; Sovilla, Betty
2016-10-01
Avalanche hazard assessment requires a very precise estimation of the release area, which still depends, to a large extent, on the expert judgement of avalanche specialists. Therefore, a new algorithm for automated identification of potential avalanche release areas was developed. It overcomes some of the limitations of previous tools, which are currently seldom applied in hazard mitigation practice. By introducing a multi-scale roughness parameter, fine-scale topography and its attenuation under snow influence are captured. This allows the assessment of snow influence on terrain morphology and, consequently, on potential release area size and location. The integration of a wind shelter index enables the user to define release area scenarios as a function of the prevailing wind direction or single storm events. A case study illustrates the practical usefulness of this approach for the definition of release area scenarios under varying snow cover and wind conditions. A validation with historical data demonstrated an improved estimation of avalanche release areas. Our method outperforms a slope-based approach, in particular for more frequent avalanches; however, the application of the algorithm as a forecasting tool remains limited, as snowpack stability is not integrated. Future research should therefore focus on coupling the algorithm with snowpack conditions.
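A windowed roughness parameter evaluated at several window sizes can sketch the multi-scale idea above: smooth, snow-filled terrain scores low at every scale, while rough terrain scores high at fine scales. The paper's actual roughness formulation differs (it is not a simple standard deviation), so this is only an illustrative stand-in.

```python
# Hedged sketch: standard deviation of elevation in square windows of
# several sizes as a multi-scale roughness stand-in. The published
# algorithm's roughness measure differs; names are illustrative.

from statistics import pstdev

def roughness(dem, i, j, window):
    """Std. dev. of elevations in a (2*window+1)^2 neighbourhood of (i, j)."""
    rows, cols = len(dem), len(dem[0])
    cells = [dem[r][c]
             for r in range(max(0, i - window), min(rows, i + window + 1))
             for c in range(max(0, j - window), min(cols, j + window + 1))]
    return pstdev(cells)

def multi_scale_roughness(dem, i, j, windows=(1, 2, 3)):
    """Roughness at several scales; smooth (snow-covered) terrain scores low."""
    return [roughness(dem, i, j, w) for w in windows]
```

Cells whose roughness stays below a threshold across scales (and that meet slope and wind-shelter criteria) would be candidates for inclusion in a potential release area.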
Farag, Mohamed A; Huhman, David V; Lei, Zhentian; Sumner, Lloyd W
2007-02-01
An integrated approach utilizing HPLC-UV-ESI-MS and GC-MS was used for the large-scale and systematic identification of polyphenols in Medicago truncatula root and cell culture. Under optimized conditions, we were able to simultaneously quantify and identify 35 polyphenols including 26 isoflavones, 3 flavones, 2 flavanones, 2 aurones and a chalcone. All identifications were based upon UV spectra, mass spectral characteristics of protonated molecules, tandem mass spectral data, mass measurements obtained using a quadrupole time-of-flight mass spectrometer (QtofMS), and confirmed through the co-characterization of authentic compounds. In specific instances where the stereochemistry of sugar conjugates was uncertain, subsequent enzymatic hydrolysis of the conjugate followed by GC-MS was used to assign the sugar stereochemical configuration. Comparative metabolic profiling of Medicago truncatula root and cell cultures was then performed and revealed significant differences in the isoflavonoid composition of these two tissues.
Winchester, L; Newbury, D F; Monaco, A P; Ragoussis, J
2008-01-01
Copy Number Variants (CNVs) and other submicroscopic structural changes are now recognised to be widespread across the human genome. We show that SNP data generated for association studies can be utilised for the identification of deletion CNVs. During the analysis of data from an SNP association study of Specific Language Impairment (SLI), a deletion was identified. SLI adversely affects the language development of children in the absence of any obvious cause. Previous studies have found linkage to a region on chromosome 16. The deletion was located in a known fragile site, FRA16D, in intron 5-6 of the WWOX gene (also known as FOR). Changes in the FRA16D site have previously been linked to cancer and are often characterised in cell lines. A long-range PCR assay was used to confirm the existence of the deletion. We also show the breakpoint identification and large-scale characterisation of this CNV in a normal human sample set. Copyright 2009 S. Karger AG, Basel.
Technological advancements and their importance for nematode identification
NASA Astrophysics Data System (ADS)
Ahmed, Mohammed; Sapp, Melanie; Prior, Thomas; Karssen, Gerrit; Back, Matthew Alan
2016-06-01
Nematodes represent a species-rich and morphologically diverse group of metazoans known to inhabit both aquatic and terrestrial environments. Their role as biological indicators and as key players in nutrient cycling has been well documented. Some plant-parasitic species are also known to cause significant losses to crop production. In spite of this, there still exists a huge gap in our knowledge of their diversity because of the time and expertise involved in characterising species using phenotypic features. Molecular methodology provides a useful means of complementing the limited number of reliable diagnostic characters available for morphology-based identification. We discuss herein some of the limitations of traditional taxonomy and how molecular methodologies, especially high-throughput sequencing, have assisted in carrying out large-scale nematode community studies and the characterisation of phytonematodes through rapid identification of multiple taxa. We also provide brief descriptions of some of the current and recently superseded high-throughput sequencing platforms and their applications in both plant nematology and soil ecology.
Sim, Jaehyun; Sim, Jun; Park, Eunsung; Lee, Julian
2015-06-01
Many proteins undergo large-scale motions in which relatively rigid domains move against each other. The identification of rigid domains, as well as of the hinge residues important for their relative movements, is important for various applications including flexible docking simulations. In this work, we develop a method for protein rigid domain identification based on an exhaustive enumeration of maximal rigid domains, the rigid domains not fully contained within other domains. The computation is performed by mapping the problem to that of finding maximal cliques in a graph. A minimal set of rigid domains is then selected, covering most of the protein with minimal overlap. In contrast to the results of existing methods that partition a protein into non-overlapping domains using approximate algorithms, the rigid domains obtained from exact enumeration naturally contain overlapping regions, which correspond to the hinges of the inter-domain bending motion. The performance of the algorithm is demonstrated on several proteins. © 2015 Wiley Periodicals, Inc.
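The enumeration-and-selection idea can be sketched in plain Python. Everything below is illustrative: the toy rigidity graph, the greedy cover, and the hinge heuristic are assumptions standing in for the paper's actual construction, which derives the graph from inter-residue distance constraints.

```python
def bron_kerbosch(R, P, X, adj, out):
    """Enumerate maximal cliques of the graph `adj`: candidate maximal rigid domains."""
    if not P and not X:
        out.append(R)
        return
    for v in list(P):
        bron_kerbosch(R | {v}, P & adj[v], X & adj[v], adj, out)
        P = P - {v}
        X = X | {v}

# Toy "rigidity graph": an edge joins residues whose mutual distance is
# (hypothetically) conserved across conformations.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
cliques = []
bron_kerbosch(set(), set(adj), set(), adj, cliques)

# Greedy cover: take large cliques first, skipping any that add no residue.
selected, covered = [], set()
for c in sorted(cliques, key=len, reverse=True):
    if c - covered:
        selected.append(c)
        covered |= c

# Residues appearing in more than one maximal clique mark candidate hinges.
hinges = {v for v in adj if sum(v in c for c in cliques) > 1}
```

On this toy graph the enumeration yields {0,1,2}, {2,3} and {3,4,5}, and residues 2 and 3 fall in more than one clique, playing the role of the overlap/hinge region described in the abstract.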
[Identification of mouse brain neuropeptides by high throughput mass spectrometry].
Shao, Xianfeng; Ma, Min; Chen, Ruibing; Jia, Chenxi
2018-04-25
Neuropeptides play an important role in the physiological functions of the human body, affecting activities such as pain, sleep, mood, learning and memory. Neuropeptides exist mainly in nerve tissue, with small amounts distributed in body fluids and organs. At present, large-scale identification of neuropeptides in whole brain tissue remains challenging, so high-throughput detection of these neuropeptides is of great significance for understanding their composition and function. In this study, 1830 endogenous peptides and 99 novel putative neuropeptides were identified by extracting endogenous peptides from whole mouse brain tissue and analysing them by liquid chromatography-tandem mass spectrometry (LC-MS/MS). The identification of these endogenous peptides provides not only a reference for disease treatment, mechanism studies and drug development, but also a basis for the study of novel neuropeptides and their functions.
Detection of High Energy Cosmic Ray with the Advanced Thin Ionization Calorimeter (ATIC)
NASA Technical Reports Server (NTRS)
Fazely, Ali R.
2003-01-01
ATIC is a balloon-borne investigation of cosmic ray spectra, from below 50 GeV to near 100 TeV total energy, using a fully active Bismuth Germanate (BGO) calorimeter. It is equipped with the first large-area mosaic of small fully depleted silicon detector pixels capable of charge identification in cosmic rays from H to Fe. As a redundancy check for the charge identification and as a coarse particle tracking system, three projective layers of x-y scintillator hodoscopes were employed, above, in the center of, and below a carbon interaction 'target'. Very high energy gamma-rays and their energy spectrum may provide insight into the flux of extremely high energy neutrinos, which will be investigated in detail with several proposed cubic-kilometer-scale neutrino observatories in the next decade.
Fan, Long; Hui, Jerome H L; Yu, Zu Guo; Chu, Ka Hou
2014-07-01
Species identification based on short sequences of DNA markers, that is, DNA barcoding, has emerged as an integral part of modern taxonomy. However, software for the analysis of large and multilocus barcoding data sets is scarce. The Basic Local Alignment Search Tool (BLAST) is currently the fastest tool capable of handling large databases (e.g. >5000 sequences), but its accuracy is a concern, and it has been criticized for its local optimization. More accurate current software, however, requires sequence alignment or complex calculations, which are time-consuming for large data sets, whether during preprocessing or during the search stage. It is therefore imperative to develop a practical program for accurate and scalable species identification in DNA barcoding. In this context, we present VIP Barcoding: user-friendly software with a graphical user interface for rapid DNA barcoding. It adopts a hybrid, two-stage algorithm. First, an alignment-free composition vector (CV) method is used to reduce the search space by screening a reference database. The alignment-based K2P distance nearest-neighbour method is then employed to analyse the smaller data set generated in the first stage. In comparison with other software, we demonstrate that VIP Barcoding has (i) higher accuracy than Blastn and several alignment-free methods and (ii) higher scalability than alignment-based distance methods and character-based methods. These results suggest that this platform can handle both large-scale and multilocus barcoding data with accuracy and can contribute to DNA barcoding for modern taxonomy. VIP Barcoding is free and available at http://msl.sls.cuhk.edu.hk/vipbarcoding/. © 2014 John Wiley & Sons Ltd.
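The stage-2 distance computation can be sketched as follows. The two-species candidate set and the query sequence are invented for illustration; in real use this nearest-neighbour step would run only on the reduced set returned by the alignment-free CV screen.

```python
import math

PURINES = {"A", "G"}

def k2p_distance(s1, s2):
    """Kimura 2-parameter distance between two aligned DNA sequences,
    separating transitions (P) from transversions (Q)."""
    pairs = [(a, b) for a, b in zip(s1, s2) if a in "ACGT" and b in "ACGT"]
    n = len(pairs)
    ts = sum(a != b and (a in PURINES) == (b in PURINES) for a, b in pairs)
    tv = sum(a != b and (a in PURINES) != (b in PURINES) for a, b in pairs)
    P, Q = ts / n, tv / n
    return -0.5 * math.log(1 - 2 * P - Q) - 0.25 * math.log(1 - 2 * Q)

# Stage-2 nearest neighbour over a tiny, hypothetical candidate set:
candidates = {"species_A": "ACGTACGTAC", "species_B": "ACGTTCGTAC"}
query = "ACGAACGTAC"
best = min(candidates, key=lambda k: k2p_distance(query, candidates[k]))
```

The query is assigned to the reference species at the smallest K2P distance; here that is species_A, which differs from the query at only one site.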
Adalja, Amesh A; Watson, Matthew; Wollner, Samuel; Toner, Eric
2011-12-01
After the detonation of an improvised nuclear device, several key actions will be necessary to save the greatest number of lives possible. Among these tasks, the identification of patients with impending acute radiation sickness is a critical problem that so far has lacked a clear solution in national planning. We present one possible solution: the formation of a public-private partnership to augment the capacity to identify those at risk for acute radiation sickness. © Mary Ann Liebert, Inc.
Seo, Moon-Hyeong; Nim, Satra; Jeon, Jouhyun; Kim, Philip M
2017-01-01
Protein-protein interactions are essential to cellular functions and signaling pathways. We recently combined bioinformatics and custom oligonucleotide arrays to construct custom-made peptide-phage libraries for screening peptide-protein interactions, an approach we call proteomic peptide-phage display (ProP-PD). In this chapter, we describe protocols for phage display for the identification of natural peptide binders for a given protein. We finally describe deep sequencing for the analysis of the proteomic peptide-phage display.
As genomics advances reveal the cancer gene landscape, a daunting task is to understand how these genes contribute to dysregulated oncogenic pathways. Integration of cancer genes into networks offers opportunities to reveal protein–protein interactions (PPIs) with functional and therapeutic significance. Here, we report the generation of a cancer-focused PPI network, termed OncoPPi, and the identification of >260 cancer-associated PPIs not found in other large-scale interactomes.
Shen, Dan-na; Yi, Xu-fu; Chen, Xiao-gang; Xu, Tong-li; Cui, Li-juan
2007-10-01
Individual response to drugs, toxicants, environmental chemicals and allergens varies with genotype. Some respond well to these substances without significant consequences, while others may respond strongly with severe consequences and even death. Toxicogenetics and toxicogenomics as well as pharmacogenetics explain the genetic basis for the variations of individual response to toxicants by sequencing the human genome and large-scale identification of genome polymorphism. The new disciplines will provide a new route for forensic specialists to determine the cause of death.
1987-09-01
77) Large scale purification of the acetylcholine receptor protein in its membrane-bound and detergent-extracted forms from Torpedo marmorata...maintenance of the postsynaptic apparatus in the adult. Our studies have also led to the identification of agrin, a protein that is extracted from the synapse...in extracts of muscle, and monoclonal antibodies directed against agrin recognize molecules highly concentrated in the synaptic basal lamina at the
Geologic Reconnaissance and Lithologic Identification by Remote Sensing
The use of remote sensing in geologic reconnaissance for purposes of tunnel site selection was studied further, and a test case was undertaken to evaluate this geological application. Airborne multispectral scanning (MSS) data were obtained in May 1972 over a region between Spearfish and Rapid City, South Dakota. With the major effort directed toward the analysis of these data, the following geologic features were discriminated: (1) exposed rock areas, (2) five separate rock groups, (3) large-scale structures. This discrimination was accomplished by ratioing multispectral channels.
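Channel ratioing itself is simple; here is a minimal sketch, with synthetic channel values and an arbitrary threshold standing in for the study's MSS data:

```python
import numpy as np

def band_ratio(b1, b2, eps=1e-6):
    """Ratio two co-registered MSS channels pixel by pixel; ratios suppress
    illumination and shadow differences while preserving spectral contrast."""
    return b1.astype(float) / (b2.astype(float) + eps)

# Synthetic 3x3 scene: one lithologic unit is bright in channel 1,
# the other in channel 2 (values and threshold are illustrative).
b1 = np.array([[8, 8, 2], [8, 8, 2], [2, 2, 2]])
b2 = np.array([[2, 2, 8], [2, 2, 8], [8, 8, 8]])
unit_mask = band_ratio(b1, b2) > 1.0   # True where the first unit crops out
```

Thresholding the ratio image then separates the two spectral classes regardless of any common brightness factor applied to both channels.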
Manual of downburst identification for Project NIMROD. [atmospheric circulation
NASA Technical Reports Server (NTRS)
Fujita, T. T.
1978-01-01
Aerial photography, Doppler radar, and satellite infrared imagery are used in the two-year National Intensive Meteorological Research on Downburst (NIMROD) project to provide large-area mapping of strong downdrafts that induce an outward burst of damaging winds over or near the earth. Topics discussed include scales of thunderstorm outflow; aerial photographs of downburst damage; microbursts and aviation hazards; radar echo characteristics; infrared imagery from GOES/SMS; and downburst-tornado relationships. Color maps of downbursts and tornadoes are included.
Relative scale and the strength and deformability of rock masses
NASA Astrophysics Data System (ADS)
Schultz, Richard A.
1996-09-01
The strength and deformation of rocks depend strongly on the degree of fracturing, which can be assessed in the field and related systematically to these properties. Appropriate Mohr envelopes obtained from the Rock Mass Rating (RMR) classification system and the Hoek-Brown criterion for outcrops and other large-scale exposures of fractured rocks show that rock-mass cohesive strength, tensile strength, and unconfined compressive strength can be reduced by as much as a factor of ten relative to values for the unfractured material. The rock-mass deformation modulus is also reduced relative to Young's modulus. A "cook-book" example illustrates the use of RMR in field applications. The smaller values of rock-mass strength and deformability imply that there is a particular scale of observation whose identification is critical to applying laboratory measurements and associated failure criteria to geologic structures.
Fleischmann, Fenella; Phalet, Karen
2017-01-01
How inclusive are European national identities of Muslim minorities, and how can we explain cross-cultural variation in inclusiveness? To address these questions, we draw on large-scale school-based surveys of Muslim minority and non-Muslim majority and other minority youth in five European countries (Children of Immigrants Longitudinal Survey [CILS]; Belgium, England, Germany, the Netherlands, and Sweden). Our double comparison of national identification across groups and countries reveals that national identities are less strongly endorsed by all minorities compared with majority youth, but national identification is lowest among Muslims. This descriptive evidence resonates with public concerns about the insufficient inclusion of immigrant minorities in general, and Muslims in particular, in European national identities. In addition, significant country variation in group differences in identification suggests that some national identities are more inclusive of Muslims than others. Taking an intergroup relations approach to the inclusiveness of national identities for Muslims, we establish that, beyond religious commitment, positive intergroup contact (majority friendship) plays a major role in explaining differences in national identification in multigroup multilevel mediation models, whereas experiences of discrimination in school do not contribute to this explanation. Our comparative findings thus establish contextual variation in the inclusiveness of intergroup relations and European national identities for Muslim minorities. PMID:29386688
Yu, Wen; Taylor, J Alex; Davis, Michael T; Bonilla, Leo E; Lee, Kimberly A; Auger, Paul L; Farnsworth, Chris C; Welcher, Andrew A; Patterson, Scott D
2010-03-01
Despite recent advances in qualitative proteomics, the automatic identification of peptides with optimal sensitivity and accuracy remains a difficult goal. To address this deficiency, a novel algorithm, Multiple Search Engines, Normalization and Consensus is described. The method employs six search engines and a re-scoring engine to search MS/MS spectra against protein and decoy sequences. After the peptide hits from each engine are normalized to error rates estimated from the decoy hits, peptide assignments are then deduced using a minimum consensus model. These assignments are produced in a series of progressively relaxed false-discovery rates, thus enabling a comprehensive interpretation of the data set. Additionally, the estimated false-discovery rate was found to have good concordance with the observed false-positive rate calculated from known identities. Benchmarking against standard proteins data sets (ISBv1, sPRG2006) and their published analysis, demonstrated that the Multiple Search Engines, Normalization and Consensus algorithm consistently achieved significantly higher sensitivity in peptide identifications, which led to increased or more robust protein identifications in all data sets compared with prior methods. The sensitivity and the false-positive rate of peptide identification exhibit an inverse-proportional and linear relationship with the number of participating search engines.
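The decoy-based error estimation underlying the normalization step can be sketched as follows. The scores and the simple decoys/targets estimator are illustrative assumptions, not the algorithm's exact rescoring model.

```python
def fdr_threshold(psms, max_fdr=0.01):
    """Walk peptide-spectrum matches from best score down and return the
    lowest score at which the decoy-estimated FDR (decoys/targets among
    accepted hits) still satisfies max_fdr. `psms` is a list of
    (score, is_decoy) pairs; returns None if no score qualifies."""
    decoys = targets = 0
    best = None
    for score, is_decoy in sorted(psms, key=lambda p: p[0], reverse=True):
        decoys += is_decoy
        targets += not is_decoy
        if targets and decoys / targets <= max_fdr:
            best = score
    return best

# Hypothetical normalized scores; decoy hits are flagged True.
psms = [(10.0, False), (9.1, False), (8.7, False), (7.2, False), (6.5, True)]
strict = fdr_threshold(psms, max_fdr=0.01)   # no decoy admitted
relaxed = fdr_threshold(psms, max_fdr=0.30)  # one decoy tolerated
```

Sweeping max_fdr over a range of values yields the series of progressively relaxed false-discovery rates described in the abstract.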
Identification of varying time scales in sediment transport using the Hilbert-Huang Transform method
NASA Astrophysics Data System (ADS)
Kuai, Ken Z.; Tsai, Christina W.
2012-02-01
Sediment transport processes vary over a variety of time scales - from seconds, hours and days to months and years. Multiple time scales exist in the system of flow, sediment transport and bed elevation change processes. As such, identification and selection of appropriate time scales for flow and sediment processes can assist in formulating a system of flow and sediment governing equations representative of the dynamic interaction of flow and particles at the desired level of detail. Recognizing the importance of different time scales in the fluvial processes of sediment transport, we introduce the Hilbert-Huang Transform (HHT) method to the field of sediment transport for time scale analysis. The HHT uses the Empirical Mode Decomposition (EMD) method to decompose a time series into a collection of Intrinsic Mode Functions (IMFs), and uses Hilbert Spectral Analysis (HSA) to obtain instantaneous frequency data. The EMD extracts the variability of data at different time scales and improves the analysis of data series. The HSA can display the succession of time-varying time scales, which cannot be captured by the often-used Fast Fourier Transform (FFT) method. This study is one of the earlier attempts to introduce this state-of-the-art technique for the multiple time scale analysis of sediment transport processes. Three practical applications of the HHT method for data analysis of both suspended sediment and bedload transport time series are presented. The analysis results show the strong impact of flood waves on the variations of flow and sediment time scales at a large sampling time scale, as well as the impact of flow turbulence on those time scales at a smaller sampling time scale. Our analysis reveals that the existence of multiple time scales in sediment transport processes may be attributed to the fractal nature of sediment transport.
It can be demonstrated by the HHT analysis that the bedload motion time scale is better represented by the ratio of the water depth to the settling velocity, h/w. In the final part, HHT results are compared with an available time scale formula in the literature.
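The Hilbert step of the HHT (instantaneous frequency from the analytic signal) can be sketched with NumPy. The EMD sifting stage is omitted here, and the 2 Hz test series is a stand-in for an IMF extracted from real flow or sediment data.

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the FFT: a minimal Hilbert transform (even N)."""
    N = len(x)
    X = np.fft.fft(x)
    h = np.zeros(N)
    h[0] = h[N // 2] = 1
    h[1:N // 2] = 2          # double positive frequencies, zero negatives
    return np.fft.ifft(X * h)

# Instantaneous frequency of a synthetic 2 Hz series sampled at 100 Hz:
fs = 100.0
t = np.arange(0, 4, 1 / fs)                    # 400 samples, 8 full cycles
z = analytic_signal(np.sin(2 * np.pi * 2 * t))
phase = np.unwrap(np.angle(z))
inst_freq = np.diff(phase) * fs / (2 * np.pi)  # recovers ~2 Hz
```

For a nonstationary IMF the same phase derivative traces how the dominant time scale evolves through a flood wave, which is the information the FFT averages away.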
Predicting Regional Self-identification from Spatial Network Models
Almquist, Zack W.; Butts, Carter T.
2014-01-01
Social scientists characterize social life as a hierarchy of environments, from the micro level of an individual’s knowledge and perceptions to the macro level of large-scale social networks. In accordance with this typology, individuals are typically thought to reside in micro- and macro-level structures, composed of multifaceted relations (e.g., acquaintanceship, friendship, and kinship). This article analyzes the effects of social structure on micro outcomes through the case of regional identification. Self identification occurs in many different domains, one of which is regional; i.e., the identification of oneself with a locationally-associated group (e.g., a “New Yorker” or “Parisian”). Here, regional self-identification is posited to result from an influence process based on the location of an individual’s alters (e.g., friends, kin or coworkers), such that one tends to identify with regions in which many of his or her alters reside. The structure of this paper is laid out as follows: initially, we begin with a discussion of the relevant social science literature for both social networks and identification. This discussion is followed with one about competing mechanisms for regional identification that are motivated first from the social network literature, and second by the social psychological and cognitive literature of decision making and heuristics. Next, the paper covers the data and methods employed to test the proposed mechanisms. Finally, the paper concludes with a discussion of its findings and further implications for the larger social science literature. PMID:25684791
Finite element modeling and analysis of tires
NASA Technical Reports Server (NTRS)
Noor, A. K.; Andersen, C. M.
1983-01-01
Predicting the response of tires under various loading conditions using finite element technology is addressed. Some of the recent advances in finite element technology which have high potential for application to tire modeling problems are reviewed. The analysis and modeling needs for tires are identified. Reduction methods for large-scale nonlinear analysis, with particular emphasis on treatment of combined loads, displacement-dependent and nonconservative loadings; development of simple and efficient mixed finite element models for shell analysis, identification of equivalent mixed and purely displacement models, and determination of the advantages of using mixed models; and effective computational models for large-rotation nonlinear problems, based on a total Lagrangian description of the deformation are included.
Integrating Green and Blue Water Management Tools for Land and Water Resources Planning
NASA Astrophysics Data System (ADS)
Jewitt, G. P. W.
2009-04-01
The role of land use and land use change in the hydrological cycle is well known. However, the impacts of large-scale land use change are poorly considered in water resources planning unless they require direct abstraction of water resources and associated development of infrastructure, e.g. irrigation schemes. Large-scale deforestation for the supply of raw materials, expansion of the areas of plantation forestry, increasing areas under food production and major plans for cultivation of biofuels in many developing countries are likely to result in extensive land use change. Given the spatial extent and temporal longevity of these proposed developments, major impacts on water resources are inevitable. It is imperative that managers and planners consider the consequences for downstream ecosystems and users in such developments. However, many popular tools, such as the virtual water approach, provide only coarse-scale "order of magnitude" estimates with poor consideration of, and limited usefulness for, land use planning. In this paper, a framework for the consideration of the impacts of large-scale land use change on water resources at a range of temporal and spatial scales is presented. Drawing on experiences from South Africa, where the establishment of exotic commercial forest plantations is only permitted once a water use license has been granted, the framework adopts the "green water" concept for the identification of potential high-impact areas of land use change and provides for integration with traditional "blue water" water resources planning tools for more detailed planning.
Appropriate tools, ranging from simple spreadsheet solutions to more sophisticated remote sensing and hydrological models, are described, and the application of the framework for consideration of water resources impacts associated with the establishment of large-scale Tectona grandis, sugar cane and Jatropha curcas plantations is illustrated through examples in Mozambique and South Africa. Keywords: Land use change, water resources, green water, blue water, biofuels, developing countries
Sweeten, Sara E.; Ford, W. Mark
2015-01-01
Large-scale land uses such as residential wastewater discharge and coal mining practices, particularly surface coal extraction and associated valley fills, are of particular ecological concern in central Appalachia. Identification and quantification of both alterations across scales are a necessary first step to mitigate negative consequences to biota. In central Appalachian headwater streams absent of fish, salamanders are the dominant and most abundant vertebrate predators, filling a significant intermediate trophic role. Stream salamander species are considered to be sensitive to aquatic stressors and environmental alterations, and past research has shown linkages between salamander abundances and both microhabitat parameters and large-scale land uses such as urbanization and logging. However, little is known about these linkages in the coalfields of central Appalachia. In the summer of 2013, we visited 70 sites (sampled three times each) in the southwest Virginia coalfields to survey salamanders and quantify stream and riparian microhabitat parameters. Using an information-theoretic framework, we compared the effects of microhabitat and large-scale land use on salamander abundances. Our findings indicate that dusky salamander (Desmognathus spp.) abundances are more strongly correlated with microhabitat parameters such as canopy cover than with subwatershed land uses. Brook salamander (Eurycea spp.) abundances show strong negative associations with suspended sediments and stream substrate embeddedness. Neither Desmognathus spp. nor Eurycea spp. abundances were influenced by water conductivity. These results suggest that protecting or restoring riparian habitats and controlling erosion are important conservation components for maintaining stream salamanders in the mined landscapes of central Appalachia.
Tooth labeling in cone-beam CT using deep convolutional neural network for forensic identification
NASA Astrophysics Data System (ADS)
Miki, Yuma; Muramatsu, Chisako; Hayashi, Tatsuro; Zhou, Xiangrong; Hara, Takeshi; Katsumata, Akitoshi; Fujita, Hiroshi
2017-03-01
In large disasters, dental records play an important role in forensic identification. However, filing dental charts for corpses is not an easy task for general dentists; moreover, it is laborious and time-consuming work in cases of large-scale disasters. We have been investigating a tooth labeling method on dental cone-beam CT images for the purpose of automatic filing of dental charts. In our method, individual teeth in CT images are detected and classified into seven tooth types using a deep convolutional neural network. We employed a fully convolutional network using the AlexNet architecture for detecting each tooth and applied our previous method using regular AlexNet for classifying the detected teeth into the 7 tooth types. From 52 CT volumes obtained by two imaging systems, five volumes from each system were randomly selected as test data, and the remaining 42 cases were used as training data. The result showed a tooth detection accuracy of 77.4% with an average of 5.8 false detections per image. The result indicates the potential utility of the proposed method for automatic recording of dental information.
Wang, Jian; Xie, Dong; Lin, Hongfei; Yang, Zhihao; Zhang, Yijia
2012-06-21
Protein complexes are of particular importance in many biological processes, and various computational approaches have been developed to identify complexes from protein-protein interaction (PPI) networks. However, the high false-positive rate of PPIs makes complex identification challenging. A protein semantic similarity measure is proposed in this study, based on the ontology structure of Gene Ontology (GO) terms and GO annotations, to estimate the reliability of interactions in PPI networks. Interaction pairs with low GO semantic similarity are removed from the network as unreliable interactions. Then, a cluster-expanding algorithm is used to detect complexes with core-attachment structure on the filtered network. Our method is applied to three different yeast PPI networks, and its effectiveness is examined on two benchmark complex datasets. Experimental results show that our method performed better than other state-of-the-art approaches on most evaluation metrics. The method detects protein complexes from large-scale PPI networks by filtering on GO semantic similarity; removing interactions with low GO similarity significantly improves the performance of complex identification. The expanding strategy is also effective in identifying attachment proteins of complexes.
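The filtering step can be sketched as follows. The protein names, the precomputed similarity scores and the 0.4 threshold are all illustrative assumptions; computing the GO semantic similarity itself requires the ontology graph and annotations, which are omitted here.

```python
def filter_network(edges, similarity, threshold=0.4):
    """Drop PPI edges whose GO semantic similarity falls below a threshold.
    `similarity` maps a frozenset protein pair to a precomputed score;
    missing pairs default to 0 and are treated as unreliable."""
    return [(a, b) for a, b in edges
            if similarity.get(frozenset((a, b)), 0.0) >= threshold]

# Toy network with hypothetical similarity scores:
edges = [("P1", "P2"), ("P1", "P3"), ("P2", "P4")]
similarity = {frozenset(("P1", "P2")): 0.9,
              frozenset(("P1", "P3")): 0.1,
              frozenset(("P2", "P4")): 0.6}
reliable = filter_network(edges, similarity)   # ("P1", "P3") is removed
```

The cluster-expanding detection then runs on `reliable` only, which is how removing low-similarity edges feeds into the improved complex identification reported above.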
Performance of b-jet identification in the ATLAS experiment
Aad, G; Abbott, B; Abdallah, J; ...
2016-04-04
The identification of jets containing b hadrons is important for the physics programme of the ATLAS experiment at the Large Hadron Collider. Several algorithms to identify jets containing b hadrons are described, ranging from those based on the reconstruction of an inclusive secondary vertex or the presence of tracks with large impact parameters to combined tagging algorithms making use of multi-variate discriminants. An independent b-tagging algorithm based on the reconstruction of muons inside jets as well as the b-tagging algorithm used in the online trigger are also presented. The b-jet tagging efficiency, the c-jet tagging efficiency and the mistag rate for light-flavour jets in data have been measured with a number of complementary methods. The calibration results are presented as scale factors defined as the ratio of the efficiency (or mistag rate) in data to that in simulation. In the case of b jets, where more than one calibration method exists, the results from the various analyses have been combined, taking into account the statistical correlation as well as the correlation of the sources of systematic uncertainty.
Zhang, Guoqing; Zhang, Xianku; Pang, Hongshuai
2015-09-01
This research is concerned with the problem of 4 degrees of freedom (DOF) ship manoeuvring identification modelling with full-scale trial data. To avoid the multi-innovation matrix inversion in the conventional multi-innovation least squares (MILS) algorithm, a new transformed multi-innovation least squares (TMILS) algorithm is first developed by virtue of the coupling identification concept, and considerable effort is made to guarantee uniform ultimate convergence. Furthermore, an auto-constructed TMILS scheme is derived for ship manoeuvring motion identification by combination with a statistical index. Compared with existing results, the proposed scheme has a significant computational advantage and is able to estimate the model structure. The illustrative examples demonstrate the effectiveness of the proposed algorithm, in particular its application to identification with full-scale trial data. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
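As a baseline for what such identification algorithms compute, here is a plain recursive least squares sketch. The TMILS modifications (multi-innovation updates, coupling, auto-construction) are not reproduced, and the noiseless two-parameter toy model stands in for a 4-DOF ship manoeuvring model.

```python
import numpy as np

def rls(phi, y, lam=1.0):
    """Plain recursive least squares on regressors phi[k] and outputs y[k];
    lam is the forgetting factor (1.0 = ordinary least squares)."""
    n = phi.shape[1]
    theta = np.zeros(n)            # parameter estimate
    P = np.eye(n) * 1e6            # large initial covariance
    for k in range(len(y)):
        x = phi[k]
        K = P @ x / (lam + x @ P @ x)          # gain vector
        theta = theta + K * (y[k] - x @ theta) # innovation update
        P = (P - np.outer(K, x @ P)) / lam     # covariance update
    return theta

# Identify a toy linear model y = 1.5*u1 - 0.7*u2 from synthetic data:
rng = np.random.default_rng(0)
phi = rng.normal(size=(200, 2))
y = phi @ np.array([1.5, -0.7])
theta = rls(phi, y)
```

Note there is no matrix inversion in the loop, only the scalar division in the gain; avoiding the larger multi-innovation matrix inversion is the stated motivation for the TMILS transformation.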
Assessment of management and basic beef quality assurance practices on Idaho dairies.
Glaze, J B; Chahine, M
2009-03-01
In 2004 a mail-in survey was conducted to establish a baseline level of awareness and knowledge related to dairy beef quality assurance (BQA) issues in Idaho. A 30-question survey was mailed to every (n = 736) registered Idaho dairy. Two-hundred seventy-three (37%) dairies participated and were categorized as small (n <201 cows; 53.5%), medium-sized (n = 201 to 1,000 cows; 27.1%) or large (n >1,000 cows; 19.4%). The majority of respondents were dairy owners (83%). Eighty-nine percent of respondents indicated they followed BQA recommendations for animal care. The neck region in cows was used by 68% of respondents for i.m. injections and by 80% for s.c. injections. In calves, the values were 61 and 78%, respectively. Seventy-four percent of respondents indicated they had been trained for injections. Training methods cited included veterinarians (19.8%), dairy owners (16.8%), experience (9.9%), and BQA events or schools (4.5%). The importance of BQA in the dairy industry was rated 2.6 on a 5-point scale (0 = low; 4 = high). Participants rated the effect of dairy animals on the beef industry at 2.5. Plastic ear tags were the preferred method of animal identification, with 100% of large dairies, 97.3% of medium-sized dairies, and 84% of small dairies citing their use. Less than 10% used electronic identification for their animals. Almost half (48%) of large and medium-sized (49%) dairies and 32% of small dairies supported a national animal identification program. A mandatory identification program was supported by 41, 69, and 59% for small, medium-sized, and large dairies, respectively. The percentage of dairies keeping records was similar between small (93%), medium-sized (99%), and large (100%) dairies. Most small dairies (58%) used some form of paper records, whereas most medium-sized (85%) and large (100%) dairies used computers for record keeping. 
The preferred method to market cull cows by Idaho dairies was the auction market (64%), followed by order buyers (17%), direct to the packer (17%), private treaty sales (16%), and forward contracts (1%). To market calves, dairies used private treaty sales (52%), auction markets (42%), order buyers (14%), and forward contracts (1%). The results of this study will be used by University of Idaho Extension faculty in the design, development, and delivery of dairy BQA program information and materials.
Heidari, Zahra; Roe, Daniel R; Galindo-Murillo, Rodrigo; Ghasemi, Jahan B; Cheatham, Thomas E
2016-07-25
Long time scale molecular dynamics (MD) simulations of biological systems are becoming increasingly commonplace due to the availability of both large-scale computational resources and significant advances in the underlying simulation methodologies. Therefore, it is useful to investigate and develop data mining and analysis techniques to quickly and efficiently extract the biologically relevant information from the incredible amount of generated data. Wavelet analysis (WA) is a technique that can quickly reveal significant motions during an MD simulation. Here, the application of WA on well-converged long time scale (tens of μs) simulations of a DNA helix is described. We show how WA combined with a simple clustering method can be used to identify both the physical and temporal locations of events with significant motion in MD trajectories. We also show that WA can not only distinguish and quantify the locations and time scales of significant motions, but by changing the maximum time scale of WA a more complete characterization of these motions can be obtained. This allows motions of different time scales to be identified or ignored as desired.
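The wavelet-plus-clustering idea described above can be sketched on a synthetic per-frame motion signal (a stand-in for an MD observable such as per-residue RMSD). This is a minimal illustration, not the authors' actual trajectory-analysis implementation: the Ricker wavelet, the outlier threshold, and the gap-merging rule are all assumed choices.

```python
import numpy as np

def ricker(points, a):
    # Ricker ("Mexican hat") wavelet of width a, a common event detector
    t = np.arange(points) - (points - 1) / 2.0
    return (1 - (t / a) ** 2) * np.exp(-(t / a) ** 2 / 2)

def cwt_power(signal, scales):
    # Continuous wavelet transform by direct convolution; rows = scales
    out = np.empty((len(scales), len(signal)))
    for i, a in enumerate(scales):
        out[i] = np.convolve(signal, ricker(min(10 * int(a), len(signal)), a),
                             mode="same")
    return out ** 2

# Toy per-frame motion signal: background noise plus one fast event
rng = np.random.default_rng(0)
x = rng.normal(0.0, 0.05, 2000)
x[800:900] += np.sin(np.linspace(0.0, 20 * np.pi, 100))

total = cwt_power(x, scales=np.arange(1, 33)).sum(axis=0)

# Flag frames with outlying wavelet power, then merge nearby flags into
# event windows -- a minimal stand-in for the clustering step
flags = np.flatnonzero(total > total.mean() + 4 * total.std())
events = [e for e in np.split(flags, np.where(np.diff(flags) > 10)[0] + 1)
          if len(e)]
print([(int(e[0]), int(e[-1])) for e in events])
```

Raising the maximum scale in `scales` is the knob the abstract refers to: it lets slower motions contribute to the power sum and hence be detected or ignored as desired.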
An intermediate level of abstraction for computational systems chemistry.
Andersen, Jakob L; Flamm, Christoph; Merkle, Daniel; Stadler, Peter F
2017-12-28
Computational techniques are required for narrowing down the vast space of possibilities to plausible prebiotic scenarios, because precise information on the molecular composition, the dominant reaction chemistry and the conditions for that era are scarce. The exploration of large chemical reaction networks is a central aspect in this endeavour. While quantum chemical methods can accurately predict the structures and reactivities of small molecules, they are not efficient enough to cope with large-scale reaction systems. The formalization of chemical reactions as graph grammars provides a generative system, well grounded in category theory, at the right level of abstraction for the analysis of large and complex reaction networks. An extension of the basic formalism into the realm of integer hyperflows allows for the identification of complex reaction patterns, such as autocatalysis, in large reaction networks using optimization techniques. This article is part of the themed issue 'Reconceptualizing the origins of life'. © 2017 The Author(s).
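The integer-hyperflow idea can be sketched at its simplest: reactions as hyperedges with integer multiplicities, and a flow that both consumes and over-produces a species as the toy signature of autocatalysis. The two-reaction network and species names below are entirely hypothetical, and this omits the paper's category-theoretic graph-grammar machinery and optimization formulation.

```python
from collections import Counter

# Hypothetical toy network: each reaction maps reactant counts to
# product counts; species names A, B, C are made up
reactions = [
    ({"A": 1, "B": 1}, {"C": 1}),   # A + B -> C
    ({"C": 1}, {"B": 2}),           # C -> 2 B
]
flow = [1, 1]  # integer hyperflow: how many times each reaction fires

def net_change(reactions, flow):
    # Net production of every species under the given integer flow
    net = Counter()
    for f, (lhs, rhs) in zip(flow, reactions):
        for s, n in rhs.items():
            net[s] += f * n
        for s, n in lhs.items():
            net[s] -= f * n
    return net

net = net_change(reactions, flow)
consumed = {s for f, (lhs, _) in zip(flow, reactions) if f for s in lhs}
# A species that the flow both consumes and produces in surplus is the
# signature of autocatalysis in this toy formulation
autocatalytic = sorted(s for s in consumed if net[s] > 0)
print(dict(net), autocatalytic)
```

Here B is consumed by the first reaction yet has positive net production, so the flow amplifies it; in the full formalism such patterns are found by optimizing over integer flows rather than by fixing one by hand.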
2013-01-01
Background Poverty is multidimensional. Beyond the quantitative and tangible issues related to inadequate income, it also has equally important social dimensions that are more intangible and difficult, if not impossible, to quantify. In 2009, we explored these social and relativist dimensions of poverty in five communities in southern Ghana with differing socioeconomic characteristics to inform the development and implementation of policies and programs to identify and target the poor for premium exemptions under Ghana’s National Health Insurance Scheme. Methods We employed participatory wealth ranking (PWR), a qualitative tool for the exploration of community concepts and the identification and ranking of households into socioeconomic groups. Key informants within the community ranked households into wealth categories after discussing in detail concepts and indicators of poverty. Results Community-defined indicators of poverty covered themes related to type of employment, educational attainment of children, food availability, physical appearance, housing conditions, asset ownership, health-seeking behavior, social exclusion, and marginalization. The poverty indicators discussed shared commonalities but contrasted in the patterns of ranking per community. Conclusion The in-depth nature of the PWR process precludes it from being used for identification of the poor on a large national scale in a program such as the NHIS. However, PWR can provide valuable qualitative input to enrich discussions, development, and implementation of policies, programs, and tools for large-scale interventions and targeting of the poor for social welfare programs such as premium exemption for health care. PMID:23497484
Aryeetey, Genevieve C; Jehu-Appiah, Caroline; Kotoh, Agnes M; Spaan, Ernst; Arhinful, Daniel K; Baltussen, Rob; van der Geest, Sjaak; Agyepong, Irene A
2013-03-14
Poverty is multidimensional. Beyond the quantitative and tangible issues related to inadequate income, it also has equally important social dimensions that are more intangible and difficult, if not impossible, to quantify. In 2009, we explored these social and relativist dimensions of poverty in five communities in southern Ghana with differing socioeconomic characteristics to inform the development and implementation of policies and programs to identify and target the poor for premium exemptions under Ghana's National Health Insurance Scheme. We employed participatory wealth ranking (PWR), a qualitative tool for the exploration of community concepts and the identification and ranking of households into socioeconomic groups. Key informants within the community ranked households into wealth categories after discussing in detail concepts and indicators of poverty. Community-defined indicators of poverty covered themes related to type of employment, educational attainment of children, food availability, physical appearance, housing conditions, asset ownership, health-seeking behavior, social exclusion, and marginalization. The poverty indicators discussed shared commonalities but contrasted in the patterns of ranking per community. The in-depth nature of the PWR process precludes it from being used for identification of the poor on a large national scale in a program such as the NHIS. However, PWR can provide valuable qualitative input to enrich discussions, development, and implementation of policies, programs, and tools for large-scale interventions and targeting of the poor for social welfare programs such as premium exemption for health care.
Volcovich, Romina; Altcheh, Jaime; Bracamonte, Estefanía; Marco, Jorge D.; Nielsen, Morten; Buscaglia, Carlos A.
2017-01-01
Chagas Disease, caused by the protozoan Trypanosoma cruzi, is a major health and economic problem in Latin America for which no vaccine or appropriate drugs for large-scale public health interventions are yet available. Accurate diagnosis is essential for the early identification and follow-up of vector-borne cases and to prevent transmission of the disease by way of blood transfusions and organ transplantation. Diagnosis is routinely performed using serological methods, some of which require the production of parasite lysates, parasite antigenic fractions or purified recombinant antigens. Although available serological tests give satisfactory results, the production of reliable reagents remains laborious and expensive. Short peptides spanning linear B-cell epitopes have proven to be ideal serodiagnostic reagents in a wide range of diseases. Recently, we conducted a large-scale screening of T. cruzi linear B-cell epitopes using high-density peptide chips, leading to the identification of several hundred novel sequence signatures associated with chronic Chagas Disease. Here, we performed a serological assessment of 27 selected epitopes and of their use in a novel multipeptide-based diagnostic method. A combination of 7 of these peptides was finally evaluated in ELISA format against a panel of 199 sera samples (Chagas-positive and negative, including sera from Leishmaniasis-positive subjects). The multipeptide formulation displayed a high diagnostic performance, with a sensitivity of 96.3% and a specificity of 99.15%. Therefore, the use of synthetic peptides as diagnostic tools is an attractive alternative in Chagas’ disease diagnosis. PMID:28991925
Chen, Changlong; Chen, Yongpan; Jian, Heng; Yang, Dan; Dai, Yiran; Pan, Lingling; Shi, Fengwei; Yang, Shanshan; Liu, Qian
2018-01-01
Heterodera avenae is one of the most important plant pathogens and causes vast losses in cereal crops. As a sedentary endoparasitic nematode, H. avenae secretes effectors that modify plant defenses and promote its biotrophic infection of its hosts. However, the number of effectors involved in the interaction between H. avenae and host defenses remains unclear. Here, we report the identification of putative effectors in H. avenae that regulate plant defenses on a large scale. Our results showed that 78 of the 95 putative effectors suppressed programmed cell death (PCD) triggered by BAX and that 7 of the putative effectors themselves caused cell death in Nicotiana benthamiana. Among the cell-death-inducing effectors, three were found to be dependent on their specific domains to trigger cell death and to be expressed in esophageal gland cells by in situ hybridization. Ten candidate effectors that suppressed BAX-triggered PCD also suppressed PCD triggered by the elicitor PsojNIP and at least one R-protein/cognate effector pair, suggesting that they are active in suppressing both pattern-triggered immunity (PTI) and effector-triggered immunity (ETI). Notably, with the exception of isotig16060, these putative effectors could also suppress PCD triggered by cell-death-inducing effectors from H. avenae, indicating that those effectors may cooperate to promote nematode parasitism. Collectively, our results indicate that the majority of the tested effectors of H. avenae may play important roles in suppressing cell death induced by different elicitors in N. benthamiana. PMID:29379510
Multiscale global identification of porous structures
NASA Astrophysics Data System (ADS)
Hatłas, Marcin; Beluch, Witold
2018-01-01
The paper is devoted to the evolutionary identification of the material constants of porous structures based on measurements conducted on a macro scale. Numerical homogenization with the RVE concept is used to determine the equivalent properties of a macroscopically homogeneous material. Finite element method software is applied to solve the boundary-value problem in both scales. A global optimization method in the form of an evolutionary algorithm is employed to solve the identification task. Modal analysis is performed to collect the data necessary for the identification. A numerical example demonstrating the effectiveness of the proposed approach is included.
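The identification loop described above (simulate modal response, compare with measurements, evolve the material constants) can be sketched with a drastically simplified stand-in: a 2-DOF spring-mass system replaces the homogenized FEM model, and the spring constants play the role of material constants. Everything here, including the model and the mutation schedule, is an illustrative assumption, not the authors' FEM/RVE setup.

```python
import numpy as np

rng = np.random.default_rng(1)

def modal_freqs(k1, k2):
    # 2-DOF spring-mass stand-in for the FEM model: unit masses, a ground
    # spring k1 and a coupling spring k2 (both hypothetical constants)
    K = np.array([[k1 + k2, -k2], [-k2, k2]], dtype=float)
    return np.sort(np.sqrt(np.linalg.eigvalsh(K)))

target = modal_freqs(4.0, 1.5)  # plays the role of measured modal data

def misfit(p):
    # Least-squares distance between simulated and "measured" frequencies
    return np.sum((modal_freqs(*p) - target) ** 2)

# (mu + lambda) evolutionary loop with decaying Gaussian mutation
pop = rng.uniform(0.1, 10.0, size=(40, 2))
sigma = 0.5
for _ in range(200):
    children = np.clip(pop + rng.normal(0.0, sigma, pop.shape), 0.05, None)
    both = np.vstack([pop, children])
    pop = both[np.argsort([misfit(p) for p in both])][:40]
    sigma *= 0.97

best = pop[0]
print(best, misfit(best))
```

Note that even this toy inverse problem has two distinct stiffness pairs reproducing the same eigenfrequencies, which is why the misfit, not the recovered parameters, is the honest convergence measure; the same non-uniqueness concern motivates global (rather than gradient) optimization in the paper.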
A Comparative Analysis of Coprologic Diagnostic Methods for Detection of Toxoplasma gondii in Cats
Salant, Harold; Spira, Dan T.; Hamburger, Joseph
2010-01-01
The relative role of transmission of Toxoplasma gondii infection from cats to humans appears to have recently increased in certain areas. Large-scale screening of oocyst shedding in cats cannot rely on microscopy, because oocyst identification lacks sensitivity and specificity, or on bioassays, which require test animals and weeks before examination. We compared a sensitive and species-specific coprologic polymerase chain reaction (copro-PCR) for detection of T. gondii-infected cats with microscopy and a bioassay. In experimentally infected cats followed over time, microscopy was positive only occasionally, whereas positive copro-PCR and bioassay results were obtained continuously from days 2 to 24 post-infection. The copro-PCR is at least as sensitive and specific as the bioassay and is capable of detecting infective oocysts during cat infection. Therefore, this procedure can be used as the new gold standard for determining potential cat infectivity. Its technologic advantages over the bioassay make it superior for large-scale screening of cats. PMID:20439968
Efficient collective influence maximization in cascading processes with first-order transitions
Pei, Sen; Teng, Xian; Shaman, Jeffrey; Morone, Flaviano; Makse, Hernán A.
2017-01-01
In many social and biological networks, the collective dynamics of the entire system can be shaped by a small set of influential units through a global cascading process, manifested by an abrupt first-order transition in dynamical behaviors. Despite its importance in applications, efficient identification of multiple influential spreaders in cascading processes still remains a challenging task for large-scale networks. Here we address this issue by exploring the collective influence in general threshold models of cascading process. Our analysis reveals that the importance of spreaders is fixed by the subcritical paths along which cascades propagate: the number of subcritical paths attached to each spreader determines its contribution to global cascades. The concept of subcritical path allows us to introduce a scalable algorithm for massively large-scale networks. Results in both synthetic random graphs and real networks show that the proposed method can achieve larger collective influence given the same number of seeds compared with other scalable heuristic approaches. PMID:28349988
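The threshold-cascade dynamics in the abstract can be made concrete with a toy simulation. The sketch below uses the brute-force greedy baseline (pick each seed by its marginal cascade size), which is exactly the expensive approach the subcritical-path algorithm is designed to replace; the random graph, threshold m = 2, and seed budget are all made-up parameters.

```python
import random

random.seed(2)

# Random toy network; every node activates once m = 2 neighbors are active
n = 60
adj = {i: set() for i in range(n)}
for _ in range(150):
    u, v = random.randrange(n), random.randrange(n)
    if u != v:
        adj[u].add(v); adj[v].add(u)

def cascade(seeds, m=2):
    # Deterministic threshold dynamics: iterate to a fixed point
    active = set(seeds)
    changed = True
    while changed:
        changed = False
        for i in range(n):
            if i not in active and sum(j in active for j in adj[i]) >= m:
                active.add(i)
                changed = True
    return active

# Greedy seeding by marginal cascade size -- the brute-force baseline
# whose cost the subcritical-path algorithm is designed to avoid
seeds = set()
for _ in range(5):
    best = max((i for i in range(n) if i not in seeds),
               key=lambda i: len(cascade(seeds | {i})))
    seeds.add(best)
print(sorted(seeds), len(cascade(seeds)))
```

In the paper's terms, a node with exactly m - 1 active neighbors is subcritical, and counting the subcritical paths hanging off each candidate seed scores its contribution without re-simulating the full cascade for every candidate.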
Infrared Multiphoton Dissociation for Quantitative Shotgun Proteomics
Ledvina, Aaron R.; Lee, M. Violet; McAlister, Graeme C.; Westphall, Michael S.; Coon, Joshua J.
2012-01-01
We modified a dual-cell linear ion trap mass spectrometer to perform infrared multiphoton dissociation (IRMPD) in the low pressure trap of a dual-cell quadrupole linear ion trap (dual cell QLT) and perform large-scale IRMPD analyses of complex peptide mixtures. Upon optimization of activation parameters (precursor q-value, irradiation time, and photon flux), IRMPD subtly, but significantly outperforms resonant excitation CAD for peptides identified at a 1% false-discovery rate (FDR) from a yeast tryptic digest (95% confidence, p = 0.019). We further demonstrate that IRMPD is compatible with the analysis of isobaric-tagged peptides. Using fixed QLT RF amplitude allows for the consistent retention of reporter ions, but necessitates the use of variable IRMPD irradiation times, dependent upon precursor mass-to-charge (m/z). We show that IRMPD activation parameters can be tuned to allow for effective peptide identification and quantitation simultaneously. We thus conclude that IRMPD performed in a dual-cell ion trap is an effective option for the large-scale analysis of both unmodified and isobaric-tagged peptides. PMID:22480380
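The 1% false-discovery rate quoted above is conventionally estimated by the target-decoy strategy: search the spectra against real and reversed (decoy) sequences, then choose the lowest score cut-off at which decoys make up at most 1% of passing matches. The sketch below uses simulated score distributions and is a generic illustration of that estimate, not necessarily the exact pipeline used in the study.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated search-engine scores: correct matches score higher on average;
# the decoy distribution models the incorrect-match population
target_scores = np.concatenate([rng.normal(8.0, 2.0, 900),
                                rng.normal(2.0, 2.0, 300)])
decoy_scores = rng.normal(2.0, 2.0, 300)

def threshold_at_fdr(targets, decoys, fdr=0.01):
    # Lowest score cut-off whose target-decoy FDR estimate
    # (#decoys passing / #targets passing) meets the requested level
    for t in np.sort(np.concatenate([targets, decoys])):
        nt = (targets >= t).sum()
        nd = (decoys >= t).sum()
        if nt and nd / nt <= fdr:
            return t
    return None

t = threshold_at_fdr(target_scores, decoy_scores)
print(t, (target_scores >= t).sum())
```

Choosing the lowest passing threshold maximizes the number of reported identifications at the stated error level, which is how "peptides identified at a 1% FDR" is usually operationalized.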
Efficient collective influence maximization in cascading processes with first-order transitions
NASA Astrophysics Data System (ADS)
Pei, Sen; Teng, Xian; Shaman, Jeffrey; Morone, Flaviano; Makse, Hernán A.
2017-03-01
In many social and biological networks, the collective dynamics of the entire system can be shaped by a small set of influential units through a global cascading process, manifested by an abrupt first-order transition in dynamical behaviors. Despite its importance in applications, efficient identification of multiple influential spreaders in cascading processes still remains a challenging task for large-scale networks. Here we address this issue by exploring the collective influence in general threshold models of cascading process. Our analysis reveals that the importance of spreaders is fixed by the subcritical paths along which cascades propagate: the number of subcritical paths attached to each spreader determines its contribution to global cascades. The concept of subcritical path allows us to introduce a scalable algorithm for massively large-scale networks. Results in both synthetic random graphs and real networks show that the proposed method can achieve larger collective influence given the same number of seeds compared with other scalable heuristic approaches.
NASA Astrophysics Data System (ADS)
Mondal, Sudip; Hegarty, Evan; Martin, Chris; Gökçe, Sertan Kutal; Ghorashian, Navid; Ben-Yakar, Adela
2016-10-01
Next generation drug screening could benefit greatly from in vivo studies, using small animal models such as Caenorhabditis elegans for hit identification and lead optimization. Current in vivo assays can operate either at low throughput with high resolution or with low resolution at high throughput. To enable both high-throughput and high-resolution imaging of C. elegans, we developed an automated microfluidic platform. This platform can image 15 z-stacks of ~4,000 C. elegans from 96 different populations using a large-scale chip with a micron resolution in 16 min. Using this platform, we screened ~100,000 animals of the poly-glutamine aggregation model on 25 chips. We tested the efficacy of ~1,000 FDA-approved drugs in improving the aggregation phenotype of the model and identified four confirmed hits. This robust platform now enables high-content screening of various C. elegans disease models at the speed and cost of in vitro cell-based assays.
Ingestion of bacterially expressed double-stranded RNA inhibits gene expression in planarians.
Newmark, Phillip A; Reddien, Peter W; Cebrià, Francesc; Sánchez Alvarado, Alejandro
2003-09-30
Freshwater planarian flatworms are capable of regenerating complete organisms from tiny fragments of their bodies; the basis for this regenerative prowess is an experimentally accessible stem cell population that is present in the adult planarian. The study of these organisms, classic experimental models for investigating metazoan regeneration, has been revitalized by the application of modern molecular biological approaches. The identification of thousands of unique planarian ESTs, coupled with large-scale whole-mount in situ hybridization screens, and the ability to inhibit planarian gene expression through double-stranded RNA-mediated genetic interference, provide a wealth of tools for studying the molecular mechanisms that regulate tissue regeneration and stem cell biology in these organisms. Here we show that, as in Caenorhabditis elegans, ingestion of bacterially expressed double-stranded RNA can inhibit gene expression in planarians. This inhibition persists throughout the process of regeneration, allowing phenotypes with disrupted regenerative patterning to be identified. These results pave the way for large-scale screens for genes involved in regenerative processes.
A coronal hole and its identification as the source of a high velocity solar wind stream
NASA Technical Reports Server (NTRS)
Krieger, A. S.; Timothy, A. F.; Roelof, E. C.
1973-01-01
X-ray images of the solar corona showed a magnetically open structure in the low corona which extended from N20W20 to the south pole. Analysis of the measured X-ray intensities shows the density scale heights within the structure to be typically a factor of two less than that in the surrounding large scale magnetically closed regions. The structure is identified as a coronal hole. Wind measurements for the appropriate period were traced back to the sun by the method of instantaneous ideal spirals. A striking agreement was found between the Carrington longitude of the solar source of a recurrent high velocity solar wind stream and the position of the hole.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schaidle, Joshua A.; Habas, Susan E.; Baddour, Frederick G.
Catalyst design, from idea to commercialization, requires multi-disciplinary scientific and engineering research and development over 10-20 year time periods. Historically, the identification of new or improved catalyst materials has largely been an empirical trial-and-error process. However, advances in computational capabilities (new tools and increased processing power) coupled with new synthetic techniques have started to yield rationally-designed catalysts with controlled nano-structures and tailored properties. This technological advancement represents an opportunity to accelerate the catalyst development timeline and to deliver new materials that outperform existing industrial catalysts or enable new applications, once a number of unique challenges associated with the scale-up of nano-structured materials are overcome.
NASA Astrophysics Data System (ADS)
Engquist, Björn; Frederick, Christina; Huynh, Quyen; Zhou, Haomin
2017-06-01
We present a multiscale approach for identifying features in ocean beds by solving inverse problems in high frequency seafloor acoustics. The setting is based on Sound Navigation And Ranging (SONAR) imaging used in scientific, commercial, and military applications. The forward model incorporates multiscale simulations, by coupling Helmholtz equations and geometrical optics for a wide range of spatial scales in the seafloor geometry. This allows for detailed recovery of seafloor parameters including material type. Simulated backscattered data is generated using numerical microlocal analysis techniques. In order to lower the computational cost of the large-scale simulations in the inversion process, we take advantage of a pre-computed library of representative acoustic responses from various seafloor parameterizations.
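The role of the pre-computed library in the inversion can be sketched as a nearest-neighbor lookup in response space: solve the forward model offline on a parameter grid, then match measured backscatter against the stored curves. The forward model below is a smooth hypothetical stand-in for the coupled Helmholtz/geometrical-optics solver, and the parameter names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
freqs = np.linspace(1.0, 2.0, 32)

def response(params):
    # Hypothetical stand-in for the Helmholtz/geometrical-optics solver:
    # a smooth map from (roughness, impedance) to a backscatter curve
    r, z = params
    return np.cos(2 * np.pi * freqs * r) / (1 + z * freqs)

# Precompute the response library once, offline
grid = [(r, z) for r in np.linspace(0.1, 1.0, 30)
               for z in np.linspace(0.5, 3.0, 30)]
library = np.array([response(p) for p in grid])

# "Measured" data from unknown seafloor parameters, plus noise
truth = (0.62, 1.7)
data = response(truth) + rng.normal(0.0, 0.01, freqs.size)

# Inversion step: nearest library entry in the least-squares sense
best = grid[int(np.argmin(((library - data) ** 2).sum(axis=1)))]
print(best)
```

The design trade-off is the usual one: the expensive multiscale simulations are paid for once when building the library, so each inversion reduces to a cheap search whose resolution is set by the grid spacing.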
NASA Technical Reports Server (NTRS)
Lathram, E. H. (Principal Investigator)
1974-01-01
The author has identified the following significant results. A pattern of very old geostructures was recognized, reflecting structures in the crust. This pattern is not peculiar to Alaska, but can be recognized throughout the northern cordillera. A new metallogenic hypothesis for Alaska was developed, based on the relationship of space image linears to known mineral deposits. Using image linear analysis, regional geologic features were also recognized; these features may be used to guide in the location of undiscovered oil and/or gas accumulations in northern Alaska. The effectiveness of ERTS data in enhancing medium and small scale mapping was demonstrated. ERTS data were also used to recognize and monitor the state of large scale vehicular scars on Arctic tundra.
ERIC Educational Resources Information Center
Mohamed, Ahmed Hassan Hemdan; Kazem, Ali Mahdi; Pfeiffer, Steven; Alzubaidi, Abdul-Qawi; Elwan, Reda Abu; Ambosaidi, Abdullah; Al-Washahi, Mariam; Al-Kharosi, Tarek
2017-01-01
Research suggests that teacher-completed gifted screening scales can reduce undernomination of students with culturally and linguistically diverse backgrounds. The purpose of this study was to examine the use of the Gifted Rating Scales-School Form (GRS-S) in the identification of gifted students in Oman. The participants of the study represented…
Implementation and evaluation of a community-based interprofessional learning activity.
Luebbers, Ellen L; Dolansky, Mary A; Vehovec, Anton; Petty, Gayle
2017-01-01
Implementation of large-scale, meaningful interprofessional learning activities for pre-licensure students has significant barriers and requires novel approaches to ensure success. To accomplish this goal, faculty at Case Western Reserve University, Ohio, USA, used the Ottawa Model of Research Use (OMRU) framework to create, improve, and sustain a community-based interprofessional learning activity for large numbers of medical students (N = 177) and nursing students (N = 154). The model guided the process and included identification of context-specific barriers and facilitators, continual monitoring and improvement using data, and evaluation of student learning outcomes as well as programme outcomes. First year Case Western Reserve University medical students and undergraduate nursing students participated in team-structured prevention screening clinics in the Cleveland Metropolitan Public School District. Identification of barriers and facilitators assisted with overcoming logistic and scheduling issues, large class size, differing ages and skill levels of students and creating sustainability. Continual monitoring led to three distinct phases of improvement and resulted in the creation of an authentic team structure, role clarification, and relevance for students. Evaluation of student learning included both qualitative and quantitative methods, resulting in statistically significant findings and qualitative themes of learner outcomes. The OMRU implementation model provided a useful framework for successful implementation resulting in a sustainable interprofessional learning activity.
NASA Astrophysics Data System (ADS)
Rochat, Bertrand
2017-04-01
High-resolution (HR) MS instruments recording HR full scans allow analysts to go beyond pre-acquisition choices. Untargeted acquisition can reveal unexpected compounds or concentrations and can be performed as a preliminary diagnostic attempt. The revealed compounds must then be identified before interpretation. Whereas reference standards are mandatory to confirm an identification, the diverse information collected from HRMS allows unknown compounds to be identified with a relatively high degree of confidence even without reference standards injected in the same analytical sequence. However, the degree of confidence in putative identifications must be evaluated, possibly before further targeted analyses. This is why a confidence scale and a score for the identification of (non-peptidic) known-unknowns, defined as compounds with entries in a database, are proposed for (LC-) HRMS data. The scale is based on two representative documents edited by the European Commission (2007/657/EC) and the Metabolomics Standards Initiative (MSI), in an attempt to build a bridge between the metabolomics and screening-lab communities. With this confidence scale, an identification (ID) score is expressed as a number, a letter, and a number (e.g., 2D3), derived from the following three criteria: I, a General Identification Category (1, confirmed; 2, putatively identified; 3, annotated compounds/classes; and 4, unknown); II, a Chromatography Class based on the relative retention time (from the narrowest tolerance, A, to no chromatographic reference, D); and III, an Identification Point Level (1, very high; 2, high; and 3, normal) based on the number of identification points collected. Three putative identification examples of known-unknowns are presented.
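The three-part ID score is simple enough to encode directly. The sketch below transcribes the category and level labels from the abstract; the numeric retention-time tolerance cut-offs, however, are illustrative assumptions, since the abstract only says class A is the narrowest tolerance and class D means no chromatographic reference.

```python
# Criteria from the abstract: General Identification Category (1-4),
# Chromatography Class (A-D, by relative retention time tolerance),
# Identification Point Level (1-3). Tolerance cut-offs below are
# illustrative, not taken from the paper.
CATEGORY = {1: "confirmed", 2: "putatively identified",
            3: "annotated compound/class", 4: "unknown"}
IP_LEVEL = {1: "very high", 2: "high", 3: "normal"}

def chromatography_class(rrt_error):
    # rrt_error: relative retention time deviation from a reference;
    # None means no chromatographic reference was available
    if rrt_error is None:
        return "D"
    for cls, tol in (("A", 0.01), ("B", 0.025), ("C", 0.05)):
        if abs(rrt_error) <= tol:
            return cls
    return "D"

def id_score(category, rrt_error, ip_level):
    # Compose the [number, letter, number] score, e.g. "2D3"
    return f"{category}{chromatography_class(rrt_error)}{ip_level}"

print(id_score(2, None, 3))   # putative ID, no RT reference -> "2D3"
print(id_score(1, 0.004, 1))  # confirmed, tight RT match    -> "1A1"
```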
Jones, K.B.; Neale, A.C.; Wade, T.G.; Wickham, J.D.; Cross, C.L.; Edmonds, C.M.; Loveland, Thomas R.; Nash, M.S.; Riitters, K.H.; Smith, E.R.
2001-01-01
Spatially explicit identification of changes in ecological conditions over large areas is key to targeting and prioritizing areas for environmental protection and restoration by managers at watershed, basin, and regional scales. A critical limitation to this point has been the development of methods to conduct such broad-scale assessments. Field-based methods have proven to be too costly and too inconsistent in their application to make estimates of ecological conditions over large areas. New spatial data derived from satellite imagery and other sources, the development of statistical models relating landscape composition and pattern to ecological endpoints, and geographic information systems (GIS) make it possible to evaluate ecological conditions at multiple scales over broad geographic regions. In this study, we demonstrate the application of spatially distributed models for bird habitat quality and nitrogen yield to streams to assess the consequences of landcover change across the mid-Atlantic region between the 1970s and 1990s. Moreover, we present a way to evaluate spatial concordance between models related to different environmental endpoints. Results of this study should help environmental managers in the mid-Atlantic region target those areas in need of conservation and protection.
Seghezzo, Lucas; Venencia, Cristian; Buliubasich, E Catalina; Iribarnegaray, Martín A; Volante, José N
2017-02-01
Conflicts over land use and ownership are common in South America and generate frequent confrontations among indigenous peoples, small-scale farmers, and large-scale agricultural producers. We argue in this paper that an accurate identification of these conflicts, together with a participatory evaluation of their importance, will increase the social legitimacy of land use planning processes, rendering decision-making more sustainable in the long term. We describe here a participatory, multi-criteria conflict assessment model developed to identify, locate, and categorize land tenure and use conflicts. The model was applied to the case of the "Chaco" region of the province of Salta, in northwestern Argentina. Basic geographic, cadastral, and social information needed to apply the model was made spatially explicit on a Geographic Information System. Results illustrate the contrasting perceptions of different stakeholders (government officials, social and environmental non-governmental organizations, large-scale agricultural producers, and scholars) on the intensity of land use conflicts in the study area. These results can help better understand and address land tenure conflicts in areas with different cultures and conflicting social and environmental interests.
NASA Astrophysics Data System (ADS)
Seghezzo, Lucas; Venencia, Cristian; Buliubasich, E. Catalina; Iribarnegaray, Martín A.; Volante, José N.
2017-02-01
Conflicts over land use and ownership are common in South America and generate frequent confrontations among indigenous peoples, small-scale farmers, and large-scale agricultural producers. We argue in this paper that an accurate identification of these conflicts, together with a participatory evaluation of their importance, will increase the social legitimacy of land use planning processes, rendering decision-making more sustainable in the long term. We describe here a participatory, multi-criteria conflict assessment model developed to identify, locate, and categorize land tenure and use conflicts. The model was applied to the case of the "Chaco" region of the province of Salta, in northwestern Argentina. Basic geographic, cadastral, and social information needed to apply the model was made spatially explicit on a Geographic Information System. Results illustrate the contrasting perceptions of different stakeholders (government officials, social and environmental non-governmental organizations, large-scale agricultural producers, and scholars) on the intensity of land use conflicts in the study area. These results can help better understand and address land tenure conflicts in areas with different cultures and conflicting social and environmental interests.
Lahuerta, Maria; Ue, Frances; Hoffman, Susie; Elul, Batya; Kulkarni, Sarah Gorrell; Wu, Yingfeng; Nuwagaba-Biribonwoha, Harriet; Remien, Robert H.; Sadr, Wafaa El; Nash, Denis
2013-01-01
Efforts to scale-up HIV care and treatment have been successful at initiating large numbers of patients onto antiretroviral therapy (ART), although persistent challenges remain to optimizing scale-up effectiveness in both resource-rich and resource-limited settings. Among the most important are very high rates of ART initiation in the advanced stages of HIV disease, which in turn drive morbidity, mortality, and onward transmission of HIV. With a focus on sub-Saharan Africa, this review article presents a conceptual framework for a broader discussion of the persistent problem of late ART initiation, including a need for more focus on the upstream precursors (late HIV diagnosis and late enrollment into HIV care) and their determinants. Without additional research and identification of multilevel interventions that successfully promote earlier initiation of ART, the problem of late ART initiation will persist, significantly undermining the long-term impact of HIV care scale-up on reducing mortality and controlling the HIV epidemic. PMID:23377739
Assessment of automatic ligand building in ARP/wARP.
Evrard, Guillaume X; Langer, Gerrit G; Perrakis, Anastassis; Lamzin, Victor S
2007-01-01
The efficiency of the ligand-building module of ARP/wARP version 6.1 has been assessed through extensive tests on a large variety of protein-ligand complexes from the PDB, as available from the Uppsala Electron Density Server. Ligand building in ARP/wARP involves two main steps: automatic identification of the location of the ligand and the actual construction of its atomic model. The first step is most successful for large ligands. The second step, ligand construction, is more powerful with X-ray data at high resolution and ligands of small to medium size. Both steps are successful for ligands with low to moderate atomic displacement parameters. The results highlight the strengths and weaknesses of both the method of ligand building and the large-scale validation procedure and help to identify means of further improvement.
Volumetric three-component velocimetry measurements of the turbulent flow around a Rushton turbine
NASA Astrophysics Data System (ADS)
Sharp, Kendra V.; Hill, David; Troolin, Daniel; Walters, Geoffrey; Lai, Wing
2010-01-01
Volumetric three-component velocimetry measurements have been taken of the flow field near a Rushton turbine in a stirred tank reactor. This particular flow field is highly unsteady and three-dimensional, and is characterized by a strong radial jet, large tank-scale ring vortices, and small-scale blade tip vortices. The experimental technique uses a single camera head with three apertures to obtain approximately 15,000 three-dimensional vectors in a cubic volume. These velocity data offer the most comprehensive view to date of this flow field, especially since they are acquired at three Reynolds numbers (15,000, 107,000, and 137,000). Mean velocity fields and turbulent kinetic energy quantities are calculated. The volumetric nature of the data enables tip vortex identification, vortex trajectory analysis, and calculation of vortex strength. Three identification methods for the vortices are compared based on: the calculation of circumferential vorticity; the calculation of local pressure minima via an eigenvalue approach; and the calculation of swirling strength again via an eigenvalue approach. The use of two-dimensional data and three-dimensional data is compared for vortex identification; a `swirl strength' criterion is less sensitive to completeness of the velocity gradient tensor and overall provides clearer identification of the tip vortices. The principal components of the strain rate tensor are also calculated for one Reynolds number case as these measures of stretching and compression have recently been associated with tip vortex characterization. Vortex trajectories and strength compare favorably with those in the literature. No clear dependence of trajectory on Reynolds number is deduced. The visualization of tip vortices up to 140° past blade passage in the highest Reynolds number case is notable and has not previously been shown.
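The swirling-strength criterion compared above can be sketched compactly: in two dimensions it is the imaginary part of the complex eigenvalue pair of the velocity gradient tensor, which is why it degrades gracefully when the full 3-D gradient tensor is incomplete. A minimal illustration of the idea (not the authors' processing code):

```python
import math

def swirling_strength_2d(dudx, dudy, dvdx, dvdy):
    """2-D swirling strength (lambda_ci): the imaginary part of the
    complex eigenvalue pair of the velocity gradient tensor
    [[du/dx, du/dy], [dv/dx, dv/dy]].  Real eigenvalues -> 0 (no swirl)."""
    trace = dudx + dvdy
    det = dudx * dvdy - dudy * dvdx
    disc = trace * trace - 4.0 * det   # eigenvalues: (tr +/- sqrt(disc)) / 2
    if disc >= 0.0:                    # purely real eigenvalues: shear, not swirl
        return 0.0
    return math.sqrt(-disc) / 2.0      # imaginary part of the complex pair

# Solid-body rotation u = -omega*y, v = omega*x gives lambda_ci = omega,
# while pure shear u = gamma*y, v = 0 gives lambda_ci = 0.
```

Applied point-by-point on a vector field, thresholding this quantity isolates tip vortices while ignoring the strong shear of the radial jet, which is the behaviour the comparison above favours.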
Describing Ecosystem Complexity through Integrated Catchment Modeling
NASA Astrophysics Data System (ADS)
Shope, C. L.; Tenhunen, J. D.; Peiffer, S.
2011-12-01
Land use and climate change have been implicated in reduced ecosystem services (i.e., high-quality water yield, biodiversity, and agricultural yield). The prediction of ecosystem services expected under future land use decisions and changing climate conditions has become increasingly important. Complex policy and management decisions require the integration of physical, economic, and social data over several scales to assess effects on water resources and ecology. Field-based meteorology, hydrology, soil physics, plant production, solute and sediment transport, economic, and social behavior data were measured in a South Korean catchment. A variety of models are being used to simulate plot- and field-scale experiments within the catchment. Results from each of the local-scale models provide identification of sensitive, local-scale parameters, which are then used as inputs into a large-scale watershed model. We used the spatially distributed SWAT model to synthesize the experimental field data throughout the catchment. The premise of our study is that the range in local-scale model parameter results can be used to define the sensitivity and uncertainty in the large-scale watershed model. Further, this example shows how research can be structured for scientific results describing complex ecosystems and landscapes where cross-disciplinary linkages benefit the end result. The field-based and modeling framework described is being used to develop scenarios to examine spatial and temporal changes in land use practices and climatic effects on water quantity, water quality, and sediment transport. Development of accurate modeling scenarios requires understanding the social relationship between individual and policy-driven land management practices and the value of sustainable resources to all stakeholders.
Deep learning with non-medical training used for chest pathology identification
NASA Astrophysics Data System (ADS)
Bar, Yaniv; Diamant, Idit; Wolf, Lior; Greenspan, Hayit
2015-03-01
In this work, we examine the strength of deep learning approaches for pathology detection in chest radiograph data. Convolutional neural network (CNN) deep architecture classification approaches have gained popularity due to their ability to learn mid- and high-level image representations. We explore the ability of a CNN to identify different types of pathologies in chest x-ray images. Moreover, since very large training sets are generally not available in the medical domain, we explore the feasibility of using a deep learning approach based on non-medical learning. We tested our algorithm on a dataset of 93 images. We use a CNN that was trained with ImageNet, a well-known large-scale non-medical image database. The best performance was achieved using a combination of features extracted from the CNN and a set of low-level features. We obtained an area under the curve (AUC) of 0.93 for right pleural effusion detection, 0.89 for enlarged heart detection, and 0.79 for classification between healthy and abnormal chest x-rays, where all pathologies are combined into one large class. This is a first-of-its-kind experiment that shows that deep learning with large-scale non-medical image databases may be sufficient for general medical image recognition tasks.
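The reported AUC values can be computed directly from classifier scores via the rank (Mann-Whitney) formulation, a sketch of which follows; the scores below are placeholders, not the study's data:

```python
def auc_from_scores(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a randomly chosen positive case outscores a
    randomly chosen negative case (ties count one half)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Perfectly separated scores give AUC = 1.0; identical scores give 0.5.
```

On a dataset as small as 93 images, this exact pairwise form is cheap and avoids the binning error of trapezoid-based ROC integration.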
Initial velocity V-shapes of young asteroid families
NASA Astrophysics Data System (ADS)
Bolin, Bryce T.; Walsh, Kevin J.; Morbidelli, Alessandro; Delbó, Marco
2018-01-01
Ejection velocity fields of asteroid families are largely unconstrained because members disperse relatively quickly, on Myr time-scales, by secular resonances and the Yarkovsky effect. The spreading of fragments in semimajor axis, a, by the Yarkovsky effect is indistinguishable from the spreading caused by the initial ejection of fragments. By examining families <20 Myr old, we can use the V-shape identification technique to separate family shapes that are due to the initial ejection velocity field from those that are due to the Yarkovsky effect. Asteroid families that are <20 Myr old provide an opportunity to study the velocity field of family fragments before they become too dispersed. Only the Karin family's initial velocity field has been determined; it scales inversely with diameter, as D^-1. We have applied the V-shape identification technique to constrain young families' initial ejection velocity fields by measuring the curvature of their fragments' V-shape correlation in semimajor axis, a, versus D^-1 space. Curvature away from a straight line implies a deviation from a scaling of D^-1. We measure the V-shape curvature of 11 young asteroid families including the 1993 FY12, Aeolia, Brangane, Brasilia, Clarissa, Iannini, Karin, Konig, Koronis(2), Theobalda and Veritas asteroid families. We find that the majority of asteroid families have initial ejection velocity fields consistent with ~D^-1, supporting laboratory impact experiments and computer simulations of disrupting asteroid parent bodies.
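The D^-1 test described above amounts to checking whether fragment spread in semimajor axis grows linearly with inverse diameter. A least-squares sketch of that check (variable names and units are illustrative, not the authors' pipeline):

```python
import math

def ejection_scaling_exponent(delta_a, inv_diameter):
    """Least-squares slope of log|da| versus log(1/D) for family fragments.
    A slope near 1 means the V-shape border scales as D^-1 (a straight line
    in a versus 1/D space); a different slope indicates curvature.
    delta_a: |a - a_center| in au; inv_diameter: 1/D in km^-1 (hypothetical)."""
    xs = [math.log(x) for x in inv_diameter]
    ys = [math.log(y) for y in delta_a]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return sxy / sxx
```

Fragments spread exactly proportionally to 1/D return a slope of 1; a measurably different slope is the curvature signal the study uses to separate ejection from Yarkovsky drift.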
Structural similitude and design of scaled down laminated models
NASA Technical Reports Server (NTRS)
Simitses, G. J.; Rezaeepazhand, J.
1993-01-01
The excellent mechanical properties of laminated composite structures make them prime candidates for a wide variety of applications in aerospace, mechanical, and other branches of engineering. The enormous design flexibility of advanced composites is obtained at the cost of a large number of design parameters. Due to the complexity of these systems and the lack of complete design-based information, designers tend to be conservative in their designs. Furthermore, any new design is extensively evaluated experimentally until it achieves the necessary reliability, performance, and safety. However, the experimental evaluation of composite structures is costly and time consuming. Consequently, it is extremely useful if a full-scale structure can be replaced by a similar scaled-down model which is much easier to work with. Furthermore, a dramatic reduction in cost and time can be achieved if available experimental data for a specific structure can be used to predict the behavior of a group of similar systems. This study investigates problems associated with the design of scaled models. Such a study is important since it provides the necessary scaling laws and identifies the factors which affect the accuracy of the scale models. Similitude theory is employed to develop the necessary similarity conditions (scaling laws). Scaling laws provide the relationship between a full-scale structure and its scale model, and can be used to extrapolate the experimental data of a small, inexpensive, and testable model into design information for a large prototype. Due to the large number of design parameters, the identification of the principal scaling laws by the conventional method (dimensional analysis) is tedious. Similitude theory based on the governing equations of the structural system is more direct and simpler in execution. The difficulty of making completely similar scale models often leads to accepting a certain type of distortion from exact duplication of the prototype (partial similarity).
Both complete and partial similarity are discussed. The procedure consists of systematically observing the effect of each parameter and the corresponding scaling laws. Acceptable intervals and limitations for these parameters and scaling laws are then discussed. In each case, a set of valid scaling factors and corresponding response scaling laws that accurately predict the response of prototypes from experimental models is introduced. The examples used include rectangular laminated plates under destabilizing loads applied individually, the vibrational characteristics of the same plates, as well as the cylindrical bending of beam-plates.
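As a toy illustration of how a response scaling law extrapolates model data to a prototype, consider the classical isotropic thin-plate frequency relation f ∝ (h/a²)√(E/ρ). The laminate-specific laws derived in the study are more involved; this sketch only shows the extrapolation step:

```python
import math

def prototype_frequency(f_model, scale):
    """Predict a prototype plate's natural frequency from a scale-model test
    using the classical thin-plate law  f ~ (h / a^2) * sqrt(E / rho).
    `scale` maps each parameter to lambda = prototype / model.
    (Illustrative isotropic law, not the laminate-specific laws of the paper.)"""
    lam = (scale["h"] / scale["a"] ** 2) * math.sqrt(scale["E"] / scale["rho"])
    return f_model * lam

# A geometrically similar prototype twice the model's size in h and a,
# same material, vibrates at half the measured model frequency.
```

The same pattern applies to buckling loads or deflections: measure on the small model, multiply by the response scaling factor assembled from the parameter scale factors.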
Bi, Jianjun; Song, Rengang; Yang, Huilan; Li, Bingling; Fan, Jianyong; Liu, Zhongrong; Long, Chaoqin
2011-01-01
Identification of immunodominant epitopes is the first step in the rational design of peptide vaccines aimed at T-cell immunity. To date, however, accurately and efficiently predicting potent epitope peptides from a large pool of candidates remains a great challenge. In this study, a method that we named StepRank has been developed for the reliable and rapid prediction of binding capabilities/affinities between proteins and genome-wide peptides. In this procedure, instead of the single strategy used in most traditional epitope identification algorithms, four steps with different purposes and thus different computational demands are employed in turn to screen the large-scale peptide candidates that are normally generated from, for example, a pathogenic genome. Steps 1 and 2 aim at the qualitative exclusion of typical nonbinders by using an empirical rule and a linear statistical approach, while steps 3 and 4 focus on the quantitative examination and prediction of the interaction energy profile and binding affinity of a peptide to the target protein via quantitative structure-activity relationship (QSAR) and structure-based free energy analysis. We exemplify this method through its application to binding predictions of the peptide segments derived from the 76 known open-reading frames (ORFs) of the herpes simplex virus type 1 (HSV-1) genome with or without affinity to the human major histocompatibility complex class I (MHC I) molecule HLA-A*0201, and find that the predictive results are well compatible with the classical anchor residue theory and match the extended motif pattern of MHC I-binding peptides. The putative epitopes are further confirmed by comparisons with 11 experimentally measured HLA-A*0201-restricted peptides from the HSV-1 glycoproteins D and K. We expect that this well-designed scheme can be applied in the computational screening of other viral genomes as well.
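The four-step funnel can be sketched as a chain of increasingly expensive filters, each running only on the survivors of the previous one. The stage predicates below (a 9-mer length check and a simplified HLA-A*0201 anchor rule at positions 2 and 9) are stand-ins for the published empirical, statistical, QSAR and free-energy stages:

```python
def staged_screen(candidates, stages):
    """Funnel screening in the spirit of StepRank: each stage is a
    (predicate, name) pair ordered cheap -> expensive; only survivors of
    stage k reach stage k+1, so costly scoring runs on few peptides.
    (Stage functions here are placeholders, not the published scoring terms.)"""
    survivors = list(candidates)
    for predicate, name in stages:
        survivors = [c for c in survivors if predicate(c)]
    return survivors

# Hypothetical 9-mer screen: a length check, then a coarse anchor rule
# (HLA-A*0201 favours L/M at position 2 and V/L/I at position 9).
stages = [
    (lambda p: len(p) == 9, "length filter"),
    (lambda p: p[1] in "LM" and p[-1] in "VLI", "anchor rule"),
]
```

The point of the ordering is computational: an O(1) rule check eliminates most of a genome-wide candidate pool before any structure-based free energy calculation is attempted.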
System identification through nonstationary data using Time-Frequency Blind Source Separation
NASA Astrophysics Data System (ADS)
Guo, Yanlin; Kareem, Ahsan
2016-06-01
Classical output-only system identification (SI) methods are based on the assumption of stationarity of the system response. However, the measured response of buildings and bridges is usually non-stationary due to strong winds (e.g., typhoons and thunderstorms), earthquakes and time-varying vehicle motions. Accordingly, the response data may have time-varying frequency contents and/or overlapping of modal frequencies due to non-stationary colored excitation. This renders traditional methods problematic for modal separation and identification. To address these challenges, a new SI technique based on Time-Frequency Blind Source Separation (TFBSS) is proposed. By selectively utilizing "effective" information in local regions of the time-frequency plane, where only one mode contributes to energy, the proposed technique can successfully identify mode shapes and recover modal responses from the non-stationary response where the traditional SI methods often encounter difficulties. This technique can also handle response with closely spaced modes, which is a well-known challenge for the identification of large-scale structures. Based on the separated modal responses, frequency and damping can be easily identified using SI methods based on a single degree of freedom (SDOF) system. In addition to the exclusive advantage of handling non-stationary data and closely spaced modes, the proposed technique also benefits from the absence of end effects and low sensitivity to noise in modal separation. The efficacy of the proposed technique is demonstrated using several simulation-based studies, and compared to the popular Second-Order Blind Identification (SOBI) scheme. It is also noted that even some non-stationary response data can be analyzed by the stationary method SOBI. This paper also delineates non-stationary cases where SOBI and the proposed scheme perform comparably and highlights cases where the proposed approach is more advantageous.
Finally, the performance of the proposed method is evaluated using the full-scale non-stationary response of a tall building during an earthquake and is found to be satisfactory.
NASA Astrophysics Data System (ADS)
Forte, Biagio; Coleman, Chris; Skone, Susan; Häggström, Ingemar; Mitchell, Cathryn; Da Dalt, Federico; Panicciari, Tommaso; Kinrade, Joe; Bust, Gary
2017-01-01
Ionospheric scintillation originates from the scattering of electromagnetic waves through spatial gradients in the plasma density distribution, drifting across a given propagation direction. Ionospheric scintillation represents a disruptive manifestation of adverse space weather conditions through degradation of the reliability and continuity of satellite telecommunication and navigation systems and services (e.g., the European Geostationary Navigation Overlay Service, EGNOS). The purpose of the experiment presented here was to determine the contribution of auroral ionization structures to GPS scintillation. European Incoherent Scatter (EISCAT) measurements were obtained along the same line of sight of a given GPS satellite observed from Tromso and followed by means of the EISCAT UHF radar to causally identify plasma structures that give rise to scintillation on the co-aligned GPS radio link. Large-scale structures associated with the poleward edge of the ionospheric trough, with auroral arcs in the nightside auroral oval and with particle precipitation at the onset of a substorm were indeed identified as responsible for enhanced phase scintillation at L band. For the first time, it was observed that these large-scale structures did not cascade into smaller-scale structures, leading to enhanced phase scintillation without amplitude scintillation. More measurements and theory are necessary to understand the mechanism responsible for the inhibition of the large-scale to small-scale energy cascade and to reproduce the observations. This aspect is fundamental to modelling the scattering of radio waves propagating through these ionization structures. New insights from this experiment allow a better characterization of the impact that space weather can have on satellite telecommunications and navigation services.
NASA Technical Reports Server (NTRS)
1975-01-01
A separation method to provide reasonable yields of high-specificity isoenzymes for the purpose of large-scale, early clinical diagnosis of diseases and organic damage such as myocardial infarction, hepatoma, muscular dystrophy, and infectious disorders is presented. Preliminary development plans are summarized. An analysis of required research and development and production resources is included. The costs of such resources and the potential profitability of a commercial space processing opportunity for electrophoretic separation of high-specificity isoenzymes are reviewed.
Smart sensors II; Proceedings of the Seminar, San Diego, CA, July 31, August 1, 1980
NASA Astrophysics Data System (ADS)
Barbe, D. F.
1980-01-01
Topics discussed include technology for smart sensors, smart sensors for tracking and surveillance, and techniques and algorithms for smart sensors. Papers are presented on the application of very large scale integrated circuits to smart sensors, imaging charge-coupled devices for deep-space surveillance, ultra-precise star tracking using charge coupled devices, and automatic target identification of blurred images with super-resolution features. Attention is also given to smart sensors for terminal homing, algorithms for estimating image position, and the computational efficiency of multiple image registration algorithms.
Identification of Curie temperature distributions in magnetic particulate systems
NASA Astrophysics Data System (ADS)
Waters, J.; Berger, A.; Kramer, D.; Fangohr, H.; Hovorka, O.
2017-09-01
This paper develops a methodology for extracting the Curie temperature distribution from magnetisation versus temperature measurements which are realizable by standard laboratory magnetometry. The method is integral in nature, robust against various sources of measurement noise, and can be adapted to a wide range of granular magnetic materials and magnetic particle systems. The validity and practicality of the method are demonstrated using large-scale Monte-Carlo simulations of an Ising-like model as a proof of concept, and general conclusions are drawn about its applicability to different classes of systems and experimental conditions.
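One crude way to see why a Curie temperature distribution is recoverable from M(T) data: each grain's magnetisation collapses near its own Tc, so the negative slope -dM/dT of the aggregate curve approximates the Tc density. The sketch below uses simple central differences; the paper's actual method is integral in nature and far more robust to noise:

```python
def tc_density_from_mt(temps, magnetisation):
    """Crude estimate of a Curie-temperature density as -dM/dT of an
    aggregate magnetisation-vs-temperature curve (central differences).
    A simplified illustration only; differentiating noisy data amplifies
    noise, which is why an integral formulation is preferable in practice."""
    dens = []
    for i in range(1, len(temps) - 1):
        slope = (magnetisation[i + 1] - magnetisation[i - 1]) / \
                (temps[i + 1] - temps[i - 1])
        dens.append((temps[i], -slope))
    return dens
```

A linearly decaying M(T) curve, for instance, corresponds to a flat (uniform) Tc density over the measured range.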
NASA Technical Reports Server (NTRS)
Nalepka, R. F. (Principal Investigator); Kauth, R. J.; Thomas, G. S.
1976-01-01
The author has identified the following significant results. A conceptual man-machine system framework was created for a large-scale agricultural remote sensing system. The system is based on, and can grow out of, the local recognition mode of LACIE, through a gradual transition wherein computer support functions supplement and replace AI functions. Local proportion estimation functions are broken into two broad classes: (1) organization of the data within the sample segment; and (2) identification of the fields or groups of fields in the sample segment.
NASA Technical Reports Server (NTRS)
Roberts, J. Brent; Robertson, F. R.; Funk, C.
2014-01-01
Hidden Markov models can be used to investigate the structure of subseasonal variability. East African short-rain variability has connections to large-scale tropical variability: MJO intraseasonal variations are connected with the appearance of "wet" and "dry" states, and ENSO/IOZM SST and circulation anomalies are apparent during years of anomalous residence time in the subseasonal "wet" state. Similar results were found in previous studies, but here they can be interpreted with respect to variations of the subseasonal wet and dry modes, revealing underlying connections between the MJO, IOZM, and ENSO and East African rainfall.
Deutsch, Diana; Li, Xiaonuo; Shen, Jing
2013-11-01
This paper reports a large-scale direct-test study of absolute pitch (AP) in students at the Shanghai Conservatory of Music. Overall note-naming scores were very high, with high scores correlating positively with early onset of musical training. Students who had begun training at age ≤5 yr scored 83% correct not allowing for semitone errors and 90% correct allowing for semitone errors. Performance levels were higher for white key pitches than for black key pitches. This effect was greater for orchestral performers than for pianists, indicating that it cannot be attributed to early training on the piano. Rather, accuracy in identifying notes of different names (C, C#, D, etc.) correlated with their frequency of occurrence in a large sample of music taken from the Western tonal repertoire. There was also an effect of pitch range, so that performance on tones in the two-octave range beginning on Middle C was higher than on tones in the octave below Middle C. In addition, semitone errors tended to be on the sharp side. The evidence also ran counter to the hypothesis, previously advanced by others, that the note A plays a special role in pitch identification judgments.
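The two scoring conventions used above (exact versus semitone-tolerant percent correct) are easy to make precise if responses and targets are coded as semitone numbers; the sketch below uses hypothetical data, not the Shanghai Conservatory results:

```python
def note_naming_scores(responses, targets):
    """Percent correct with and without allowing semitone errors, given
    answers coded as semitone numbers (e.g. MIDI note values, 60 = Middle C).
    Returns (exact_pct, within_semitone_pct)."""
    exact = sum(r == t for r, t in zip(responses, targets))
    within1 = sum(abs(r - t) <= 1 for r, t in zip(responses, targets))
    n = len(targets)
    return 100.0 * exact / n, 100.0 * within1 / n
```

Coding answers on a semitone scale also makes the reported sharp-side bias checkable: the sign of (response - target) over all errors reveals whether misses cluster above or below the target pitch.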
Association between major depressive disorder and odor identification impairment.
Khil, Laura; Rahe, Corinna; Wellmann, Jürgen; Baune, Bernhard T; Wersching, Heike; Berger, Klaus
2016-10-01
There is evidence of olfactory deficits in patients with major depressive disorder (MDD), but the causes and mechanisms are largely unknown. We compared 728 patients with current MDD and 555 non-depressed controls regarding odor identification impairment, taking into account the severity of acute symptoms and of the disease course. We assessed current symptom severity with the Hamilton Depression Rating Scale, and disease course severity based on admission diagnosis (ICD-10, F32/F33) and self-reported hospitalization frequency, defined as infrequent (<2) and frequent (≥2) depression-related hospitalizations under constant disease duration. A score of <10 on the Sniffin' Sticks-Screen-12 test determined the presence of odor identification impairment. Compared to non-depressed controls, patients with frequent (rapidly recurring) hospitalizations had an elevated chance of odor identification impairment, even after adjustment for smell-influencing factors such as age and smoking (OR=1.7; 95% CI 1.0-2.9). Patients with recurrent MDD (F33) also had elevated odds of odor identification impairment compared to those with a first-time episode (F32; OR=1.5; 95% CI 1.0-2.4). In patients with a first-time episode, the chance of odor identification impairment increased by 7% with each point increase in the Hamilton score. This was a cross-sectional study, and variation in the use of psychotropic medication is a potential bias. Odor identification impairment was evident in MDD patients with first-time high symptom severity and in patients with a severe disease course. Whether odor identification impairment is a marker or mediator of structural and functional brain changes associated with acute or active MDD requires further investigation in longitudinal studies. Copyright © 2016 Elsevier B.V. All rights reserved.
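Odds ratios with Wald confidence intervals like those reported above follow the standard 2×2-table formulas OR = ad/bc and exp(ln OR ± 1.96·SE) with SE = √(1/a + 1/b + 1/c + 1/d); a sketch with illustrative counts (not the study's cell counts):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed cases, b = exposed controls,
    c = unexposed cases, d = unexposed controls.
    Counts here are illustrative, not the study's data."""
    odds_ratio = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of ln(OR)
    lo = math.exp(math.log(odds_ratio) - z * se)
    hi = math.exp(math.log(odds_ratio) + z * se)
    return odds_ratio, lo, hi
```

A CI whose lower bound sits at 1.0, as in both reported estimates, flags an association at the edge of conventional significance.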
Relationship of dysfunctional sport fandom with dislike for rivals in a sample of college students.
Smith, Jana; Wann, Daniel L
2006-06-01
The relationships between sport-fandom dysfunctionality (tendencies toward complaining and confrontation, as assessed via the Dysfunctional Sport Fandom Scale) and items assessing team identification (assessed via the Sport Spectator Identification Scale) were examined with 87 college students (24 men, 63 women, M age=20.2 yr.). Although positive associations between dysfunction and identification were found, contrary to expectations, the relationship between dysfunction and dislike for rivals was not particularly strong.
NASA Astrophysics Data System (ADS)
Akanda, A. S.; Jutla, A.; Huq, A.; Colwell, R. R.
2014-12-01
Cholera is a global disease, with significantly large outbreaks occurring since the 1990s, notably in Sub-Saharan Africa and South Asia, and recently in Haiti, in the Caribbean. Critical knowledge gaps remain in the understanding of the annual recurrence in endemic areas and the nature of epidemic outbreaks, especially those that follow extreme hydroclimatic events. Teleconnections with large-scale climate phenomena affecting regional-scale hydroclimatic drivers of cholera dynamics remain largely unexplained. For centuries, the Bengal delta region has been strongly influenced by the asymmetric availability of water in the rivers Ganges and Brahmaputra. As these two major rivers are known to have strongly contrasting effects on local cholera dynamics in the region, we argue that the role of the El Nino-Southern Oscillation (ENSO), the Indian Ocean Dipole (IOD), or other phenomena needs to be interpreted in the context of the seasonal role of individual rivers and the subsequent impact on local environmental processes, not as a teleconnection having a remote and unified effect. We present a modified hypothesis that the influences of large-scale climate phenomena such as ENSO and IOD on Bengal cholera can be explicitly identified and incorporated through regional-scale hydroclimatic drivers. Here, we provide an analytical review of the literature addressing cholera and climate linkages and present hypotheses, based on recent evidence, and quantification of the role of regional-scale hydroclimatic drivers of cholera. We argue that the seasonal changes in precipitation and temperature, and the resulting river discharge in the GBM basin region during ENSO and IOD events, have a dominant combined effect on the endemic persistence and the epidemic vulnerability to cholera outbreaks in the spring and fall seasons, respectively, that is stronger than the effect of localized hydrological and socio-economic sensitivities in Bangladesh.
In addition, systematic identification of underlying seasonal hydroclimatic drivers will allow us to harness the inherent system memory of these processes to develop early warning systems and strengthen prevention measures.
Zou, Lai-Quan; Zhou, Han-Yu; Lui, Simon S Y; Wang, Yi; Wang, Ya; Gan, Jun; Zhu, Xiong-Zhao; Cheung, Eric F C; Chan, Raymond C K
2018-04-20
Olfactory identification impairments have been consistently found in schizophrenia patients. However, few previous studies have investigated this in first-episode patients. There are also inconsistent findings regarding olfactory identification ability in psychometrically-defined schizotypy individuals. In this study, we directly compared the olfactory identification ability of first-episode schizophrenia patients with schizotypy individuals. The relationship between olfactory identification impairments and hedonic traits was also examined. Thirty-five first-episode schizophrenia patients, 40 schizotypy individuals as defined by the Chapman's Anhedonia Scales and 40 demographically matched controls were recruited. The University of Pennsylvania Smell Identification Test was administered. Hedonic capacity was assessed using the Temporal Experience of Pleasure Scale (TEPS). The results showed that both the schizophrenia and schizotypy groups showed poorer olfactory identification ability than controls, and the impairment was significantly correlated with reduced pleasure experiences. Our findings support olfactory identification impairment as a trait marker for schizophrenia. Copyright © 2018 Elsevier Inc. All rights reserved.
Dropout Proneness in Appalachia. Research Series 3.
ERIC Educational Resources Information Center
Mink, Oscar G.; Barker, Laurence W.
Two aids used in the identification of potential dropouts are examined. The Mink Scale (a teacher-rated scale) is based on classification of social, psychological, and educational forces related to dropout proneness: (1) academic ability and performance, (2) negative identification with education, (3) family and socioeconomic status, and (4)…
Piton, Amélie; Redin, Claire; Mandel, Jean-Louis
2013-01-01
Because of the unbalanced sex ratio (1.3–1.4 to 1) observed in intellectual disability (ID) and the identification of large ID-affected families showing X-linked segregation, much attention has been focused on the genetics of X-linked ID (XLID). Mutations causing monogenic XLID have now been reported in over 100 genes, most of which are commonly included in XLID diagnostic gene panels. Nonetheless, the boundary between true mutations and rare non-disease-causing variants often remains elusive. The sequencing of a large number of control X chromosomes, required for avoiding false-positive results, was not systematically possible in the past. Such information is now available thanks to large-scale sequencing projects such as the National Heart, Lung, and Blood (NHLBI) Exome Sequencing Project, which provides variation information on 10,563 X chromosomes from the general population. We used this NHLBI cohort to systematically reassess the implication of 106 genes proposed to be involved in monogenic forms of XLID. We particularly question the implication in XLID of ten of them (AGTR2, MAGT1, ZNF674, SRPX2, ATP6AP2, ARHGEF6, NXF5, ZCCHC12, ZNF41, and ZNF81), in which truncating variants or previously published mutations are observed at a relatively high frequency within this cohort. We also highlight 15 other genes (CCDC22, CLIC2, CNKSR2, FRMPD4, HCFC1, IGBP1, KIAA2022, KLF8, MAOA, NAA10, NLGN3, RPL10, SHROOM4, ZDHHC15, and ZNF261) for which replication studies are warranted. We propose that similar reassessment of reported mutations (and genes) with the use of data from large-scale human exome sequencing would be relevant for a wide range of other genetic diseases. PMID:23871722
Genome-scale identification of Legionella pneumophila effectors using a machine learning approach.
Burstein, David; Zusman, Tal; Degtyar, Elena; Viner, Ram; Segal, Gil; Pupko, Tal
2009-07-01
A large number of highly pathogenic bacteria utilize secretion systems to translocate effector proteins into host cells. Using these effectors, the bacteria subvert host cell processes during infection. Legionella pneumophila translocates effectors via the Icm/Dot type-IV secretion system, and to date approximately 100 effectors have been identified by various experimental and computational techniques. Effector identification is a critical first step towards the understanding of pathogenesis in L. pneumophila as well as in other bacterial pathogens. Here, we formulate the task of effector identification as a classification problem: each L. pneumophila open reading frame (ORF) was classified as either an effector or a non-effector. We computationally defined a set of features that best distinguish effectors from non-effectors. These features cover a wide range of characteristics including taxonomical dispersion, regulatory data, genomic organization, similarity to eukaryotic proteomes and more. Machine learning algorithms utilizing these features were then applied to classify all the ORFs within the L. pneumophila genome. Using this approach we were able to predict and experimentally validate 40 new effectors, reaching a success rate above 90%. Increasing the number of validated effectors to around 140, we were able to gain novel insights into their characteristics. Effectors were found to have low G+C content, supporting the hypothesis that a large number of effectors originate via horizontal gene transfer, probably from their protozoan host. In addition, effectors were found to cluster in specific genomic regions. Finally, we were able to provide a novel description of the C-terminal translocation signal required for effector translocation by the Icm/Dot secretion system. To conclude, we have discovered 40 novel L. pneumophila effectors, predicted over a hundred additional highly probable effectors, and shown the applicability of machine learning algorithms for the identification and characterization of bacterial pathogenesis determinants.
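The abstract reports that validated effectors show unusually low G+C content, one of the sequence features fed to the classifier. As a rough sketch of how such a feature could be computed (the function names, margin, and example ORFs below are illustrative, not from the paper):

```python
def gc_content(seq):
    """Fraction of G and C bases in a nucleotide sequence."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def flag_low_gc(orfs, genome_gc, margin=0.05):
    """Flag ORFs whose G+C content falls `margin` below the genome-wide
    average -- a simple proxy for horizontally transferred genes."""
    return {name: gc
            for name, gc in ((n, gc_content(s)) for n, s in orfs.items())
            if gc < genome_gc - margin}
```

In the actual study this would be only one of many features (taxonomical dispersion, regulatory data, genomic organization, similarity to eukaryotic proteomes) combined by the machine learning classifier.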
Munitions related feature extraction from LIDAR data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roberts, Barry L.
2010-06-01
The characterization of former military munitions ranges is critical in the identification of areas likely to contain residual unexploded ordnance (UXO). Although these ranges are large, often covering tens of thousands of acres, the actual target areas represent only a small fraction of the sites. The challenge is that many of these sites do not have records indicating locations of former target areas. The identification of target areas is critical in the characterization and remediation of these sites. The Strategic Environmental Research and Development Program (SERDP) and Environmental Security Technology Certification Program (ESTCP) of the DoD have been developing and implementing techniques for the efficient characterization of large munitions ranges. As part of this process, high-resolution LIDAR terrain data sets have been collected over several former ranges. These data sets have been shown to contain information relating to former munitions usage at these ranges, specifically terrain cratering due to high-explosives detonations. The location and relative intensity of crater features can provide information critical in reconstructing the usage history of a range, and indicate areas most likely to contain UXO. We have developed an automated procedure using an adaptation of the Circular Hough Transform for the identification of crater features in LIDAR terrain data. The Circular Hough Transform is highly adept at finding circular features (craters) in noisy terrain data sets. This technique has the ability to find features of a specific radius, providing a means of filtering features based on expected scale and providing additional spatial characterization of the identified feature. This method of automated crater identification has been applied to several former munitions ranges with positive results.
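A minimal NumPy sketch of the voting step of a fixed-radius Circular Hough Transform, the technique the abstract adapts for crater detection (grid size, angular sampling, and function names are illustrative; the report's actual implementation is not shown). Each edge pixel votes for every candidate center at the given radius; circle centers accumulate the most votes:

```python
import numpy as np

def circular_hough(edges, radius):
    """Accumulate votes for circle centers of a fixed radius.
    Each edge pixel votes for all centers lying `radius` away from it."""
    h, w = edges.shape
    acc = np.zeros((h, w), dtype=np.int32)
    thetas = np.linspace(0.0, 2 * np.pi, 100, endpoint=False)
    ys, xs = np.nonzero(edges)
    for y, x in zip(ys, xs):
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
        np.add.at(acc, (cy[ok], cx[ok]), 1)
    return acc
```

Scanning a small range of radii and thresholding the accumulator would give the scale filtering the abstract describes.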
A knowledge-based approach to identification and adaptation in dynamical systems control
NASA Technical Reports Server (NTRS)
Glass, B. J.; Wong, C. M.
1988-01-01
Artificial intelligence techniques are applied to the problems of model form and parameter identification of large-scale dynamic systems. The object-oriented knowledge representation is discussed in the context of causal modeling and qualitative reasoning. Structured sets of rules are used for implementing qualitative component simulations, for catching qualitative discrepancies and quantitative bound violations, and for making reconfiguration and control decisions that affect the physical system. These decisions are executed by backward-chaining through a knowledge base of control action tasks. This approach was implemented for two examples: a triple quadrupole mass spectrometer and a two-phase thermal testbed. Results of tests with both of these systems demonstrate that the software replicates some or most of the functionality of a human operator, thereby reducing the need for a human-in-the-loop in the lower levels of control of these complex systems.
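The abstract mentions executing control decisions by backward-chaining through a knowledge base of control action tasks. A toy sketch of backward-chaining over Horn-clause rules (the rule and fact names are invented for illustration and do not come from the spectrometer or thermal-testbed knowledge bases):

```python
def backward_chain(goal, rules, facts):
    """Prove `goal` by backward-chaining: a goal holds if it is a known
    fact, or if some rule concludes it and all of that rule's premises
    can themselves be proven. `rules` is a list of (premises, conclusion)."""
    if goal in facts:
        return True
    for premises, conclusion in rules:
        if conclusion == goal and all(
                backward_chain(p, rules, facts) for p in premises):
            return True
    return False
```

A real engine would also bind variables and dispatch control actions as side effects; this sketch only shows the goal-directed search order.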
Hybrid propulsion technology program
NASA Technical Reports Server (NTRS)
1990-01-01
Technology was identified which will enable application of hybrid propulsion to manned and unmanned space launch vehicles. Two design concepts are proposed. The first is a hybrid propulsion system using the classical method of regression (classical hybrid) resulting from the flow of oxidizer across a fuel grain surface. The second system uses a self-sustaining gas generator (gas generator hybrid) to produce a fuel-rich exhaust that is mixed with oxidizer in a separate combustor. Both systems offer cost and reliability improvement over the existing solid rocket booster and proposed liquid boosters. The designs were evaluated using life cycle cost and reliability. The program consisted of: (1) identification and evaluation of candidate oxidizers and fuels; (2) preliminary evaluation of booster design concepts; (3) preparation of a detailed point design including life cycle costs and reliability analyses; (4) identification of those hybrid specific technologies needing improvement; and (5) preparation of a technology acquisition plan and large scale demonstration plan.
Kalyanaraman, Ananth; Cannon, William R; Latt, Benjamin; Baxter, Douglas J
2011-11-01
A MapReduce-based implementation called MR-MSPolygraph for parallelizing peptide identification from mass spectrometry data is presented. The underlying serial method, MSPolygraph, uses a novel hybrid approach to match an experimental spectrum against a combination of a protein sequence database and a spectral library. Our MapReduce implementation can run on any Hadoop cluster environment. Experimental results demonstrate that, relative to the serial version, MR-MSPolygraph reduces the time to solution from weeks to hours for processing tens of thousands of experimental spectra. Speedup and other related performance studies are also reported on a 400-core Hadoop cluster using spectral datasets from environmental microbial communities as inputs. The source code and user documentation are available at http://compbio.eecs.wsu.edu/MR-MSPolygraph. Contact: ananth@eecs.wsu.edu; william.cannon@pnnl.gov. Supplementary data are available at Bioinformatics online.
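A schematic of the map/reduce split that MR-MSPolygraph exploits, in plain Python (the scoring function and data layout below are placeholders, not MSPolygraph's hybrid scorer): the map tasks score each spectrum independently, which is the embarrassingly parallel step Hadoop distributes, and the reduce step keeps the best match per spectrum.

```python
from collections import defaultdict

def map_phase(spectra, database, score):
    """Map: score each spectrum against every candidate peptide.
    Emits (spectrum_id, (peptide, score)) pairs."""
    for spec_id, spectrum in spectra.items():
        for peptide in database:
            yield spec_id, (peptide, score(spectrum, peptide))

def reduce_phase(pairs):
    """Reduce: keep the best-scoring peptide for each spectrum."""
    best = defaultdict(lambda: (None, float("-inf")))
    for spec_id, (peptide, s) in pairs:
        if s > best[spec_id][1]:
            best[spec_id] = (peptide, s)
    return dict(best)
```

On a cluster, the map calls run on separate workers and the framework groups pairs by key before the reduce; the serial simulation above only shows the data flow.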
Dereplication of peptidic natural products through database search of mass spectra
Mohimani, Hosein; Gurevich, Alexey; Mikheenko, Alla; Garg, Neha; Nothias, Louis-Felix; Ninomiya, Akihiro; Takada, Kentaro; Dorrestein, Pieter C.; Pevzner, Pavel A.
2016-01-01
Peptidic Natural Products (PNPs) are widely used compounds that include many antibiotics and a variety of other bioactive peptides. While recent breakthroughs in PNP discovery raised the challenge of developing new algorithms for their analysis, identification of PNPs via database search of tandem mass spectra remains an open problem. To address this problem, natural product researchers utilize dereplication strategies that identify known PNPs and lead to the discovery of new ones even in cases when the reference spectra are not present in existing spectral libraries. DEREPLICATOR is a new dereplication algorithm that enables high-throughput PNP identification and that is compatible with large-scale mass spectrometry-based screening platforms for natural product discovery. After searching nearly one hundred million tandem mass spectra in the Global Natural Products Social (GNPS) molecular networking infrastructure, DEREPLICATOR identified an order of magnitude more PNPs (and their new variants) than any previous dereplication efforts. PMID:27820803
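Dereplication by spectral database search ultimately rests on comparing an experimental spectrum with a reference or theoretical spectrum. A minimal sketch of one common similarity measure, the cosine score over binned peaks (DEREPLICATOR's actual statistical scoring is considerably more involved; the dictionary layout here is a simplifying assumption):

```python
import math

def cosine_score(spec_a, spec_b):
    """Cosine similarity between two spectra represented as
    {m/z bin: intensity} dictionaries. 1.0 means identical direction,
    0.0 means no shared peaks."""
    shared = set(spec_a) & set(spec_b)
    dot = sum(spec_a[m] * spec_b[m] for m in shared)
    na = math.sqrt(sum(v * v for v in spec_a.values()))
    nb = math.sqrt(sum(v * v for v in spec_b.values()))
    return dot / (na * nb) if na and nb else 0.0
```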
Conservation genetics and genomics of amphibians and reptiles.
Shaffer, H Bradley; Gidiş, Müge; McCartney-Melstad, Evan; Neal, Kevin M; Oyamaguchi, Hilton M; Tellez, Marisa; Toffelmier, Erin M
2015-01-01
Amphibians and reptiles as a group are often secretive, reach their greatest diversity often in remote tropical regions, and contain some of the most endangered groups of organisms on earth. Particularly in the past decade, genetics and genomics have been instrumental in the conservation biology of these cryptic vertebrates, enabling work ranging from the identification of populations subject to trade and exploitation, to the identification of cryptic lineages harboring critical genetic variation, to the analysis of genes controlling key life history traits. In this review, we highlight some of the most important ways that genetic analyses have brought new insights to the conservation of amphibians and reptiles. Although genomics has only recently emerged as part of this conservation tool kit, several large-scale data sources, including full genomes, expressed sequence tags, and transcriptomes, are providing new opportunities to identify key genes, quantify landscape effects, and manage captive breeding stocks of at-risk species.
Aerodynamic coefficient identification package dynamic data accuracy determinations: Lessons learned
NASA Technical Reports Server (NTRS)
Heck, M. L.; Findlay, J. T.; Compton, H. R.
1983-01-01
The errors in the dynamic data output from the Aerodynamic Coefficient Identification Packages (ACIP) flown on Shuttle flights 1, 3, 4, and 5 were determined using the output from the Inertial Measurement Units (IMU). A weighted least-squares batch algorithm was employed. Using an averaging technique, signal detection was enhanced; this allowed improved calibration solutions. Global errors as large as 0.04 deg/sec for the ACIP gyros, 30 mg for the linear accelerometers, and 0.5 deg/sec squared in the angular accelerometer channels were detected and removed with a combination of bias, scale factor, misalignment, and g-sensitive calibration constants. No attempt was made to minimize local ACIP dynamic data deviations representing sensed high-frequency vibration or instrument noise. Resulting 1-sigma calibrated ACIP global accuracies were within 0.003 deg/sec, 1.0 mg, and 0.05 deg/sec squared for the gyros, linear accelerometers, and angular accelerometers, respectively.
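A minimal sketch of the weighted least-squares batch idea: fit a scale factor and bias so a sensor channel best matches a reference channel (synthetic data stands in for the ACIP/IMU records here; the real calibration also solved for misalignment and g-sensitive terms):

```python
import numpy as np

def calibrate(raw, reference, weights):
    """Weighted least-squares fit of [scale, bias] so that
    scale*raw + bias best matches the reference measurements,
    solving the normal equations (A^T W A) theta = A^T W y."""
    A = np.column_stack([raw, np.ones_like(raw)])
    W = np.diag(weights)
    theta = np.linalg.solve(A.T @ W @ A, A.T @ W @ reference)
    return theta  # [scale, bias]
```

The weights let noisy segments of the record count less, which is where the "weighted" in weighted least squares earns its keep.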
Lam, Siew Hong; Mathavan, Sinnakarupan; Tong, Yan; Li, Haixia; Karuturi, R. Krishna Murthy; Wu, Yilian; Vega, Vinsensius B.; Liu, Edison T.; Gong, Zhiyuan
2008-01-01
The ability to perform large-scale, expression-based chemogenomics on whole adult organisms, as in invertebrate models (worm and fly), is highly desirable for a vertebrate model but its feasibility and potential has not been demonstrated. We performed expression-based chemogenomics on the whole adult organism of a vertebrate model, the zebrafish, and demonstrated its potential for large-scale predictive and discovery chemical biology. Focusing on two classes of compounds with wide implications to human health, polycyclic (halogenated) aromatic hydrocarbons [P(H)AHs] and estrogenic compounds (ECs), we generated robust prediction models that can discriminate compounds of the same class from those of different classes in two large independent experiments. The robust expression signatures led to the identification of biomarkers for potent aryl hydrocarbon receptor (AHR) and estrogen receptor (ER) agonists, respectively, and were validated in multiple targeted tissues. Knowledge-based data mining of human homologs of zebrafish genes revealed highly conserved chemical-induced biological responses/effects, health risks, and novel biological insights associated with AHR and ER that could be inferred to humans. Thus, our study presents an effective, high-throughput strategy of capturing molecular snapshots of chemical-induced biological states of a whole adult vertebrate that provides information on biomarkers of effects, deregulated signaling pathways, possibly affected biological functions, perturbed physiological systems, and increased health risks. These findings place zebrafish in a strategic position to bridge the wide gap between cell-based and rodent models in chemogenomics research and applications, especially in preclinical drug discovery and toxicology. PMID:18618001
Reduced-order model for underwater target identification using proper orthogonal decomposition
NASA Astrophysics Data System (ADS)
Ramesh, Sai Sudha; Lim, Kian Meng
2017-03-01
Research on underwater acoustics has seen major development over the past decade due to its widespread applications in domains such as underwater communication/navigation (SONAR), seismic exploration and oceanography. In particular, acoustic signatures from partially or fully buried targets can be used in the identification of buried mines for mine countermeasures (MCM). Although there exist several techniques to identify target properties based on SONAR images and acoustic signatures, these methods first employ a feature extraction method to represent the dominant characteristics of a data set, followed by the use of an appropriate classifier based on neural networks or the relevance vector machine. The aim of the present study is to demonstrate the application of the proper orthogonal decomposition (POD) technique in capturing dominant features of a set of scattered pressure signals, and the subsequent use of the POD modes and coefficients in the identification of partially buried underwater target parameters such as its location, size and material density. Several numerical examples are presented to demonstrate the performance of the system identification method based on POD. Although the present study is based on a 2D acoustic model, the method can be easily extended to 3D models and thereby enables cost-effective representations of large-scale data.
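The POD modes and coefficients the abstract relies on can be obtained from the singular value decomposition of a snapshot matrix. A minimal NumPy sketch (the function name and shapes are illustrative; each column would be one scattered-pressure signal):

```python
import numpy as np

def pod(snapshots, n_modes):
    """Proper orthogonal decomposition of a snapshot matrix via the SVD.
    Returns the leading spatial modes and their coefficients, so that
    modes @ coeffs approximates the snapshot matrix."""
    U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
    modes = U[:, :n_modes]
    coeffs = np.diag(s[:n_modes]) @ Vt[:n_modes]
    return modes, coeffs
```

Truncating to a few modes is what makes the reduced-order model "reduced": target parameters are then inferred from the low-dimensional coefficients instead of the raw signals.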
Synthesizing spatiotemporally sparse smartphone sensor data for bridge modal identification
NASA Astrophysics Data System (ADS)
Ozer, Ekin; Feng, Maria Q.
2016-08-01
Smartphones as vibration measurement instruments form a large-scale, citizen-induced, and mobile wireless sensor network (WSN) for system identification and structural health monitoring (SHM) applications. Crowdsourcing-based SHM is possible with a decentralized system granting citizens operational responsibility and control. Yet, citizen initiatives introduce device mobility, drastically changing SHM results due to uncertainties in the time and space domains. This paper proposes a modal identification strategy that fuses spatiotemporally sparse SHM data collected by smartphone-based WSNs. Multichannel data sampled independently in time and space are used to compose modal identification parameters such as frequencies and mode shapes. Structural response time histories can be gathered by smartphone accelerometers and converted into Fourier spectra by the processor units. Timestamp, data length, and energy-to-power conversion address temporal variation, whereas spatial uncertainties are reduced by geolocation services or by determining node identity via QR code labels. Then, parameters collected from each distributed network component can be extended to global behavior to deduce modal parameters without the need for a centralized and synchronous data acquisition system. The proposed method is tested on a pedestrian bridge and compared with a conventional reference monitoring system. The results show that the spatiotemporally sparse mobile WSN data can be used to infer modal parameters despite non-overlapping sensor operation schedules.
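A minimal sketch of the per-phone processing step described above: converting an acceleration record into a Fourier amplitude spectrum and reading off the dominant frequency (the sampling rate and test signal below are synthetic placeholders, not data from the pedestrian bridge):

```python
import numpy as np

def dominant_frequency(accel, fs):
    """Peak of the one-sided Fourier amplitude spectrum of an
    acceleration record sampled at `fs` Hz -- the per-device quantity
    that would later be fused across the crowdsourced network."""
    accel = accel - np.mean(accel)            # remove DC offset
    spectrum = np.abs(np.fft.rfft(accel))
    freqs = np.fft.rfftfreq(len(accel), d=1.0 / fs)
    return freqs[spectrum.argmax()]
```

Because each phone only needs to report spectral peaks (plus a timestamp and location), no synchronous centralized acquisition is required, which is the point the abstract makes.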
NASA Astrophysics Data System (ADS)
Lautredou, A.-C.; Bonillo, C.; Denys, G.; Cruaud, C.; Ozouf-Costaz, C.; Lecointre, G.; Dettai, A.
2010-08-01
The Trematominae are a particularly interesting subfamily within the Antarctic suborder Notothenioidei (Teleostei). The 14 closely related species occupy a wide range of ecological niches, making them extremely useful for evolutionary and biogeographic studies in the Antarctic Ocean. However, some Trematomus species can be difficult to identify using morphological criteria, especially young stages and damaged specimens. Molecular identification would therefore be highly useful; however, the suitability of the cytochrome oxidase I gene in a barcoding approach needs to be assessed. We evaluated species delineation within the genus Trematomus by comparing morphological identification, nuclear markers (the rhodopsin retrogene and a new nuclear marker, pkd1: polycystic kidney disease 1) and COI. We show that Trematomus vicarius is not distinguishable from Trematomus bernacchii with the molecular markers used, and neither is Trematomus loennbergii from Trematomus lepidorhinus. We suggest that until this is investigated further, studies including these species list them as the T. loennbergii/T. lepidorhinus group, and keep voucher samples and specimens. Generally, COI gives results congruent with the rhodopsin retrogene, and except for the previously cited species pairs, COI barcoding is efficient for identification in this group. Moreover, pkd1 might not be suitable for a phylogenetic study at this scale for this group.
NASA Astrophysics Data System (ADS)
Shafii, Mahyar; Basu, Nandita; Schiff, Sherry; Van Cappellen, Philippe
2017-04-01
The dramatic increase in nitrogen circulating in the biosphere due to anthropogenic activities has impaired water quality in groundwater and surface water, causing eutrophication in coastal regions. Understanding the fate and transport of nitrogen from the landscape to coastal areas requires exploring the drivers of nitrogen processes in both time and space, as well as the identification of appropriate flow pathways. Conceptual models can be used as diagnostic tools to provide insights into such controls. However, diagnostic evaluation of coupled hydrological-biogeochemical models is challenging. This research proposes a top-down methodology utilizing hydrochemical signatures to develop conceptual models for simulating the integrated streamflow and nitrate responses while taking into account dominant controls on nitrate variability (e.g., climate, soil water content, etc.). Our main objective is to seek appropriate model complexity that sufficiently reproduces multiple hydrological and nitrate signatures. Having developed a suitable conceptual model for a given watershed, we employ it in sensitivity studies to demonstrate the dominant process controls that contribute to the nitrate response at scales of interest. We apply the proposed approach to nitrate simulation in a range of small to large sub-watersheds in the Grand River Watershed (GRW) located in Ontario. Such a multi-basin modeling experiment will enable us to address process scaling and investigate the consequences of lumping processes in terms of models' predictive capability. The proposed methodology can be applied to the development of large-scale models that can help decision-making associated with nutrient management at the regional scale.
Statistical Analyses of Scatterplots to Identify Important Factors in Large-Scale Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kleijnen, J.P.C.; Helton, J.C.
1999-04-01
The robustness of procedures for identifying patterns in scatterplots generated in Monte Carlo sensitivity analyses is investigated. These procedures are based on attempts to detect increasingly complex patterns in the scatterplots under consideration and involve the identification of (1) linear relationships with correlation coefficients, (2) monotonic relationships with rank correlation coefficients, (3) trends in central tendency as defined by means, medians and the Kruskal-Wallis statistic, (4) trends in variability as defined by variances and interquartile ranges, and (5) deviations from randomness as defined by the chi-square statistic. The following two topics related to the robustness of these procedures are considered for a sequence of example analyses with a large model for two-phase fluid flow: the presence of Type I and Type II errors, and the stability of results obtained with independent Latin hypercube samples. Observations from analysis include: (1) Type I errors are unavoidable, (2) Type II errors can occur when inappropriate analysis procedures are used, (3) physical explanations should always be sought for why statistical procedures identify variables as being important, and (4) the identification of important variables tends to be stable for independent Latin hypercube samples.
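The first two pattern tests in the abstract, linear relationships via correlation coefficients and monotonic relationships via rank correlation coefficients, can be sketched in a few lines of NumPy. A monotonic but nonlinear relationship shows why both are needed (the example data are illustrative):

```python
import numpy as np

def pearson(x, y):
    """Linear correlation coefficient (pattern test 1)."""
    return np.corrcoef(x, y)[0, 1]

def spearman(x, y):
    """Rank correlation coefficient (pattern test 2): the Pearson
    correlation applied to the ranks of the data, so any strictly
    monotonic relationship scores 1."""
    rank = lambda v: np.argsort(np.argsort(v)).astype(float)
    return pearson(rank(x), rank(y))
```

For y = x**5 on [0, 1] the rank correlation is exactly 1 while the linear correlation is noticeably lower, which is how a Monte Carlo screening would flag a monotonic-but-nonlinear input.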
Liu, Yun; Wang, Huixiang; Liu, Qingping; Qu, Haiyun; Liu, Baohong; Yang, Pengyuan
2010-11-07
A microfluidic reactor has been developed for rapid enhancement of protein digestion by constructing an alumina network within a poly(ethylene terephthalate) (PET) microchannel. Trypsin is stably immobilized in a sol-gel network on the PET channel surface after pretreatment, which produces a protein-resistant interface to reduce memory effects, as characterized by X-ray fluorescence spectrometry and electroosmotic flow. The gel-derived network within a microchannel provides a large surface-to-volume ratio stationary phase for highly efficient proteolysis of proteins existing both at a low level and in complex extracts. The maximum reaction rate of the encapsulated trypsin reactor, measured by kinetic analysis, is much faster than in bulk solution. Due to the microscopic confinement effect, high levels of enzyme entrapment and the biocompatible microenvironment provided by the alumina gel network, the low-level proteins can be efficiently digested using such a microreactor within a very short residence time of a few seconds. The on-chip microreactor is further applied to the identification of a mixture of proteins extracted from normal mouse liver cytoplasm sample via integration with 2D-LC-ESI-MS/MS to show its potential application for large-scale protein identification.
Free-decay time-domain modal identification for large space structures
NASA Technical Reports Server (NTRS)
Kim, Hyoung M.; Vanhorn, David A.; Doiron, Harold H.
1992-01-01
Concept definition studies for the Modal Identification Experiment (MIE), a proposed space flight experiment for the Space Station Freedom (SSF), have demonstrated advantages and compatibility of free-decay time-domain modal identification techniques with the on-orbit operational constraints of large space structures. Since practical experience with modal identification using actual free-decay responses of large space structures is very limited, several numerical and test data reduction studies were conducted. Major issues and solutions were addressed, including closely-spaced modes, wide frequency range of interest, data acquisition errors, sampling delay, excitation limitations, nonlinearities, and unknown disturbances during free-decay data acquisition. The data processing strategies developed in these studies were applied to numerical simulations of the MIE, test data from a deployable truss, and launch vehicle flight data. Results of these studies indicate free-decay time-domain modal identification methods can provide accurate modal parameters necessary to characterize the structural dynamics of large space structures.
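A minimal sketch of free-decay time-domain identification for a single well-separated mode, using successive response peaks and the logarithmic decrement (the MIE studies address much harder cases with closely spaced modes and disturbances; the function name, test signal, and tolerances are illustrative):

```python
import numpy as np

def log_decrement_id(x, fs):
    """Estimate natural frequency (Hz) and damping ratio from a
    single-mode free-decay record sampled at `fs` Hz, using the
    spacing and amplitude ratio of successive local maxima."""
    # indices of interior local maxima of the sampled response
    peaks = np.where((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:]))[0] + 1
    period = np.mean(np.diff(peaks)) / fs
    delta = np.mean(np.log(x[peaks[:-1]] / x[peaks[1:]]))  # log decrement
    zeta = delta / np.sqrt(4 * np.pi ** 2 + delta ** 2)
    return 1.0 / period, zeta
```

Averaging over many peak pairs, as done here, is a crude version of the noise suppression that the flight-experiment algorithms needed for unknown disturbances during free decay.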
Teacher Report versus Adaptive Behavior Scale in Assessment of Mental Retardation.
ERIC Educational Resources Information Center
Al-Ansari, Ahmed
1993-01-01
This study assessed the degree of agreement between teacher report and an adapted Adaptive Behavior Scale in the identification of mental retardation and associated learning difficulties in 257 young Bahraini school children. Findings indicated that the instrument is sensitive in identification of children with mental retardation and exhibits high…
National Identification of Dutch Youth: An Exploratory Study
ERIC Educational Resources Information Center
Oppenheimer, Louis
2011-01-01
246 Dutch participants aged 8, 10, 12, 14, and 16 years were presented with the Strength of Identification Scale (SoIS; Barrett, 2007) and the National Identity scale based on Cultural and Historical achievements (NICH; derived from the NATID, Keillor & Hult, 1999). The study aimed to examine the extent and nature of Dutch children and…
Genome-wide SNP identification and QTL mapping for black rot resistance in cabbage.
Lee, Jonghoon; Izzah, Nur Kholilatul; Jayakodi, Murukarthick; Perumal, Sampath; Joh, Ho Jun; Lee, Hyeon Ju; Lee, Sang-Choon; Park, Jee Young; Yang, Ki-Woung; Nou, Il-Sup; Seo, Joodeok; Yoo, Jaeheung; Suh, Youngdeok; Ahn, Kyounggu; Lee, Ji Hyun; Choi, Gyung Ja; Yu, Yeisoo; Kim, Heebal; Yang, Tae-Jin
2015-02-03
Black rot is a destructive bacterial disease causing large yield and quality losses in Brassica oleracea. To detect quantitative trait loci (QTL) for black rot resistance, we performed whole-genome resequencing of two cabbage parental lines and genome-wide SNP identification using the recently published B. oleracea genome sequences as reference. Approximately 11.5 Gb of sequencing data was produced from each parental line. Reference genome-guided mapping and SNP calling revealed 674,521 SNPs between the two cabbage lines, with an average of one SNP per 662.5 bp. Among 167 dCAPS markers derived from candidate SNPs, 117 (70.1%) were validated as bona fide SNPs showing polymorphism between the parental lines. We then improved the resolution of a previous genetic map by adding 103 markers including 87 SNP-based dCAPS markers. The new map comprises 368 markers and covers 1467.3 cM with an average interval of 3.88 cM between adjacent markers. We evaluated black rot resistance in the mapping population in three independent inoculation tests using F2:3 progenies and identified one major QTL and three minor QTLs. We report successful utilization of whole-genome resequencing for large-scale SNP identification and development of molecular markers for genetic map construction. In addition, we identified novel QTLs for black rot resistance. The high-density genetic map will promote QTL analysis for other important agricultural traits and marker-assisted breeding of B. oleracea.
Network-assisted target identification for haploinsufficiency and homozygous profiling screens
Wang, Sheng
2017-01-01
Chemical genomic screens have recently emerged as a systematic approach to drug discovery on a genome-wide scale. Drug target identification and elucidation of the mechanism of action (MoA) of hits from these noisy high-throughput screens remain difficult. Here, we present GIT (Genetic Interaction Network-Assisted Target Identification), a network analysis method for drug target identification in haploinsufficiency profiling (HIP) and homozygous profiling (HOP) screens. In addition to the drug-induced phenotypic fitness defect of a gene deletion, GIT incorporates the fitness defects of the gene’s neighbors in the genetic interaction network. On three genome-scale yeast chemical genomic screens, GIT substantially outperforms previous scoring methods on target identification in both HIP and HOP assays. Finally, we showed that by combining HIP and HOP assays, GIT further boosts target identification and reveals a drug’s potential mechanism of action. PMID:28574983
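The core scoring idea described above, combining a gene's own drug-induced fitness defect with the defects of its genetic-interaction neighbors, can be sketched as follows (the plain neighbor averaging and the `alpha` weight are placeholders for illustration, not the published GIT formula):

```python
def network_score(fitness_defect, neighbors, alpha=0.5):
    """Score each gene as its own drug-induced fitness defect plus a
    weighted average of its genetic-interaction neighbors' defects.
    `fitness_defect` maps gene -> defect; `neighbors` maps gene -> list
    of interacting genes."""
    scores = {}
    for gene, defect in fitness_defect.items():
        nbrs = neighbors.get(gene, [])
        nbr_term = (sum(fitness_defect[n] for n in nbrs) / len(nbrs)
                    if nbrs else 0.0)
        scores[gene] = defect + alpha * nbr_term
    return scores
```

The design point is that a true target whose own deletion signal is weak or noisy can still rank highly if its network neighborhood responds to the drug.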
Deep JVLA Imaging of GOODS-N at 20 cm
NASA Astrophysics Data System (ADS)
Owen, Frazer N.
2018-04-01
New wideband continuum observations in the 1–2 GHz band of the GOODS-N field using NSF’s Karl G. Jansky Very Large Array (VLA) are presented. The best image with an effective frequency of 1525 MHz reaches an rms noise in the field center of 2.2 μJy, with 1.″6 resolution. A catalog of 795 sources is presented covering a radius of 9 arcminutes centered near the nominal center for the GOODS-N field, very near the nominal VLA pointing center for the observations. Optical/NIR identifications and redshift estimates both from ground-based and HST observations are discussed. Using these optical/NIR data, it is most likely that fewer than 2% of the sources without confusion problems do not have a correct identification. A large subset of the detected sources have radio sizes >1″. It is shown that the radio orientations for such sources correlate well with the HST source orientations, especially for z < 1. This suggests that at least a large subset of the 10 kpc-scale disks of luminous infrared/ultraluminous infrared galaxies (LIRG/ULIRG) have strong star formation, not just in the nucleus. For the half of the objects with z > 1, the sample must be some mixture of very high star formation rates, typically 300 M ⊙ yr‑1, assuming pure star formation, and an active galactic nucleus (AGN) or a mixed AGN/star formation population.
O'Connor, Ben L; Hamada, Yuki; Bowen, Esther E; Grippo, Mark A; Hartmann, Heidi M; Patton, Terri L; Van Lonkhuyzen, Robert A; Carr, Adrianne E
2014-11-01
Large areas of public lands administered by the Bureau of Land Management and located in arid regions of the southwestern United States are being considered for the development of utility-scale solar energy facilities. Land-disturbing activities in these desert, alluvium-filled valleys have the potential to adversely affect the hydrologic and ecologic functions of ephemeral streams. Regulation and management of ephemeral streams typically falls under a spectrum of federal, state, and local programs, but scientifically based guidelines for protecting ephemeral streams with respect to land-development activities are largely nonexistent. This study developed an assessment approach for quantifying the sensitivity to land disturbance of ephemeral stream reaches located in proposed solar energy zones (SEZs). The ephemeral stream assessment approach used publicly-available geospatial data on hydrology, topography, surficial geology, and soil characteristics, as well as high-resolution aerial imagery. These datasets were used to inform a professional judgment-based score index of potential land disturbance impacts on selected critical functions of ephemeral streams, including flow and sediment conveyance, ecological habitat value, and groundwater recharge. The total sensitivity scores (sum of scores for the critical stream functions of flow and sediment conveyance, ecological habitats, and groundwater recharge) were used to identify highly sensitive stream reaches to inform decisions on developable areas in SEZs. Total sensitivity scores typically reflected the scores of the individual stream functions; some exceptions pertain to groundwater recharge and ecological habitats. The primary limitations of this assessment approach were the lack of high-resolution identification of ephemeral stream channels in the existing National Hydrography Dataset, and the lack of mechanistic processes describing potential impacts on ephemeral stream functions at the watershed scale. 
The primary strength of this assessment approach is that it allows watershed-scale planning for low-impact development in arid ecosystems; the qualitative scoring of potential impacts can also be adjusted to accommodate new geospatial data, and to allow for expert and stakeholder input into decisions regarding the identification and potential avoidance of highly sensitive stream reaches.
DOE Office of Scientific and Technical Information (OSTI.GOV)
O’Connor, Ben L.; Hamada, Yuki; Bowen, Esther E.
2014-08-17
Large areas of public lands administered by the Bureau of Land Management and located in arid regions of the southwestern United States are being considered for the development of utility-scale solar energy facilities. Land-disturbing activities in these desert, alluvium-filled valleys have the potential to adversely affect the hydrologic and ecologic functions of ephemeral streams. Regulation and management of ephemeral streams typically falls under a spectrum of federal, state, and local programs, but scientifically based guidelines for protecting ephemeral streams with respect to land-development activities are largely nonexistent. This study developed an assessment approach for quantifying the sensitivity to land disturbance of ephemeral stream reaches located in proposed solar energy zones (SEZs). The ephemeral stream assessment approach used publicly-available geospatial data on hydrology, topography, surficial geology, and soil characteristics, as well as high-resolution aerial imagery. These datasets were used to inform a professional judgment-based score index of potential land disturbance impacts on selected critical functions of ephemeral streams, including flow and sediment conveyance, ecological habitat value, and groundwater recharge. The total sensitivity scores (sum of scores for the critical stream functions of flow and sediment conveyance, ecological habitats, and groundwater recharge) were used to identify highly sensitive stream reaches to inform decisions on developable areas in SEZs. Total sensitivity scores typically reflected the scores of the individual stream functions; some exceptions pertain to groundwater recharge and ecological habitats. 
The primary limitations of this assessment approach were the lack of high-resolution identification of ephemeral stream channels in the existing National Hydrography Dataset, and the lack of mechanistic processes describing potential impacts on ephemeral stream functions at the watershed scale. The primary strength of this assessment approach is that it allows watershed-scale planning for low-impact development in arid ecosystems; the qualitative scoring of potential impacts can also be adjusted to accommodate new geospatial data, and to allow for expert and stakeholder input into decisions regarding the identification and potential avoidance of highly sensitive stream reaches.
NASA Astrophysics Data System (ADS)
Sadhu, A.; Narasimhan, S.; Antoni, J.
2017-09-01
Output-only modal identification has seen significant activity in recent years, especially for large-scale structures where controlled input force generation is often difficult to achieve. This has led to the development of new system identification methods which do not require controlled input. They often work satisfactorily provided some general assumptions - not overly restrictive - regarding the stochasticity of the input are satisfied. Hundreds of papers covering a wide range of applications appear every year related to the extraction of modal properties from output measurement data in more than two dozen mechanical, aerospace and civil engineering journals. In little more than a decade, concepts of blind source separation (BSS) from the field of acoustic signal processing have been adopted by several researchers, who have shown that they can be attractive tools to undertake output-only modal identification. Originally intended to separate distinct audio sources from a mixture of recordings, the mathematical equivalence of BSS to problems in linear structural dynamics has since been firmly established. This has enabled many of the developments in the field of BSS to be modified and applied to output-only modal identification problems. This paper reviews over one hundred articles related to the application of BSS methods and their variants to output-only modal identification. The main contribution of the paper is to present a literature review of the papers which have appeared on the subject. While a brief treatment of the basic ideas is presented where relevant, a comprehensive and critical explanation of their contents is not attempted. Specific issues related to output-only modal identification and the relative advantages and limitations of BSS methods, both from theoretical and application standpoints, are discussed. Gap areas requiring additional work are also summarized and the paper concludes with possible future trends in this area.
Zhao, Henry; Pesavento, Lauren; Coote, Skye; Rodrigues, Edrich; Salvaris, Patrick; Smith, Karen; Bernard, Stephen; Stephenson, Michael; Churilov, Leonid; Yassi, Nawaf; Davis, Stephen M; Campbell, Bruce C V
2018-04-01
Clinical triage scales for prehospital recognition of large vessel occlusion (LVO) are limited by low specificity when applied by paramedics. We created the 3-step ambulance clinical triage for acute stroke treatment (ACT-FAST) as the first algorithmic LVO identification tool, designed to improve specificity by recognizing only severe clinical syndromes and optimizing paramedic usability and reliability. The ACT-FAST algorithm consists of (1) unilateral arm drift to stretcher <10 seconds, (2) severe language deficit (if right arm is weak) or gaze deviation/hemineglect assessed by simple shoulder tap test (if left arm is weak), and (3) eligibility and stroke mimic screen. ACT-FAST examination steps were retrospectively validated, and then prospectively validated by paramedics transporting culturally and linguistically diverse patients with suspected stroke in the emergency department, for the identification of internal carotid or proximal middle cerebral artery occlusion. The diagnostic performance of the full ACT-FAST algorithm was then validated for patients accepted for thrombectomy. In retrospective (n=565) and prospective paramedic (n=104) validation, ACT-FAST displayed higher overall accuracy and specificity, when compared with existing LVO triage scales. Agreement of ACT-FAST between paramedics and doctors was excellent (κ=0.91; 95% confidence interval, 0.79-1.0). The full ACT-FAST algorithm (n=60) assessed by paramedics showed high overall accuracy (91.7%), sensitivity (85.7%), specificity (93.5%), and positive predictive value (80%) for recognition of endovascular-eligible LVO. The 3-step ACT-FAST algorithm shows higher specificity and reliability than existing scales for clinical LVO recognition, despite requiring just 2 examination steps. The inclusion of an eligibility step allowed recognition of endovascular-eligible patients with high accuracy. Using a sequential algorithmic approach eliminates scoring confusion and reduces assessment time. 
Future studies will test whether field application of ACT-FAST by paramedics to bypass suspected patients with LVO directly to endovascular-capable centers can reduce delays to endovascular thrombectomy. © 2018 American Heart Association, Inc.
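The algorithmic, sequential character of ACT-FAST lends itself to a short sketch. The following Python rendering of the three steps is purely illustrative: the function name, argument names, and the way eligibility and mimic screening are encoded are assumptions of this sketch, not the validated clinical instrument.

```python
def act_fast(arm_drift_seconds, weak_side, severe_language_deficit,
             gaze_deviation_or_neglect, eligible, mimic_suspected):
    """Return True if the patient screens positive for endovascular-eligible LVO."""
    # Step 1: unilateral arm drift to stretcher in under 10 seconds
    if arm_drift_seconds is None or arm_drift_seconds >= 10:
        return False
    # Step 2: cortical sign matched to the weak side
    if weak_side == "right":
        if not severe_language_deficit:
            return False
    elif weak_side == "left":
        if not gaze_deviation_or_neglect:
            return False
    else:
        return False
    # Step 3: eligibility and stroke-mimic screen
    return eligible and not mimic_suspected

# Example: right-arm weakness with severe aphasia, eligible, no mimic suspected
print(act_fast(4, "right", True, False, True, False))  # True
```

Because each step can terminate the assessment, the sequential structure avoids the score-summing confusion the abstract attributes to scale-based tools.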
High-throughput screening and small animal models, where are we?
Giacomotto, Jean; Ségalat, Laurent
2010-01-01
Current high-throughput screening methods for drug discovery rely on the existence of targets. Moreover, most of the hits generated during screenings turn out to be invalid after further testing in animal models. To bypass these limitations, efforts are now being made to screen chemical libraries on whole animals. One of the most commonly used animal models in biology is the murine model Mus musculus. However, its cost limits its use in large-scale therapeutic screening. In contrast, the nematode Caenorhabditis elegans, the fruit fly Drosophila melanogaster, and the fish Danio rerio are gaining momentum as screening tools. These organisms combine genetic amenability, low cost and culture conditions that are compatible with large-scale screens. Their main advantage is to allow high-throughput screening in a whole-animal context. Moreover, their use is not dependent on the prior identification of a target and permits the selection of compounds with an improved safety profile. This review surveys the versatility of these animal models for drug discovery and discusses the options available to date. PMID:20423335
Randazzo, Cinzia L; Russo, Nunziatina; Pino, Alessandra; Mazzaglia, Agata; Ferrante, Margherita; Conti, Gea Oliveri; Caggia, Cinzia
2018-05-01
This work investigates the effects of different combinations of selected lactic acid bacteria strains on Lactobacillus species occurrence, on safety and on sensory traits of natural green table olives produced at large factory scale. Olives belonging to the Nocellara Etnea cv were processed in a 6% NaCl brine and inoculated with six different bacterial cultures, using selected strains belonging to the Lactobacillus plantarum, Lactobacillus paracasei and Lactobacillus pentosus species. The fermentation process was strongly influenced by the added starters, and the identification of lactic acid bacteria isolated throughout the process confirmed that L. pentosus dominated all fermentations, followed by L. plantarum, whereas L. casei was never detected. Pathogens were never found, while histamine and tyrosine were detected in the control and in two experimental samples. The samples with the lowest final pH values showed a safer profile and the most appreciated sensory traits. The present study highlights that selected starters promote the prevalence of L. pentosus over the autochthonous microbiota throughout the whole process of Nocellara Etnea olives. Copyright © 2018. Published by Elsevier Ltd.
NASA Astrophysics Data System (ADS)
Ruzhich, Valery V.; Psakhie, Sergey G.; Levina, Elena A.; Shilko, Evgeny V.; Grigoriev, Alexandr S.
2017-12-01
In the paper we briefly outline the experience in forecasting catastrophic earthquakes and the general problems in ensuring seismic safety. The purpose of our long-term research is the development and improvement of methods of man-caused impacts on large-scale fault segments to safely reduce the negative effect of seismodynamic failure. Various laboratory and large-scale field experiments were carried out in segments of tectonic faults in the Baikal rift zone and in main cracks in the block-structured ice cover of Lake Baikal, using the developed measuring systems and special software for identification and treatment of the deformation response of faulty segments to man-caused impacts. The results of the study allow us to substantiate the need for the development of servo-controlled technologies capable of changing the shear resistance and deformation regime of fault zone segments by applying vibrational and pulse triggering impacts. We suppose that the use of triggering impacts in highly stressed segments of active faults will help transfer the geodynamic state of these segments from a metastable to a more stable and safe state.
Recent advances on biological production of difructose dianhydride III.
Zhu, Yingying; Yu, Shuhuai; Zhang, Wenli; Zhang, Tao; Guang, Cuie; Mu, Wanmeng
2018-04-01
Difructose dianhydride III (DFA III) is a cyclic difructose containing two reciprocal glycosidic linkages. It is readily generated in small amounts by sucrose caramelization and thus occurs in a wide range of foodstuffs during food processing. DFA III has half the sweetness but only one-fifteenth the energy of sucrose, showing potential industrial application as a low-calorie sucrose substitute. In addition, it displays many benefits, including a prebiotic effect, low cariogenicity, and a hypocholesterolemic effect, and it improves the absorption of minerals, flavonoids, and immunoglobulin G. DFA III is biologically produced from inulin by inulin fructotransferase (IFTase, EC 4.2.2.18). Many DFA III-producing enzymes have been identified. The crystal structure of inulin fructotransferase has been determined, and its molecular modification has been performed to improve catalytic activity and structural stability. Large-scale production of DFA III has been studied with various IFTases, especially using an ultrafiltration membrane bioreactor. In this article, recent findings on the physiological effects of DFA III are briefly summarized, and research progress on the identification, expression, and molecular modification of IFTase and on the large-scale biological production of DFA III by IFTase is reviewed in detail.
STRIDE: Species Tree Root Inference from Gene Duplication Events.
Emms, David M; Kelly, Steven
2017-12-01
The correct interpretation of any phylogenetic tree is dependent on that tree being correctly rooted. We present STRIDE, a fast, effective, and outgroup-free method for identification of gene duplication events and species tree root inference in large-scale molecular phylogenetic analyses. STRIDE identifies sets of well-supported in-group gene duplication events from a set of unrooted gene trees, and analyses these events to infer a probability distribution over an unrooted species tree for the location of its root. We show that STRIDE correctly identifies the root of the species tree in multiple large-scale molecular phylogenetic data sets spanning a wide range of timescales and taxonomic groups. We demonstrate that the novel probability model implemented in STRIDE can accurately represent the ambiguity in species tree root assignment for data sets where information is limited. Furthermore, application of STRIDE to outgroup-free inference of the origin of the eukaryotic tree resulted in a root probability distribution that provides additional support for leading hypotheses for the origin of the eukaryotes. © The Author 2017. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
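The core inference idea, scoring candidate root positions by how many well-supported duplication events are consistent with them and normalising into a probability distribution, can be sketched in a toy form. The branch names, the per-event support factor, and the uniform prior below are illustrative assumptions, not STRIDE's actual probability model.

```python
from collections import Counter

def root_probabilities(candidate_branches, duplication_events):
    """Each duplication event is represented as the set of candidate root
    branches it is consistent with; consistent branches are up-weighted, and
    weights are normalised into a probability distribution over the root."""
    weight = Counter({b: 1.0 for b in candidate_branches})  # uniform prior
    for consistent in duplication_events:
        for b in candidate_branches:
            if b in consistent:
                weight[b] *= 2.0  # assumed per-event support factor
    total = sum(weight.values())
    return {b: weight[b] / total for b in candidate_branches}

events = [{"A", "B"}, {"A"}, {"A", "C"}]  # toy duplication evidence
probs = root_probabilities(["A", "B", "C"], events)
print(max(probs, key=probs.get))  # A
```

When the evidence is thin, several branches retain appreciable probability, which mirrors how the paper's model represents ambiguity in root assignment.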
Discovering and understanding oncogenic gene fusions through data intensive computational approaches
Latysheva, Natasha S.; Babu, M. Madan
2016-01-01
Abstract Although gene fusions have been recognized as important drivers of cancer for decades, our understanding of the prevalence and function of gene fusions has been revolutionized by the rise of next-generation sequencing, advances in bioinformatics theory and an increasing capacity for large-scale computational biology. The computational work on gene fusions has been vastly diverse, and the present state of the literature is fragmented. It will be fruitful to merge three camps of gene fusion bioinformatics that appear to rarely cross over: (i) data-intensive computational work characterizing the molecular biology of gene fusions; (ii) development research on fusion detection tools, candidate fusion prioritization algorithms and dedicated fusion databases and (iii) clinical research that seeks to either therapeutically target fusion transcripts and proteins or leverages advances in detection tools to perform large-scale surveys of gene fusion landscapes in specific cancer types. In this review, we unify these different—yet highly complementary and symbiotic—approaches with the view that increased synergy will catalyze advancements in gene fusion identification, characterization and significance evaluation. PMID:27105842
Task-driven dictionary learning.
Mairal, Julien; Bach, Francis; Ponce, Jean
2012-04-01
Modeling data with linear combinations of a few elements from a learned dictionary has been the focus of much recent research in machine learning, neuroscience, and signal processing. For signals such as natural images that admit such sparse representations, it is now well established that these models are well suited to restoration tasks. In this context, learning the dictionary amounts to solving a large-scale matrix factorization problem, which can be done efficiently with classical optimization tools. The same approach has also been used for learning features from data for other purposes, e.g., image classification, but tuning the dictionary in a supervised way for these tasks has proven to be more difficult. In this paper, we present a general formulation for supervised dictionary learning adapted to a wide variety of tasks, and present an efficient algorithm for solving the corresponding optimization problem. Experiments on handwritten digit classification, digital art identification, nonlinear inverse image problems, and compressed sensing demonstrate that our approach is effective in large-scale settings, and is well suited to supervised and semi-supervised classification, as well as regression tasks for data that admit sparse representations.
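The sparse-coding step that dictionary learning alternates with can be illustrated in a minimal form: selecting the single dictionary atom most correlated with a signal (1-sparse matching pursuit). The atoms and signal below are fabricated toy values, and the supervised, task-driven dictionary update that is the paper's contribution is beyond this sketch.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def sparse_code_1atom(signal, dictionary):
    """1-sparse coding: choose the unit-norm atom most correlated with the
    signal; the code is that atom's index and its coefficient."""
    best = max(range(len(dictionary)), key=lambda j: abs(dot(signal, dictionary[j])))
    return best, dot(signal, dictionary[best])

# Three unit-norm atoms in R^2; the diagonal atom explains this signal best
atoms = [[1.0, 0.0], [0.0, 1.0], [0.7071, 0.7071]]
idx, coeff = sparse_code_1atom([2.0, 1.9], atoms)
print(idx)  # 2
```

Real dictionary learning solves a far larger version of this selection jointly over many atoms and signals, with an l1 penalty in place of the hard 1-sparsity used here.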
NASA Technical Reports Server (NTRS)
Maglieri, Domenic J.; Sothcott, Victor E.; Keefer, Thomas N., Jr.
1993-01-01
A study was performed to determine the feasibility of establishing whether a 'shaped' sonic boom signature, experimentally shown in wind tunnel models out to about 10 body lengths, will persist out to representative flight conditions of 200 to 300 body lengths. The study focuses on the use of a relatively large supersonic remotely-piloted and recoverable vehicle. Other simulation methods that may accomplish the objective are also addressed and include the use of nonrecoverable target drones, missiles, full-scale drones, very large wind tunnels, ballistic facilities, whirling-arm techniques, rocket sled tracks, and airplane nose probes. In addition, this report presents background on the origin of the feasibility study, including a brief review of the equivalent body concept, a listing of the basic sonic boom signature characteristics and requirements, identification of candidate vehicles in terms of desirable features and availability, and vehicle characteristics including geometries, area distributions, and resulting sonic boom signatures. A program is developed that includes wind tunnel sonic boom and force models and tests for both basic and modified vehicles, as well as full-scale flight tests.
Bentzen, Amalie Kai; Marquard, Andrea Marion; Lyngaa, Rikke; Saini, Sunil Kumar; Ramskov, Sofie; Donia, Marco; Such, Lina; Furness, Andrew J S; McGranahan, Nicholas; Rosenthal, Rachel; Straten, Per Thor; Szallasi, Zoltan; Svane, Inge Marie; Swanton, Charles; Quezada, Sergio A; Jakobsen, Søren Nyboe; Eklund, Aron Charles; Hadrup, Sine Reker
2016-10-01
Identification of the peptides recognized by individual T cells is important for understanding and treating immune-related diseases. Current cytometry-based approaches are limited to the simultaneous screening of 10-100 distinct T-cell specificities in one sample. Here we use peptide-major histocompatibility complex (MHC) multimers labeled with individual DNA barcodes to screen >1,000 peptide specificities in a single sample, and detect low-frequency CD8 T cells specific for virus- or cancer-restricted antigens. When analyzing T-cell recognition of shared melanoma antigens before and after adoptive cell therapy in melanoma patients, we observe a greater number of melanoma-specific T-cell populations compared with cytometry-based approaches. Furthermore, we detect neoepitope-specific T cells in tumor-infiltrating lymphocytes and peripheral blood from patients with non-small cell lung cancer. Barcode-labeled pMHC multimers enable the combination of functional T-cell analysis with large-scale epitope recognition profiling for the characterization of T-cell recognition in various diseases, including in small clinical samples.
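Decoding which pMHC multimer a sequencing read came from reduces to matching the read against the known barcode set with a small mismatch tolerance. The sketch below illustrates that matching step only; the barcode sequences, antigen names, and one-mismatch threshold are assumptions of this sketch, not the published protocol.

```python
def hamming(a, b):
    """Number of mismatched positions between two equal-length sequences."""
    return sum(x != y for x, y in zip(a, b))

def assign_barcode(read, barcode_to_specificity, max_mismatches=1):
    """Assign a read to the peptide specificity of its closest known barcode,
    or None if every barcode is too far away."""
    best = min(barcode_to_specificity, key=lambda bc: hamming(read, bc))
    if hamming(read, best) <= max_mismatches:
        return barcode_to_specificity[best]
    return None

barcodes = {"AACGT": "MART-1 multimer", "GGTCA": "gp100 multimer"}
print(assign_barcode("AACGA", barcodes))  # MART-1 multimer (one mismatch tolerated)
```

Scaling this lookup to >1,000 barcodes is what lifts the screen past the 10-100 specificity ceiling of purely fluorescence-based cytometry.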
Globalization and human cooperation
Buchan, Nancy R.; Grimalda, Gianluca; Wilson, Rick; Brewer, Marilynn; Fatas, Enrique; Foddy, Margaret
2009-01-01
Globalization magnifies the problems that affect all people and that require large-scale human cooperation, for example, the overharvesting of natural resources and human-induced global warming. However, what does globalization imply for the cooperation needed to address such global social dilemmas? Two competing hypotheses are offered. One hypothesis is that globalization prompts reactionary movements that reinforce parochial distinctions among people. Large-scale cooperation then focuses on favoring one's own ethnic, racial, or language group. The alternative hypothesis suggests that globalization strengthens cosmopolitan attitudes by weakening the relevance of ethnicity, locality, or nationhood as sources of identification. In essence, globalization, the increasing interconnectedness of people worldwide, broadens the group boundaries within which individuals perceive they belong. We test these hypotheses by measuring globalization at both the country and individual levels and analyzing the relationship between globalization and individual cooperation with distal others in multilevel sequential cooperation experiments in which players can contribute to individual, local, and/or global accounts. Our samples were drawn from the general populations of the United States, Italy, Russia, Argentina, South Africa, and Iran. We find that as country and individual levels of globalization increase, so too does individual cooperation at the global level vis-à-vis the local level. In essence, “globalized” individuals draw broader group boundaries than others, eschewing parochial motivations in favor of cosmopolitan ones. Globalization may thus be fundamental in shaping contemporary large-scale cooperation and may be a positive force toward the provision of global public goods. PMID:19255433
Krojer, Tobias; Talon, Romain; Pearce, Nicholas; Collins, Patrick; Douangamath, Alice; Brandao-Neto, Jose; Dias, Alexandre; Marsden, Brian; von Delft, Frank
2017-03-01
XChemExplorer (XCE) is a data-management and workflow tool to support large-scale simultaneous analysis of protein-ligand complexes during structure-based ligand discovery (SBLD). The user interfaces of established crystallographic software packages such as CCP4 [Winn et al. (2011), Acta Cryst. D67, 235-242] or PHENIX [Adams et al. (2010), Acta Cryst. D66, 213-221] have entrenched the paradigm that a `project' is concerned with solving one structure. This does not hold for SBLD, where many almost identical structures need to be solved and analysed quickly in one batch of work. Functionality to track progress and annotate structures is essential. XCE provides an intuitive graphical user interface which guides the user from data processing, initial map calculation, ligand identification and refinement up until data dissemination. It provides multiple entry points depending on the need of each project, enables batch processing of multiple data sets and records metadata, progress and annotations in an SQLite database. XCE is freely available and works on any Linux and Mac OS X system, and the only dependency is to have the latest version of CCP4 installed. The design and usage of this tool are described here, and its usefulness is demonstrated in the context of fragment-screening campaigns at the Diamond Light Source. It is routinely used to analyse projects comprising 1000 data sets or more, and therefore scales well to even very large ligand-design projects.
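The batch-tracking idea, one record per data set in an SQLite database, recording stage and annotation, can be sketched with Python's stdlib sqlite3 module. The table layout and stage names below are illustrative assumptions, not XCE's actual schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE datasets (name TEXT PRIMARY KEY, stage TEXT, annotation TEXT)")

def set_stage(name, stage, note=""):
    # One row per data set; re-recording a stage simply overwrites the row,
    # so the table always reflects each data set's latest state.
    conn.execute(
        "INSERT OR REPLACE INTO datasets (name, stage, annotation) VALUES (?, ?, ?)",
        (name, stage, note))

set_stage("xtal_0001", "maps calculated")
set_stage("xtal_0001", "refined", "ligand bound in site A")
stage, = conn.execute(
    "SELECT stage FROM datasets WHERE name = 'xtal_0001'").fetchone()
print(stage)  # refined
```

With a thousand such rows, simple SELECT queries answer the "which structures still need attention" questions that a one-structure-per-project workflow cannot.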
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, H.-W.; Chang, N.-B., E-mail: nchang@mail.ucf.ed; Chen, J.-C.
2010-07-15
Owing to insufficient land resources, incinerators are considered in many countries, such as Japan and Germany, as the major technology for a waste management scheme capable of dealing with the increasing demand for municipal and industrial solid waste treatment in urban regions. The evaluation of these municipal incinerators in terms of secondary pollution potential, cost-effectiveness, and operational efficiency has become a new focus in the highly interdisciplinary area of production economics, systems analysis, and waste management. This paper aims to demonstrate the application of data envelopment analysis (DEA) - a production economics tool - to evaluate performance-based efficiencies of 19 large-scale municipal incinerators in Taiwan with different operational conditions. A 4-year operational data set from 2002 to 2005 was collected in support of DEA modeling using Monte Carlo simulation to outline the possibility distributions of operational efficiency of these incinerators. Uncertainty analysis using the Monte Carlo simulation provides a balance between simplifications of our analysis and the soundness of capturing the essential random features that complicate solid waste management systems. To cope with future challenges, efforts in DEA modeling, systems analysis, and prediction of the performance of large-scale municipal solid waste incinerators under normal operation and special conditions were directed toward generating a compromised assessment procedure. Our research findings will eventually lead to the identification of optimal management strategies for promoting the quality of solid waste incineration, not only in Taiwan, but also elsewhere in the world.
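The uncertainty-analysis idea can be sketched in a heavily simplified form: in place of the full DEA linear program, score each plant by a single output/input ratio and propagate measurement uncertainty by Monte Carlo resampling, yielding a distribution of efficiencies rather than a point value. The plant figures and the 5% noise level are fabricated illustrative numbers.

```python
import random

random.seed(42)
# (annual input, annual output) in arbitrary units; fabricated toy data
plants = {"plant_A": (100.0, 80.0), "plant_B": (100.0, 60.0)}

def mc_efficiency(inp, out, n=5000, noise=0.05):
    """Mean output/input efficiency under multiplicative Gaussian
    measurement noise on both quantities."""
    draws = []
    for _ in range(n):
        i = inp * random.gauss(1.0, noise)
        o = out * random.gauss(1.0, noise)
        draws.append(o / i)
    return sum(draws) / n

for name, (i, o) in plants.items():
    print(name, round(mc_efficiency(i, o), 2))
```

A faithful DEA treatment would instead solve a linear program per plant against the whole frontier; the Monte Carlo wrapper shown here applies to that case unchanged.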
NASA Astrophysics Data System (ADS)
Agel, Laurie; Barlow, Mathew; Feldstein, Steven B.; Gutowski, William J.
2018-03-01
Patterns of daily large-scale circulation associated with Northeast US extreme precipitation are identified using both k-means clustering (KMC) and Self-Organizing Maps (SOM) applied to tropopause height. The tropopause height provides a compact representation of the upper-tropospheric potential vorticity, which is closely related to the overall evolution and intensity of weather systems. Extreme precipitation is defined as the top 1% of daily wet-day observations at 35 Northeast stations, 1979-2008. KMC is applied on extreme precipitation days only, while the SOM algorithm is applied to all days in order to place the extreme results into the overall context of patterns for all days. Six tropopause patterns are identified through KMC for extreme precipitation days: a summertime tropopause ridge, a summertime shallow trough/ridge, a summertime shallow eastern US trough, a deeper wintertime eastern US trough, and two versions of a deep cold-weather trough located across the east-central US. Thirty SOM patterns are identified for all days. Results for all days show that 6 SOM patterns account for almost half of the extreme days, although extreme precipitation occurs in all SOM patterns. The same SOM patterns associated with extreme precipitation also routinely produce non-extreme precipitation; however, on extreme precipitation days the troughs, on average, are deeper and the downstream ridges more pronounced. Analysis of other fields associated with the large-scale patterns shows various degrees of anomalously strong moisture transport preceding, and upward motion during, extreme precipitation events.
Efficient data management in a large-scale epidemiology research project.
Meyer, Jens; Ostrzinski, Stefan; Fredrich, Daniel; Havemann, Christoph; Krafczyk, Janina; Hoffmann, Wolfgang
2012-09-01
This article describes the concept of a "Central Data Management" (CDM) and its implementation within the large-scale population-based medical research project "Personalized Medicine". The CDM can be summarized as a conjunction of data capturing, data integration, data storage, data refinement, and data transfer. A wide spectrum of reliable "Extract Transform Load" (ETL) software for the automatic integration of data, as well as "electronic Case Report Forms" (eCRFs), was developed in order to integrate decentralized and heterogeneously captured data. Owing to the high sensitivity of the captured data, high system resource availability, data privacy, data security and quality assurance are of utmost importance. A complex data model was developed and implemented using an Oracle database in high-availability cluster mode in order to integrate different types of participant-related data. Intelligent data capturing and storage mechanisms improve the quality of the data. Data privacy is ensured by a multi-layered role/right system for access control and by de-identification of identifying data. A well-defined backup process prevents data loss. Over a period of one and a half years, the CDM has captured a wide variety of data in the magnitude of approximately 5 terabytes without experiencing any critical incidents of system breakdown or loss of data. The aim of this article is to demonstrate one possible way of establishing a Central Data Management system in large-scale medical and epidemiological studies. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
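One common de-identification technique consistent with the approach described above is replacing directly identifying fields with a keyed pseudonym before records leave the capture layer. The sketch below uses a keyed HMAC for this; the key, field names, and pseudonym length are assumptions of this sketch, and a production system would hold the key in dedicated key storage, not in source code.

```python
import hashlib
import hmac

SECRET_KEY = b"replace-with-managed-key"  # assumption: illustrative only

def pseudonymize(participant_id):
    """Deterministic keyed pseudonym: the same input always maps to the same
    pseudonym, but the mapping cannot be reversed without the key."""
    return hmac.new(SECRET_KEY, participant_id.encode(),
                    hashlib.sha256).hexdigest()[:16]

record = {"participant_id": "DE-0421", "smoker": True}
safe_record = {"pid": pseudonymize(record["participant_id"]),
               "smoker": record["smoker"]}
print(len(safe_record["pid"]))  # 16
```

Determinism matters here: repeated captures of the same participant link under one pseudonym, which is what allows longitudinal analysis without exposing identities.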
Expanding the user base beyond HEP for the Ganga distributed analysis user interface
NASA Astrophysics Data System (ADS)
Currie, R.; Egede, U.; Richards, A.; Slater, M.; Williams, M.
2017-10-01
This document presents the results of recent developments within the Ganga [1] project to support users from new communities outside of HEP. In particular I will examine the case of users from the Large Synoptic Survey Telescope (LSST) group looking to use resources provided by the UK-based GridPP [2][3] DIRAC [4][5] instance. An example use case is work performed with users from the LSST Virtual Organisation (VO) to distribute the workflow used for galaxy shape identification analyses. This work highlighted some LSST-specific challenges which could be well solved by common tools within the HEP community. As a result of this work the LSST community was able to take advantage of GridPP resources to perform large computing tasks within the UK.
Convergence between biological, behavioural and genetic determinants of obesity.
Ghosh, Sujoy; Bouchard, Claude
2017-12-01
Multiple biological, behavioural and genetic determinants or correlates of obesity have been identified to date. Genome-wide association studies (GWAS) have contributed to the identification of more than 100 obesity-associated genetic variants, but their roles in causal processes leading to obesity remain largely unknown. Most variants are likely to have tissue-specific regulatory roles through joint contributions to biological pathways and networks, through changes in gene expression that influence quantitative traits, or through the regulation of the epigenome. The recent availability of large-scale functional genomics resources provides an opportunity to re-examine obesity GWAS data to begin elucidating the function of genetic variants. Interrogation of knockout mouse phenotype resources provides a further avenue to test for evidence of convergence between genetic variation and biological or behavioural determinants of obesity.
Dynamic Identification for Control of Large Space Structures
NASA Technical Reports Server (NTRS)
Ibrahim, S. R.
1985-01-01
This is a compilation of reports by one author on one subject. It consists of the following five journal articles: (1) A Parametric Study of the Ibrahim Time Domain Modal Identification Algorithm; (2) Large Modal Survey Testing Using the Ibrahim Time Domain Identification Technique; (3) Computation of Normal Modes from Identified Complex Modes; (4) Dynamic Modeling of Structures from Measured Complex Modes; and (5) Time Domain Quasi-Linear Identification of Nonlinear Dynamic Systems.
NASA Astrophysics Data System (ADS)
Liu, Jianfei; Dubra, Alfredo; Tam, Johnny
2016-03-01
Cone photoreceptors are highly specialized cells responsible for the origin of vision in the human eye. Their inner segments can be noninvasively visualized using adaptive optics scanning light ophthalmoscopes (AOSLOs) with nonconfocal split detection capabilities. Monitoring the number of cones can lead to more precise metrics for real-time diagnosis and assessment of disease progression. Cell identification in split detection AOSLO images is hindered by cell regions with heterogeneous intensity arising from shadowing effects and low contrast boundaries due to overlying blood vessels. Here, we present a multi-scale circular voting approach to overcome these challenges through the novel combination of: 1) iterative circular voting to identify candidate cells based on their circular structures, 2) a multi-scale strategy to identify the optimal circular voting response, and 3) clustering to improve robustness while removing false positives. We acquired images from three healthy subjects at various locations on the retina and manually labeled cell locations to create ground-truth for evaluating the detection accuracy. The images span a large range of cell densities. The overall recall, precision, and F1 score were 91±4%, 84±10%, and 87±7% (Mean±SD). Results showed that our method for the identification of cone photoreceptor inner segments performs well even with low contrast cell boundaries and vessel obscuration. These encouraging results demonstrate that the proposed approach can robustly and accurately identify cells in split detection AOSLO images.
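The iterative circular-voting idea can be sketched with a Hough-style toy example (a simplification, not the authors' algorithm: the synthetic edge points, radius range, and angular sampling below are invented for illustration):

```python
import math

def circular_voting(edge_points, shape, radii):
    """Toy circular-voting accumulator: each edge point casts votes for
    possible circle centres at every candidate radius; the accumulator
    peak marks the most strongly supported candidate cell centre."""
    h, w = shape
    acc = [[0] * w for _ in range(h)]
    for (y, x) in edge_points:
        for r in radii:
            for a in range(0, 360, 15):  # coarse angular sampling
                cy = int(round(y - r * math.sin(math.radians(a))))
                cx = int(round(x - r * math.cos(math.radians(a))))
                if 0 <= cy < h and 0 <= cx < w:
                    acc[cy][cx] += 1
    # peak of the accumulator = detected centre
    best = max((acc[y][x], (y, x)) for y in range(h) for x in range(w))
    return best[1]

# synthetic "cell": edge points on a circle of radius 5 centred at (20, 20)
centre, r = (20, 20), 5
edges = [(int(round(centre[0] + r * math.sin(math.radians(a)))),
          int(round(centre[1] + r * math.cos(math.radians(a)))))
         for a in range(0, 360, 15)]
print(circular_voting(edges, (40, 40), [4, 5, 6]))
```

The multi-scale strategy in the paper amounts to repeating such voting over a range of scales and keeping the strongest response; the clustering step then merges duplicate detections.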
Perspectives on the role of mobility, behavior, and time scales in the spread of diseases.
Castillo-Chavez, Carlos; Bichara, Derdei; Morin, Benjamin R
2016-12-20
The dynamics, control, and evolution of communicable and vector-borne diseases are intimately connected to the joint dynamics of epidemiological, behavioral, and mobility processes that operate across multiple spatial, temporal, and organizational scales. The identification of a theoretical explanatory framework that accounts for the pattern regularity exhibited by a large number of host-parasite systems, including those sustained by host-vector epidemiological dynamics, is but one of the challenges facing the coevolving fields of computational, evolutionary, and theoretical epidemiology. Host-parasite epidemiological patterns, including epidemic outbreaks and endemic recurrent dynamics, are characteristic of well-identified regions of the world: the result of processes and constraints such as strain competition, host and vector mobility, and population structure operating over multiple scales in response to recurrent disturbances (like El Niño) and climatological and environmental perturbations over thousands of years. It is therefore important to identify and quantify the processes responsible for observed epidemiological macroscopic patterns: the result of individual interactions in changing social and ecological landscapes. In this perspective, we touch on some of the issues calling for the identification of an encompassing theoretical explanatory framework by identifying some of the limitations of existing theory, in the context of particular epidemiological systems. Fostering the reenergizing of research that aims at disentangling the role of epidemiological and socioeconomic forces on disease dynamics, better understood as complex adaptive systems, is a key aim of this perspective.
Estimating the Efficiency of Phosphopeptide Identification by Tandem Mass Spectrometry
NASA Astrophysics Data System (ADS)
Hsu, Chuan-Chih; Xue, Liang; Arrington, Justine V.; Wang, Pengcheng; Paez Paez, Juan Sebastian; Zhou, Yuan; Zhu, Jian-Kang; Tao, W. Andy
2017-06-01
Mass spectrometry has played a significant role in the identification of unknown phosphoproteins and sites of phosphorylation in biological samples. Analyses of protein phosphorylation, particularly large scale phosphoproteomic experiments, have recently been enhanced by efficient enrichment, fast and accurate instrumentation, and better software, but challenges remain because of the low stoichiometry of phosphorylation and poor phosphopeptide ionization efficiency and fragmentation due to neutral loss. Phosphoproteomics has become an important dimension in systems biology studies, and it is essential to have efficient analytical tools to cover a broad range of signaling events. To evaluate current mass spectrometric performance, we present here a novel method to estimate the efficiency of phosphopeptide identification by tandem mass spectrometry. Phosphopeptides were directly isolated from whole plant cell extracts, dephosphorylated, and then incubated with one of three purified kinases—casein kinase II, mitogen-activated protein kinase 6, and SNF-related protein kinase 2.6—along with 16O4- and 18O4-ATP separately for in vitro kinase reactions. Phosphopeptides were enriched and analyzed by LC-MS. The phosphopeptide identification rate was estimated by comparing phosphopeptides identified by tandem mass spectrometry with phosphopeptide pairs generated by stable isotope labeled kinase reactions. Overall, we found that current high speed and high accuracy mass spectrometers can only identify 20%-40% of total phosphopeptides primarily due to relatively poor fragmentation, additional modifications, and low abundance, highlighting the urgent need for continuous efforts to improve phosphopeptide identification efficiency.
NASA Astrophysics Data System (ADS)
Kawamoto, Hirokazu; Takayasu, Hideki; Takayasu, Misako
We analyze the typical characteristics of the percolation transition of a large-scale complex network, a Japanese business relation network consisting of approximately 600,000 nodes and 4,000,000 links. By utilizing percolation characteristics, we revise the definition of network survival rate that we previously proposed. The new network survival rate has a strong correlation with the old one. The calculation cost is also much smaller and the number of trials decreases from 100,000 to 1,000. Finally, we discuss the identification of robust and fragile regions using this index.
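The percolation characteristics referred to here can be illustrated with a minimal bond-percolation sketch using union-find (the toy ring network and random link ordering are assumptions for illustration; the authors' survival-rate index itself is not reproduced):

```python
import random

class UnionFind:
    """Disjoint-set structure tracking cluster sizes under link additions."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]

def largest_cluster_growth(n_nodes, links, rng=None):
    """Add links in random order and record the largest connected
    cluster size after each addition (a bond-percolation curve)."""
    rng = rng or random.Random(0)
    links = links[:]
    rng.shuffle(links)
    uf = UnionFind(n_nodes)
    curve = []
    for a, b in links:
        uf.union(a, b)
        curve.append(max(uf.size[uf.find(i)] for i in range(n_nodes)))
    return curve

# toy network: a ring of 6 nodes
ring = [(i, (i + 1) % 6) for i in range(6)]
print(largest_cluster_growth(6, ring))
```

On a real network one would average such curves over many shuffles; the reduction in trials described in the abstract corresponds to needing far fewer such realizations.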
NASA Technical Reports Server (NTRS)
Kraal, E. R.; Moore, J. M.; Howard, A. D.; Asphaug, E. A.
2005-01-01
Moore and Howard [1] reported the discovery of large alluvial fans in craters on Mars. Their initial survey, from 0° to 30° S, found that these fans clustered in three distinct regions and occurred at around +1 km relative to the MOLA-defined Mars datum. However, due to incomplete image coverage, Moore and Howard [1] could not conduct a comprehensive survey. They also recognized, though did not quantitatively address, gravity scaling issues. Here, we briefly discuss the identification of alluvial fans on Mars, then consider the general equations governing the deposition of alluvial fans and hypothesize a method for learning about grain size in alluvial fans on Mars.
NASA Astrophysics Data System (ADS)
Shikunova, Irina A.; Zaytsev, Kirill I.; Stryukov, Dmitrii O.; Dubyanskaya, Evgenia N.; Kurlov, Vladimir N.
2017-07-01
In this paper, a handheld contact probe based on a sapphire shaped crystal was developed for the intraoperative optical diagnosis and aspiration of malignant brain tissue, combined with laser hemostasis. Such a favorable combination of several functions in a single instrument significantly increases its clinical relevance. It makes possible highly accurate, real-time detection and removal of either large-scale malignancies or even separate invasive cancer cells. The proposed neuroprobe was integrated into the clinical neurosurgical workflow for the intraoperative fluorescence identification and removal of malignant tissues of the brain.
Unravelling the hidden ancestry of American admixed populations.
Montinaro, Francesco; Busby, George B J; Pascali, Vincenzo L; Myers, Simon; Hellenthal, Garrett; Capelli, Cristian
2015-03-24
The movement of people into the Americas has brought different populations into contact, and contemporary American genomes are the product of a range of complex admixture events. Here we apply a haplotype-based ancestry identification approach to a large set of genome-wide SNP data from a variety of American, European and African populations to determine the contributions of different ancestral populations to the Americas. Our results provide a fine-scale characterization of the source populations, identify a series of novel, previously unreported contributions from Africa and Europe and highlight geohistorical structure in the ancestry of American admixed populations.
Stable isotope dimethyl labelling for quantitative proteomics and beyond
Hsu, Jue-Liang; Chen, Shu-Hui
2016-01-01
Stable-isotope reductive dimethylation, a cost-effective, simple, robust, reliable and easy-to-multiplex labelling method, is widely applied to quantitative proteomics using liquid chromatography-mass spectrometry. This review focuses on biological applications of stable-isotope dimethyl labelling for a large-scale comparative analysis of protein expression and post-translational modifications based on its unique properties of the labelling chemistry. Some other applications of the labelling method for sample preparation and mass spectrometry-based protein identification and characterization are also summarized. This article is part of the themed issue ‘Quantitative mass spectrometry’. PMID:27644970
The formation of giant low surface brightness galaxies
NASA Technical Reports Server (NTRS)
Hoffman, Yehuda; Silk, Joseph; Wyse, Rosemary F. G.
1992-01-01
It is demonstrated that the initial structure of galaxies can be strongly affected by their large-scale environments. In particular, rare (about 3 sigma) massive galaxies in voids will have normal bulges, but unevolved, extended disks; it is proposed that the low surface brightness objects Malin I and Malin II are prototypes of this class of object. The model predicts that searches for more examples of 'crouching giants' should be fruitful, but that such galaxies do not provide a substantial fraction of mass in the universe. The identification of dwarf galaxies is relatively unaffected by their environment.
Determination of spectral signatures of substances in natural waters
NASA Technical Reports Server (NTRS)
Klemas, V.; Philpot, W. D.; Davis, G.
1978-01-01
Optical remote sensing of water pollution offers the possibility of fast, large scale coverage at a relatively low cost. The possibility of using the spectral characteristics of the upwelling light from water for the purpose of ocean water quality monitoring was examined. The work was broken into several broad tasks as follows: (1) definition of a remotely measured spectral signature of water, (2) collection of field data and testing of the signature analysis, and (3) evaluation of the possibility of using LANDSAT data for the identification of substances in water. An attempt to extract spectral signatures of acid waste and sediment was successful.
NASA Astrophysics Data System (ADS)
Franca, Mário J.; Lemmin, Ulrich
2014-05-01
The occurrence of large scale flow structures (LSFS) coherently organized throughout the flow depth has been reported in field and laboratory experiments of flows over gravel beds, especially under low relative submergence conditions. In these, the instantaneous velocity is synchronized over the whole vertical profile, oscillating at a low frequency above or below the time-averaged value. The detection of large scale coherently organized regions in the flow field is often difficult since it requires detailed simultaneous observations of the flow velocities at several levels. The present research avoids the detection problem by using an Acoustic Doppler Velocity Profiler (ADVP), which permits measuring three-dimensional velocities quasi-simultaneously over the full water column. Empirical mode decomposition (EMD) combined with the application of the Hilbert transform is then applied to the instantaneous velocity data to detect and isolate LSFS. The present research was carried out in a Swiss river with a low relative submergence of 2.9, herein defined as h/D50 (where h is the mean flow depth and D50 the bed grain size diameter for which 50% of the grains have smaller diameters). 3D ADVP instantaneous velocity measurements were made on a 3x5 rectangular horizontal grid (x-y). Fifteen velocity profiles were equally spaced in the spanwise direction with a distance of 10 cm, and in the streamwise direction with a distance of 15 cm. The vertical resolution of the measurements is roughly 0.5 cm. A measuring grid covering a 3D control volume was defined. The instantaneous velocity profiles were measured for 3.5 min with a sampling frequency of 26 Hz. Oscillating LSFS are detected and isolated in the instantaneous velocity signal of the 15 measured profiles. Their 3D cycle geometry is reconstructed and investigated through phase averaging based on the identification of the instantaneous signal phase (related to the Hilbert transform) applied to the original raw signal.
Results for all the profiles are consistent and clearly indicate the presence of LSFS throughout the flow depth, with impact on the three components of the velocity profile and on the bed friction velocity. A high correlation of the movement is found throughout the flow depth, thus corroborating the hypothesis of large-scale coherent motion evolving over the whole water depth. The latter are characterized in terms of period, horizontal scale and geometry. The high spatial and temporal resolution of our ADVP was crucial for obtaining comprehensive results on coherent structure dynamics. EMD combined with the Hilbert transform has previously been successfully applied to geophysical flow studies. Here we show that this method can also be used for the analysis of river dynamics. In particular, we demonstrate that a clean, well-behaved intrinsic mode function can be obtained from a noisy velocity time series, which allowed a precise determination of the vertical structure of the coherent structures. The phase unwrapping of the UMR and the identification of the phase-related velocity components bring new insight into the flow dynamics. Research supported by the Swiss National Science Foundation (2000-063818). KEY WORDS: large scale flow structures (LSFS); gravel-bed rivers; empirical mode decomposition; Hilbert transform
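The Hilbert-transform step that recovers an instantaneous phase can be sketched as follows (a minimal pure-Python analytic-signal computation via the DFT; the test signal is invented for illustration, and the EMD preprocessing used in the study is omitted):

```python
import cmath
import math

def analytic_signal(x):
    """Discrete analytic signal via the DFT: keep DC and Nyquist,
    double the positive frequencies, zero the negative ones. The
    imaginary part is the Hilbert transform of the input."""
    n = len(x)
    # forward DFT (O(n^2); fine for a short demonstration signal)
    X = [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
         for k in range(n)]
    for k in range(1, n // 2):
        X[k] *= 2          # positive frequencies doubled
    for k in range(n // 2 + 1, n):
        X[k] = 0           # negative frequencies removed
    # inverse DFT
    return [sum(X[k] * cmath.exp(2j * math.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

# a pure oscillation: its instantaneous phase should advance linearly
n = 64
x = [math.cos(2 * math.pi * 4 * t / n) for t in range(n)]
z = analytic_signal(x)
phase = [cmath.phase(v) for v in z]
print(round(phase[1] - phase[0], 3))  # ~0.393 (= 2*pi*4/64 per sample)
```

Phase averaging as described above then bins velocity samples by this instantaneous phase to reconstruct the mean LSFS cycle.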
Geiger, M F; Herder, F; Monaghan, M T; Almada, V; Barbieri, R; Bariche, M; Berrebi, P; Bohlen, J; Casal-Lopez, M; Delmastro, G B; Denys, G P J; Dettai, A; Doadrio, I; Kalogianni, E; Kärst, H; Kottelat, M; Kovačić, M; Laporte, M; Lorenzoni, M; Marčić, Z; Özuluğ, M; Perdices, A; Perea, S; Persat, H; Porcelotti, S; Puzzi, C; Robalo, J; Šanda, R; Schneider, M; Šlechtová, V; Stoumboudi, M; Walter, S; Freyhof, J
2014-11-01
Incomplete knowledge of biodiversity remains a stumbling block for conservation planning and even occurs within globally important Biodiversity Hotspots (BH). Although technical advances have boosted the power of molecular biodiversity assessments, the link between DNA sequences and species and the analytics to discriminate entities remain crucial. Here, we present an analysis of the first DNA barcode library for the freshwater fish fauna of the Mediterranean BH (526 spp.), with virtually complete species coverage (498 spp., 98% extant species). In order to build an identification system supporting conservation, we compared species determination by taxonomists to multiple clustering analyses of DNA barcodes for 3165 specimens. The congruence of barcode clusters with morphological determination was strongly dependent on the method of cluster delineation, but was highest with the general mixed Yule-coalescent (GMYC) model-based approach (83% of all species recovered as GMYC entity). Overall, genetic and morphological discontinuities suggest the existence of up to 64 previously unrecognized candidate species. We found reduced identification accuracy when using the entire DNA-barcode database, compared with analyses on databases for individual river catchments. This scale effect has important implications for barcoding assessments and suggests that fairly simple identification pipelines provide sufficient resolution in local applications. We calculated Evolutionarily Distinct and Globally Endangered scores in order to identify candidate species for conservation priority and argue that the evolutionary content of barcode data can be used to detect priority species for future IUCN assessments. We show that large-scale barcoding inventories of complex biotas are feasible and contribute directly to the evaluation of conservation priorities.
NASA Astrophysics Data System (ADS)
de Vries, A. J.; Ouwersloot, H. G.; Feldstein, S. B.; Riemer, M.; El Kenawy, A. M.; McCabe, M. F.; Lelieveld, J.
2018-01-01
Extreme precipitation events in the otherwise arid Middle East can cause flooding with dramatic socioeconomic impacts. Most of these events are associated with tropical-extratropical interactions, whereby a stratospheric potential vorticity (PV) intrusion reaches deep into the subtropics and forces an incursion of high poleward vertically integrated water vapor transport (IVT) into the Middle East. This study presents an object-based identification method for extreme precipitation events based on the combination of these two larger-scale meteorological features. The general motivation for this approach is that precipitation is often poorly simulated in relatively coarse weather and climate models, whereas the synoptic-scale circulation is much better represented. The algorithm is applied to ERA-Interim reanalysis data (1979-2015) and detects 90% (83%) of the 99th (97.5th) percentile of extreme precipitation days in the region of interest. Our results show that stratospheric PV intrusions and IVT structures are intimately connected to extreme precipitation intensity and seasonality. The farther south a stratospheric PV intrusion reaches, the larger the IVT magnitude, and the longer the duration of their combined occurrence, the more extreme the precipitation. Our algorithm detects a large fraction of the climatological rainfall amounts (40-70%), heavy precipitation days (50-80%), and the top 10 extreme precipitation days (60-90%) at many sites in southern Israel and the northern and western parts of Saudi Arabia. This identification method provides a new tool for future work to disentangle teleconnections, assess medium-range predictability, and improve understanding of climatic changes of extreme precipitation in the Middle East and elsewhere.
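The object-based combination of the two meteorological features can be sketched as a joint-threshold detector (a toy illustration with invented series and thresholds, not the paper's algorithm, which operates on gridded PV and IVT fields):

```python
def joint_detection(pv_flag, ivt, ivt_threshold, min_overlap=1):
    """Flag candidate extreme-precipitation days: days on which a
    PV-intrusion indicator and high IVT co-occur for at least
    `min_overlap` consecutive days. Returns event start days."""
    events, run = [], 0
    for day, (pv, moisture) in enumerate(zip(pv_flag, ivt)):
        if pv and moisture >= ivt_threshold:
            run += 1
            if run == min_overlap:
                events.append(day - min_overlap + 1)
        else:
            run = 0
    return events

# toy series: PV intrusion on days 2-5, high IVT (>= 300) on days 4-6
pv = [False, False, True, True, True, True, False, False]
ivt = [100, 120, 150, 180, 320, 340, 330, 110]
print(joint_detection(pv, ivt, 300, min_overlap=2))  # -> [4]
```

The duration dependence noted in the abstract corresponds to raising `min_overlap`: longer joint occurrence selects progressively more extreme events.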
Fitzpatrick, Stephanie L; Hill-Briggs, Felicia
2015-10-01
Identification of patients with poor chronic disease self-management skills can facilitate treatment planning, determine effectiveness of interventions, and reduce disease complications. This paper describes the use of a Rasch model, the Rating Scale Model, to examine psychometric properties of the 50-item Health Problem-Solving Scale (HPSS) among 320 African American patients with high risk for cardiovascular disease. Items on the positive/effective HPSS subscales targeted patients at low, moderate, and high levels of positive/effective problem solving, whereas items on the negative/ineffective problem solving subscales mostly targeted those at moderate or high levels of ineffective problem solving. Validity was examined by correlating factor scores on the measure with clinical and behavioral measures. Items on the HPSS show promise in the ability to assess health-related problem solving among high risk patients. However, further revisions of the scale are needed to increase its usability and validity with large, diverse patient populations in the future.
Imaging and identification of waterborne parasites using a chip-scale microscope.
Lee, Seung Ah; Erath, Jessey; Zheng, Guoan; Ou, Xiaoze; Willems, Phil; Eichinger, Daniel; Rodriguez, Ana; Yang, Changhuei
2014-01-01
We demonstrate a compact portable imaging system for the detection of waterborne parasites in resource-limited settings. The previously demonstrated sub-pixel sweeping microscopy (SPSM) technique is a lens-less imaging scheme that can achieve high-resolution (<1 µm) bright-field imaging over a large field-of-view (5.7 mm×4.3 mm). A chip-scale microscope system, based on the SPSM technique, can be used for automated and high-throughput imaging of protozoan parasite cysts for the effective diagnosis of waterborne enteric parasite infection. We successfully imaged and identified three major types of enteric parasite cysts, Giardia, Cryptosporidium, and Entamoeba, which can be found in fecal samples from infected patients. We believe that this compact imaging system can serve well as a diagnostic device in challenging environments, such as rural settings or emergency outbreaks.
Imager for Mars Pathfinder (IMP)
NASA Technical Reports Server (NTRS)
Smith, Peter H.
1994-01-01
The IMP camera is a near-surface sensing experiment with many capabilities beyond those normally associated with an imager. It is fully pointable in both elevation and azimuth, with a protected, stowed position looking straight down. Stereo separation is provided with two optical paths; each has a 12-position filter wheel. The primary function of the camera, strongly tied to mission success, is to take a color panorama of the surrounding terrain. IMP requires approximately 120 images to give a complete downward hemisphere from the deployed position. IMP provides the geologist, and everyone else, a view of the local morphology with millimeter-to-meter-scale resolution over a broad area. In addition to the general morphology of the scene, IMP has a large complement of specially chosen filters to aid in both the identification of mineral types and their degree of weathering.
Identification of novel diagnostic biomarkers for thyroid carcinoma.
Wang, Xiliang; Zhang, Qing; Cai, Zhiming; Dai, Yifan; Mou, Lisha
2017-12-19
Thyroid carcinoma (THCA) is the most universal endocrine malignancy worldwide. Unfortunately, a limited number of large-scale analyses have been performed to identify biomarkers for THCA. Here, we conducted a meta-analysis using 505 THCA patients and 59 normal controls from The Cancer Genome Atlas. After identifying differentially expressed long non-coding RNA (lncRNA) and protein coding genes (PCG), we found vast difference in various lncRNA-PCG co-expressed pairs in THCA. A dysregulation network with scale-free topology was constructed. Four molecules (LA16c-380H5.2, RP11-203J24.8, MLF1 and SDC4) could potentially serve as diagnostic biomarkers of THCA with high sensitivity and specificity. We further represent a diagnostic panel with expression cutoff values. Our results demonstrate the potential application of those four molecules as novel independent biomarkers for THCA diagnosis.
Analyzing large-scale proteomics projects with latent semantic indexing.
Klie, Sebastian; Martens, Lennart; Vizcaíno, Juan Antonio; Côté, Richard; Jones, Phil; Apweiler, Rolf; Hinneburg, Alexander; Hermjakob, Henning
2008-01-01
Since the advent of public data repositories for proteomics data, readily accessible results from high-throughput experiments have been accumulating steadily. Several large-scale projects in particular have contributed substantially to the amount of identifications available to the community. Despite the considerable body of information amassed, very few successful analyses have been performed and published on this data, leveling off the ultimate value of these projects far below their potential. A prominent reason published proteomics data is seldom reanalyzed lies in the heterogeneous nature of the original sample collection and the subsequent data recording and processing. To illustrate that at least part of this heterogeneity can be compensated for, we here apply a latent semantic analysis to the data contributed by the Human Proteome Organization's Plasma Proteome Project (HUPO PPP). Interestingly, despite the broad spectrum of instruments and methodologies applied in the HUPO PPP, our analysis reveals several obvious patterns that can be used to formulate concrete recommendations for optimizing proteomics project planning as well as the choice of technologies used in future experiments. It is clear from these results that the analysis of large bodies of publicly available proteomics data by noise-tolerant algorithms such as the latent semantic analysis holds great promise and is currently underexploited.
Transition probabilities in neutron-rich 80,82Se and the role of the ν g_9/2 orbital
NASA Astrophysics Data System (ADS)
Litzinger, J.; Blazhev, A.; Dewald, A.; Didierjean, F.; Duchêne, G.; Fransen, C.; Lozeva, R.; Verney, D.; de Angelis, G.; Bazzacco, D.; Birkenbach, B.; Bottoni, S.; Bracco, A.; Braunroth, T.; Cederwall, B.; Corradi, L.; Crespi, F. C. L.; Désesquelles, P.; Eberth, J.; Ellinger, E.; Farnea, E.; Fioretto, E.; Gernhäuser, R.; Goasduff, A.; Görgen, A.; Gottardo, A.; Grebosz, J.; Hackstein, M.; Hess, H.; Ibrahim, F.; Jolie, J.; Jungclaus, A.; Kolos, K.; Korten, W.; Leoni, S.; Lunardi, S.; Maj, A.; Menegazzo, R.; Mengoni, D.; Michelagnoli, C.; Mijatovic, T.; Million, B.; Möller, O.; Modamio, V.; Montagnoli, G.; Montanari, D.; Morales, A. I.; Napoli, D. R.; Niikura, M.; Pietralla, N.; Pollarolo, G.; Pullia, A.; Quintana, B.; Recchia, F.; Reiter, P.; Rosso, D.; Sahin, E.; Salsac, M. D.; Scarlassara, F.; Söderström, P.-A.; Stefanini, A. M.; Stezowski, O.; Szilner, S.; Theisen, Ch.; Valiente-Dobón, J. J.; Vandone, V.; Vogt, A.
2018-04-01
Transition probabilities of intermediate-spin yrast and non-yrast excitations in 80,82Se were investigated in a recoil distance Doppler-shift (RDDS) experiment performed at the Istituto Nazionale di Fisica Nucleare, Laboratori Nazionali di Legnaro. The Cologne Plunger device for deep inelastic scattering was used for the RDDS technique and was combined with the AGATA Demonstrator array for γ-ray detection and coupled to the PRISMA magnetic spectrometer for event-by-event particle identification. In 80Se, the level lifetimes of the yrast (6_1^+) and (8_1^+) states and of a non-yrast band feeding the yrast 4_1^+ state are determined. A spin and parity assignment of the head of this sideband is discussed based on the experimental results and supported by large-scale shell-model calculations. In 82Se, the level lifetimes of the yrast 6_1^+ state and the yrare 4_2^+ state, and lifetime limits of the yrast (10_1^+) state and of the 5_1^- state, are determined. Although the experimental results contain large uncertainties, they are interpreted with care in terms of large-scale shell-model calculations using the effective interactions JUN45 and jj44b. The excited states' wave functions are investigated and discussed with respect to the role of the neutron g_9/2 orbital.
Mining Large Scale Tandem Mass Spectrometry Data for Protein Modifications Using Spectral Libraries.
Horlacher, Oliver; Lisacek, Frederique; Müller, Markus
2016-03-04
Experimental improvements in post-translational modification (PTM) detection by tandem mass spectrometry (MS/MS) have allowed the identification of vast numbers of PTMs. Open modification searches (OMSs) of MS/MS data, which do not require prior knowledge of the modifications present in the sample, further increased the diversity of detected PTMs. Despite much effort, there is still a lack of functional annotation of PTMs. One possibility to narrow the annotation gap is to mine MS/MS data deposited in public repositories and to correlate the PTM presence with biological meta-information attached to the data. Since the data volume can be quite substantial and can contain tens of millions of MS/MS spectra, the data mining tools must be able to cope with big data. Here, we present two tools, Liberator and MzMod, which are built using the MzJava class library and the Apache Spark large scale computing framework. Liberator builds large MS/MS spectrum libraries, and MzMod searches them in an OMS mode. We applied these tools to a recently published set of 25 million spectra from 30 human tissues and present tissue-specific PTMs. We also compared the results to the ones obtained with the OMS tool MODa and the search engine X!Tandem.
Basu, Sumanta; Duren, William; Evans, Charles R; Burant, Charles F; Michailidis, George; Karnovsky, Alla
2017-05-15
Recent technological advances in mass spectrometry, development of richer mass spectral libraries and data processing tools have enabled large scale metabolic profiling. Biological interpretation of metabolomics studies heavily relies on knowledge-based tools that contain information about metabolic pathways. Incomplete coverage of different areas of metabolism and lack of information about non-canonical connections between metabolites limits the scope of applications of such tools. Furthermore, the presence of a large number of unknown features, which cannot be readily identified, but nonetheless can represent bona fide compounds, also considerably complicates biological interpretation of the data. Leveraging recent developments in the statistical analysis of high-dimensional data, we developed a new Debiased Sparse Partial Correlation algorithm (DSPC) for estimating partial correlation networks and implemented it as a Java-based CorrelationCalculator program. We also introduce a new version of our previously developed tool Metscape that enables building and visualization of correlation networks. We demonstrate the utility of these tools by constructing biologically relevant networks and in aiding identification of unknown compounds. http://metscape.med.umich.edu. Supplementary data are available at Bioinformatics online.
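The partial-correlation idea underlying DSPC can be illustrated with the classical first-order estimator (this is not the paper's debiased sparse estimator, just a sketch of why conditioning on a confounder removes spurious edges; the synthetic data below are invented):

```python
import math
import random

def pearson(a, b):
    """Sample Pearson correlation coefficient."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def partial_corr(x, y, z):
    """First-order partial correlation of x and y controlling for z."""
    rxy, rxz, ryz = pearson(x, y), pearson(x, z), pearson(y, z)
    return (rxy - rxz * ryz) / math.sqrt((1 - rxz ** 2) * (1 - ryz ** 2))

# two metabolites both driven by a shared confounder z: their marginal
# correlation is high, but conditioning on z removes the spurious edge
rng = random.Random(42)
z = [rng.gauss(0, 1) for _ in range(500)]
x = [v + rng.gauss(0, 0.3) for v in z]
y = [v + rng.gauss(0, 0.3) for v in z]
print(round(pearson(x, y), 2), round(partial_corr(x, y, z), 2))
```

In a full network, DSPC generalizes this to conditioning on all remaining variables at once, with regularization and debiasing to cope with the high-dimensional setting.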
The Geology of Mars as Seen by MRO's HiRISE
NASA Astrophysics Data System (ADS)
McEwen, A. S.
2007-12-01
By September 2007 the High Resolution Imaging Science Experiment (HiRISE) had acquired more than 3,000 images of Mars at resolutions as high as 25 cm/pixel in the 3 PM mapping orbit of Mars Reconnaissance Orbiter, covering about 0.2 percent of the surface. These images are helping to address a broad range of science issues, as presented in dozens of abstracts to this conference. In this talk I will focus on several topics. (1) The color data is proving quite valuable to reduce the ambiguities of B&W images; to correlate deposits and better define the stratigraphy; and to extend mineral identifications to the scale of outcrops. (2) The nature of the Martian highlands is being revealed, with the identification of megabreccia, hydrous minerals (by OMEGA and CRISM spectrometers), and the detailed nature of the layered or massive stratigraphy where exposed in cross-section. (3) There is new evidence for the roles of water in the most recent large (at least 1 km diameter) impact craters, which may have implications for the altered mineralogy of the ancient crust. (4) New observations and measurements are leading to improved understanding of slope processes such as gullies, creep, and mass wasting. We are producing meter-scale digital elevation models to test high-priority science questions.
Morphological evidence for discrete stocks of yellow perch in Lake Erie
Kocovsky, Patrick M.; Knight, Carey T.
2012-01-01
Identification and management of unique stocks of exploited fish species are high-priority management goals in the Laurentian Great Lakes. We analyzed whole-body morphometrics of 1430 yellow perch Perca flavescens captured during 2007–2009 from seven known spawning areas in Lake Erie to determine if morphometrics vary among sites and management units to assist in identification of spawning stocks of this heavily exploited species. Truss-based morphometrics (n = 21 measurements) were analyzed using principal component analysis followed by ANOVA of the first three principal components to determine whether yellow perch from the several sampling sites varied morphometrically. Duncan's multiple range test was used to determine which sites differed from one another to test whether morphometrics varied at scales finer than management unit. Morphometrics varied significantly among sites and annually, but differences among sites were much greater. Sites within the same management unit typically differed significantly from one another, indicating morphometric variation at a scale finer than management unit. These results are largely congruent with recently-published studies on genetic variation of yellow perch from many of the same sampling sites. Thus, our results provide additional evidence that there are discrete stocks of yellow perch in Lake Erie and that management units likely comprise multiple stocks.
Stimulus Picture Identification in Articulation Testing
ERIC Educational Resources Information Center
Mullen, Patricia A.; Whitehead, Robert L.
1977-01-01
For 20 normal-speaking and 20 articulation-defective subjects (7 and 8 years old), the percentage of correct initial identification of stimulus pictures on the Goldman-Fristoe Test of Articulation was compared with the percentage of correct identification on the Arizona Articulation Proficiency Scale. (Author/IM)
Hind, Jacqueline A.; Gensler, Gary; Brandt, Diane K.; Miller Gardner, Patricia J.; Blumenthal, Loreen; Gramigna, Gary D.; Kosek, Steven; Lundy, Donna; McGarvey-Toler, Susan; Rockafellow, Susan; Sullivan, Paula A.; Villa, Marybell; Gill, Gary D.; Lindblad, Anne S.; Logemann, Jeri A.; Robbins, JoAnne
2009-01-01
Accurate detection and classification of aspiration is a critical component of videofluoroscopic swallowing evaluation, the most commonly utilized instrumental method for dysphagia diagnosis and treatment. Currently published literature indicates that inter-judge reliability for the identification of aspiration ranges from poor to fairly good depending on the amount of training provided to clinicians. The majority of extant studies compared judgments among clinicians. No studies included judgments made during the use of a postural compensatory strategy. The purpose of this study was to examine the accuracy of judgments made by speech-language pathologists (SLPs) practicing in hospitals compared with unblinded expert judges when identifying aspiration and using the 8-point Penetration/Aspiration Scale. Clinicians received extensive training for the detection of aspiration and minimal training on use of the Penetration/Aspiration Scale. Videofluoroscopic data were collected from 669 patients as part of a large, randomized clinical trial and included judgments of 10,200 swallows made by 76 clinicians from 44 hospitals in 11 states. Judgments were made on swallows during use of dysphagia compensatory strategies: chin down posture with thin liquids and thickened liquids (nectar-thick and honey-thick consistencies) in a head neutral posture. The subject population included patients with Parkinson’s disease and/or dementia. Kappa statistics indicate high accuracy for all interventions by SLPs for identification of aspiration (all κ > 0.86) and variable accuracy (range 69%–76%) using the Penetration/Aspiration Scale when compared to expert judges. It is concluded that while the accuracy of identifying the presence of aspiration by SLPs is excellent, more extensive training and/or image enhancement is recommended for precise use of the Penetration/Aspiration Scale. PMID:18953607
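For readers unfamiliar with the agreement statistic reported above, a minimal sketch of Cohen's kappa for binary aspiration judgments (rater vs. expert) follows. The labels are invented for illustration and are not study data.

```python
# Toy computation of Cohen's kappa for binary judgments (1 = aspiration seen,
# 0 = not seen). The rater/expert labels below are invented, not study data.
def cohens_kappa(a, b):
    """Cohen's kappa for two binary label sequences of equal length."""
    n = len(a)
    p_observed = sum(x == y for x, y in zip(a, b)) / n
    # chance agreement: both say yes, plus both say no
    p_yes = (sum(a) / n) * (sum(b) / n)
    p_no = (1 - sum(a) / n) * (1 - sum(b) / n)
    p_chance = p_yes + p_no
    return (p_observed - p_chance) / (1 - p_chance)

rater  = [1, 1, 0, 0, 1, 0, 0, 0, 1, 0]
expert = [1, 1, 0, 0, 1, 0, 0, 1, 1, 0]
print(round(cohens_kappa(rater, expert), 2))  # → 0.8
```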
NASA Astrophysics Data System (ADS)
Brown-Steiner, B.; Selin, N. E.; Prinn, R. G.; Monier, E.; Garcia-Menendez, F.; Tilmes, S.; Emmons, L. K.; Lamarque, J. F.; Cameron-Smith, P. J.
2017-12-01
We summarize two methods to aid in the identification of ozone signals from underlying spatially and temporally heterogeneous data in order to help research communities avoid the sometimes burdensome computational costs of high-resolution high-complexity models. The first method utilizes simplified chemical mechanisms (a Reduced Hydrocarbon Mechanism and a Superfast Mechanism) alongside a more complex mechanism (MOZART-4) within CESM CAM-Chem to extend the number of simulated meteorological years (or add additional members to an ensemble) for a given modeling problem. The Reduced Hydrocarbon mechanism is twice as fast, and the Superfast mechanism is three times faster than the MOZART-4 mechanism. We show that simplified chemical mechanisms are largely capable of simulating surface ozone across the globe as well as the more complex mechanism does, and where they fall short, a simple standardized anomaly emulation approach can correct for their inadequacies. The second method uses strategic averaging over both temporal and spatial scales to filter out the highly heterogeneous noise that underlies ozone observations and simulations. This method allows for a selection of temporal and spatial averaging scales that match a particular signal strength (between 0.5 and 5 ppbv), and enables the identification of regions where an ozone signal can rise above the ozone noise over a given region and a given period of time. In conjunction, these two methods can be used to "scale down" chemical mechanism complexity and quantitatively determine spatial and temporal scales that could enable research communities to utilize simplified representations of atmospheric chemistry and thereby maximize their productivity and efficiency given computational constraints. While this framework is here applied to ozone data, it could also be applied to a broad range of geospatial data sets (observed or modeled) that have spatial and temporal coverage.
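The averaging logic of the second method can be illustrated with a toy calculation: under an independence assumption (a simplification; real ozone noise is autocorrelated), the standard error of a window mean falls as σ/√n, so one can pick the smallest averaging window that brings the noise below a target signal in the 0.5–5 ppbv range. The noise level and signal strength below are hypothetical.

```python
# Illustrative sketch (not the study's code): averaging over longer windows
# shrinks the spread of window means as sigma/sqrt(n), letting a fixed ozone
# signal rise above the noise. sigma and signal values are hypothetical.
import numpy as np

sigma = 10.0        # assumed day-to-day ozone noise, ppbv (hypothetical)
signal = 2.0        # target detectable signal, ppbv

def smallest_window(sigma, signal):
    """Smallest averaging length n whose standard error falls below signal."""
    n = 1
    while sigma / np.sqrt(n) >= signal:
        n += 1
    return n

n = smallest_window(sigma, signal)
rng = np.random.default_rng(1)
daily = signal + rng.normal(0.0, sigma, size=10_000)
window_means = daily[: (len(daily) // n) * n].reshape(-1, n).mean(axis=1)
print(n, window_means.std())   # spread of window means ≈ sigma / sqrt(n)
```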
ERIC Educational Resources Information Center
Beltyukova, Svetlana A.; Stone, Gregory M.; Ellis, Lee W.
2008-01-01
Purpose: Speech intelligibility research typically relies on traditional evidence of reliability and validity. This investigation used Rasch analysis to enhance understanding of the functioning and meaning of scores obtained with 2 commonly used procedures: word identification (WI) and magnitude estimation scaling (MES). Method: Narrative samples…
Lot quality assurance sampling for screening communities hyperendemic for Schistosoma mansoni.
Rabarijaona, L P; Boisier, P; Ravaoalimalala, V E; Jeanne, I; Roux, J F; Jutand, M A; Salamon, R
2003-04-01
Lot quality assurance sampling (LQAS) was evaluated for rapid, low-cost identification of communities where Schistosoma mansoni infection was hyperendemic in southern Madagascar. In the study area, S. mansoni infection shows a very focal and heterogeneous distribution, requiring a multitude of local surveys. One sampling plan was tested in the field with schoolchildren and several others were simulated in the laboratory. Randomization and stool specimen collection were performed by voluntary teachers under direct supervision of the study staff and no significant problem occurred. As expected from Receiver Operating Characteristic (ROC) curves, all sampling plans allowed correct identification of hyperendemic communities and of most of the hypoendemic ones. Frequent misclassifications occurred for communities with intermediate prevalence, and the cheapest plans had very low specificity. The study confirmed that LQAS would be a valuable tool for large-scale screening in a country with scarce financial and staff resources. Involving teachers appeared to be quite feasible and should not lower the reliability of surveys. We recommend that the national schistosomiasis control programme systematically uses LQAS for identification of communities, provided that sample sizes are adapted to the specific epidemiological patterns of S. mansoni infection in the main regions.
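An LQAS-type decision rule can be sketched as an operating-characteristic calculation: sample n children, classify the community as hyperendemic if more than d are egg-positive, and compute the probability of that classification as a function of true prevalence. The plan parameters below (n = 30, d = 8) are hypothetical, not the plans evaluated in the study.

```python
# Hedged sketch of an LQAS operating characteristic under binomial sampling.
# The sampling plan (n = 30 children, decision threshold d = 8 positives)
# is hypothetical, chosen only to illustrate the shape of the OC curve.
from math import comb

def p_classify_hyperendemic(n, d, prevalence):
    """P(more than d positives among n sampled) at a given true prevalence."""
    return 1.0 - sum(
        comb(n, k) * prevalence**k * (1 - prevalence) ** (n - k)
        for k in range(d + 1)
    )

n, d = 30, 8
for prev in (0.1, 0.3, 0.5):
    print(prev, round(p_classify_hyperendemic(n, d, prev), 3))
```

The curve is steep at the extremes and shallow in between, which matches the study's observation that misclassifications cluster around intermediate prevalence.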
Haplotype-Based Genotyping in Polyploids.
Clevenger, Josh P; Korani, Walid; Ozias-Akins, Peggy; Jackson, Scott
2018-01-01
Accurate identification of polymorphisms from sequence data is crucial to unlocking the potential of high throughput sequencing for genomics. Single nucleotide polymorphisms (SNPs) are difficult to accurately identify in polyploid crops due to the duplicative nature of polyploid genomes leading to low confidence in the true alignment of short reads. Implementing a haplotype-based method by contrasting subgenome-specific sequences leads to higher accuracy of SNP identification in polyploids. To test this method, a large-scale 48K SNP array (Axiom Arachis2) was developed for Arachis hypogaea (peanut), an allotetraploid, in which 1,674 haplotype-based SNPs were included. Results of the array show that 74% of the haplotype-based SNP markers could be validated, which is considerably higher than previous methods used for peanut. The haplotype method has been implemented in a standalone program, HAPLOSWEEP, which takes as input bam files and a vcf file and identifies haplotype-based markers. Haplotype discovery can be made within single reads or span paired reads, and can leverage long read technology by targeting any length of haplotype. Haplotype-based genotyping is applicable in all allopolyploid genomes and provides confidence in marker identification and in silico-based genotyping for polyploid genomics.
Plant Identification Based on Leaf Midrib Cross-Section Images Using Fractal Descriptors.
da Silva, Núbia Rosa; Florindo, João Batista; Gómez, María Cecilia; Rossatto, Davi Rodrigo; Kolb, Rosana Marta; Bruno, Odemir Martinez
2015-01-01
The correct identification of plants is a common necessity not only to researchers but also to the lay public. Recently, computational methods have been employed to facilitate this task; however, few studies address the wide diversity of plants occurring in the world. This study proposes to analyse images obtained from cross-sections of leaf midrib using fractal descriptors. These descriptors are obtained from the fractal dimension of the object computed at a range of scales. In this way, they provide rich information regarding the spatial distribution of the analysed structure and, as a consequence, they measure the multiscale morphology of the object of interest. In Biology, such morphology is of great importance because it is related to evolutionary aspects and is successfully employed to characterize and discriminate among different biological structures. Here, the fractal descriptors are used to identify the species of plants based on the image of their leaves. A large number of samples were examined: 606 leaf samples of 50 species from the Brazilian flora. The results are compared to other imaging methods in the literature and demonstrate that fractal descriptors are precise and reliable in the taxonomic process of plant species identification.
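The descriptor construction can be illustrated with a minimal box-counting sketch, a standard way to measure an object across scales; the log-log box counts form a multiscale descriptor, and their slope estimates fractal dimension. The test image is synthetic, and the study's fractal descriptors are richer than this single slope.

```python
# Minimal box-counting sketch of the fractal-descriptor idea: count occupied
# boxes of a binarized structure at several scales. The image is a synthetic
# filled square (dimension 2), not the leaf midrib data.
import numpy as np

def box_counts(binary_img, box_sizes):
    """Number of occupied s-by-s boxes for each box size s."""
    counts = []
    for s in box_sizes:
        h, w = binary_img.shape
        # trim so the image tiles exactly into s x s boxes
        trimmed = binary_img[: (h // s) * s, : (w // s) * s]
        boxes = trimmed.reshape(h // s, s, w // s, s).any(axis=(1, 3))
        counts.append(boxes.sum())
    return np.array(counts)

img = np.ones((64, 64), dtype=bool)          # filled square test image
sizes = np.array([1, 2, 4, 8, 16])
counts = box_counts(img, sizes)
# slope of log(count) vs log(1/s) estimates the fractal dimension
slope, _ = np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)
print(counts, round(slope, 2))               # → [4096 1024 256 64 16] 2.0
```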
The scale dependence of optical diversity in a prairie ecosystem
NASA Astrophysics Data System (ADS)
Gamon, J. A.; Wang, R.; Stilwell, A.; Zygielbaum, A. I.; Cavender-Bares, J.; Townsend, P. A.
2015-12-01
Biodiversity loss, one of the most crucial challenges of our time, endangers ecosystem services that maintain human wellbeing. Traditional methods of measuring biodiversity require extensive and costly field sampling by biologists with extensive experience in species identification. Remote sensing can be used for such assessment based upon patterns of optical variation. This provides an efficient and cost-effective means of determining ecosystem diversity at different scales and over large areas. Sampling scale has been described as a "fundamental conceptual problem" in ecology, and is an important practical consideration in both remote sensing and traditional biodiversity studies. On the one hand, with decreasing spatial and spectral resolution, the differences among different optical types may become weak or even disappear. Alternatively, high spatial and/or spectral resolution may introduce redundant or contradictory information. For example, at high resolution, the variation within optical types (e.g., between leaves on a single plant canopy) may add complexity unrelated to species richness. We studied the scale-dependence of optical diversity in a prairie ecosystem at Cedar Creek Ecosystem Science Reserve, Minnesota, USA using a variety of spectrometers from several platforms on the ground and in the air. Using the coefficient of variation (CV) of spectra as an indicator of optical diversity, we found that high richness plots generally have a higher coefficient of variation. High resolution imaging spectrometer data (1 mm pixels) showed the highest sensitivity to richness level. With decreasing spatial resolution, the difference in CV between richness levels decreased, but remained significant. These findings can be used to guide airborne studies of biodiversity and develop more effective large-scale biodiversity sampling methods.
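The CV-based optical diversity indicator can be sketched in a few lines. The simulated "plots" below are toys, a noisy shared spectrum versus pixels with different multiplicative scalings standing in for species differences, not the Cedar Creek data.

```python
# Toy sketch of the optical-diversity metric: the coefficient of variation of
# reflectance across pixels, averaged over wavelengths. Spectra are simulated;
# the study used field and airborne spectrometer data.
import numpy as np

def optical_diversity_cv(spectra):
    """Mean across-band CV of reflectance over pixels.

    spectra: array of shape (n_pixels, n_bands).
    """
    cv_per_band = spectra.std(axis=0) / spectra.mean(axis=0)
    return cv_per_band.mean()

rng = np.random.default_rng(2)
base = np.linspace(0.2, 0.5, 50)                        # shared spectral shape
low_div = base + rng.normal(0, 0.01, size=(100, 50))    # near-monoculture plot
high_div = base * rng.uniform(0.5, 1.5, size=(100, 1))  # mixed "species" plot
print(optical_diversity_cv(low_div) < optical_diversity_cv(high_div))  # → True
```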
NASA Astrophysics Data System (ADS)
Schneider, Johannes M.; Turowski, Jens M.; Rickenmann, Dieter; Hegglin, Ramon; Arrigo, Sabrina; Mao, Luca; Kirchner, James W.
2014-03-01
Bed load transport during storm events is both an agent of geomorphic change and a significant natural hazard in mountain regions. Thus, predicting bed load transport is a central challenge in fluvial geomorphology and natural hazard risk assessment. Bed load transport during storm events depends on the width and depth of bed scour, as well as the transport distances of individual sediment grains. We traced individual gravels in two steep mountain streams, the Erlenbach (Switzerland) and Rio Cordon (Italy), using magnetic and radio frequency identification tags, and measured their bed load transport rates using calibrated geophone bed load sensors in the Erlenbach and a bed load trap in the Rio Cordon. Tracer transport distances and bed load volumes exhibited approximate power law scaling with both the peak stream power and the cumulative stream energy of individual hydrologic events. Bed load volumes scaled much more steeply with peak stream power and cumulative stream energy than tracer transport distances did, and bed load volumes scaled as roughly the third power of transport distances. These observations imply that large bed load transport events become large primarily by scouring the bed deeper and wider, and only secondarily by transporting the mobilized sediment farther. Using the sediment continuity equation, we can estimate the mean effective thickness of the actively transported layer, averaged over the entire channel width and the duration of individual flow events. This active layer thickness also followed approximate power law scaling with peak stream power and cumulative stream energy and ranged up to 0.57 m in the Erlenbach, broadly consistent with independent measurements.
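The approximate power-law scalings reported above can be sketched as a log-log regression, fitting V = a·P^b for bed load volume V against peak stream power P. The synthetic data below assume b = 3 with lognormal scatter, for illustration only; the paper's fitted exponents are its own.

```python
# Hedged sketch of a power-law fit on log-log axes, as one might relate bed
# load volume to peak stream power. Data are synthetic with an assumed
# exponent of 3; they are not the Erlenbach or Rio Cordon measurements.
import numpy as np

def fit_power_law(x, y):
    """Return (a, b) for y ~ a * x**b via log-log linear least squares."""
    b, log_a = np.polyfit(np.log(x), np.log(y), 1)
    return np.exp(log_a), b

rng = np.random.default_rng(3)
peak_power = np.linspace(1.0, 20.0, 40)
volume = 2.0 * peak_power**3 * np.exp(rng.normal(0, 0.1, 40))  # noisy V ~ P^3
a, b = fit_power_law(peak_power, volume)
print(round(a, 2), round(b, 2))   # recovers a near 2 and b near 3
```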
NASA Astrophysics Data System (ADS)
Yang, J.; Weisberg, P.; Dilts, T.
2016-12-01
Climate warming can lead to large-scale drought-induced tree mortality events and greatly affect forest landscape resilience. Climatic water deficit (CWD) and its physiographic variations provide a key mechanism in driving landscape dynamics in response to climate change. Although CWD has been successfully applied in niche-based species distribution models, its application in process-based forest landscape models is still scarce. Here we present a framework incorporating fine-scale influence of terrain on ecohydrology in modeling forest landscape dynamics. We integrated CWD with a forest landscape succession and disturbance model (LANDIS-II) to evaluate how tree species distribution might shift in response to different climate-fire scenarios across an elevation-aspect gradient in a semi-arid montane landscape of northeastern Nevada, USA. Our simulations indicated that drought-intolerant tree species such as quaking aspen could experience greatly reduced distributions in the more arid portions of their existing ranges due to water stress limitations under future climate warming scenarios. However, even at the most xeric portions of its range, aspen is likely to persist in certain environmental settings due to unique and often fine-scale combinations of resource availability, species interactions and disturbance regime. The modeling approach presented here allowed identification of these refugia. In addition, this approach helped quantify how the direction and magnitude of fire influences on species distribution would vary across topoclimatic gradients, and furthered our understanding of the role of environmental conditions, fire, and inter-specific competition in shaping potential responses of landscape resilience to climate change.
Nikolskiy, Igor; Siuzdak, Gary; Patti, Gary J
2015-06-15
The goal of large-scale metabolite profiling is to compare the relative concentrations of as many metabolites extracted from biological samples as possible. This is typically accomplished by measuring the abundances of thousands of ions with high-resolution and high mass accuracy mass spectrometers. Although the data from these instruments provide a comprehensive fingerprint of each sample, identifying the structures of the thousands of detected ions is still challenging and time intensive. An alternative, less-comprehensive approach is to use triple quadrupole (QqQ) mass spectrometry to analyze predetermined sets of metabolites (typically fewer than several hundred). This is done using authentic standards to develop QqQ experiments that specifically detect only the targeted metabolites, with the advantage that the need for ion identification after profiling is eliminated. Here, we propose a framework to extend the application of QqQ mass spectrometers to large-scale metabolite profiling. We aim to provide a foundation for designing QqQ multiple reaction monitoring (MRM) experiments for each of the 82 696 metabolites in the METLIN metabolite database. First, we identify common fragmentation products from the experimental fragmentation data in METLIN. Then, we model the likelihoods of each precursor structure in METLIN producing each common fragmentation product. With these likelihood estimates, we select ensembles of common fragmentation products that minimize our uncertainty about metabolite identities. We demonstrate encouraging performance and, based on our results, we suggest how our method can be integrated with future work to develop large-scale MRM experiments. Our predictions, Supplementary results, and the code for estimating likelihoods and selecting ensembles of fragmentation reactions are made available on the lab website at http://pattilab.wustl.edu/FragPred. © The Author 2015. Published by Oxford University Press. All rights reserved. 
For Permissions, please e-mail: journals.permissions@oup.com.
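The ensemble-selection idea, choosing fragmentation products that minimize uncertainty about metabolite identity, can be caricatured with a greedy toy example: pick fragments that separate the most precursor pairs by their fragment fingerprints. The precursor/fragment table below is invented and far simpler than METLIN-scale likelihood modeling.

```python
# Toy greedy selection of fragmentation products that best distinguish
# candidate precursors. The fragment-to-precursor table is hypothetical;
# the paper's method models likelihoods over METLIN's experimental data.
import itertools

# fragment -> set of hypothetical precursors predicted to produce it
produces = {
    "m/z 85": {"A", "B"},
    "m/z 127": {"B", "C"},
    "m/z 184": {"A", "C", "D"},
}
precursors = {"A", "B", "C", "D"}

def pairs_separated(chosen):
    """Count precursor pairs with distinct fingerprints over chosen fragments."""
    fingerprint = {p: tuple(p in produces[f] for f in chosen) for p in precursors}
    return sum(fingerprint[p] != fingerprint[q]
               for p, q in itertools.combinations(sorted(precursors), 2))

chosen = []
while len(chosen) < 2:
    best = max(sorted(set(produces) - set(chosen)),
               key=lambda f: pairs_separated(chosen + [f]))
    chosen.append(best)
print(chosen, pairs_separated(chosen))  # two fragments separate all 6 pairs
```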
Webborn, Nick; Williams, Alun; McNamee, Mike; Bouchard, Claude; Pitsiladis, Yannis; Ahmetov, Ildus; Ashley, Euan; Byrne, Nuala; Camporesi, Silvia; Collins, Malcolm; Dijkstra, Paul; Eynon, Nir; Fuku, Noriyuki; Garton, Fleur C; Hoppe, Nils; Holm, Søren; Kaye, Jane; Klissouras, Vassilis; Lucia, Alejandro; Maase, Kamiel; Moran, Colin; North, Kathryn N; Pigozzi, Fabio; Wang, Guan
2015-01-01
The general consensus among sport and exercise genetics researchers is that genetic tests have no role to play in talent identification or the individualised prescription of training to maximise performance. Despite the lack of evidence, recent years have witnessed the rise of an emerging market of direct-to-consumer (DTC) tests that claim to be able to identify children's athletic talents. Targeted consumers include mainly coaches and parents. There is concern among the scientific community that the current level of knowledge is being misrepresented for commercial purposes. There remains a lack of universally accepted guidelines and legislation for DTC testing in relation to all forms of genetic testing and not just for talent identification. There is concern over the lack of clarity of information over which specific genes or variants are being tested and the almost universal lack of appropriate genetic counselling for the interpretation of the genetic data to consumers. Furthermore, independent studies have identified issues relating to quality control by DTC laboratories, with different results being reported from samples from the same individual. Consequently, in the current state of knowledge, no child or young athlete should be exposed to DTC genetic testing to define or alter training or for talent identification aimed at selecting gifted children or adolescents. Large-scale collaborative projects may help to develop a stronger scientific foundation on these issues in the future. PMID:26582191
NASA Astrophysics Data System (ADS)
Huang, W. J.; Hsu, C. H.; Chang, L. C.; Chiang, C. J.; Wang, Y. S.; Lu, W. C.
2017-12-01
Hydrogeological framework is the most important basis for groundwater analysis and simulation. Conventionally, core drilling is the most commonly adopted technique for acquiring subsurface data, with other research methods used to interpret the results. Now, with the established groundwater station network, a large amount of groundwater level information is available. Groundwater level is an integrated presentation of the hydrogeological framework and the external pumping and recharge system. Therefore, how to identify the hydrogeological framework from large volumes of groundwater level data is an important subject. In this study, frequency analysis and the rainfall recharge mechanism were used to identify aquifers from the frequency and amplitude of the groundwater level's response to the earth tide. Because the earth tide originates from the gravitational pull of the sun and moon along their paths, it induces changes in soil stress and strain, which in turn affect the groundwater level. The magnitude of the groundwater level change varies with the aquifer pressure system, i.e., whether the aquifer is confined or unconfined. This method has been applied to the identification of aquifers in the Cho-Shui River Alluvial Fan. The identification results were compared with core-drilling records and found to be highly consistent. It is shown that the identification methods developed in this study can considerably contribute to the identification of hydrogeological framework.
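The frequency-analysis step can be illustrated with a synthetic hourly groundwater series: an FFT shows whether spectral energy concentrates in the semidiurnal earth-tide band near 2 cycles/day, the kind of signature the study uses to distinguish aquifer pressure systems. The series length, amplitudes, and noise level below are invented.

```python
# Illustrative FFT sketch (not the study's implementation): detect a
# semidiurnal earth-tide signature in a synthetic hourly groundwater series.
import numpy as np

hours = np.arange(0, 24 * 60)                          # 60 days, hourly samples
tide = 0.05 * np.sin(2 * np.pi * 2.0 * hours / 24.0)   # 2 cycles/day component
rng = np.random.default_rng(4)
level = 10.0 + tide + rng.normal(0, 0.01, hours.size)  # level in meters (toy)

# Magnitude spectrum of the demeaned series, frequencies in cycles per day
spectrum = np.abs(np.fft.rfft(level - level.mean()))
freqs_cpd = np.fft.rfftfreq(hours.size, d=1.0 / 24.0)
peak = freqs_cpd[np.argmax(spectrum)]
print(round(peak, 2))                                  # → 2.0
```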
Identification of Amazonian trees with DNA barcodes.
Gonzalez, Mailyn Adriana; Baraloto, Christopher; Engel, Julien; Mori, Scott A; Pétronelli, Pascal; Riéra, Bernard; Roger, Aurélien; Thébaud, Christophe; Chave, Jérôme
2009-10-16
Large-scale plant diversity inventories are critical to develop informed conservation strategies. However, the workload required for classic taxonomic surveys remains high and is particularly problematic for megadiverse tropical forests. Based on a comprehensive census of all trees in two hectares of a tropical forest in French Guiana, we examined whether plant DNA barcoding could contribute to increasing the quality and the pace of tropical plant biodiversity surveys. Of the eight plant DNA markers we tested (rbcLa, rpoC1, rpoB, matK, ycf5, trnL, psbA-trnH, ITS), matK and ITS had a low rate of sequencing success. More critically, none of the plastid markers achieved a rate of correct plant identification greater than 70%, either alone or combined. The performance of all barcoding markers was noticeably low in a few species-rich clades, such as the Laureae and the Sapotaceae. A field test of the approach enabled us to detect 130 molecular operational taxonomic units in a sample of 252 juvenile trees. Including molecular markers increased the identification rate of juveniles from 72% (morphology alone) to 96% (morphology and molecular) of the individuals assigned to a known tree taxon. We conclude that while DNA barcoding is an invaluable tool for detecting errors in identifications and for identifying plants at juvenile stages, its limited ability to identify collections will constrain the practical implementation of DNA-based tropical plant biodiversity programs.
RGAugury: a pipeline for genome-wide prediction of resistance gene analogs (RGAs) in plants.
Li, Pingchuan; Quan, Xiande; Jia, Gaofeng; Xiao, Jin; Cloutier, Sylvie; You, Frank M
2016-11-02
Resistance gene analogs (RGAs), such as NBS-encoding proteins, receptor-like protein kinases (RLKs) and receptor-like proteins (RLPs), are potential R-genes that contain specific conserved domains and motifs. Thus, RGAs can be predicted based on their conserved structural features using bioinformatics tools. Computer programs have been developed for the identification of individual domains and motifs from the protein sequences of RGAs but none offer a systematic assessment of the different types of RGAs. A user-friendly and efficient pipeline is needed for large-scale genome-wide RGA predictions of the growing number of sequenced plant genomes. An integrative pipeline, named RGAugury, was developed to automate RGA prediction. The pipeline first identifies RGA-related protein domains and motifs, namely nucleotide binding site (NB-ARC), leucine rich repeat (LRR), transmembrane (TM), serine/threonine and tyrosine kinase (STTK), lysin motif (LysM), coiled-coil (CC) and Toll/Interleukin-1 receptor (TIR). RGA candidates are identified and classified into four major families based on the presence of combinations of these RGA domains and motifs: NBS-encoding, TM-CC, and membrane associated RLP and RLK. All time-consuming analyses of the pipeline are parallelized to improve performance. The pipeline was evaluated using the well-annotated Arabidopsis genome. A total of 98.5%, 85.2%, and 100% of the reported NBS-encoding genes, membrane associated RLPs and RLKs were validated, respectively. The pipeline was also successfully applied to predict RGAs for 50 sequenced plant genomes. A user-friendly web interface was implemented to ease command line operations, facilitate visualization and simplify result management for multiple datasets. RGAugury is an efficient, integrative bioinformatics tool for large-scale genome-wide identification of RGAs. It is freely available at Bitbucket: https://bitbucket.org/yaanlpc/rgaugury.
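The family-assignment step the abstract describes can be caricatured as a rule table over detected domains. The rules below are a simplification inferred from the abstract's four families; they are not RGAugury's actual decision logic.

```python
# Hedged sketch of domain-combination classification of RGA candidates into
# the four families named in the abstract. The rule order and conditions are
# a simplification for illustration, not RGAugury's decision table.
def classify_rga(domains):
    """Assign an RGA family from the set of domains/motifs detected."""
    d = set(domains)
    if "NB-ARC" in d:
        return "NBS-encoding"
    if "TM" in d and "LRR" in d and "STTK" in d:
        return "RLK"                      # membrane-associated receptor kinase
    if "TM" in d and "LRR" in d:
        return "RLP"                      # membrane-associated receptor protein
    if "TM" in d and "CC" in d:
        return "TM-CC"
    return "unclassified"

print(classify_rga({"TIR", "NB-ARC", "LRR"}))   # → NBS-encoding
print(classify_rga({"TM", "LRR", "STTK"}))      # → RLK
print(classify_rga({"TM", "CC"}))               # → TM-CC
```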
Single-molecule optical genome mapping of a human HapMap and a colorectal cancer cell line.
Teo, Audrey S M; Verzotto, Davide; Yao, Fei; Nagarajan, Niranjan; Hillmer, Axel M
2015-01-01
Next-generation sequencing (NGS) technologies have changed our understanding of the variability of the human genome. However, the identification of genome structural variations based on NGS approaches with read lengths of 35-300 bases remains a challenge. Single-molecule optical mapping technologies allow the analysis of DNA molecules of up to 2 Mb and as such are suitable for the identification of large-scale genome structural variations, and for de novo genome assemblies when combined with short-read NGS data. Here we present optical mapping data for two human genomes: the HapMap cell line GM12878 and the colorectal cancer cell line HCT116. High molecular weight DNA was obtained by embedding GM12878 and HCT116 cells, respectively, in agarose plugs, followed by DNA extraction under mild conditions. Genomic DNA was digested with KpnI and 310,000 and 296,000 DNA molecules (≥ 150 kb and 10 restriction fragments), respectively, were analyzed per cell line using the Argus optical mapping system. Maps were aligned to the human reference by OPTIMA, a new glocal alignment method. Genome coverage of 6.8× and 5.7× was obtained, respectively; 2.9× and 1.7× more than the coverage obtained with previously available software. Optical mapping allows the resolution of large-scale structural variations of the genome, and the scaffold extension of NGS-based de novo assemblies. OPTIMA is an efficient new alignment method; our optical mapping data provide a resource for genome structure analyses of the human HapMap reference cell line GM12878, and the colorectal cancer cell line HCT116.
Detection of Invasive Mosquito Vectors Using Environmental DNA (eDNA) from Water Samples
Schneider, Judith; Valentini, Alice; Dejean, Tony; Montarsi, Fabrizio; Taberlet, Pierre
2016-01-01
Repeated introductions and spread of invasive mosquito species (IMS) have been recorded worldwide on a large scale in recent decades. In this context, members of the mosquito genus Aedes can present serious risks to public health as they have or may develop vector competence for various viral diseases. While the Tiger mosquito (Aedes albopictus) is a well-known vector for, e.g., dengue and chikungunya viruses, the Asian bush mosquito (Ae. j. japonicus) and Ae. koreicus have shown vector competence in the field and the laboratory for a number of viruses including dengue, West Nile fever and Japanese encephalitis. Early detection and identification are therefore crucial for successful eradication or control strategies. Traditional specific identification and monitoring of different and/or cryptic life stages of the invasive Aedes species on morphological grounds may lead to misidentifications, and are problematic when extensive surveillance is needed. In this study, we developed, tested and applied an environmental DNA (eDNA) approach for the detection of three IMS, based on water samples collected in the field in several European countries. We compared real-time quantitative PCR (qPCR) assays specific for these three species and an eDNA metabarcoding approach with traditional sampling, and discussed the advantages and limitations of these methods. Detection probabilities for eDNA-based approaches were higher in most of the specific comparisons than for traditional survey, and the results were congruent between both molecular methods, confirming the reliability and efficiency of alternative eDNA-based techniques for the early and unambiguous detection and surveillance of invasive mosquito vectors. The ease of water sampling procedures in the eDNA approach tested here allows the development of large-scale monitoring and surveillance programs of IMS, especially using citizen science projects. PMID:27626642
Scaglione, Davide; Lanteri, Sergio; Acquadro, Alberto; Lai, Zhao; Knapp, Steven J; Rieseberg, Loren; Portis, Ezio
2012-10-01
Cynara cardunculus (2n = 2× = 34) is a member of the Asteraceae family that contributes significantly to the agricultural economy of the Mediterranean basin. The species includes two cultivated varieties, globe artichoke and cardoon, which are grown mainly for food. Cynara cardunculus is an orphan crop species whose genome/transcriptome has been relatively unexplored, especially in comparison to other Asteraceae crops. Hence, there is a significant need to improve its genomic resources through the identification of novel genes and sequence-based markers, to design new breeding schemes aimed at increasing quality and crop productivity. We report the outcome of cDNA sequencing and assembly for eleven accessions of C. cardunculus. Sequencing of three mapping parental genotypes using Roche 454-Titanium technology generated 1.7 × 10⁶ reads, which were assembled into 38,726 reference transcripts covering 32 Mbp. Putative enzyme-encoding genes were annotated using the KEGG-database. Transcription factors and candidate resistance genes were surveyed as well. Paired-end sequencing was done for cDNA libraries of eight other representative C. cardunculus accessions on an Illumina Genome Analyzer IIx, generating 46 × 10⁶ reads. Alignment of the IGA and 454 reads to reference transcripts led to the identification of 195,400 SNPs with a Bayesian probability exceeding 95%; a validation rate of 90% was obtained by Sanger-sequencing of a subset of contigs. These results demonstrate that the integration of data from different NGS platforms enables large-scale transcriptome characterization, along with massive SNP discovery. This information will contribute to the dissection of key agricultural traits in C. cardunculus and facilitate the implementation of marker-assisted selection programs. © 2012 The Authors. Plant Biotechnology Journal © 2012 Society for Experimental Biology, Association of Applied Biologists and Blackwell Publishing Ltd.
Ensuring quality in studies linking cancer registries and biobanks.
Langseth, Hilde; Luostarinen, Tapio; Bray, Freddie; Dillner, Joakim
2010-04-01
The Nordic countries have a long tradition of providing comparable and high quality cancer data through the national population-based cancer registries and the capability to link the diverse large-scale biobanks currently in operation. The joining of these two infrastructural resources can provide a study base for large-scale studies of etiology, treatment and early detection of cancer. Research projects based on combined data from cancer registries and biobanks provide great opportunities, but also present major challenges. Biorepositories have become an important resource in molecular epidemiology, and the increased interest in performing etiological, clinical and gene-environment-interaction studies, involving information from biological samples linked to population-based cancer registries, warrants a joint evaluation of the quality aspects of the two resources, as well as an assessment of whether the resources can be successfully combined into a high quality study. While the quality of biospecimen handling and analysis is commonly considered in different studies, the logistics of data handling, including the linkage of the biobank with the cancer registry, is an overlooked aspect of a biobank-based study. It is thus the aim of this paper to describe recommendations on data handling, in particular the linkage of biobank material to cancer registry data and the quality aspects thereof, based on the experience of Nordic collaborative projects combining data from cancer registries and biobanks. We propose a standard documentation with respect to the following topics: the quality control aspects of cancer registration, the identification of cases and controls, the identification and use of data confounders, the stability of serum components, historical storage conditions, aliquoting history, the number of freeze/thaw cycles and available volumes.
Epithelial ovarian cancer: the molecular genetics of epithelial ovarian cancer.
Krzystyniak, J; Ceppi, L; Dizon, D S; Birrer, M J
2016-04-01
Epithelial ovarian cancer (EOC) remains one of the leading causes of cancer-related deaths among women worldwide, despite gains in diagnostics and treatments made over the last three decades. Existing markers of ovarian cancer possess very limited clinical relevance, highlighting the emerging need for the identification of novel prognostic biomarkers as well as better predictive factors that might allow the stratification of patients who could benefit from a more targeted approach. We summarize the molecular genetics of EOC. Large-scale high-throughput genomic technologies appear to be powerful tools for investigations into the genetic abnormalities in ovarian tumors, including studies on dysregulated genes and aberrantly activated signaling pathways. Such technologies can complement well-established clinical histopathology analysis and tumor grading and should result in better, more tailored treatments in the future. Genomic signatures obtained by gene expression profiling of EOC may be able to predict survival outcomes and other important clinical outcomes, such as the success of surgical treatment. Finally, genomic analyses may allow for the identification of novel predictive biomarkers for purposes of treatment planning. Combined, these data suggest a pathway to progress in the treatment of advanced ovarian cancer and the promise of fulfilling the objective of providing personalized medicine to women with ovarian cancer. The understanding of basic molecular events in the tumorigenesis and chemoresistance of EOC, together with the discovery of potential biomarkers, may be greatly enhanced through large-scale genomic studies. In order to maximize the impact of these technologies, however, extensive validation studies are required. © The Author 2016. Published by Oxford University Press on behalf of the European Society for Medical Oncology. All rights reserved. For permissions, please email: journals.permissions@oup.com.
NASA Astrophysics Data System (ADS)
Yang, Y.; Wang, J.; Gong, S.; Zhang, X.; Wang, H.; Wang, Y.; Wang, J.; Li, D.; Guo, J.
2015-03-01
Using surface meteorological observation and high resolution emission data, this paper discusses the application of the PLAM/h index (Parameter Linking Air-quality to Meteorological conditions/haze) in the prediction of large-scale low visibility and fog-haze events. Based on the two-dimensional probability density function diagnosis model for emissions, the study extends the diagnosis and prediction of the meteorological pollution index PLAM to the regional visibility fog-haze intensity. The results show that combining the influence of regular meteorological conditions and emission factors together in the PLAM/h parameterization scheme is very effective in improving the diagnostic identification ability of the fog-haze weather in North China. The correlation coefficients for four seasons (spring, summer, autumn and winter) between PLAM/h and visibility observation are 0.76, 0.80, 0.96 and 0.86 respectively, and all their significance levels exceed 0.001, showing the ability of PLAM/h to predict the seasonal changes and differences of fog-haze weather in the North China region. The high-value correlation zones are located in Jing-Jin-Ji (Beijing, Tianjin, Hebei), the Bohai Bay rim and southern Hebei-northern Henan, indicating that the PLAM/h index is related to the distribution of frequent heavy fog-haze weather in North China and to the distribution of high-emission zones. Comparative analysis of heavy fog-haze events and large-scale fine weather processes in winter and summer shows that the PLAM/h index 24 h forecast is highly correlated with the visibility observation. The PLAM/h index therefore has good capability for identification, analysis and forecasting.
NASA Astrophysics Data System (ADS)
Yang, Y. Q.; Wang, J. Z.; Gong, S. L.; Zhang, X. Y.; Wang, H.; Wang, Y. Q.; Wang, J.; Li, D.; Guo, J. P.
2016-02-01
Using surface meteorological observation and high-resolution emission data, this paper discusses the application of the PLAM/h index (Parameter Linking Air-quality to Meteorological conditions/haze) in the prediction of large-scale low visibility and fog-haze events. Based on the two-dimensional probability density function diagnosis model for emissions, the study extends the diagnosis and prediction of the meteorological pollution index PLAM to the regional visibility fog-haze intensity. The results show that combining the influence of regular meteorological conditions and emission factors together in the PLAM/h parameterization scheme is very effective in improving the diagnostic identification ability of the fog-haze weather in North China. The determination coefficients for four seasons (spring, summer, autumn, and winter) between PLAM/h and visibility observation are 0.76, 0.80, 0.96, and 0.86, respectively, and all of their significance levels exceed 0.001, showing the ability of PLAM/h to predict the seasonal changes and differences of fog-haze weather in the North China region. The high-value correlation zones are located in Jing-Jin-Ji (Beijing, Tianjin, Hebei), Bohai Bay rim, and southern Hebei-northern Henan, indicating that the PLAM/h index is related to the distribution of frequent heavy fog-haze weather in North China and the distribution of emission high-value zone. Through comparative analysis of the heavy fog-haze events and large-scale clear-weather processes in winter and summer, it is found that PLAM/h index 24 h forecast is highly correlated with the visibility observation. Therefore, the PLAM/h index has good capability in identification, analysis, and forecasting.
Detection of Invasive Mosquito Vectors Using Environmental DNA (eDNA) from Water Samples.
Schneider, Judith; Valentini, Alice; Dejean, Tony; Montarsi, Fabrizio; Taberlet, Pierre; Glaizot, Olivier; Fumagalli, Luca
2016-01-01
Repeated introductions and spread of invasive mosquito species (IMS) have been recorded worldwide on a large scale in recent decades. In this context, members of the mosquito genus Aedes can present serious risks to public health as they have, or may develop, vector competence for various viral diseases. While the Tiger mosquito (Aedes albopictus) is a well-known vector of, e.g., dengue and chikungunya viruses, the Asian bush mosquito (Ae. j. japonicus) and Ae. koreicus have shown vector competence in the field and the laboratory for a number of viruses including dengue, West Nile fever and Japanese encephalitis. Early detection and identification are therefore crucial for successful eradication or control strategies. Traditional specific identification and monitoring of different and/or cryptic life stages of the invasive Aedes species on morphological grounds may lead to misidentifications, and are problematic when extensive surveillance is needed. In this study, we developed, tested and applied an environmental DNA (eDNA) approach for the detection of three IMS, based on water samples collected in the field in several European countries. We compared real-time quantitative PCR (qPCR) assays specific for these three species and an eDNA metabarcoding approach with traditional sampling, and discuss the advantages and limitations of these methods. Detection probabilities for eDNA-based approaches were, in most of the specific comparisons, higher than for traditional surveys, and the results were congruent between both molecular methods, confirming the reliability and efficiency of alternative eDNA-based techniques for the early and unambiguous detection and surveillance of invasive mosquito vectors. The ease of the water sampling procedure in the eDNA approach tested here allows the development of large-scale monitoring and surveillance programs for IMS, especially through citizen science projects.
Identification of the underlying factor structure of the Derriford Appearance Scale 24.
Moss, Timothy P; Lawson, Victoria; White, Paul
2015-01-01
Background. The Derriford Appearance Scale 24 (DAS24) is a widely used measure of distress and dysfunction in relation to self-consciousness of appearance. It has been used in clinical and research settings, and translated into numerous European and Asian languages. Hitherto, no study has conducted an analysis to determine the underlying factor structure of the scale. Methods. A large (n = 1,265) sample of community and hospital patients with a visible difference were recruited face to face or by post, and completed the DAS24. Results. A two-factor solution was generated. An evaluation of the congruence of the factor solutions in the hospital and community samples, using Tucker's Coefficient of Congruence (rc = .979) and confirmatory factor analysis, demonstrated a consistent factor structure. A main factor, general self-consciousness (GSC), was represented by 18 items. Six items comprised a second factor, sexual and body self-consciousness (SBSC). The SBSC scale demonstrated greater sensitivity and specificity in identifying distress for sexually significant areas of the body. Discussion. The factor structure of the DAS24 facilitates a more nuanced interpretation of scores using this scale. Two conceptually and statistically coherent sub-scales were identified. The SBSC sub-scale offers a means of identifying distress and dysfunction around sexually significant areas of the body not previously possible with this scale.
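As a minimal numerical sketch of the congruence check described above: Tucker's coefficient of congruence is an uncentred cosine similarity between two factor-loading vectors. The loadings below are invented for illustration and are not the DAS24 data.

```python
import numpy as np

def tucker_congruence(x, y):
    """Tucker's coefficient of congruence: an uncentred cosine similarity
    between two factor-loading vectors, insensitive to uniform rescaling."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return float(np.sum(x * y) / np.sqrt(np.sum(x**2) * np.sum(y**2)))

# Hypothetical loadings of the same factor in two samples (illustrative only)
hospital  = np.array([0.71, 0.65, 0.80, 0.55, 0.62])
community = np.array([0.69, 0.66, 0.78, 0.58, 0.60])
rc = tucker_congruence(hospital, community)
```

By the common rule of thumb, values of about .95 or above indicate that the two factors can be treated as equivalent.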
de Thoisy, Benoit; Fayad, Ibrahim; Clément, Luc; Barrioz, Sébastien; Poirier, Eddy; Gond, Valéry
2016-01-01
Tropical forests with a low human population and an absence of large-scale deforestation provide unique opportunities to study successful conservation strategies, which should be based on adequate monitoring tools. This study explored the conservation status of a large predator, the jaguar, considered an indicator of how well ecological processes are maintained. We implemented an original integrative approach, exploring successive ecosystem status proxies, from habitats and responses to threats of predators and their prey, to canopy structure and forest biomass. Niche modeling allowed identification of more suitable habitats, significantly related to canopy height and forest biomass. Capture/recapture methods showed that jaguar density was higher in habitats identified as more suitable by the niche model. Surveys of ungulates, large rodents and birds also showed higher density where jaguars were more abundant. Although jaguar density does not allow early detection of overall vertebrate community collapse, a decrease in the abundance of large terrestrial birds was noted as good first evidence of disturbance. The most promising tool comes from easily acquired LiDAR data and radar images: a decrease in canopy roughness was closely associated with the disturbance of forests and the associated decrease in vertebrate biomass. This mixed approach, focusing on an apex predator, ecological modeling and remote-sensing information, not only helps detect early population declines in large mammals, but is also useful for discussing the relevance of large predators as indicators and the efficiency of conservation measures. It can also be easily extrapolated and adapted in a timely manner, since important open-source data are increasingly available and relevant for large-scale and real-time monitoring of biodiversity.
Electron and photon identification in the D0 experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abazov, V. M.; Abbott, B.; Acharya, B. S.
2014-06-01
The electron and photon reconstruction and identification algorithms used by the D0 Collaboration at the Fermilab Tevatron collider are described. The determination of the electron energy scale and resolution is presented. Studies of the performance of the electron and photon reconstruction and identification are summarized.
D-Optimal Experimental Design for Contaminant Source Identification
NASA Astrophysics Data System (ADS)
Sai Baba, A. K.; Alexanderian, A.
2016-12-01
Contaminant source identification seeks to estimate the release history of a conservative solute given point concentration measurements at some time after the release. This can be expressed mathematically as an inverse problem, with a linear observation operator or parameter-to-observation map, which we tackle using a Bayesian approach. Acquisition of experimental data can be laborious and expensive. The goal is to control the experimental parameters (in our case, the sparsity of the sensors) to maximize the information gain subject to physical or budget constraints. This is known as optimal experimental design (OED). D-optimal experimental design seeks to maximize the expected information gain, and has long been considered the gold standard in the statistics community. Our goal is to develop scalable methods for D-optimal experimental design involving large-scale PDE-constrained problems with high-dimensional parameter fields. A major challenge in OED is that a nonlinear optimization algorithm for the D-optimality criterion requires repeated evaluation of the objective function and gradient, involving the determinants of large, dense matrices; this cost can be prohibitively expensive for applications of interest. We propose novel randomized matrix techniques that bring down the computational costs of the objective function and gradient evaluations by several orders of magnitude compared to the naive approach. The effect of randomized estimators on the accuracy and the convergence of the optimization solver will be discussed. The features and benefits of our new approach will be demonstrated on a challenging model problem from contaminant source identification involving the inference of the initial condition from spatio-temporal observations in a time-dependent advection-diffusion problem.
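A small linear-Gaussian sketch of the D-optimality idea described above: the criterion is the log-determinant of the posterior information matrix, and sensors are picked greedily to maximize it. The greedy loop, matrix sizes, and prior/noise variances are illustrative assumptions, not the authors' method; their randomized estimators exist precisely to avoid forming these dense determinants at scale.

```python
import numpy as np

def d_criterion(F, rows, prior_var=1.0, noise_var=0.1):
    """log-determinant of the Bayesian posterior precision (information)
    matrix for the sensor subset `rows` of a linear forward map F."""
    A = F[list(rows)]                       # rows = selected point sensors
    H = np.eye(F.shape[1]) / prior_var + A.T @ A / noise_var
    return np.linalg.slogdet(H)[1]

def greedy_d_optimal(F, budget, **kw):
    """Greedily add the sensor whose row most increases the log-determinant."""
    chosen, remaining = [], set(range(F.shape[0]))
    for _ in range(budget):
        best = max(remaining, key=lambda i: d_criterion(F, chosen + [i], **kw))
        chosen.append(best)
        remaining.remove(best)
    return chosen
```

For the PDE-constrained problems the abstract targets, H is large and dense, and each log-determinant evaluation is exactly the cost that randomized matrix techniques are meant to reduce.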
Functional metagenomics to decipher food-microbe-host crosstalk.
Larraufie, Pierre; de Wouters, Tomas; Potocki-Veronese, Gabrielle; Blottière, Hervé M; Doré, Joël
2015-02-01
The recent developments of metagenomics permit an extremely high-resolution molecular scan of the intestinal microbiota giving new insights and opening perspectives for clinical applications. Beyond the unprecedented vision of the intestinal microbiota given by large-scale quantitative metagenomics studies, such as the EU MetaHIT project, functional metagenomics tools allow the exploration of fine interactions between food constituents, microbiota and host, leading to the identification of signals and intimate mechanisms of crosstalk, especially between bacteria and human cells. Cloning of large genome fragments, either from complex intestinal communities or from selected bacteria, allows the screening of these biological resources for bioactivity towards complex plant polymers or functional food such as prebiotics. This permitted identification of novel carbohydrate-active enzyme families involved in dietary fibre and host glycan breakdown, and highlighted unsuspected bacterial players at the top of the intestinal microbial food chain. Similarly, exposure of fractions from genomic and metagenomic clones onto human cells engineered with reporter systems to track modulation of immune response, cell proliferation or cell metabolism has allowed the identification of bioactive clones modulating key cell signalling pathways or the induction of specific genes. This opens the possibility to decipher mechanisms by which commensal bacteria or candidate probiotics can modulate the activity of cells in the intestinal epithelium or even in distal organs such as the liver, adipose tissue or the brain. Hence, in spite of our inability to culture many of the dominant microbes of the human intestine, functional metagenomics open a new window for the exploration of food-microbe-host crosstalk.
Systematic effects of foreground removal in 21-cm surveys of reionization
NASA Astrophysics Data System (ADS)
Petrovic, Nada; Oh, S. Peng
2011-05-01
21-cm observations have the potential to revolutionize our understanding of the high-redshift Universe. Whilst extremely bright radio continuum foregrounds exist at these frequencies, their spectral smoothness can be exploited to allow efficient foreground subtraction. It is well known that - regardless of other instrumental effects - this removes power on scales comparable to the survey bandwidth. We investigate associated systematic biases. We show that removing line-of-sight fluctuations on large scales aliases into suppression of the 3D power spectrum across a broad range of scales. This bias can be dealt with by correctly marginalizing over small wavenumbers in the 1D power spectrum; however, the unbiased estimator will have unavoidably larger variance. We also show that Gaussian realizations of the power spectrum permit accurate and extremely rapid Monte Carlo simulations for error analysis; repeated realizations of the fully non-Gaussian field are unnecessary. We perform Monte Carlo maximum likelihood simulations of foreground removal which yield unbiased, minimum variance estimates of the power spectrum in agreement with Fisher matrix estimates. Foreground removal also distorts the 21-cm probability distribution function (PDF), reducing the contrast between neutral and ionized regions, with potentially serious consequences for efforts to extract information from the PDF. We show that it is the subtraction of large-scale modes which is responsible for this distortion, and that it is less severe in the earlier stages of reionization. It can be reduced by using larger bandwidths. In the late stages of reionization, identification of the largest ionized regions (which consist of foreground emission only) provides calibration points which potentially allow recovery of large-scale modes. 
Finally, we also show that (i) the broad frequency response of synchrotron and free-free emission will smear out any features in the electron momentum distribution and ensure spectrally smooth foregrounds and (ii) extragalactic radio recombination lines should be negligible foregrounds.
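The loss of large-scale line-of-sight modes under foreground subtraction can be illustrated with a toy sketch: fitting and removing a smooth low-order polynomial along each sightline removes the bright foreground, but also any signal modes the polynomial can represent. The polynomial order and array shapes here are illustrative assumptions, not the survey's actual pipeline.

```python
import numpy as np

def remove_smooth_foreground(spectra, order=2):
    """Fit and subtract a low-order polynomial along each line of sight.

    spectra: (n_los, n_freq) array. Spectrally smooth foregrounds are well
    fit by the polynomial, but the largest-scale signal modes along the
    frequency direction are absorbed and removed along with them.
    """
    n_freq = spectra.shape[1]
    x = np.linspace(-1.0, 1.0, n_freq)
    V = np.polynomial.polynomial.polyvander(x, order)   # design matrix
    coef, *_ = np.linalg.lstsq(V, spectra.T, rcond=None)
    return spectra - (V @ coef).T
```

Running this on a bright smooth foreground plus a faint rapidly oscillating signal leaves the small-scale signal nearly intact while the slowly varying components, foreground and signal alike, are gone.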
Bioremediation at a global scale: from the test tube to planet Earth.
de Lorenzo, Víctor; Marlière, Philippe; Solé, Ricard
2016-09-01
Planet Earth's biosphere has evolved over billions of years as a balanced bio-geological system, ultimately sustained by solar energy and the large-scale cycling of elements largely run by the global environmental microbiome. Humans have been part of this picture for much of their existence. But the industrial revolution that started in the 19th century and the subsequent advances in medicine, chemistry, agriculture and communications have impacted such balances to an unprecedented degree, and the problem has only been exacerbated in the last 20 years. Human overpopulation and industrial growth, along with the unsustainable use of natural resources, have driven many sites, and perhaps the planetary ecosystem as a whole, beyond recovery by spontaneous natural means, even if the immediate causes could be stopped. The most conspicuous indications of such a state of affairs include the massive change in land use, the accelerated increase in the levels of greenhouse gases, the frequent natural disasters associated with climate change and the growing non-recyclable waste (e.g. plastics and recalcitrant chemicals) that we release into the environment. While the whole planet is afflicted at a global scale by chemical pollution and anthropogenic emissions, the ongoing development of systems and synthetic biology, metagenomics, modern chemistry and some key concepts from ecological theory allow us to tackle this phenomenal challenge and propose large-scale interventions aimed at reversing and even improving the situation. This involves (i) identification of key reactions or processes that need to be re-established (or altogether created) for ecosystem reinstallation, (ii) implementation of such reactions in natural or designer hosts able to self-replicate and deliver the corresponding activities when/where needed, in a fashion guided by sound ecological modelling, (iii) dispersal of niche-creating agents at a global scale and (iv) containment, monitoring and risk assessment of the whole process.
© 2016 The Authors. Microbial Biotechnology published by John Wiley & Sons Ltd and Society for Applied Microbiology.
Bohil, Corey J; Higgins, Nicholas A; Keebler, Joseph R
2014-01-01
We compared methods for predicting and understanding the source of confusion errors during military vehicle identification training. Participants completed training to identify main battle tanks. They also completed card-sorting and similarity-rating tasks to express their mental representation of resemblance across the set of training items. We expected participants to selectively attend to a subset of vehicle features during these tasks, and we hypothesised that we could predict identification confusion errors based on the outcomes of the card-sort and similarity-rating tasks. Based on card-sorting results, we were able to predict about 45% of observed identification confusions. Based on multidimensional scaling of the similarity-rating data, we could predict more than 80% of identification confusions. These methods also enabled us to infer the dimensions receiving significant attention from each participant. This understanding of mental representation may be crucial in creating personalised training that directs attention to features that are critical for accurate identification. Participants completed military vehicle identification training and testing, along with card-sorting and similarity-rating tasks. The data enabled us to predict up to 84% of identification confusion errors and to understand the mental representation underlying these errors. These methods have potential to improve training and reduce identification errors leading to fratricide.
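The multidimensional-scaling step can be sketched generically: classical (Torgerson) MDS embeds the items from a dissimilarity matrix, and confusions are then predicted as nearest neighbours in the recovered space. This is a generic sketch on made-up data, not the authors' similarity ratings or analysis code.

```python
import numpy as np

def classical_mds(dissim, n_dims=2):
    """Classical (Torgerson) MDS: embed items in n_dims dimensions
    from a symmetric dissimilarity matrix."""
    n = dissim.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centring matrix
    B = -0.5 * J @ (dissim ** 2) @ J             # double-centred Gram matrix
    w, V = np.linalg.eigh(B)
    order = np.argsort(w)[::-1][:n_dims]         # largest eigenvalues first
    return V[:, order] * np.sqrt(np.clip(w[order], 0.0, None))

def predicted_confusions(coords):
    """Predict each item's most likely confusion as its nearest neighbour."""
    d = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return d.argmin(axis=1)
```

When two vehicles sit close together in the recovered space, the model predicts they will be confused; the axes of that space indicate which features dominated participants' attention.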
Benoussaad, Mourad; Poignet, Philippe; Hayashibe, Mitsuhiro; Azevedo-Coste, Christine; Fattal, Charles; Guiraud, David
2013-06-01
We investigated the parameter identification of a multi-scale physiological model of skeletal muscle based on Huxley's formulation. We focused particularly on the knee joint controlled by the quadriceps muscles under electrical stimulation (ES) in subjects with a complete spinal cord injury. A noninvasive and in vivo identification protocol was thus applied through surface stimulation in nine subjects and through neural stimulation in one ES-implanted subject. The identification protocol included initial identification steps, which are adaptations of existing identification techniques to estimate most of the parameters of our model. We then applied an original and safer identification protocol in dynamic conditions, which required the resolution of a nonlinear programming (NLP) problem to identify the serial element stiffness of the quadriceps. Each identification step and the cross validation of the estimated model in dynamic conditions were evaluated through a quadratic error criterion. The results highlighted the accuracy and efficiency of the identification protocol and the ability of the estimated model to predict the subject-specific behavior of the musculoskeletal system. From the comparison of parameter values between subjects, we discussed and explored the inter-subject variability of the parameters in order to select which parameters have to be identified individually in each patient.
Large-area photogrammetry based testing of wind turbine blades
NASA Astrophysics Data System (ADS)
Poozesh, Peyman; Baqersad, Javad; Niezrecki, Christopher; Avitabile, Peter; Harvey, Eric; Yarala, Rahul
2017-03-01
An optically based sensing system that can measure the displacement and strain over essentially the entire area of a utility-scale blade leads to a measurement system that can significantly reduce the time and cost associated with traditional instrumentation. This paper evaluates the performance of conventional three-dimensional digital image correlation (3D DIC) and three-dimensional point tracking (3DPT) approaches over the surface of wind turbine blades and proposes a multi-camera measurement system using dynamic spatial data stitching. The potential advantages of the proposed approach include: (1) full-field measurement distributed over a very large area, (2) the elimination of time-consuming wiring and expensive sensors, and (3) the elimination of the need for large-channel data acquisition systems. There are several challenges associated with extending the capability of a standard 3D DIC system to measure the entire surface of utility-scale blades to extract distributed strain, deflection, and modal parameters. This paper addresses some of these difficulties, including: (1) assessing the accuracy of the 3D DIC system in measuring full-field distributed strain and displacement over a large area, (2) understanding the geometrical constraints associated with a wind turbine testing facility (e.g. lighting, working distance, and speckle pattern size), (3) evaluating the performance of the dynamic stitching method to combine two different fields of view by extracting modal parameters from aligned point clouds, and (4) determining the feasibility of employing output-only system identification to estimate modal parameters of a utility-scale wind turbine blade from optically measured data. Within the current work, the results of an optical measurement (one stereo-vision system) performed over a large area of a 50-m utility-scale blade subjected to quasi-static and cyclic loading are presented.
Blade certification and testing are typically performed according to the International Electrotechnical Commission standard IEC 61400-23. For static tests, the blade is pulled in either the flap-wise or edge-wise direction to measure deflection or distributed strain at a few limited locations on a large-sized blade. Additionally, the paper explores the error associated with using a multi-camera system (two stereo-vision systems) to measure 3D displacement and extract structural dynamic parameters on a mock setup emulating a utility-scale wind turbine blade. The results obtained in this paper reveal that the multi-camera measurement system has the potential to identify the dynamic characteristics of a very large structure.
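The point-cloud alignment underlying the data-stitching step can be sketched with a least-squares rigid (Kabsch) transform between corresponding points seen by two stereo systems. This is a generic sketch under the assumption of known point correspondences, not the authors' stitching implementation.

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rigid transform (R, t) mapping point set P onto Q,
    assuming row i of P and row i of Q are the same physical target."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                 # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cQ - R @ cP

def stitch(points, R, t):
    """Express one camera system's points in the other system's frame."""
    return points @ R.T + t
```

With a handful of optical targets visible in the overlap of both fields of view, the recovered (R, t) lets all measured points be expressed in a single blade coordinate frame before modal parameters are extracted.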
Integrative analysis of the Caenorhabditis elegans genome by the modENCODE project.
Gerstein, Mark B; Lu, Zhi John; Van Nostrand, Eric L; Cheng, Chao; Arshinoff, Bradley I; Liu, Tao; Yip, Kevin Y; Robilotto, Rebecca; Rechtsteiner, Andreas; Ikegami, Kohta; Alves, Pedro; Chateigner, Aurelien; Perry, Marc; Morris, Mitzi; Auerbach, Raymond K; Feng, Xin; Leng, Jing; Vielle, Anne; Niu, Wei; Rhrissorrakrai, Kahn; Agarwal, Ashish; Alexander, Roger P; Barber, Galt; Brdlik, Cathleen M; Brennan, Jennifer; Brouillet, Jeremy Jean; Carr, Adrian; Cheung, Ming-Sin; Clawson, Hiram; Contrino, Sergio; Dannenberg, Luke O; Dernburg, Abby F; Desai, Arshad; Dick, Lindsay; Dosé, Andréa C; Du, Jiang; Egelhofer, Thea; Ercan, Sevinc; Euskirchen, Ghia; Ewing, Brent; Feingold, Elise A; Gassmann, Reto; Good, Peter J; Green, Phil; Gullier, Francois; Gutwein, Michelle; Guyer, Mark S; Habegger, Lukas; Han, Ting; Henikoff, Jorja G; Henz, Stefan R; Hinrichs, Angie; Holster, Heather; Hyman, Tony; Iniguez, A Leo; Janette, Judith; Jensen, Morten; Kato, Masaomi; Kent, W James; Kephart, Ellen; Khivansara, Vishal; Khurana, Ekta; Kim, John K; Kolasinska-Zwierz, Paulina; Lai, Eric C; Latorre, Isabel; Leahey, Amber; Lewis, Suzanna; Lloyd, Paul; Lochovsky, Lucas; Lowdon, Rebecca F; Lubling, Yaniv; Lyne, Rachel; MacCoss, Michael; Mackowiak, Sebastian D; Mangone, Marco; McKay, Sheldon; Mecenas, Desirea; Merrihew, Gennifer; Miller, David M; Muroyama, Andrew; Murray, John I; Ooi, Siew-Loon; Pham, Hoang; Phippen, Taryn; Preston, Elicia A; Rajewsky, Nikolaus; Rätsch, Gunnar; Rosenbaum, Heidi; Rozowsky, Joel; Rutherford, Kim; Ruzanov, Peter; Sarov, Mihail; Sasidharan, Rajkumar; Sboner, Andrea; Scheid, Paul; Segal, Eran; Shin, Hyunjin; Shou, Chong; Slack, Frank J; Slightam, Cindie; Smith, Richard; Spencer, William C; Stinson, E O; Taing, Scott; Takasaki, Teruaki; Vafeados, Dionne; Voronina, Ksenia; Wang, Guilin; Washington, Nicole L; Whittle, Christina M; Wu, Beijing; Yan, Koon-Kiu; Zeller, Georg; Zha, Zheng; Zhong, Mei; Zhou, Xingliang; Ahringer, Julie; Strome, Susan; Gunsalus, Kristin C; Micklem, Gos; Liu, X Shirley; Reinke, Valerie; Kim, Stuart K; Hillier, LaDeana W; Henikoff, Steven; Piano, Fabio; Snyder, Michael; Stein, Lincoln; Lieb, Jason D; Waterston, Robert H
2010-12-24
We systematically generated large-scale data sets to improve genome annotation for the nematode Caenorhabditis elegans, a key model organism. These data sets include transcriptome profiling across a developmental time course, genome-wide identification of transcription factor-binding sites, and maps of chromatin organization. From this, we created more complete and accurate gene models, including alternative splice forms and candidate noncoding RNAs. We constructed hierarchical networks of transcription factor-binding and microRNA interactions and discovered chromosomal locations bound by an unusually large number of transcription factors. Different patterns of chromatin composition and histone modification were revealed between chromosome arms and centers, with similarly prominent differences between autosomes and the X chromosome. Integrating data types, we built statistical models relating chromatin, transcription factor binding, and gene expression. Overall, our analyses ascribed putative functions to most of the conserved genome.
Quantitative phosphoproteomic analysis of early seed development in rice (Oryza sativa L.).
Qiu, Jiehua; Hou, Yuxuan; Tong, Xiaohong; Wang, Yifeng; Lin, Haiyan; Liu, Qing; Zhang, Wen; Li, Zhiyong; Nallamilli, Babi R; Zhang, Jian
2016-02-01
Rice (Oryza sativa L.) seed serves as a major food source for over half of the global population. Though it has long been recognized that phosphorylation plays an essential role in rice seed development, the phosphorylation events and dynamics in this process remain largely unknown. Here, we report the first large-scale identification of rice seed phosphoproteins and phosphosites using a quantitative phosphoproteomic approach. Thorough proteomic studies in pistils and in seeds at 3 and 7 days after pollination resulted in the successful identification of 3885, 4313 and 4135 phosphopeptides, respectively. A total of 2487 proteins were differentially phosphorylated among the three stages, including Kip related protein 1, Rice basic leucine zipper factor 1, Rice prolamin box binding factor and numerous other master regulators of rice seed development. Moreover, differentially phosphorylated proteins may be extensively involved in the biosynthesis and signaling pathways of phytohormones such as auxin, gibberellin, abscisic acid and brassinosteroid. Our results strongly indicate that protein phosphorylation is a key mechanism regulating cell proliferation and enlargement, phytohormone biosynthesis and signaling, grain filling and grain quality during rice seed development. Overall, the current study enhances our understanding of the rice phosphoproteome and provides novel insight into the regulatory mechanism of rice seed development.
Neurobehavioral Mutants Identified in an ENU Mutagenesis Project
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cook, Melloni N.; Dunning, Jonathan P; Wiley, Ronald G
2007-01-01
We report on a behavioral screening test battery that successfully identified several neurobehavioral mutants among a large-scale ENU-mutagenized mouse population. Large numbers of ENU-mutagenized mice were screened for abnormalities in central nervous system function based on abnormal performance in a series of behavior tasks. We developed and employed a high-throughput screen of behavioral tasks to detect behavioral outliers. Twelve mutant pedigrees, representing a broad range of behavioral phenotypes, have been identified. Specifically, we have identified two open field mutants (one displaying hyper-locomotion, the other hypo-locomotion), four tail suspension mutants (all displaying increased immobility), one nociception mutant (displaying abnormal responsiveness to thermal pain), two prepulse inhibition mutants (displaying poor inhibition of the startle response), one anxiety-related mutant (displaying decreased anxiety in the light/dark test), and one learning and memory mutant (displaying reduced response to the conditioned stimulus). These findings highlight the utility of a set of behavioral tasks used in a high-throughput screen to identify neurobehavioral mutants. Further analysis (i.e., behavioral and genetic mapping studies) of mutants is in progress with the ultimate goal of identifying novel genes and mouse models relevant to human disorders, as well as novel therapeutic targets.
Preserving and vouchering butterflies and moths for large-scale museum-based molecular research
Epstein, Samantha W.; Mitter, Kim; Hamilton, Chris A.; Plotkin, David; Mitter, Charles
2016-01-01
Butterflies and moths (Lepidoptera) comprise significant portions of the world’s natural history collections, but a standardized tissue preservation protocol for molecular research is largely lacking. Lepidoptera have traditionally been spread on mounting boards to display wing patterns and colors, which are often important for species identification. Many molecular phylogenetic studies have used legs from pinned specimens as the primary source for DNA in order to preserve a morphological voucher, but the amount of available tissue is often limited. Preserving an entire specimen in a cryogenic freezer is ideal for DNA preservation, but without an easily accessible voucher it can make specimen identification, verification, and morphological work difficult. Here we present a procedure that creates accessible and easily visualized “wing vouchers” of individual Lepidoptera specimens, and preserves the remainder of the insect in a cryogenic freezer for molecular research. Wings are preserved in protective holders so that both dorsal and ventral patterns and colors can be easily viewed without further damage. Our wing vouchering system has been implemented at the University of Maryland (AToL Lep Collection) and the University of Florida (Florida Museum of Natural History, McGuire Center of Lepidoptera and Biodiversity), which are two of the largest Lepidoptera molecular collections in the world. PMID:27366654
Personality Characteristics of Undergraduates with Career Interests in Forensic Identification
ERIC Educational Resources Information Center
Roberti, Jonathan W.
2004-01-01
The author assessed personality scores for 47 undergraduates enrolled in a forensic identification program. Results revealed no difference between men and women enrolled in the Forensic Identification Program on subscales of the Sensation Seeking Scale (SSS-V), with the exception of Experience Seeking. Participants had lower Disinhibition scores…
Huang, Xiao-cui; Ci, Xiu-qin; Conran, John G; Li, Jie
2015-01-01
Within a regional floristic context, DNA barcoding is a useful tool for managing plant diversity inventories on a large scale and for developing valuable conservation strategies. However, there have been no DNA barcode studies from the tropical areas of China, which represent one of the world's biodiversity hotspots. A DNA barcoding database of highly diverse Asian tropical trees was established at Xishuangbanna Nature Reserve, Yunnan, southwest China, using rbcL and matK as standard barcodes, as well as trnH-psbA and ITS as supplementary barcodes. Tree species identification success was assessed using 2,052 accessions from four plots belonging to two vegetation types in the region, by three methods: Neighbor-Joining, Maximum-Likelihood and BLAST. We corrected morphological field identification errors (9.6%) for three plots using rbcL and matK based on a Neighbor-Joining tree. The barcode region with the best PCR and sequencing success was rbcL (97.6%, 90.8%), followed by trnH-psbA (93.6%, 85.6%), while matK and ITS had relatively low PCR and sequencing success rates. However, ITS performed best for both species (44.6-58.1%) and genus (72.8-76.2%) identification, with trnH-psbA slightly less effective for species identification. The two standard barcodes, rbcL and matK, gave poor results for species identification (24.7-28.5% and 31.6-35.3%). Compared with other studies from comparable tropical forests (e.g. Cameroon, the Amazon and India), the overall performance of the four barcodes for species identification was lower at Xishuangbanna Nature Reserve, possibly because of differences in species/genus ratios and species composition among these tropical areas. Although the core barcodes rbcL and matK were not suitable for species identification of tropical trees from Xishuangbanna Nature Reserve, they can still help with identification at the family and genus level.
Considering the relative sequence recovery and the species identification performance, we recommend the use of trnH-psbA and ITS in combination as the preferred barcodes for tropical tree species identification in China.
NASA Astrophysics Data System (ADS)
Roşca, S.; Bilaşco, Ş.; Petrea, D.; Fodorean, I.; Vescan, I.; Filip, S.; Măguţ, F.-L.
2015-11-01
The existence of a large number of GIS models for estimating landslide occurrence probability makes the selection of a specific one difficult. The present study focuses on the application of two quantitative models, the logistic and the BSA models; the comparative analysis of their results aims at identifying the more suitable model. The territory of the Niraj Mic Basin (87 km²) is characterised by a wide variety of landforms, with diverse morphometric, morphographical and geological characteristics, as well as by a high complexity of land use types where active landslides exist. For this reason it serves as the test area for applying the two models and comparing their results. The large complexity of input variables is illustrated by 16 factors, represented as 72 dummy variables and analysed on the basis of their importance within the model structures. Testing the statistical significance of each variable reduced the number of dummy variables to 12 considered significant for the test area within the logistic model, whereas the BSA model employed all the variables. The predictive power of the models was tested by computing the area under the ROC curve, which indicated good accuracy (AUROC = 0.86 for the testing area) and predictability of the logistic model (AUROC = 0.63 for the validation area).
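The AUROC validation described above can be illustrated with a minimal sketch: fit a logistic susceptibility model on a training area, then score it on a validation area with the area under the ROC curve. All data, predictor names and counts below are synthetic placeholders, not the study's 72 dummy variables.

```python
# Sketch: AUROC evaluation of a logistic landslide-susceptibility model.
# The predictors and labels are simulated, standing in for dummy-coded
# factors (slope class, land use, lithology, ...) and mapped landslides.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 4))                       # toy predictor matrix
logits = 1.5 * X[:, 0] - 1.0 * X[:, 1]            # hidden true relationship
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

model = LogisticRegression().fit(X[:300], y[:300])   # "testing" area
prob = model.predict_proba(X[300:])[:, 1]             # "validation" area
auc = roc_auc_score(y[300:], prob)                    # AUROC on held-out cells
print(f"validation AUROC = {auc:.2f}")
```

An AUROC of 0.5 corresponds to random prediction and 1.0 to perfect separation, so the paper's 0.86 (testing) versus 0.63 (validation) quantifies how much the logistic model's skill degrades outside the area it was fitted on.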
Statistical genetics concepts and approaches in schizophrenia and related neuropsychiatric research.
Schork, Nicholas J; Greenwood, Tiffany A; Braff, David L
2007-01-01
Statistical genetics is a research field that focuses on mathematical models and statistical inference methodologies that relate genetic variations (i.e., naturally occurring human DNA sequence variations or "polymorphisms") to particular traits or diseases (phenotypes), usually from data collected on large samples of families or individuals. The ultimate goal of such analysis is the identification of genes and genetic variations that influence disease susceptibility. Although of extreme interest and importance, the fact that many genes and environmental factors contribute to neuropsychiatric diseases of public health importance (e.g., schizophrenia, bipolar disorder, and depression) complicates relevant studies and suggests that very sophisticated mathematical and statistical modeling may be required. In addition, large-scale contemporary human DNA sequencing and related projects, such as the Human Genome Project and the International HapMap Project, as well as the development of high-throughput DNA sequencing and genotyping technologies, have provided statistical geneticists with a great deal of very relevant and appropriate information and resources. Unfortunately, the use of these resources and their interpretation are not straightforward when applied to complex, multifactorial diseases such as schizophrenia. In this brief and largely nonmathematical review of the field of statistical genetics, we describe many of the main concepts, definitions, and issues that motivate contemporary research. We also provide a discussion of the most pressing contemporary problems that demand further research if progress is to be made in the identification of genes and genetic variations that predispose to complex neuropsychiatric diseases.
Obtaining high-resolution stage forecasts by coupling large-scale hydrologic models with sensor data
NASA Astrophysics Data System (ADS)
Fries, K. J.; Kerkez, B.
2017-12-01
We investigate how "big" quantities of distributed sensor data can be coupled with a large-scale hydrologic model, in particular the National Water Model (NWM), to obtain hyper-resolution forecasts. The recent launch of the NWM provides a great example of how growing computational capacity is enabling a new generation of massive hydrologic models. While the NWM spans an unprecedented spatial extent, many questions remain about how to improve forecasts at the street level, the resolution at which many stakeholders make critical decisions. Further, the NWM runs on supercomputers, so water managers who have access to their own high-resolution measurements may not readily be able to assimilate them into the model. To that end, we ask: how can the advances of the large-scale NWM be coupled with new local observations to enable hyper-resolution hydrologic forecasts? A methodology is proposed whereby the flow forecasts of the NWM are directly mapped to high-resolution stream levels using Dynamical System Identification. We apply the methodology across a sensor network of 182 gages in Iowa. Of these sites, approximately one third have been shown to perform well in high-resolution flood forecasting when coupled with the outputs of the NWM. The quality of these forecasts is characterized using Principal Component Analysis and Random Forests to identify where the NWM may benefit from new sources of local observations. We also discuss how this approach can help municipalities identify where they should place low-cost sensors to benefit most from flood forecasts of the NWM.
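The flow-to-stage mapping described above can be sketched as a simple system-identification exercise: fit a linear dynamical (ARX-type) model relating a coarse model's flow forecast q[t] to a local sensor's stage h[t]. This is an illustrative stand-in under invented synthetic data, not the authors' exact Dynamical System Identification procedure.

```python
# Sketch: identify h[t] = a*h[t-1] + b*q[t] + c from synthetic flow/stage
# series by least squares, then forecast stage on a held-out window.
import numpy as np

rng = np.random.default_rng(1)
T = 300
q = np.abs(np.cumsum(rng.normal(size=T))) + 5.0   # synthetic NWM flow series
h = np.empty(T)                                   # synthetic observed stage
h[0] = 1.0
for t in range(1, T):
    h[t] = 0.8 * h[t - 1] + 0.05 * q[t] + rng.normal(scale=0.02)

# Regressor matrix for the one-step model; fit on the first 200 steps.
A = np.column_stack([h[:-1], q[1:], np.ones(T - 1)])
coef, *_ = np.linalg.lstsq(A[:200], h[1:][:200], rcond=None)

# One-step-ahead stage forecasts on the held-out tail.
pred = A[200:] @ coef
rmse = np.sqrt(np.mean((pred - h[1:][200:]) ** 2))
print(f"identified a={coef[0]:.2f}, b={coef[1]:.3f}, holdout RMSE={rmse:.3f}")
```

Once such a model is identified per site from historical pairs, the NWM's forecast flows can be pushed through it to produce local, sensor-calibrated stage forecasts without modifying the large-scale model itself.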
Chen, Ho-Wen; Chang, Ni-Bin; Chen, Jeng-Chung; Tsai, Shu-Ju
2010-07-01
Limited by insufficient land resources, many countries such as Japan and Germany consider incinerators the major technology in waste management schemes capable of meeting the increasing demand for municipal and industrial solid waste treatment in urban regions. The evaluation of these municipal incinerators in terms of secondary pollution potential, cost-effectiveness, and operational efficiency has become a new focus in the highly interdisciplinary area of production economics, systems analysis, and waste management. This paper demonstrates the application of data envelopment analysis (DEA), a production economics tool, to evaluate the performance-based efficiencies of 19 large-scale municipal incinerators in Taiwan operating under different conditions. A 4-year operational data set from 2002 to 2005 was collected in support of DEA modeling, using Monte Carlo simulation to outline the possibility distributions of the operational efficiency of these incinerators. Uncertainty analysis using Monte Carlo simulation balances the simplifications of our analysis against the soundness of capturing the essential random features that complicate solid waste management systems. To cope with future challenges, efforts in DEA modeling, systems analysis, and prediction of the performance of large-scale municipal solid waste incinerators under normal operation and special conditions were directed toward generating a compromise assessment procedure. Our research findings will eventually lead to the identification of optimal management strategies for promoting the quality of solid waste incineration, not only in Taiwan but also elsewhere in the world. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
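The DEA efficiency scoring used above can be sketched with the standard input-oriented CCR envelopment model, solved as a linear program per decision-making unit (DMU). The toy inputs and outputs below are invented for illustration; they are not the paper's incinerator data.

```python
# Sketch: input-oriented CCR DEA. For each DMU o, minimize theta subject to
# a nonnegative combination of all DMUs using at most theta times DMU o's
# inputs while producing at least its outputs. theta = 1 means efficient.
import numpy as np
from scipy.optimize import linprog

# Rows = DMUs (e.g. incinerators); input columns = (cost, labor);
# output column = waste treated. Values are illustrative only.
X = np.array([[4.0, 3.0], [7.0, 3.0], [8.0, 1.0], [4.0, 2.0], [2.0, 4.0]])
Y = np.array([[1.0], [1.0], [1.0], [1.0], [1.0]])
n, m = X.shape
s = Y.shape[1]

def ccr_efficiency(o):
    # Decision variables: [theta, lambda_1, ..., lambda_n]
    c = np.zeros(n + 1)
    c[0] = 1.0
    A_ub, b_ub = [], []
    for i in range(m):   # sum_j lambda_j * x_ij <= theta * x_io
        A_ub.append(np.r_[-X[o, i], X[:, i]]); b_ub.append(0.0)
    for r in range(s):   # sum_j lambda_j * y_rj >= y_ro
        A_ub.append(np.r_[0.0, -Y[:, r]]); b_ub.append(-Y[o, r])
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.x[0]

scores = [round(ccr_efficiency(o), 3) for o in range(n)]
print(scores)  # frontier DMUs score 1.0, dominated DMUs less
```

In the paper's setting, wrapping such a solve in a Monte Carlo loop over perturbed input/output data would yield the possibility distribution of each incinerator's efficiency rather than a single point score.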
Integral-geometry characterization of photobiomodulation effects on retinal vessel morphology
Barbosa, Marconi; Natoli, Riccardo; Valter, Kriztina; Provis, Jan; Maddess, Ted
2014-01-01
The morphological characterization of quasi-planar structures represented by gray-scale images is challenging when object identification is sub-optimal due to registration artifacts. We propose two alternative procedures that enhance object identification in the integral-geometry morphological image analysis (MIA) framework. The first variant streamlines the framework by introducing an active contours segmentation process whose time step is recycled as a multi-scale parameter. In the second variant, we used the refined object identification produced in the first variant to perform the standard MIA with exact dilation radius as the multi-scale parameter. Using this enhanced MIA, we quantify the extent of vaso-obliteration in oxygen-induced retinopathic vascular growth, the preventative effect (by photobiomodulation) of exposure during tissue development to near-infrared light (NIR, 670 nm), and the lack of adverse effects due to exposure to NIR light. PMID:25071966
Systematic Identification of Combinatorial Drivers and Targets in Cancer Cell Lines
Tabchy, Adel; Eltonsy, Nevine; Housman, David E.; Mills, Gordon B.
2013-01-01
There is an urgent need to elicit and validate highly efficacious targets for combinatorial intervention from large scale ongoing molecular characterization efforts of tumors. We established an in silico bioinformatic platform in concert with a high throughput screening platform evaluating 37 novel targeted agents in 669 extensively characterized cancer cell lines reflecting the genomic and tissue-type diversity of human cancers, to systematically identify combinatorial biomarkers of response and co-actionable targets in cancer. Genomic biomarkers discovered in a 141 cell line training set were validated in an independent 359 cell line test set. We identified co-occurring and mutually exclusive genomic events that represent potential drivers and combinatorial targets in cancer. We demonstrate multiple cooperating genomic events that predict sensitivity to drug intervention independent of tumor lineage. The coupling of scalable in silico and biologic high throughput cancer cell line platforms for the identification of co-events in cancer delivers rational combinatorial targets for synthetic lethal approaches with a high potential to pre-empt the emergence of resistance. PMID:23577104
NASA Astrophysics Data System (ADS)
Liu, Hongna; Li, Song; Wang, Zhifei; Li, Zhiyang; Deng, Yan; Wang, Hua; Shi, Zhiyang; He, Nongyue
2008-11-01
Single nucleotide polymorphisms (SNPs) comprise the most abundant source of genetic variation in the human genome. Therefore, large-scale codominant SNP identification, especially for SNPs associated with complex diseases, has created the need for a completely high-throughput and automated genotyping method. Herein, we present an automated SNP detection system based on two kinds of functional magnetic nanoparticles (MNPs) and dual-color hybridization. Amido-modified MNPs (NH2-MNPs), functionalized with APTES, were used to extract DNA directly from whole blood by electrostatic interaction, and PCR was then successfully performed. Furthermore, biotinylated PCR products were captured on streptavidin-coated MNPs (SA-MNPs) and interrogated by hybridization with a pair of dual-color probes to determine the SNP; the genotype of each sample can then be identified by scanning the microarray printed with the denatured fluorescent probes. This system provides a rapid, sensitive and highly versatile automated procedure that will greatly facilitate the analysis of different known SNPs in the human genome.
Event Management of RFID Data Streams: Fast Moving Consumer Goods Supply Chains
NASA Astrophysics Data System (ADS)
Mo, John P. T.; Li, Xue
Radio Frequency Identification (RFID) is a wireless communication technology that uses radio-frequency waves to transfer information between tagged objects and readers without line of sight. This creates tremendous opportunities for linking real-world objects into an "Internet of things". Application of RFID to the Fast Moving Consumer Goods sector will introduce billions of RFID tags in the world, with almost everything tagged for tracking and identification purposes. This phenomenon will impose a new challenge not only to network capacity but also to the scalability of processing RFID events and data. This chapter uses two national demonstrator projects in Australia as case studies to introduce an event management framework that processes high-volume RFID data streams in real time and automatically transforms physical RFID observations into business-level events. The model handles various temporal event patterns, both simple and complex, with temporal constraints. It can be implemented in a data management architecture that allows global RFID item tracking and enables fast, large-scale RFID deployment.
NASA Astrophysics Data System (ADS)
Fanood, Mohammad M. Rafiee; Ram, N. Bhargava; Lehmann, C. Stefan; Powis, Ivan; Janssen, Maurice H. M.
2015-06-01
Simultaneous, enantiomer-specific identification of chiral molecules in multi-component mixtures is extremely challenging. Many established techniques for single-component analysis fail to provide selectivity in multi-component mixtures and lack sensitivity for dilute samples. Here we show how enantiomers may be differentiated by mass-selected photoelectron circular dichroism using an electron-ion coincidence imaging spectrometer. As proof of concept, vapours containing ~1% of two chiral monoterpene molecules, limonene and camphor, are irradiated by a circularly polarized femtosecond laser, resulting in multiphoton near-threshold ionization with little molecular fragmentation. Large chiral asymmetries (2-4%) are observed in the mass-tagged photoelectron angular distributions. These asymmetries switch sign according to the handedness (R- or S-) of the enantiomer in the mixture and scale with enantiomeric excess of a component. The results demonstrate that mass spectrometric identification of mixtures of chiral molecules and quantitative determination of enantiomeric excess can be achieved in a table-top instrument.
An Automatic Quality Control Pipeline for High-Throughput Screening Hit Identification.
Zhai, Yufeng; Chen, Kaisheng; Zhong, Yang; Zhou, Bin; Ainscow, Edward; Wu, Ying-Ta; Zhou, Yingyao
2016-09-01
The correction or removal of signal errors in high-throughput screening (HTS) data is critical to the identification of high-quality lead candidates. Although a number of strategies have been previously developed to correct systematic errors and to remove screening artifacts, they are not universally effective and still require a fair amount of human intervention. We introduce a fully automated quality control (QC) pipeline that can correct generic interplate systematic errors and remove intraplate random artifacts. The new pipeline was first applied to ~100 large-scale historical HTS assays; in silico analysis showed auto-QC led to a noticeably stronger structure-activity relationship. The method was further tested in several independent HTS runs, where QC results were sampled for experimental validation. Significantly increased hit confirmation rates were obtained after the QC steps, confirming that the proposed method was effective in enriching true-positive hits. An implementation of the algorithm is available to the screening community. © 2016 Society for Laboratory Automation and Screening.
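One common way to correct the kind of plate-level systematic errors mentioned above is a two-way median polish (the basis of the B-score), which strips row and column trends from each plate before hit calling. The sketch below is a generic illustration of that idea with a simulated plate, not the authors' auto-QC algorithm.

```python
# Sketch: remove row/column systematic effects from an 8x12 HTS plate by
# iterated median polish; the residual is the artifact-corrected signal.
import numpy as np

def median_polish(plate, n_iter=10):
    """Alternately subtract row medians and column medians."""
    residual = plate.astype(float)
    for _ in range(n_iter):
        residual -= np.median(residual, axis=1, keepdims=True)  # row effects
        residual -= np.median(residual, axis=0, keepdims=True)  # column effects
    return residual

rng = np.random.default_rng(2)
true = rng.normal(size=(8, 12))                 # well-level activity
row_drift = np.linspace(0, 3, 8)[:, None]       # injected systematic row bias
plate = true + row_drift

corrected = median_polish(plate)
# The injected row drift dominates raw row medians but not corrected ones.
print(np.ptp(np.median(plate, axis=1)), np.ptp(np.median(corrected, axis=1)))
```

Dividing the polished residuals by their robust scale (median absolute deviation) would turn them into B-scores comparable across plates, which is one route to the interplate normalization the pipeline automates.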
[Twenty five years of HIV virus].
Nagy, Károly; Horváth, Attila
2010-01-24
At the 25th anniversary of the identification of HIV as the causative agent of AIDS, virologist and clinician authors provide an overview of the discovery and identification of HIV and its significance in the development of clinical diagnosis of HIV/AIDS, which led to the development of effective antiretroviral treatment. Besides the epidemiological and sociological aspects of the infection, the authors provide a detailed chronology of the fight against HIV/AIDS in Hungary, from the diagnosis of the first HIV and AIDS cases, through the establishment of the nationwide screening network and counseling units, to the appearance of drug-resistant virus mutants and the recent introduction of African HIV strains into the country. Further actions are urged locally and worldwide toward a better understanding of the interactions between the human host and HIV, and thus more effective treatment. This will require political consensus, large-scale and long-term financial support, views based on scientific and public health evidence, and the cooperation of society worldwide.
Technological advances in bovine mastitis diagnosis: an overview.
Duarte, Carla M; Freitas, Paulo P; Bexiga, Ricardo
2015-11-01
Bovine mastitis is an economic burden for dairy farmers and preventive control measures are crucial for the sustainability of any dairy business. The identification of etiological agents is necessary in controlling the disease, reducing risk of chronic infections and targeting antimicrobial therapy. The suitability of a detection method for routine diagnosis depends on several factors, including specificity, sensitivity, cost, time in producing results, and suitability for large-scale sampling of milk. This article focuses on current methodologies for identification of mastitis pathogens and for detection of inflammation, as well as the advantages and disadvantages of different methods. Emerging technologies, such as transcriptome and proteome analyses and nano- and microfabrication of portable devices, offer promising, sensitive methods for advanced detection of mastitis pathogens and biomarkers of inflammation. The demand for alternative, fast, and reliable diagnostic procedures is rising as farms become bigger. Several examples of technological and scientific advances are summarized which have given rise to more sensitive, reliable and faster diagnostic results. © 2015 The Author(s).
High-throughput screening of a CRISPR/Cas9 library for functional genomics in human cells.
Zhou, Yuexin; Zhu, Shiyou; Cai, Changzu; Yuan, Pengfei; Li, Chunmei; Huang, Yanyi; Wei, Wensheng
2014-05-22
Targeted genome editing technologies are powerful tools for studying biology and disease, and have a broad range of research applications. In contrast to the rapid development of toolkits to manipulate individual genes, large-scale screening methods based on the complete loss of gene expression are only now beginning to be developed. Here we report the development of a focused CRISPR/Cas-based (clustered regularly interspaced short palindromic repeats/CRISPR-associated) lentiviral library in human cells and a method of gene identification based on functional screening and high-throughput sequencing analysis. Using knockout library screens, we successfully identified the host genes essential for the intoxication of cells by anthrax and diphtheria toxins, which were confirmed by functional validation. The broad application of this powerful genetic screening strategy will not only facilitate the rapid identification of genes important for bacterial toxicity but will also enable the discovery of genes that participate in other biological processes.
Mahajan, Anubha; Wessel, Jennifer; Willems, Sara M; Zhao, Wei; Robertson, Neil R; Chu, Audrey Y; Gan, Wei; Kitajima, Hidetoshi; Taliun, Daniel; Rayner, N William; Guo, Xiuqing; Lu, Yingchang; Li, Man; Jensen, Richard A; Hu, Yao; Huo, Shaofeng; Lohman, Kurt K; Zhang, Weihua; Cook, James P; Prins, Bram Peter; Flannick, Jason; Grarup, Niels; Trubetskoy, Vassily Vladimirovich; Kravic, Jasmina; Kim, Young Jin; Rybin, Denis V; Yaghootkar, Hanieh; Müller-Nurasyid, Martina; Meidtner, Karina; Li-Gao, Ruifang; Varga, Tibor V; Marten, Jonathan; Li, Jin; Smith, Albert Vernon; An, Ping; Ligthart, Symen; Gustafsson, Stefan; Malerba, Giovanni; Demirkan, Ayse; Tajes, Juan Fernandez; Steinthorsdottir, Valgerdur; Wuttke, Matthias; Lecoeur, Cécile; Preuss, Michael; Bielak, Lawrence F; Graff, Marielisa; Highland, Heather M; Justice, Anne E; Liu, Dajiang J; Marouli, Eirini; Peloso, Gina Marie; Warren, Helen R; Afaq, Saima; Afzal, Shoaib; Ahlqvist, Emma; Almgren, Peter; Amin, Najaf; Bang, Lia B; Bertoni, Alain G; Bombieri, Cristina; Bork-Jensen, Jette; Brandslund, Ivan; Brody, Jennifer A; Burtt, Noël P; Canouil, Mickaël; Chen, Yii-Der Ida; Cho, Yoon Shin; Christensen, Cramer; Eastwood, Sophie V; Eckardt, Kai-Uwe; Fischer, Krista; Gambaro, Giovanni; Giedraitis, Vilmantas; Grove, Megan L; de Haan, Hugoline G; Hackinger, Sophie; Hai, Yang; Han, Sohee; Tybjærg-Hansen, Anne; Hivert, Marie-France; Isomaa, Bo; Jäger, Susanne; Jørgensen, Marit E; Jørgensen, Torben; Käräjämäki, Annemari; Kim, Bong-Jo; Kim, Sung Soo; Koistinen, Heikki A; Kovacs, Peter; Kriebel, Jennifer; Kronenberg, Florian; Läll, Kristi; Lange, Leslie A; Lee, Jung-Jin; Lehne, Benjamin; Li, Huaixing; Lin, Keng-Hung; Linneberg, Allan; Liu, Ching-Ti; Liu, Jun; Loh, Marie; Mägi, Reedik; Mamakou, Vasiliki; McKean-Cowdin, Roberta; Nadkarni, Girish; Neville, Matt; Nielsen, Sune F; Ntalla, Ioanna; Peyser, Patricia A; Rathmann, Wolfgang; Rice, Kenneth; Rich, Stephen S; Rode, Line; Rolandsson, Olov; Schönherr, Sebastian; Selvin, 
Elizabeth; Small, Kerrin S; Stančáková, Alena; Surendran, Praveen; Taylor, Kent D; Teslovich, Tanya M; Thorand, Barbara; Thorleifsson, Gudmar; Tin, Adrienne; Tönjes, Anke; Varbo, Anette; Witte, Daniel R; Wood, Andrew R; Yajnik, Pranav; Yao, Jie; Yengo, Loïc; Young, Robin; Amouyel, Philippe; Boeing, Heiner; Boerwinkle, Eric; Bottinger, Erwin P; Chowdhury, Rajiv; Collins, Francis S; Dedoussis, George; Dehghan, Abbas; Deloukas, Panos; Ferrario, Marco M; Ferrières, Jean; Florez, Jose C; Frossard, Philippe; Gudnason, Vilmundur; Harris, Tamara B; Heckbert, Susan R; Howson, Joanna M M; Ingelsson, Martin; Kathiresan, Sekar; Kee, Frank; Kuusisto, Johanna; Langenberg, Claudia; Launer, Lenore J; Lindgren, Cecilia M; Männistö, Satu; Meitinger, Thomas; Melander, Olle; Mohlke, Karen L; Moitry, Marie; Morris, Andrew D; Murray, Alison D; de Mutsert, Renée; Orho-Melander, Marju; Owen, Katharine R; Perola, Markus; Peters, Annette; Province, Michael A; Rasheed, Asif; Ridker, Paul M; Rivadineira, Fernando; Rosendaal, Frits R; Rosengren, Anders H; Salomaa, Veikko; Sheu, Wayne H-H; Sladek, Rob; Smith, Blair H; Strauch, Konstantin; Uitterlinden, André G; Varma, Rohit; Willer, Cristen J; Blüher, Matthias; Butterworth, Adam S; Chambers, John Campbell; Chasman, Daniel I; Danesh, John; van Duijn, Cornelia; Dupuis, Josée; Franco, Oscar H; Franks, Paul W; Froguel, Philippe; Grallert, Harald; Groop, Leif; Han, Bok-Ghee; Hansen, Torben; Hattersley, Andrew T; Hayward, Caroline; Ingelsson, Erik; Kardia, Sharon L R; Karpe, Fredrik; Kooner, Jaspal Singh; Köttgen, Anna; Kuulasmaa, Kari; Laakso, Markku; Lin, Xu; Lind, Lars; Liu, Yongmei; Loos, Ruth J F; Marchini, Jonathan; Metspalu, Andres; Mook-Kanamori, Dennis; Nordestgaard, Børge G; Palmer, Colin N A; Pankow, James S; Pedersen, Oluf; Psaty, Bruce M; Rauramaa, Rainer; Sattar, Naveed; Schulze, Matthias B; Soranzo, Nicole; Spector, Timothy D; Stefansson, Kari; Stumvoll, Michael; Thorsteinsdottir, Unnur; Tuomi, Tiinamaija; Tuomilehto, Jaakko; Wareham, 
Nicholas J; Wilson, James G; Zeggini, Eleftheria; Scott, Robert A; Barroso, Inês; Frayling, Timothy M; Goodarzi, Mark O; Meigs, James B; Boehnke, Michael; Saleheen, Danish; Morris, Andrew P; Rotter, Jerome I; McCarthy, Mark I
2018-04-01
We aggregated coding variant data for 81,412 type 2 diabetes cases and 370,832 controls of diverse ancestry, identifying 40 coding variant association signals (P < 2.2 × 10⁻⁷); of these, 16 map outside known risk-associated loci. We make two important observations. First, only five of these signals are driven by low-frequency variants: even for these, effect sizes are modest (odds ratio ≤ 1.29). Second, when we used large-scale genome-wide association data to fine-map the associated variants in their regional context, accounting for the global enrichment of complex trait associations in coding sequence, compelling evidence for coding variant causality was obtained for only 16 signals. At 13 others, the associated coding variants clearly represent 'false leads' with potential to generate erroneous mechanistic inference. Coding variant associations offer a direct route to biological insight for complex diseases and identification of validated therapeutic targets; however, appropriate mechanistic inference requires careful specification of their causal contribution to disease predisposition.
Ontology-Based High-Level Context Inference for Human Behavior Identification
Villalonga, Claudia; Razzaq, Muhammad Asif; Khan, Wajahat Ali; Pomares, Hector; Rojas, Ignacio; Lee, Sungyoung; Banos, Oresti
2016-01-01
Recent years have witnessed a huge progress in the automatic identification of individual primitives of human behavior, such as activities or locations. However, the complex nature of human behavior demands more abstract contextual information for its analysis. This work presents an ontology-based method that combines low-level primitives of behavior, namely activity, locations and emotions, unprecedented to date, to intelligently derive more meaningful high-level context information. The paper contributes with a new open ontology describing both low-level and high-level context information, as well as their relationships. Furthermore, a framework building on the developed ontology and reasoning models is presented and evaluated. The proposed method proves to be robust while identifying high-level contexts even in the event of erroneously-detected low-level contexts. Despite reasonable inference times being obtained for a relevant set of users and instances, additional work is required to scale to long-term scenarios with a large number of users. PMID:27690050
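The combination of low-level behavior primitives into a high-level context can be sketched with explicit matching rules in place of the paper's ontology and reasoner; the rule set, labels, and the two-of-three tolerance below are invented for illustration, not taken from the study.

```python
# Minimal sketch of combining low-level context primitives (activity,
# location, emotion) into a high-level context label via explicit rules.
# The rules and labels here are hypothetical.

RULES = [
    # (activities, locations, emotions) -> high-level context
    ({"running", "walking"}, {"park", "street"}, {"happy", "neutral"}, "exercising"),
    ({"sitting"}, {"office"}, {"neutral", "bored"}, "working"),
    ({"lying"}, {"home"}, {"calm", "neutral"}, "resting"),
]

def infer_context(activity, location, emotion):
    """Return the first high-level context whose rule matches at least
    two of the three primitives, tolerating one erroneous detection."""
    for acts, locs, emos, label in RULES:
        hits = (activity in acts) + (location in locs) + (emotion in emos)
        if hits >= 2:
            return label
    return "unknown"

print(infer_context("running", "park", "happy"))    # exercising
print(infer_context("sitting", "office", "angry"))  # still "working": 2 of 3 match
```

The two-of-three threshold is one simple way to mimic the robustness to erroneously detected low-level contexts that the abstract reports.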
Shape coexistence from lifetime and branching-ratio measurements in 68,70Ni
Crider, B. P.; Prokop, C. J.; Liddick, S. N.; ...
2016-10-15
Shape coexistence near closed-shell nuclei, whereby states associated with deformed shapes appear at relatively low excitation energy alongside spherical ones, is indicative of the rapid change in structure that can occur with the addition or removal of a few protons or neutrons. Near 68Ni (Z=28, N=40), the identification of shape coexistence hinges on hitherto undetermined transition rates to and from low-energy 0+ states. In 68,70Ni, new lifetimes and branching ratios have been measured. These data enable quantitative descriptions of the 0+ states through the deduced transition rates and serve as sensitive probes for characterizing their nuclear wave functions. The results are compared to, and consistent with, large-scale shell-model calculations which predict shape coexistence. With the firm identification of this phenomenon near 68Ni, shape coexistence is now observed in all currently accessible regions of the nuclear chart with closed proton shells and mid-shell neutrons.
2015-01-01
Large-scale proteomics often employs two orthogonal separation methods to fractionate complex peptide mixtures. Fractionation can involve ion exchange separation coupled to reversed-phase separation or, more recently, two reversed-phase separations performed at different pH values. When multidimensional separations are combined with tandem mass spectrometry for protein identification, the strategy is often referred to as multidimensional protein identification technology (MudPIT). MudPIT has been used in either an automated (online) or manual (offline) format. In this study, we evaluated the performance of different MudPIT strategies by both label-free and tandem mass tag (TMT) isobaric tagging. Our findings revealed that online MudPIT provided more peptide/protein identifications and higher sequence coverage than offline platforms. When employing an offline fractionation method with direct loading of samples onto the column from an Eppendorf tube via a high-pressure device, a 5.3% loss in protein identifications is observed. When offline-fractionated samples are loaded via an autosampler, a 44.5% loss in protein identifications is observed compared with direct loading of samples onto a triphasic capillary column. Moreover, peptide recovery was significantly lower after offline fractionation than after online fractionation. The signal-to-noise (S/N) ratio, however, was not significantly altered between experimental groups. It is likely that offline sample collection results in stochastic peptide loss due to noncovalent adsorption to solid surfaces. Therefore, the use of offline approaches should be considered carefully when processing minute quantities of valuable samples. PMID:25040086
Fleischmann, Fenella; Phalet, Karen; Klein, Olivier
2011-12-01
Taking an approach from religion as a social identity and using large-scale comparative surveys in five European cities, we investigate when and how perceived discrimination is associated with religious identification and politicization among the second generation of Turkish and Moroccan Muslims. We distinguish support for political Islam from political action as distinct forms of politicization. In addition, we test the mediating role of religious identification in processes of politicization. Study 1 estimates multi-group structural equation models of support for political Islam in Belgium, the Netherlands, and Sweden. In line with a social identity model of politicization and across nine inter-group contexts, Muslims who perceived more discrimination identified (even) more strongly as Muslims; and high Muslim identifiers were most ready to support political Islam. In support of a competing social stigma hypothesis, however, negative direct and total effects of perceived discrimination suggest predominant depoliticization. Using separate sub-samples across four inter-group contexts in Belgium, Study 2 adds political action tendencies as a distinct form of politicization. Whereas religious identification positively predicts both forms of politicization, perceived discrimination has differential effects: Muslims who perceived more discrimination were more wary of supporting political Islam, yet more ready to engage in political action to defend Islamic values. Taken together, the studies reveal that some Muslim citizens will politicize and others will depoliticize in the face of discrimination as a function of their religious identification and of prevailing forms of politicization. ©2011 The British Psychological Society.
White matter tract signatures of impaired social cognition in frontotemporal lobar degeneration
Downey, Laura E.; Mahoney, Colin J.; Buckley, Aisling H.; Golden, Hannah L.; Henley, Susie M.; Schmitz, Nicole; Schott, Jonathan M.; Simpson, Ivor J.; Ourselin, Sebastien; Fox, Nick C.; Crutch, Sebastian J.; Warren, Jason D.
2015-01-01
Impairments of social cognition are often leading features in frontotemporal lobar degeneration (FTLD) and likely to reflect large-scale brain network disintegration. However, the neuroanatomical basis of impaired social cognition in FTLD and the role of white matter connections have not been defined. Here we assessed social cognition in a cohort of patients representing two core syndromes of FTLD, behavioural variant frontotemporal dementia (bvFTD; n = 29) and semantic variant primary progressive aphasia (svPPA; n = 15), relative to healthy older individuals (n = 37) using two components of the Awareness of Social Inference Test, canonical emotion identification and sarcasm identification. Diffusion tensor imaging (DTI) was used to derive white matter tract correlates of social cognition performance and compared with the distribution of grey matter atrophy on voxel-based morphometry. The bvFTD and svPPA groups showed comparably severe deficits for identification of canonical emotions and sarcasm, and these deficits were correlated with distributed and overlapping white matter tract alterations particularly affecting frontotemporal connections in the right cerebral hemisphere. The most robust DTI associations were identified in white matter tracts linking cognitive and evaluative processing with emotional responses: anterior thalamic radiation, fornix (emotion identification) and uncinate fasciculus (sarcasm identification). DTI associations of impaired social cognition were more consistent than corresponding grey matter associations. These findings delineate a brain network substrate for the social impairment that characterises FTLD syndromes. The findings further suggest that DTI can generate sensitive and functionally relevant indexes of white matter damage in FTLD, with potential to transcend conventional syndrome boundaries. PMID:26236629
NASA Technical Reports Server (NTRS)
Baker, V. R. (Principal Investigator); Holz, R. K.; Hulke, S. D.; Patton, P. C.; Penteado, M. M.
1975-01-01
The author has identified the following significant results. Development of a quantitative hydrogeomorphic approach to flood hazard evaluation was hindered by (1) problems of resolution and definition of the morphometric parameters which have hydrologic significance, and (2) mechanical difficulties in creating the necessary volume of data for meaningful analysis. Measures of network resolution such as drainage density and basin Shreve magnitude indicated that large scale topographic maps offered greater resolution than small scale suborbital imagery and orbital imagery. The disparity in network resolution capabilities between orbital and suborbital imagery formats depends on factors such as rock type, vegetation, and land use. The problem of morphometric data analysis was approached by developing a computer-assisted method for network analysis. The system allows rapid identification of network properties which can then be related to measures of flood response.
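Two of the morphometric measures named above, drainage density and Shreve magnitude, are simple to compute once a channel network has been digitized; the toy network and numbers below are hypothetical, not the study's data.

```python
# Illustrative sketch (not the authors' code) of two network measures:
# drainage density and Shreve magnitude.

def drainage_density(total_stream_length_km, basin_area_km2):
    """Drainage density D = total channel length / basin area (km/km^2)."""
    return total_stream_length_km / basin_area_km2

def shreve_magnitude(children, link):
    """Shreve magnitude: exterior (source) links have magnitude 1;
    an interior link's magnitude is the sum of its upstream links'."""
    ups = children.get(link, [])
    if not ups:
        return 1
    return sum(shreve_magnitude(children, u) for u in ups)

# Toy network: outlet 'A' fed by 'B' and 'C'; 'B' fed by two source links.
net = {"A": ["B", "C"], "B": ["b1", "b2"]}
print(shreve_magnitude(net, "A"))    # 3: three sources drain to the outlet
print(drainage_density(12.5, 5.0))   # 2.5 km of channel per km^2
```

Basin magnitude equals the number of source links, which is why map and imagery resolution (how many first-order channels can be seen) directly controls this statistic.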
Muon reconstruction performance of the ATLAS detector in proton–proton collision data at √s=13 TeV
Aad, G.; Abbott, B.; Abdallah, J.; ...
2016-05-23
This article documents the performance of the ATLAS muon identification and reconstruction using the LHC dataset recorded at √s = 13 TeV in 2015. Using a large sample of J/ψ → μμ and Z → μμ decays from 3.2 fb⁻¹ of pp collision data, measurements of the reconstruction efficiency, as well as of the momentum scale and resolution, are presented and compared to Monte Carlo simulations. The reconstruction efficiency is measured to be close to 99% over most of the covered phase space (|η| < 2.5 and 5 < pT < 100 GeV). The pT resolution for muons from Z → μμ decays is 2.9%, while the precision of the momentum scale for low-pT muons from J/ψ → μμ decays is about 0.2%.
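The headline number of such a measurement is a per-bin efficiency with a statistical uncertainty. A minimal sketch of that calculation is below; the actual ATLAS analysis is far more involved (tag-and-probe selection, background subtraction, binning in η and pT, scale factors), and the counts used here are invented.

```python
# Minimal efficiency calculation with a simple binomial uncertainty,
# as a sketch of the kind of per-bin result quoted above. Counts are
# hypothetical.

import math

def efficiency(n_passed, n_total):
    """Efficiency and its binomial standard error."""
    eff = n_passed / n_total
    err = math.sqrt(eff * (1.0 - eff) / n_total)
    return eff, err

eff, err = efficiency(98_900, 100_000)
print(f"efficiency = {eff:.3f} +/- {err:.4f}")   # ~0.989, near the 99% quoted
```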
Identification of novel diagnostic biomarkers for thyroid carcinoma
Wang, Xiliang; Zhang, Qing; Cai, Zhiming; Dai, Yifan; Mou, Lisha
2017-01-01
Thyroid carcinoma (THCA) is the most common endocrine malignancy worldwide. Unfortunately, only a limited number of large-scale analyses have been performed to identify biomarkers for THCA. Here, we conducted a meta-analysis using 505 THCA patients and 59 normal controls from The Cancer Genome Atlas. After identifying differentially expressed long non-coding RNAs (lncRNA) and protein-coding genes (PCG), we found vast differences in lncRNA-PCG co-expression pairs in THCA. A dysregulation network with scale-free topology was constructed. Four molecules (LA16c-380H5.2, RP11-203J24.8, MLF1 and SDC4) could potentially serve as diagnostic biomarkers of THCA with high sensitivity and specificity. We further present a diagnostic panel with expression cutoff values. Our results demonstrate the potential application of these four molecules as novel independent biomarkers for THCA diagnosis. PMID:29340074
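An expression cutoff for a single marker translates directly into sensitivity and specificity, which is the form the diagnostic panel above takes. The sketch below uses made-up expression values, not the THCA data.

```python
# Sketch: how one expression cutoff yields sensitivity and specificity
# for a single-gene diagnostic marker. Values are hypothetical.

def sens_spec(tumor_values, normal_values, cutoff):
    """Call 'tumor' when expression exceeds the cutoff; return
    (sensitivity, specificity)."""
    tp = sum(v > cutoff for v in tumor_values)
    tn = sum(v <= cutoff for v in normal_values)
    return tp / len(tumor_values), tn / len(normal_values)

tumor  = [5.1, 6.3, 4.8, 7.0, 5.9]
normal = [1.2, 2.0, 1.8, 2.5, 4.9]
sens, spec = sens_spec(tumor, normal, cutoff=3.0)
print(sens, spec)   # 1.0 0.8
```

Scanning the cutoff over its range and trading sensitivity against specificity is the usual way such panel thresholds are chosen.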
Division Viii: Galaxies and the Universe
NASA Astrophysics Data System (ADS)
Sadler, Elaine M.; Combes, Françoise; Okamura, Sadanori; Davies, Roger L.; Gallagher, John S.; Padmanabhan, Thanu; Schmidt, Brian P.
2012-04-01
The fields of extragalactic research and cosmology have continued to progress rapidly over the past three years, as detailed in the reports of the Commission Presidents, and we are pleased to acknowledge the award of the 2011 Nobel Prize in Physics to Saul Perlmutter, Brian P. Schmidt and Adam G. Riess for ``the discovery of the accelerating expansion of the Universe through observations of distant supernovae''. The Gruber Cosmology Prize was awarded in 2009 to Wendy L. Freedman, Robert C. Kennicutt and Jeremy Mould for their leadership of the Hubble Space Telescope Key Project on the Extragalactic Distance Scale, in 2010 to Charles Steidel for the identification and study of galaxies in the very distant universe, and in 2011 to Marc Davis, George Efstathiou, Carlos Frenk and Simon D.M. White for pioneering the use of numerical simulations as a tool to model and interpret the large-scale distribution of galaxies and dark matter.
k-filtering applied to Cluster density measurements in the Solar Wind: Early findings
NASA Astrophysics Data System (ADS)
Jeska, Lauren; Roberts, Owen; Li, Xing
2014-05-01
Studies of solar wind turbulence indicate that a large proportion of the energy is Alfvénic (incompressible) at inertial scales. The properties of the turbulence found in the dissipation range are still under debate: while it is widely believed that kinetic Alfvén waves form the dominant component, the constituents of the remaining compressible turbulence are disputed. Using k-filtering, the power can be measured without assuming the validity of Taylor's hypothesis, and its distribution in (ω, k)-space can be determined to assist the identification of weak turbulence components. This technique is applied to Cluster electron density measurements and compared to the power in |B(t)|. As the direct electron density measurements from the WHISPER instrument have a low cadence of only 2.2 s, proxy data derived from the spacecraft potential, measured every 0.2 s by the EFW instrument, are used to extend this study to ion scales.
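k-filtering is, at heart, a multi-point beamforming technique: power is estimated as a function of wavevector by projecting Fourier amplitudes from spatially separated measurement points onto plane-wave steering vectors. The sketch below is a drastically simplified scalar, one-dimensional Bartlett-style version (the real method uses the full cross-spectral matrix of vector measurements from four Cluster spacecraft in three dimensions); all positions and wave parameters are synthetic.

```python
# Toy 1-D beamforming illustration of the idea behind k-filtering.
# Four "spacecraft" observe a synthetic plane wave; scanning trial
# wavenumbers recovers the true k. Not the authors' implementation.

import cmath
import math
import random

positions = [0.0, 100.0, 250.0, 400.0]   # sensor positions (km)
k_true = 0.02                            # wavenumber, rad/km
omega = 2 * math.pi * 0.1                # angular frequency, rad/s
dt, n = 0.2, 1000                        # 0.2 s cadence, as for the EFW proxy

random.seed(0)

def sample(x, t):
    """Plane wave plus a little noise, measured at position x, time t."""
    return math.cos(omega * t - k_true * x) + 0.05 * random.gauss(0.0, 1.0)

# Fourier amplitude of each sensor's time series at the wave frequency
X = [sum(sample(x, i * dt) * cmath.exp(-1j * omega * i * dt) for i in range(n))
     for x in positions]

def power(k):
    """Bartlett beamformer: project amplitudes onto the plane-wave
    steering vector for trial wavenumber k."""
    return abs(sum(cmath.exp(1j * k * x) * Xj
                   for x, Xj in zip(positions, X))) ** 2

k_grid = [i * 1e-4 for i in range(501)]  # scan k from 0 to 0.05 rad/km
k_best = max(k_grid, key=power)
print(f"power peaks at k = {k_best:.3f} rad/km (true value {k_true})")
```

Repeating the scan for each frequency bin builds up the power distribution in (ω, k)-space that the abstract describes.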
Magnetic storm generation by large-scale complex structure Sheath/ICME
NASA Astrophysics Data System (ADS)
Grigorenko, E. E.; Yermolaev, Y. I.; Lodkina, I. G.; Yermolaev, M. Y.; Riazantseva, M.; Borodkova, N. L.
2017-12-01
We study temporal profiles of interplanetary plasma and magnetic field parameters as well as magnetospheric indices. We use our catalog of large-scale solar wind phenomena for the 1976-2000 interval (see the catalog for 1976-2016 at ftp://ftp.iki.rssi.ru/pub/omni/, prepared on the basis of the OMNI database (Yermolaev et al., 2009)) and the double superposed epoch analysis method (Yermolaev et al., 2010). Our analysis showed (Yermolaev et al., 2015) that the average profiles of the Dst and Dst* indices decrease during the Sheath interval (magnetic storm activity increases) and increase during the ICME interval. This profile coincides with the inverted distribution of storm numbers in both intervals (Yermolaev et al., 2017). This behavior is explained by the following reasons. (1) The IMF magnitude in the Sheath is higher than in the Ejecta and close to the value in the MC. (2) The Sheath has a 1.5 times higher efficiency of storm generation than the ICME (Nikolaeva et al., 2015). Most so-called CME-induced storms are really Sheath-induced storms, and this fact should be taken into account in Space Weather prediction. The work was in part supported by the Russian Science Foundation, grant 16-12-10062.
References.
1. Nikolaeva N. S., Y. I. Yermolaev and I. G. Lodkina (2015), Modeling of the corrected Dst* index temporal profile on the main phase of the magnetic storms generated by different types of solar wind, Cosmic Res., 53(2), 119-127.
2. Yermolaev Yu. I., N. S. Nikolaeva, I. G. Lodkina and M. Yu. Yermolaev (2009), Catalog of Large-Scale Solar Wind Phenomena during 1976-2000, Cosmic Res., 47(2), 81-94.
3. Yermolaev, Y. I., N. S. Nikolaeva, I. G. Lodkina, and M. Y. Yermolaev (2010), Specific interplanetary conditions for CIR-induced, Sheath-induced, and ICME-induced geomagnetic storms obtained by double superposed epoch analysis, Ann. Geophys., 28, 2177-2186.
4. Yermolaev Yu. I., I. G. Lodkina, N. S. Nikolaeva and M. Yu. Yermolaev (2015), Dynamics of large-scale solar wind streams obtained by the double superposed epoch analysis, J. Geophys. Res. Space Physics, 120, doi:10.1002/2015JA021274.
5. Yermolaev Y. I., I. G. Lodkina, N. S. Nikolaeva, M. Y. Yermolaev, M. O. Riazantseva (2017), Some Problems of Identification of Large-Scale Solar Wind Types and Their Role in the Physics of the Magnetosphere, Cosmic Res., 55(3), 178-189, doi:10.1134/S0010952517030029.
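The core of superposed epoch analysis is rescaling events of unequal duration onto a common epoch axis and averaging; the "double" variant applies this separately to the Sheath and ICME intervals. A minimal sketch with synthetic Dst-like data (not the authors' code or catalog) follows.

```python
# Sketch of the single-interval core of superposed epoch analysis:
# rescale each event to a common number of points, then average.
# Event data below are synthetic.

def rescale(series, n_points):
    """Linearly interpolate a series onto n_points equally spaced samples."""
    m = len(series)
    out = []
    for i in range(n_points):
        x = i * (m - 1) / (n_points - 1)
        lo = int(x)
        hi = min(lo + 1, m - 1)
        frac = x - lo
        out.append(series[lo] * (1 - frac) + series[hi] * frac)
    return out

def superposed_epoch(events, n_points=5):
    """Average many events of unequal duration on a common epoch axis."""
    rescaled = [rescale(e, n_points) for e in events]
    return [sum(col) / len(col) for col in zip(*rescaled)]

# Three synthetic Dst-like events (nT) of different lengths
events = [[0, -20, -40, -30], [0, -10, -30, -50, -40, -20], [0, -30, -60]]
print(superposed_epoch(events))
```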
The zebrafish as a model system to study cardiovascular development.
Stainier, D Y; Fishman, M C
1994-01-01
The zebrafish, Brachydanio rerio, is rapidly becoming a system of choice for vertebrate developmental biologists. It presents unique embryological attributes and is amenable to saturation style mutagenesis, a powerful approach that, in invertebrates, has already led to the identification of a large number of key developmental genes. Since fertilization is external, the zebrafish embryo develops in the dish and is thus accessible for continued observation and manipulation at all stages of development. Furthermore, because the embryo is transparent, the developing heart and vessels can be resolved at the single-cell level. A large number of mutations that affect the development of cardiovascular form and function have recently been isolated from large-scale genetic screens for zygotic embryonic lethals. Our further understanding of the development of the cardiovascular system is important not only because of the high incidence, and familial inheritance, of congenital abnormalities, but also because it should lead to novel, differentiation-based strategies for the analysis and therapy of the diseased state. Copyright © 1994. Published by Elsevier Inc.
Hannaford, Elizabeth; Moore, Fhionna; Macleod, Fiona J
2017-10-11
Stressful life events (SLEs) have been linked to depression, anxiety, and reduced life satisfaction. The inoculation hypothesis of aging suggests older adults may be less vulnerable to poor psychological outcomes following SLEs than working-age adults. The current study compared relationships between SLEs, mood and life satisfaction among older adults (65+), and adults aged 50-64, and investigated whether group identification and loneliness moderate these relationships. A community-based sample of 121 Scottish participants responded to measures of SLEs (modified Social Readjustment Rating Scale), symptoms of depression and anxiety (Hospital Anxiety and Depression Scale), life satisfaction (Life Satisfaction Index A), group identification (Group Identification Scale), and loneliness (UCLA Loneliness Scale). In the 50-64 age group, the number of SLEs was significantly associated with greater symptoms of depression and anxiety, and reduced life satisfaction. Group identification and loneliness did not moderate these relationships. There were no significant relationships in the older adult group. The finding of relationships in working-age, but not older adults, supports the inoculation hypothesis of aging. Further research to better understand changes across the lifespan, and inter-relationships with related variables, would be valuable from both theoretical and clinical perspectives.
Identification Damage Model for Thermomechanical Degradation of Ductile Heterogeneous Materials
NASA Astrophysics Data System (ADS)
Amri, A. El; Yakhloufi, M. H. El; Khamlichi, A.
2017-05-01
The failure of ductile materials subject to high thermal and mechanical loading rates is notably affected by material inertia. The mechanisms of fatigue-crack propagation are examined with particular emphasis on the similarities and differences between cyclic crack growth in ductile materials, such as metals, and the corresponding behavior in brittle materials, such as intermetallics and ceramics. Numerical simulations of crack propagation in a cylindrical specimen demonstrate that the proposed method provides an effective means to simulate ductile fracture in large-scale cylindrical structures with engineering accuracy. The influence of damage on the intensity of the destruction of materials is studied as well.
Detergent Lysis of Animal Tissues for Immunoprecipitation.
DeCaprio, James; Kohl, Thomas O
2017-12-01
This protocol details protein extraction from mouse tissues for immunoprecipitation purposes and has been applied for the performance of large-scale immunoprecipitations of target proteins from various tissues for the identification of associated proteins by mass spectrometry. The key factors in performing a successful immunoprecipitation directly relate to the abundance of target protein in a particular tissue type and whether or not the embryonic, newborn, or adult mouse-derived tissues contain fibrous and other insoluble material. Several tissue types, including lung and liver as well as carcinomas, contain significant amounts of fibrous tissue that can interfere with an immunoprecipitation. © 2017 Cold Spring Harbor Laboratory Press.
NASA Technical Reports Server (NTRS)
Wier, C. E.; Wobber, F. J. (Principal Investigator); Russell, O. R.; Amato, R. V.
1973-01-01
The author has identified the following significant results. The 70mm black and white infrared photography acquired in March 1973 at an approximate scale of 1:115,000 permits the identification of areas of mine subsidence not readily evident on other films. This is largely due to the high contrast rendition of water and land by this film and the excessive surface moisture conditions prevalent in the area at the time of photography. Subsided areas consist of shallow depressions which have impounded water. Patterns with a regularity indicative of the room and pillar configuration used in subsurface coal mining are evident.
Fortunato, Santo; Bergstrom, Carl T; Börner, Katy; Evans, James A; Helbing, Dirk; Milojević, Staša; Petersen, Alexander M; Radicchi, Filippo; Sinatra, Roberta; Uzzi, Brian; Vespignani, Alessandro; Waltman, Ludo; Wang, Dashun; Barabási, Albert-László
2018-03-02
Identifying fundamental drivers of science and developing predictive models to capture its evolution are instrumental for the design of policies that can improve the scientific enterprise, for example through enhanced career paths for scientists, better performance evaluation for organizations hosting research, discovery of novel effective funding vehicles, and even identification of promising regions along the scientific frontier. The science of science uses large-scale data on the production of science to search for universal and domain-specific patterns. Here, we review recent developments in this transdisciplinary field. Copyright © 2018 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.
Mapping cumulative noise from shipping to inform marine spatial planning.
Erbe, Christine; MacGillivray, Alexander; Williams, Rob
2012-11-01
Including ocean noise in marine spatial planning requires predictions of noise levels on large spatiotemporal scales. Based on a simple sound transmission model and ship track data (Automatic Identification System, AIS), cumulative underwater acoustic energy from shipping was mapped throughout 2008 in the west Canadian Exclusive Economic Zone, showing high noise levels in critical habitats for endangered resident killer whales, exceeding limits of "good conservation status" under the EU Marine Strategy Framework Directive. Error analysis proved that rough calculations of noise occurrence and propagation can form a basis for management processes, because spending resources on unnecessary detail is wasteful and delays remedial action.
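A "simple sound transmission model" of the kind described reduces to a source level minus a geometric spreading loss, with contributions from many ship passages summed in the intensity domain per grid cell. The sketch below assumes spherical spreading (TL = 20 log10 r) and invented source levels and ranges; the study's model and parameters may differ.

```python
# Sketch of cumulative shipping-noise estimation at one grid cell:
# RL = SL - 20*log10(r), then incoherent (intensity-domain) summation.
# All numbers are hypothetical.

import math

def received_level(source_level_db, range_m):
    """RL = SL - 20 log10(r): spherical spreading, no absorption."""
    return source_level_db - 20.0 * math.log10(max(range_m, 1.0))

def cumulative_level(levels_db):
    """Sum incoherent contributions in the intensity domain."""
    total = sum(10 ** (lvl / 10.0) for lvl in levels_db)
    return 10.0 * math.log10(total)

# Three ship passages heard at one grid cell
rls = [received_level(180.0, r) for r in (1_000.0, 5_000.0, 20_000.0)]
print([round(r, 1) for r in rls])          # [120.0, 106.0, 94.0]
print(round(cumulative_level(rls), 1))     # 120.2: the closest ship dominates
```

Running this per cell and per AIS track segment over a year yields the kind of cumulative acoustic-energy map the abstract describes.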
Li, Xiaofan; Xia, Zhenyao; Tang, Jianqiang; Wu, Jiahui; Tong, Jing; Li, Mengjie; Ju, Jianhua; Chen, Huirong; Wang, Liyan
2017-08-04
Chemical epigenetic manipulation was applied to a deep marine-derived fungus, Aspergillus sp. SCSIOW3, resulting in significant changes of the secondary metabolites. One new diphenylether- O -glycoside (diorcinol 3- O -α-D-ribofuranoside), along with seven known compounds, were isolated from the culture treated with a combination of histone deacetylase inhibitor (suberohydroxamic acid) and DNA methyltransferase inhibitor (5-azacytidine). Compounds 2 and 4 exhibited significant biomembrane protective effect of erythrocytes. 2 also showed algicidal activity against Chattonella marina , a bloom forming alga responsible for large scale fish deaths.
Parallel human genome analysis: microarray-based expression monitoring of 1000 genes.
Schena, M; Shalon, D; Heller, R; Chai, A; Brown, P O; Davis, R W
1996-01-01
Microarrays containing 1046 human cDNAs of unknown sequence were printed on glass with high-speed robotics. These 1.0-cm2 DNA "chips" were used to quantitatively monitor differential expression of the cognate human genes using a highly sensitive two-color hybridization assay. Array elements that displayed differential expression patterns under given experimental conditions were characterized by sequencing. The identification of known and novel heat shock and phorbol ester-regulated genes in human T cells demonstrates the sensitivity of the assay. Parallel gene analysis with microarrays provides a rapid and efficient method for large-scale human gene discovery. PMID:8855227
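The readout of a two-color hybridization assay is, per array element, the log ratio of the two channel intensities; elements beyond a fold-change threshold are flagged as differentially expressed. The sketch below uses invented intensities and gene names, not the study's data.

```python
# Sketch of two-color microarray analysis: log2 ratio of channel
# intensities per element, thresholded for differential expression.
# Intensities and names are hypothetical.

import math

def log2_ratio(cy5, cy3):
    return math.log2(cy5 / cy3)

def differential(elements, threshold=1.0):
    """Return element names whose |log2 ratio| exceeds the threshold
    (i.e. more than 2-fold up- or down-regulated)."""
    return [name for name, (cy5, cy3) in elements.items()
            if abs(log2_ratio(cy5, cy3)) > threshold]

elements = {
    "HSPA1": (8000.0, 1000.0),   # strongly induced (e.g. heat shock)
    "GAPDH": (1500.0, 1400.0),   # housekeeping, unchanged
    "geneX": (300.0, 1500.0),    # repressed
}
print(differential(elements))    # ['HSPA1', 'geneX']
```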
TUBEs-Mass Spectrometry for Identification and Analysis of the Ubiquitin-Proteome.
Azkargorta, Mikel; Escobes, Iraide; Elortza, Felix; Matthiesen, Rune; Rodríguez, Manuel S
2016-01-01
Mass spectrometry (MS) has become the method of choice for the large-scale analysis of protein ubiquitylation. A number of methods have been proposed for mapping ubiquitin sites, each with different pros and cons. We present here a protocol for the MS analysis of the ubiquitin-proteome captured by TUBEs and subsequent data analysis. Using dedicated software and algorithms, specific information on the presence of ubiquitylated peptides can be obtained from the MS search results. In addition, a quantitative and functional analysis of the ubiquitylated proteins and their interacting partners helps to unravel the biological and molecular processes they are involved in.
A large scale virtual screen of DprE1.
Wilsey, Claire; Gurka, Jessica; Toth, David; Franco, Jimmy
2013-12-01
Tuberculosis continues to plague the world, with the World Health Organization estimating that about one third of the world's population is infected. Due to the emergence of MDR and XDR strains of TB, the need for novel therapeutics has become increasingly urgent. Herein we report the results of a virtual screen of 4.1 million compounds against a promising drug target, DprE1. The virtual compounds were obtained from the ZINC database and screened using the molecular docking program AutoDock Vina. The computational hits have led to the identification of several promising lead compounds. Copyright © 2013 Elsevier Ltd. All rights reserved.
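After a docking run, AutoDock Vina reports a predicted binding affinity per ligand (kcal/mol, more negative meaning a stronger predicted binder), and hit identification reduces to ranking those scores. The compound IDs and scores below are invented for illustration.

```python
# Sketch of the ranking step of a virtual screen: sort docked compounds
# by predicted binding affinity and keep the top hits. Scores are
# hypothetical.

def top_hits(scores, n=3):
    """Rank docked compounds by predicted affinity (ascending kcal/mol)."""
    return sorted(scores.items(), key=lambda kv: kv[1])[:n]

scores = {
    "ZINC0001": -9.4, "ZINC0002": -6.1, "ZINC0003": -10.2,
    "ZINC0004": -7.8, "ZINC0005": -8.9,
}
for name, affinity in top_hits(scores):
    print(name, affinity)   # ZINC0003, then ZINC0001, then ZINC0005
```

At the scale of 4.1 million compounds, the same ranking is simply applied to the aggregated score table from all docking jobs.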
NASA Astrophysics Data System (ADS)
Piniewski, Mikołaj
2016-05-01
The objective of this study was to apply a previously developed large-scale and high-resolution SWAT model of the Vistula and the Odra basins, calibrated with a focus on natural flow simulation, in order to assess the impact of three different dam reservoirs on streamflow using the Indicators of Hydrologic Alteration (IHA). A tailored spatial calibration approach was designed, in which calibration was focused on a large set of relatively small non-nested sub-catchments with semi-natural flow regime. These were classified into calibration clusters based on the flow statistics similarity. After performing calibration and validation that gave overall positive results, the calibrated parameter values were transferred to the remaining part of the basins using an approach based on hydrological similarity of donor and target catchments. The calibrated model was applied in three case studies with the purpose of assessing the effect of dam reservoirs (Włocławek, Siemianówka and Czorsztyn Reservoirs) on streamflow alteration. Both the assessment based on gauged streamflow (Before-After design) and the one based on simulated natural streamflow showed large alterations in selected flow statistics related to magnitude, duration, high and low flow pulses and rate of change. Some benefits of using a large-scale and high-resolution hydrological model for the assessment of streamflow alteration include: (1) providing an alternative or complementary approach to the classical Before-After designs, (2) isolating the climate variability effect from the dam (or any other source of alteration) effect, (3) providing a practical tool that can be applied at a range of spatial scales over a large area such as a country, in a uniform way. Thus, the presented approach can be applied for designing more natural flow regimes, which is crucial for river and floodplain ecosystem restoration in the context of the European Union's policy on environmental flows.
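IHA statistics are computed from daily flow series and compared between the natural (or pre-impact) and altered regimes. The sketch below computes two of the statistic groups mentioned above, monthly mean flow and the annual 1-day minimum, on synthetic series (idealized 30-day months; not the study's data or the full IHA suite).

```python
# Sketch of two Indicators of Hydrologic Alteration computed from daily
# flows, before vs. after regulation. Flow values are synthetic.

def monthly_mean(daily_flows, days_per_month=30):
    """Mean flow per (idealized 30-day) month."""
    return [sum(daily_flows[i:i + days_per_month]) / days_per_month
            for i in range(0, len(daily_flows), days_per_month)]

def one_day_min(daily_flows):
    """Annual 1-day minimum flow."""
    return min(daily_flows)

natural   = [10.0] * 30 + [50.0] * 30 + [5.0] * 30   # seasonal regime
regulated = [20.0] * 90                               # flattened by a dam

print(monthly_mean(natural), one_day_min(natural))       # [10.0, 50.0, 5.0] 5.0
print(monthly_mean(regulated), one_day_min(regulated))   # [20.0, 20.0, 20.0] 20.0
```

Comparing the two rows is the essence of the Before-After (or simulated-natural vs. observed) contrast the abstract describes: regulation here raises the low flows, lowers the high flows, and removes seasonality.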
Piton, Amélie; Redin, Claire; Mandel, Jean-Louis
2013-08-08
Because of the unbalanced sex ratio (1.3-1.4 to 1) observed in intellectual disability (ID) and the identification of large ID-affected families showing X-linked segregation, much attention has been focused on the genetics of X-linked ID (XLID). Mutations causing monogenic XLID have now been reported in over 100 genes, most of which are commonly included in XLID diagnostic gene panels. Nonetheless, the boundary between true mutations and rare non-disease-causing variants often remains elusive. The sequencing of a large number of control X chromosomes, required for avoiding false-positive results, was not systematically possible in the past. Such information is now available thanks to large-scale sequencing projects such as the National Heart, Lung, and Blood Institute (NHLBI) Exome Sequencing Project, which provides variation information on 10,563 X chromosomes from the general population. We used this NHLBI cohort to systematically reassess the implication of 106 genes proposed to be involved in monogenic forms of XLID. We particularly question the implication in XLID of ten of them (AGTR2, MAGT1, ZNF674, SRPX2, ATP6AP2, ARHGEF6, NXF5, ZCCHC12, ZNF41, and ZNF81), in which truncating variants or previously published mutations are observed at a relatively high frequency within this cohort. We also highlight 15 other genes (CCDC22, CLIC2, CNKSR2, FRMPD4, HCFC1, IGBP1, KIAA2022, KLF8, MAOA, NAA10, NLGN3, RPL10, SHROOM4, ZDHHC15, and ZNF261) for which replication studies are warranted. We propose that similar reassessment of reported mutations (and genes) with the use of data from large-scale human exome sequencing would be relevant for a wide range of other genetic diseases. Copyright © 2013 The American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.
Tack, Jason D.; Fedy, Bradley C.
2015-01-01
Proactive conservation planning for species requires the identification of important spatial attributes across ecologically relevant scales in a model-based framework. However, it is often difficult to develop predictive models, as the explanatory data required for model development across regional management scales are rarely available. Golden eagles are a large-ranging predator of conservation concern in the United States that may be negatively affected by wind energy development. Thus, identifying landscapes least likely to pose conflict between eagles and wind development via shared space prior to development will be critical for conserving populations in the face of imposing development. We used publicly available data on golden eagle nests to generate predictive models of golden eagle nesting sites in Wyoming, USA, using a suite of environmental and anthropogenic variables. By overlaying predictive models of golden eagle nesting habitat with wind energy resource maps, we highlight areas of potential conflict between eagle nesting habitat and wind development. However, our results suggest that wind potential and the relative probability of golden eagle nesting are not necessarily spatially correlated. Indeed, the majority of our sample frame includes areas with disparate predictions between suitable nesting habitat and potential for developing wind energy resources. Map predictions cannot replace on-the-ground monitoring of the potential risk of wind turbines to wildlife populations, though they provide industry and managers a useful framework for a first assessment of potential development. PMID:26262876
NASA Astrophysics Data System (ADS)
Scher, C.; Tennant, C.; Larsen, L.; Bellugi, D. G.
2016-12-01
Advances in remote-sensing technology allow for cost-effective, accurate, high-resolution mapping of river-channel topography and shallow aquatic bathymetry over large spatial scales. A combination of near-infrared and green-spectrum airborne laser swath mapping was used to map river-channel bathymetry and watershed geometry over more than 90 river-kilometers (75-1175 km2) of the Greys River in Wyoming. The day-of-flight wetted channel was identified from green LiDAR returns, and more than 1800 valley-bottom cross-sections were extracted at regular 50-m intervals. The bankfull channel geometry was identified using a "watershed-based" algorithm that incrementally fills local minima to a "spill" point, thereby constraining areas of local convergence and delineating all the potential channels along the cross-section for each distinct "spill stage." Multiple potential channels in alluvial floodplains and the lack of clearly defined channel banks in bedrock reaches challenge identification of the bankfull channel based on topology alone. Here we combine a variety of topological measures, geometrical considerations, and stage levels to define a stage-dependent bankfull channel geometry, and compare the results with day-of-flight wetted-channel data. Initial results suggest that power-law scaling of channel hydraulic geometry and basin hydrology may not accurately capture downstream channel adjustments for rivers draining complex mountain topography.
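A minimal one-dimensional sketch of the "watershed-based" spill-point idea described above: for each candidate stage, contiguous wetted spans along a cross-section are delineated, and a stage at which two adjacent channels merge marks a "spill stage". The elevation values and function names here are illustrative assumptions, not the authors' implementation.

```python
def channels_at_stage(elevation, stage):
    """Return (start, end) index pairs of contiguous wetted spans along a
    cross-section, for points at or below the given water-surface stage."""
    spans, start = [], None
    for i, z in enumerate(elevation):
        if z <= stage and start is None:
            start = i                      # channel begins
        elif z > stage and start is not None:
            spans.append((start, i - 1))   # channel ends at a bank
            start = None
    if start is not None:
        spans.append((start, len(elevation) - 1))
    return spans

def spill_stages(elevation):
    """Incrementally raise the stage through the sampled elevations and record
    stages at which the number of distinct channels drops (a 'spill')."""
    merges, prev = [], None
    for stage in sorted(set(elevation)):
        n = len(channels_at_stage(elevation, stage))
        if prev is not None and n < prev:
            merges.append(stage)
        prev = n
    return merges

# Hypothetical cross-section: two low-flow channels separated by a mid-channel bar
xs = [5, 2, 1, 2, 3, 2, 1.5, 2.5, 5]
print(channels_at_stage(xs, 2.0))  # two distinct wetted spans at this stage
print(spill_stages(xs))            # stage at which the bar is overtopped
```

In the study, such per-stage channel sets would then be filtered with the topological and geometrical criteria mentioned in the abstract to pick the bankfull stage.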
The structure and large-scale organization of extreme cold waves over the conterminous United States
NASA Astrophysics Data System (ADS)
Xie, Zuowei; Black, Robert X.; Deng, Yi
2017-12-01
Extreme cold waves (ECWs) occurring over the conterminous United States (US) are studied through a systematic identification and documentation of their local synoptic structures, associated large-scale meteorological patterns (LMPs), and forcing mechanisms external to the US. Focusing on the boreal cool season (November-March) for 1950‒2005, a hierarchical cluster analysis identifies three ECW patterns, respectively characterized by cold surface air temperature anomalies over the upper midwest (UM), northwestern (NW), and southeastern (SE) US. Locally, ECWs are synoptically organized by anomalous high pressure and northerly flow. At larger scales, the UM LMP features a zonal dipole in the mid-tropospheric height field over North America, while the NW and SE LMPs each include a zonal wave train extending from the North Pacific across North America into the North Atlantic. The Community Climate System Model version 4 (CCSM4) in general simulates the three ECW patterns quite well and successfully reproduces the observed enhancements in the frequency of their associated LMPs. La Niña and the cool phase of the Pacific Decadal Oscillation (PDO) favor the occurrence of NW ECWs, while the warm PDO phase, low Arctic sea ice extent and high Eurasian snow cover extent (SCE) are associated with elevated SE-ECW frequency. Additionally, high Eurasian SCE is linked to increases in the occurrence likelihood of UM ECWs.
NASA Astrophysics Data System (ADS)
Schindewolf, Marcus; Kaiser, Andreas; Buchholtz, Arno; Schmidt, Jürgen
2017-04-01
Extreme rainfall events and resulting flash floods led to massive devastation in Germany during spring 2016. The study presented aims at the development of an early warning system that allows the simulation and assessment of negative effects on infrastructure from radar-based heavy-rainfall predictions, which serve as input data for the process-based soil loss and deposition model EROSION 3D. Our approach enables a detailed identification of runoff and sediment fluxes in agriculturally used landscapes. In a first step, documented historical events were analyzed concerning the agreement between measured radar rainfall and large-scale erosion risk maps. A second step focused on small-scale erosion monitoring via UAV of the source areas of heavy flooding events, and on a model reconstruction of the processes involved. In all examples, damage was caused to local infrastructure. Both analyses are promising for detecting runoff- and sediment-delivering areas at high temporal and spatial resolution. Results prove the important role of late-covering crops such as maize, sugar beet and potatoes in runoff generation. While e.g. winter wheat positively affects extensive runoff generation on undulating landscapes, massive soil loss and thus muddy flows are observed and depicted in model results. Future research aims at large-scale model parameterization and application in real time, uncertainty estimation of precipitation forecasts, and interface development.
Cui, Peng; Zhong, Tingyan; Wang, Zhuo; Wang, Tao; Zhao, Hongyu; Liu, Chenglin; Lu, Hui
2018-06-01
Circadian genes are expressed periodically over an approximately 24-h period, and the identification and study of these genes can provide a deep understanding of circadian control, which plays significant roles in human health. Although many circadian gene identification algorithms have been developed, large numbers of false positives and low coverage are still major problems in this field. In this study we constructed a novel computational framework for circadian gene identification using deep neural networks (DNN) - a deep learning approach that can represent the raw form of data patterns without imposing assumptions on the expression distribution. First, we transformed time-course gene expression data into categorical-state data denoting the changing trend of gene expression. Two distinct expression patterns emerged after clustering of the state data for circadian genes from our manually created learning dataset. A DNN was then applied to discriminate aperiodic genes and the two subtypes of periodic genes. To assess the performance of the DNN, four commonly used machine learning methods, including k-nearest neighbors, logistic regression, naïve Bayes, and support vector machines, were used for comparison. The results show that the DNN model achieves the best balanced precision and recall. Next, we conducted large-scale circadian gene detection using the trained DNN model on the remaining transcription profiles. Compared with JTK_CYCLE and a study performed by Möller-Levet et al. (doi: https://doi.org/10.1073/pnas.1217154110), we identified 1132 novel periodic genes. Through functional analysis of these novel circadian genes, we found that the GTPase superfamily exhibits distinct circadian expression patterns and may provide a molecular switch of circadian control of the functioning of the immune system in human blood.
Our study provides novel insights into both the circadian gene identification field and the study of complex circadian-driven biological control. This article is part of a Special Issue entitled: Accelerating Precision Medicine through Genetic and Genomic Big Data Analysis edited by Yudong Cai & Tao Huang. Copyright © 2017. Published by Elsevier B.V.
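The categorical-state transformation described above (encoding the trend between consecutive time points rather than the raw expression values) can be sketched as follows; the tolerance parameter and sample profiles are hypothetical, not taken from the paper.

```python
def to_state_sequence(expression, eps=0.0):
    """Convert a time-course expression vector into categorical trend states:
    +1 (up), -1 (down), 0 (flat, within +/- eps) between consecutive points."""
    states = []
    for prev, curr in zip(expression, expression[1:]):
        diff = curr - prev
        if diff > eps:
            states.append(1)
        elif diff < -eps:
            states.append(-1)
        else:
            states.append(0)
    return states

# Hypothetical 24-h profiles sampled every ~4 h: one rise-fall cycle vs. noise
periodic = [1.0, 2.5, 4.0, 3.0, 1.5, 1.0]
flat = [2.0, 2.0, 2.1, 2.0, 2.0, 2.0]
print(to_state_sequence(periodic))           # [1, 1, -1, -1, -1]
print(to_state_sequence(flat, eps=0.2))      # all-zero state sequence
```

Sequences like these, rather than raw intensities, would then be clustered and fed to the classifier, which makes the method insensitive to the absolute expression scale.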
The physiology of keystroke dynamics
NASA Astrophysics Data System (ADS)
Jenkins, Jeffrey; Nguyen, Quang; Reynolds, Joseph; Horner, William; Szu, Harold
2011-06-01
A universal implementation for most behavioral biometric systems is still unknown, since some behaviors are not individual enough for identification. Habitual behaviors which are measurable by sensors are considered 'soft' biometrics (e.g., walking style, typing rhythm), while physical attributes (e.g., iris, fingerprint) are 'hard' biometrics. Thus, biometrics can aid in the identification of a human not only in cyberspace but in the world we live in. Hard biometrics have proven to be a rather successful form of identification, despite the large number of individual signatures to keep track of. Virtually all soft biometric strategies, however, share a common pitfall. Instead of the classical pass/fail decision based on the measurements used by hard biometrics, a confidence threshold is imposed, increasing False Alarm and False Rejection Rates. This unreliability is a major roadblock for large-scale system integration. Common computer security requires users to log in with a six-or-more-digit PIN (Personal Identification Number) to access files on the disk. Commercially available Keystroke Dynamics (KD) software can separately calculate and track the mean and variance of the time between keys (air time) and of the time spent pressing each key (touch time). Despite its apparent utility, KD is not yet a robust, fault-tolerant system. We begin with a simple question: how can a pianist quickly control so many different finger and wrist movements to play music? What information, if any, can be gained from analyzing typing behavior over time? Biology has shown us that the separation of arm and finger motion is due to three long nerves in each arm regulating movement in different parts of the hand. In this paper we wish to capture the underlying behavioral information of a typist through statistical memory and non-linear dynamics. Our method may reveal an inverse Compressive Sensing mapping: a unique individual signature.
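A minimal sketch of the air-time/touch-time statistics that the abstract attributes to commercial KD software, assuming hypothetical per-keystroke (key, down, up) timestamps; the event format and numbers are illustrative, not from any particular product.

```python
import statistics

def keystroke_features(events):
    """events: list of (key, down_ms, up_ms) tuples in typing order.
    Returns mean/variance of touch times (press durations) and of air times
    (gaps between releasing one key and pressing the next)."""
    touch = [up - down for _, down, up in events]
    air = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
    return {
        "touch_mean": statistics.mean(touch),
        "touch_var": statistics.pvariance(touch),
        "air_mean": statistics.mean(air),
        "air_var": statistics.pvariance(air),
    }

# Hypothetical timings (ms) for typing a four-character PIN
events = [("p", 0, 80), ("i", 150, 220), ("n", 300, 390), ("1", 470, 540)]
profile = keystroke_features(events)
print(profile)
```

A per-user profile of these means and variances is what a confidence-threshold matcher would compare new typing samples against.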
Identification of host response signatures of infection.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Branda, Steven S.; Sinha, Anupama; Bent, Zachary
2013-02-01
Biological weapons of mass destruction and emerging infectious diseases represent a serious and growing threat to our national security. Effective response to a bioattack or disease outbreak critically depends upon efficiently and reliably distinguishing between infected and healthy individuals, to enable rational use of scarce, invasive, and/or costly countermeasures (diagnostics, therapies, quarantine). Screening based on direct detection of the causative pathogen can be problematic, because culture- and probe-based assays are confounded by unanticipated pathogens (e.g., deeply diverged, engineered), and readily accessible specimens (e.g., blood) often contain little or no pathogen, particularly at pre-symptomatic stages of disease. Thus, in addition to the pathogen itself, one would like to detect infection-specific host response signatures in the specimen, preferably ones comprised of nucleic acids (NA), which can be recovered and amplified from tiny specimens (e.g., fingerstick draws). Proof-of-concept studies have not been definitive, however, largely due to use of sub-optimal sample preparation and detection technologies. For purposes of pathogen detection, Sandia has developed novel molecular biology methods that enable selective isolation of NA unique to, or shared between, complex samples, followed by identification and quantitation via Second Generation Sequencing (SGS). The central hypothesis of the current study is that variations on this approach will support efficient identification and verification of NA-based host response signatures of infectious disease. To test this hypothesis, we re-engineered Sandia's sophisticated sample preparation pipelines, and developed new SGS data analysis tools and strategies, in order to pioneer the use of SGS for identification of host NA correlating with infection. Proof-of-concept studies were carried out using specimens drawn from pathogen-infected non-human primates (NHP).
This work provides a strong foundation for large-scale, highly efficient efforts to identify and verify infection-specific host NA signatures in human populations.
Newly Uncovered Large-Scale Component of the Northern Jet in R Aqr
NASA Astrophysics Data System (ADS)
Montez, Rodolfo; Karovska, Margarita; Nichols, Joy S.; Kashyap, Vinay
2017-06-01
R Aqr is a symbiotic system comprising a compact white dwarf and a Mira giant star. The interaction of these stars is responsible for a two-sided jet structure that is seen across the electromagnetic spectrum. X-ray emission from the jet was first discovered in 2000 with an observation by the Chandra X-ray Observatory. Since then, follow-up observations have traced the evolution of the X-ray emission from the jet and from a central compact source. In X-rays, the NE jet is brighter than the SW jet, but the SW jet was larger in extent before it began fading below the detection threshold. However, we have uncovered evidence for large-scale emission associated with the NE jet that matches the extent of the SW jet. This emission has escaped previous identification because it is near the detection threshold, but it has been present since the first observation in 2000 and clearly evolves in subsequent observations. We present our study of the emission from this component of the NE jet, its relationship to multiwavelength observations, and how it affects our interpretation of the jet phenomenon in R Aqr.
Maguire, E M; Bokhour, B G; Asch, S M; Wagner, T H; Gifford, A L; Gallagher, T H; Durfee, J M; Martinello, R A; Elwy, A R
2016-06-01
We examined print, broadcast and social media reports about health care systems' disclosures of large-scale adverse events to develop future effective messaging. Directed content analysis. We systematically searched four communication databases, YouTube and Really Simple Syndication (RSS) feeds relating to six disclosures of lapses in infection control practices in the Department of Veterans Affairs occurring between 2009 and 2012. We assessed these with a coding frame derived from effective crisis and risk communication models. We identified 148 unique media reports. Some components of effective communication (discussion of cause, reassurance, self-efficacy) were more present than others (apology, lessons learned). Media coverage about 'promoting secrecy' and 'slow response' appeared when the time from event discovery to patient notification exceeded 75 days. Elected officials' quotes (n = 115) were often negative (83%). Hospital officials' comments (n = 165) were predominantly neutral (92%) and focused on information sharing. Health care systems should work to develop clear messages focused on what is not well covered by the media, including authentic apologies and remedial actions taken, and should shorten the timeframe between event identification and disclosure to patients. Published by Elsevier Ltd.
Martín-Campos, Trinidad; Mylonas, Roman; Masselot, Alexandre; Waridel, Patrice; Petricevic, Tanja; Xenarios, Ioannis; Quadroni, Manfredo
2017-08-04
Mass spectrometry (MS) has become the tool of choice for the large scale identification and quantitation of proteins and their post-translational modifications (PTMs). This development has been enabled by powerful software packages for the automated analysis of MS data. While data on PTMs of thousands of proteins can nowadays be readily obtained, fully deciphering the complexity and combinatorics of modification patterns even on a single protein often remains challenging. Moreover, functional investigation of PTMs on a protein of interest requires validation of the localization and the accurate quantitation of its changes across several conditions, tasks that often still require human evaluation. Software tools for large scale analyses are highly efficient but are rarely conceived for interactive, in-depth exploration of data on individual proteins. We here describe MsViz, a web-based and interactive software tool that supports manual validation of PTMs and their relative quantitation in small- and medium-size experiments. The tool displays sequence coverage information, peptide-spectrum matches, tandem MS spectra and extracted ion chromatograms through a single, highly intuitive interface. We found that MsViz greatly facilitates manual data inspection to validate PTM location and quantitate modified species across multiple samples.
Hiller, Ekkehard; Istel, Fabian; Tscherner, Michael; Brunke, Sascha; Ames, Lauren; Firon, Arnaud; Green, Brian; Cabral, Vitor; Marcet-Houben, Marina; Jacobsen, Ilse D.; Quintin, Jessica; Seider, Katja; Frohner, Ingrid; Glaser, Walter; Jungwirth, Helmut; Bachellier-Bassi, Sophie; Chauvel, Murielle; Zeidler, Ute; Ferrandon, Dominique; Gabaldón, Toni; Hube, Bernhard; d'Enfert, Christophe; Rupp, Steffen; Cormack, Brendan; Haynes, Ken; Kuchler, Karl
2014-01-01
The opportunistic fungal pathogen Candida glabrata is a frequent cause of candidiasis, causing infections ranging from superficial to life-threatening disseminated disease. The inherent tolerance of C. glabrata to azole drugs makes this pathogen a serious clinical threat. To identify novel genes implicated in antifungal drug tolerance, we have constructed a large-scale C. glabrata deletion library consisting of 619 unique, individually bar-coded mutant strains, each lacking one specific gene, altogether representing almost 12% of the genome. Functional analysis of this library in a series of phenotypic and fitness assays identified numerous genes required for growth of C. glabrata under normal or specific stress conditions, as well as a number of novel genes involved in tolerance to clinically important antifungal drugs such as azoles and echinocandins. We identified 38 deletion strains displaying strongly increased susceptibility to caspofungin, 28 of which correspond to genes encoding proteins that have not previously been linked to echinocandin tolerance. Our results demonstrate the potential of the C. glabrata mutant collection as a valuable resource in functional genomics studies of this important fungal pathogen of humans, and its capacity to facilitate the identification of putative novel antifungal drug targets and virulence genes. PMID:24945925
NASA Astrophysics Data System (ADS)
Sweeney, C.; Kort, E. A.; Rella, C.; Conley, S. A.; Karion, A.; Lauvaux, T.; Frankenberg, C.
2015-12-01
Along with a boom in oil and natural gas production in the US, there has been a substantial effort to understand the true environmental impact of these operations on air and water quality, as well as net radiation balance. This multi-institution effort funded by both governmental and non-governmental agencies has provided a case study for identification and verification of emissions using a multi-scale, top-down approach. This approach leverages a combination of remote sensing to identify areas that need specific focus and airborne in-situ measurements to quantify both regional and large- to mid-size single-point emitters. Ground-based networks of mobile and stationary measurements provide the bottom tier of measurements from which process-level information can be gathered to better understand the specific sources and temporal distribution of the emitters. The motivation for this type of approach is largely driven by recent work in the Barnett Shale region in Texas as well as the San Juan Basin in New Mexico and Colorado; these studies suggest that relatively few single-point emitters dominate the regional emissions of CH4.
NASA Astrophysics Data System (ADS)
Dutta, Rohan; Ghosh, Parthasarathi; Chowdhury, Kanchan
2014-01-01
Large-scale helium refrigerators are subjected to pulsed heat load from tokamaks. As these plants are designed for constant heat loads, operation under such varying load may lead to instability in plants thereby tripping the operation of different equipment. To understand the behavior of the plant subjected to pulsed heat load, an existing plant of 120 W at 4.2 K and another large-scale plant of 18 kW at 4.2 K have been analyzed using a commercial process simulator Aspen Hysys®. A similar heat load characteristic has been applied in both quasi steady state and dynamic analysis to determine critical stages and equipment of these plants from operational point of view. It has been found that the coldest part of both the cycles consisting JT-stage and its preceding reverse Brayton stage are the most affected stages of the cycles. Further analysis of the above stages and constituting equipment revealed limits of operation with respect to variation of return stream flow rate resulted from such heat load variations. The observations on the outcome of the analysis can be used for devising techniques for steady operation of the plants subjected to pulsed heat load.
NASA Astrophysics Data System (ADS)
Do, Hong; Gudmundsson, Lukas; Leonard, Michael; Westra, Seth; Senerivatne, Sonia
2017-04-01
In-situ observations of daily streamflow with global coverage are a crucial asset for understanding large-scale freshwater resources, which are an essential component of the Earth system and a prerequisite for societal development. Here we present the Global Streamflow Indices and Metadata archive (G-SIM), a collection of indices derived from more than 20,000 daily streamflow time series across the globe. These indices are designed to support global assessments of change in wet and dry extremes, and have been compiled from 12 free-to-access online databases (seven national databases and five international collections). The G-SIM archive also includes significant metadata to support a detailed understanding of streamflow dynamics, including drainage-area shapefiles and many essential catchment properties such as land cover type and soil and topographic characteristics. The automated procedure for data handling and quality control makes G-SIM a reproducible, extendable archive that can be utilised for many purposes in large-scale hydrology. Potential applications include the identification of observational trends in hydrological extremes, the assessment of climate change impacts on streamflow regimes, and the validation of global hydrological models.
Loong, Bronwyn; Zaslavsky, Alan M.; He, Yulei; Harrington, David P.
2013-01-01
Statistical agencies have begun to partially synthesize public-use data for major surveys to protect the confidentiality of respondents’ identities and sensitive attributes, by replacing high-disclosure-risk and sensitive variables with multiple imputations. To date, there are few applications of synthetic data techniques to large-scale healthcare survey data. Here, we describe partial synthesis of survey data collected by CanCORS, a comprehensive observational study of the experiences, treatments, and outcomes of patients with lung or colorectal cancer in the United States. We review inferential methods for partially synthetic data, and discuss selection of high-disclosure-risk variables for synthesis, specification of imputation models, and identification disclosure risk assessment. We evaluate data utility by replicating published analyses and comparing results using original and synthetic data, and discuss practical issues in preserving inferential conclusions. We found that important subgroup relationships must be included in the synthetic data imputation model to preserve the data utility of the observed data for a given analysis procedure. We conclude that synthetic CanCORS data are best suited for preliminary data analysis purposes. These methods address the requirement to share data in clinical research without compromising confidentiality. PMID:23670983
Biometric identification: a holistic perspective
NASA Astrophysics Data System (ADS)
Nadel, Lawrence D.
2007-04-01
Significant advances continue to be made in biometric technology. However, the global war on terrorism and our increasingly electronic society have created a societal need for large-scale, interoperable biometric capabilities that challenge the capabilities of current off-the-shelf technology. At the same time, there are concerns that large-scale implementation of biometrics will infringe on our civil liberties and offer increased opportunities for identity theft. This paper looks beyond the basic science and engineering of biometric sensors and fundamental matching algorithms and offers approaches for achieving greater performance and acceptability of applications enabled with currently available biometric technologies. The discussion focuses on three primary biometric system aspects: performance and scalability, interoperability, and cost benefit. Significant improvements in system performance and scalability can be achieved through careful consideration of the following elements: biometric data quality, human factors, operational environment, workflow, multibiometric fusion, and integrated performance modeling. Application interoperability hinges upon some of the factors noted above as well as adherence to interface, data, and performance standards. However, there are times when the price of conforming to such standards can be a decrease in local system performance. The development of biometric performance-based cost-benefit models can help determine realistic requirements and acceptable designs.
NASA Astrophysics Data System (ADS)
Nishizawa, Atsushi; Namikawa, Toshiya; Taruya, Atsushi
2016-03-01
Gravitational waves (GWs) from compact binary stars at cosmological distances are promising and powerful cosmological probes, referred to as GW standard sirens. With future GW detectors, we will be able to precisely measure source luminosity distances out to a redshift z ~ 5. To extract cosmological information, previous studies using GW standard sirens relied on source redshift information obtained through extensive electromagnetic follow-up campaigns. However, redshift identification is typically time-consuming and rather challenging. Here we propose a novel method for cosmology with GW standard sirens that is free from redshift measurements. Utilizing the anisotropies of the number density and luminosity distances of compact binaries originating from the large-scale structure, we show that (i) these anisotropies can be measured even at very high redshifts (z = 2), (ii) the expected constraints on primordial non-Gaussianity with the Einstein Telescope would be comparable to or even better than those from other large-scale structure probes at the same epoch, and (iii) the cross-correlation with other cosmological observations is found to have high statistical significance. A.N. was supported by JSPS Postdoctoral Fellowships for Research Abroad No. 25-180.
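The luminosity distances that such sirens measure relate to redshift only through an assumed cosmology; a numerical sketch in flat ΛCDM with illustrative parameter values (H0 = 70 km/s/Mpc, Ωm = 0.3; these are conventional choices, not values from the paper):

```python
import math

def luminosity_distance(z, H0=70.0, Om=0.3, steps=2000):
    """Luminosity distance (Mpc) in flat LCDM: d_L = (1+z) * (c/H0) *
    integral of dz'/E(z'), evaluated with the trapezoidal rule."""
    c = 299792.458  # speed of light, km/s

    def inv_E(zz):
        return 1.0 / math.sqrt(Om * (1.0 + zz) ** 3 + (1.0 - Om))

    dz = z / steps
    integral = sum(
        (inv_E(i * dz) + inv_E((i + 1) * dz)) * 0.5 * dz for i in range(steps)
    )
    return (1.0 + z) * (c / H0) * integral

# Distance to a hypothetical binary at z = 2, the redshift quoted above
print(luminosity_distance(2.0))
```

Inverting this relation is exactly what requires either a redshift measurement or, as the abstract proposes, statistical information such as distance anisotropies.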
Crees, Jennifer J; Carbone, Chris; Sommer, Robert S; Benecke, Norbert; Turvey, Samuel T
2016-03-30
The use of short-term indicators for understanding patterns and processes of biodiversity loss can mask longer-term faunal responses to human pressures. We use an extensive database of approximately 18,700 mammalian zooarchaeological records for the last 11,700 years across Europe to reconstruct spatio-temporal dynamics of Holocene range change for 15 large-bodied mammal species. European mammals experienced protracted, non-congruent range losses, with significant declines starting in some species approximately 3000 years ago and continuing to the present, and with the timing, duration and magnitude of declines varying individually between species. Some European mammals became globally extinct during the Holocene, whereas others experienced limited or no significant range change. These findings demonstrate the relatively early onset of prehistoric human impacts on postglacial biodiversity, and mirror species-specific patterns of mammalian extinction during the Late Pleistocene. Herbivores experienced significantly greater declines than carnivores, revealing an important historical extinction filter that informs our understanding of relative resilience and vulnerability to human pressures for different taxa. We highlight the importance of large-scale, long-term datasets for understanding complex protracted extinction processes, although the dynamic pattern of progressive faunal depletion of European mammal assemblages across the Holocene challenges easy identification of 'static' past baselines to inform current-day environmental management and restoration. © 2016 The Author(s).
The scale and nature of Viking settlement in Ireland from Y-chromosome admixture analysis.
McEvoy, Brian; Brady, Claire; Moore, Laoise T; Bradley, Daniel G
2006-12-01
The Vikings (or Norse) played a prominent role in Irish history but, despite this, their genetic legacy in Ireland, which may provide insights into the nature and scale of their immigration, is largely unexplored. Irish surnames, some of which are thought to have Norse roots, are paternally inherited in a similar manner to Y-chromosomes. The prevalence of Scandinavian patrilineal ancestry in a cohort of Irish men bearing surnames of putative Norse origin was examined using both slowly mutating unique event polymorphisms and relatively rapidly changing short tandem repeat Y-chromosome markers. Irish and Scandinavian admixture proportions were explored for both systems using six different admixture estimators, allowing a parallel investigation of the impact of method and marker type in Y-chromosome admixture analysis. Admixture proportion estimates in the putative Norse surname group were highly consistent and detected little trace of Scandinavian ancestry. In addition, there is scant evidence of Scandinavian Y-chromosome introgression in a general Irish population sample. Although conclusions are largely dependent on the accurate identification of Norse surnames, the findings are consistent with a relatively small number of Norse settlers (and descendants) migrating to Ireland during the Viking period (ca. AD 800-1200), suggesting that Norse colonial settlements might have been largely composed of indigenous Irish. This observation adds to previous genetic studies that point to a flexible Viking settlement approach across North Atlantic Europe.
NASA Astrophysics Data System (ADS)
Romo, Cynthia Paulinne
High-speed digital video images of encased and uncased large-scale explosions of Ammonium Nitrate Fuel Oil (ANFO) and Composition C-4 (C-4) at different masses were analyzed using the background-oriented schlieren (BOS) visualization technique. The encased explosions for ANFO and C-4 took the form of car bombs and pipe bombs, respectively. The data obtained from the video footage were used to produce shock wave radius vs. time profiles, as well as Mach number vs. shock wave position profiles. The experimentally measured shock wave data for each explosive material were scaled using Sachs' scaling laws to a 1 kilogram charge at normal temperature and pressure. The C-4 results were compared to literature, while the scaled ANFO results were compared to each other and to the results obtained during the uncased detonations. The comparison between the scaled profiles gathered from the encased and uncased detonations allowed identification of the relative amount of energy lost to fragmentation of the case. The C-4 profiles were also compared to those obtained from computational simulations performed with CTH. The C-4 results showed agreement between the data reported in the literature and those obtained using the BOS technique, as well as good overall agreement with the profiles obtained computationally.
Roubeix, Vincent; Danis, Pierre-Alain; Feret, Thibaut; Baudoin, Jean-Marc
2016-04-01
In aquatic ecosystems, the identification of ecological thresholds may be useful for managers as it can help to diagnose ecosystem health and to identify key levers to enable the success of preservation and restoration measures. A recent statistical method, gradient forest, based on random forests, was used to detect thresholds of phytoplankton community change in lakes along different environmental gradients. It performs exploratory analyses of multivariate biological and environmental data to estimate the location and importance of community thresholds along gradients. The method was applied to a data set of 224 French lakes characterized by 29 environmental variables and the mean abundances of 196 phytoplankton species. Results showed the high importance of geographic variables for the prediction of species abundances at the scale of the study. A second analysis was performed on a subset of lakes defined by geographic thresholds and exhibiting higher biological homogeneity. Community thresholds were identified for the most important physico-chemical variables, including water transparency, total phosphorus, ammonia, nitrates, and dissolved organic carbon. Gradient forest proved to be a powerful method, as a first exploratory step, for detecting ecological thresholds at large spatial scales. The thresholds identified here must be reinforced by the separate analysis of other aquatic communities and may then be used to set protective environmental standards after consideration of natural variability among lakes.
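The core idea behind gradient forest can be sketched in a few lines: fit one-split regression trees to bootstrap resamples of species abundance against a single environmental gradient and look at where the split points pile up. A minimal sketch with fully synthetic data (the ~30 µg/L phosphorus step and all other numbers are invented for illustration, and this is a drastic simplification of the actual gradientForest method):

```python
import random
random.seed(1)

# synthetic lake data: abundance drops sharply once the gradient (say, total
# phosphorus) exceeds ~30 -- all numbers are invented for illustration
gradient = [random.uniform(0.0, 100.0) for _ in range(300)]
abundance = [(5.0 if g < 30.0 else 1.0) + random.gauss(0.0, 0.5) for g in gradient]

def best_split(xs, ys):
    """Split location minimising within-group variance (a one-split regression tree)."""
    pts = sorted(zip(xs, ys))
    xs = [p[0] for p in pts]
    ys = [p[1] for p in pts]
    n = len(ys)
    cs, cs2 = [0.0], [0.0]
    for y in ys:                          # prefix sums of y and y^2
        cs.append(cs[-1] + y)
        cs2.append(cs2[-1] + y * y)
    best, best_sse = None, float("inf")
    for i in range(5, n - 5):             # keep a few points on each side
        sl, sr = cs[i], cs[n] - cs[i]
        sse = (cs2[i] - sl * sl / i) + (cs2[n] - cs2[i] - sr * sr / (n - i))
        if sse < best_sse:
            best_sse, best = sse, (xs[i - 1] + xs[i]) / 2.0
    return best

# stumps fitted to bootstrap resamples: split locations pile up at the threshold
splits = []
for _ in range(100):
    idx = [random.randrange(len(gradient)) for _ in range(len(gradient))]
    splits.append(best_split([gradient[i] for i in idx], [abundance[i] for i in idx]))
splits.sort()
community_threshold = splits[len(splits) // 2]   # median split location
```

The density of split locations along a gradient is, loosely, what gradient forest aggregates over many trees and many species to locate community thresholds.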
Nanoscale detection of bacteriophage triggered ion cascade (Invited Paper)
NASA Astrophysics Data System (ADS)
Dobozi-King, Maria; Seo, Sungkyu; Kim, Jong U.; Cheng, Mosong; Kish, Laszlo B.; Young, Ryland
2005-05-01
In an era of potential bioterrorism and pandemics of antibiotic-resistant microbes, bacterial contamination of food and water supplies is a major concern. There is an urgent need for the rapid, inexpensive and specific identification of bacteria under field conditions. Here we describe a method that combines the specificity and avidity of bacteriophages with fluctuation analysis of electrical noise. The method is based on the massive, transitory ion leakage that occurs at the moment of phage DNA injection into the host cell. The ion fluxes require only that the cells be physiologically viable (i.e., have energized membranes) and can occur within seconds after mixing the cells with sufficient concentrations of phage particles. To detect these fluxes, we have constructed a nano-well, a lateral, micron-size capacitor of titanium electrodes with a gap size of 150 nm, and used it to measure the electrical field fluctuations in microliter (mm3) samples containing phage and bacteria. In mixtures where the analyte bacteria were sensitive to the phage, large stochastic waves with various time and amplitude scales were observed, with power spectra of approximately 1/f^2 shape over 1-10 Hz. Development of this SEPTIC (SEnsing of Phage-Triggered Ion Cascades) technology could provide rapid detection and identification of live, pathogenic bacteria on the scale of minutes, with unparalleled specificity. The method has a potential ultimate sensitivity of 1 bacterium/microliter (1 bacterium/mm3).
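The 1/f^2 spectral signature reported above can be reproduced with a toy signal: a random walk (Brownian noise) has exactly that spectral shape. A minimal sketch assuming only NumPy; the signal is synthetic, not SEPTIC measurement data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Brownian (random-walk) fluctuation: its power spectrum falls off as 1/f^2,
# the shape reported for phage-triggered ion cascades
signal = np.cumsum(rng.standard_normal(4096))
signal -= signal.mean()

# periodogram via the real FFT
spectrum = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(signal.size, d=1.0)   # sampling interval d is arbitrary here

# fit the log-log slope over a mid-frequency band; a slope near -2 is the
# fingerprint of 1/f^2 noise
band = (freqs > 0.001) & (freqs < 0.1)
slope = np.polyfit(np.log(freqs[band]), np.log(spectrum[band]), 1)[0]
```

In practice one would average periodograms over many windows before fitting, which tightens the slope estimate considerably.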
Galaxy tools and workflows for sequence analysis with applications in molecular plant pathology.
Cock, Peter J A; Grüning, Björn A; Paszkiewicz, Konrad; Pritchard, Leighton
2013-01-01
The Galaxy Project offers the popular web browser-based platform Galaxy for running bioinformatics tools and constructing simple workflows. Here, we present a broad collection of additional Galaxy tools for large scale analysis of gene and protein sequences. The motivating research theme is the identification of specific genes of interest in a range of non-model organisms, and our central example is the identification and prediction of "effector" proteins produced by plant pathogens in order to manipulate their host plant. This functional annotation of a pathogen's predicted capacity for virulence is a key step in translating sequence data into potential applications in plant pathology. This collection includes novel tools, and widely-used third-party tools such as NCBI BLAST+ wrapped for use within Galaxy. Individual bioinformatics software tools are typically available separately as standalone packages, or in online browser-based form. The Galaxy framework enables the user to combine these and other tools to automate organism scale analyses as workflows, without demanding familiarity with command line tools and scripting. Workflows created using Galaxy can be saved and are reusable, so may be distributed within and between research groups, facilitating the construction of a set of standardised, reusable bioinformatic protocols. The Galaxy tools and workflows described in this manuscript are open source and freely available from the Galaxy Tool Shed (http://usegalaxy.org/toolshed or http://toolshed.g2.bx.psu.edu).
Identification of large geomorphological anomalies based on 2D discrete wavelet transform
NASA Astrophysics Data System (ADS)
Doglioni, A.; Simeone, V.
2012-04-01
The identification and analysis, based on quantitative evidence, of large geomorphological anomalies is an important stage in the study of large landslides. Numerical geomorphic analyses represent an interesting approach to this kind of study, allowing for a detailed and fairly accurate identification of hidden topographic anomalies that may be related to large landslides. Here a numerical geomorphic analysis of the Digital Terrain Model (DTM) is presented. The introduced approach is based on the 2D discrete wavelet transform (Antoine et al., 1993; Bruun and Nilsen, 2003; Booth et al., 2009). The 2D wavelet decomposition of the DTM, and in particular the analysis of the detail coefficients of the wavelet transform, can provide evidence of anomalies or singularities, i.e. discontinuities of the land surface. These discontinuities are not very evident from the DTM as it is, while the 2D wavelet transform allows for a grid-based analysis of the DTM and for mapping the decomposition. In fact, the grid-based DTM can be treated as a matrix, on which a discrete wavelet transform (Daubechies, 1992) is performed columnwise and linewise, i.e. along the horizontal and vertical directions. The outcomes of this analysis are low-frequency approximation coefficients and high-frequency detail coefficients. The detail coefficients are analyzed, since their variations are associated with discontinuities of the DTM. Detail coefficients are estimated by performing the 2D wavelet transform both in the horizontal direction (east-west) and in the vertical direction (north-south). The detail coefficients are then mapped for both cases, thus allowing potential anomalies of the land surface to be visualized and quantified. Moreover, the wavelet decomposition can be pushed to further levels, assuming a higher scale number of the transform. This may potentially return further interesting results in terms of identification of land-surface anomalies.
In this kind of approach, the choice of a proper mother wavelet function is a tricky point, since it conditions the analysis and hence its outcomes. Therefore multiple decomposition levels as well as multiple mother wavelets are explored. Here the introduced approach is applied to some interesting case studies in southern Italy, in particular for the identification of large anomalies associated with large landslides at the transition between the Apennine chain domain and the foredeep domain; the lower Biferno valley and the Fortore valley are analyzed. Finally, the wavelet transforms are performed on multiple levels, in an attempt to address the problem of which decomposition level is adequate for an accurate analysis of a specific problem. Antoine J.P., Carrette P., Murenzi R., and Piette B. (1993), Image analysis with two-dimensional continuous wavelet transform, Signal Processing, 31(3), pp. 241-272, doi:10.1016/0165-1684(93)90085-O. Booth A.M., Roering J.J., and Perron J.T. (2009), Automated landslide mapping using spectral analysis and high-resolution topographic data: Puget Sound lowlands, Washington, and Portland Hills, Oregon, Geomorphology, 109(3-4), pp. 132-147, doi:10.1016/j.geomorph.2009.02.027. Bruun B.T., and Nilsen S. (2003), Wavelet representation of large digital terrain models, Computers and Geosciences, 29(6), pp. 695-703, doi:10.1016/S0098-3004(03)00015-3. Daubechies I. (1992), Ten Lectures on Wavelets, SIAM.
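The row-and-column decomposition described above is easy to sketch for the simplest mother wavelet (Haar): the detail band taken across rows lights up where the land surface has an east-west discontinuity. A minimal illustration on a synthetic DTM (the grid size and scarp location are invented; a real analysis would use a library such as PyWavelets and other mother wavelets):

```python
import numpy as np

def haar2d(dtm):
    """One level of the 2D Haar DWT: returns the approximation band and the
    three detail bands. Input must have even dimensions."""
    a, b = dtm[:, 0::2], dtm[:, 1::2]            # columnwise pairs
    lo, hi = (a + b) / 2.0, (a - b) / 2.0
    def row_pass(m):
        return (m[0::2, :] + m[1::2, :]) / 2.0, (m[0::2, :] - m[1::2, :]) / 2.0
    ll, hl = row_pass(lo)    # approximation, north-south (row-wise) detail
    lh, hh = row_pass(hi)    # east-west (column-wise) detail, diagonal detail
    return ll, lh, hl, hh

# synthetic 16x16 DTM: gentle eastward slope plus a sharp east-west scarp
# between rows 8 and 9 (a stand-in for a landslide-related discontinuity)
y, x = np.mgrid[0:16, 0:16]
dtm = 0.1 * x + np.where(y >= 9, -5.0, 0.0)

ll, lh, hl, hh = haar2d(dtm)
# detail energy per output row peaks where the scarp sits (output row 4
# corresponds to input rows 8-9)
scarp_row = int(np.argmax(np.abs(hl).sum(axis=1)))
```

Mapping `hl`, `lh` and `hh` back onto the grid is exactly the "mapping the decomposition" step of the abstract; pushing to further levels means repeating `haar2d` on `ll`.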
Williams, Bradley S; D'Amico, Ellen; Kastens, Jude H; Thorp, James H; Flotemersch, Joseph E; Thoms, Martin C
2013-09-01
River systems consist of hydrogeomorphic patches (HPs) that emerge at multiple spatiotemporal scales. Functional process zones (FPZs) are HPs that exist at the river valley scale and are important strata for framing whole-watershed research questions and management plans. Hierarchical classification procedures aid in HP identification by grouping sections of river based on their hydrogeomorphic character; however, collecting data required for such procedures with field-based methods is often impractical. We developed a set of GIS-based tools that facilitate rapid, low cost riverine landscape characterization and FPZ classification. Our tools, termed RESonate, consist of a custom toolbox designed for ESRI ArcGIS®. RESonate automatically extracts 13 hydrogeomorphic variables from readily available geospatial datasets and datasets derived from modeling procedures. An advanced 2D flood model, FLDPLN, designed for MATLAB® is used to determine valley morphology by systematically flooding river networks. When used in conjunction with other modeling procedures, RESonate and FLDPLN can assess the character of large river networks quickly and at very low costs. Here we describe tool and model functions in addition to their benefits, limitations, and applications.
NASA Astrophysics Data System (ADS)
Li, Cheng; Li, Junming; Li, Le
2018-02-01
Falling-water evaporation cooling can efficiently suppress containment pressure during a nuclear accident by continually removing the core decay heat to the atmospheric environment. In order to characterize large-scale falling-water evaporation cooling, the flow characteristics of the falling film, film rupture and falling rivulets were deduced on the basis of previous correlation studies. The influences of the contact angle, water temperature and water flow rate on water coverage along the flow direction were then obtained numerically, and the results were compared with data for the AP1000 and CAP1400 nuclear power plants. From these comparisons, it is concluded that the water coverage fraction of the falling water can be enhanced either by reducing the surface contact angle or by increasing the water temperature. The falling water flow with evaporation for the AP1000 containment was then calculated and the features of its water coverage fraction were analyzed. Finally, based on the phenomena identification of falling water flow for AP1000 containment evaporation cooling, a scaling-down analysis was performed and dimensionless criteria were obtained.
21 CFR 880.2720 - Patient scale.
Code of Federal Regulations, 2010 CFR
2010-04-01
... Patient scale. (a) Identification. A patient scale is a device intended for medical purposes that is used to measure the weight of a patient who cannot stand on a scale. This generic device includes devices...
Graph-based real-time fault diagnostics
NASA Technical Reports Server (NTRS)
Padalkar, S.; Karsai, G.; Sztipanovits, J.
1988-01-01
A real-time fault detection and diagnosis capability is absolutely crucial in the design of large-scale space systems. Some of the existing AI-based fault diagnostic techniques, like expert systems and qualitative modelling, are frequently ill-suited for this purpose. Expert systems are often inadequately structured, difficult to validate and suffer from knowledge acquisition bottlenecks. Qualitative modelling techniques sometimes generate a large number of failure source alternatives, thus hampering speedy diagnosis. In this paper we present a graph-based technique which is well suited for real-time fault diagnosis; structured knowledge representation and acquisition; and testing and validation. A Hierarchical Fault Model of the system to be diagnosed is developed. At each level of the hierarchy there exist fault propagation digraphs denoting causal relations between failure modes of subsystems. The edges of such a digraph are weighted with fault propagation time intervals. Efficient and restartable graph algorithms are used for speedy on-line identification of failure source components.
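The interval-weighted propagation digraph described above can be sketched as follows. This is a toy reconstruction, not the authors' implementation: a candidate failure source is accepted only when every observed alarm falls inside the cumulative propagation-time interval reachable from that source (the graph, node names and alarm times are invented):

```python
from collections import deque

# fault propagation digraph: edges carry [min, max] propagation times
edges = {
    "A": [("B", 1.0, 3.0), ("C", 2.0, 4.0)],
    "B": [("D", 1.0, 2.0)],
    "C": [],
    "D": [],
}

def propagation_intervals(src):
    """Earliest/latest fault arrival times from src, by relaxing edge intervals."""
    lo, hi = {src: 0.0}, {src: 0.0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v, dmin, dmax in edges[u]:
            new_lo, new_hi = lo[u] + dmin, hi[u] + dmax
            if v not in lo or new_lo < lo[v] or new_hi > hi[v]:
                lo[v] = min(lo.get(v, new_lo), new_lo)
                hi[v] = max(hi.get(v, new_hi), new_hi)
                q.append(v)
    return lo, hi

def diagnose(alarms):
    """Alarmed nodes whose propagation intervals explain every observed alarm."""
    candidates = []
    for s, t0 in alarms.items():
        lo, hi = propagation_intervals(s)
        if all(n in lo and lo[n] <= t - t0 <= hi[n] for n, t in alarms.items()):
            candidates.append(s)
    return candidates

# observed failure-mode alarms with timestamps; only A explains all of them
alarms = {"A": 0.0, "B": 2.0, "C": 3.0, "D": 3.5}
sources = diagnose(alarms)
```

The time intervals prune failure-source alternatives that mere causal reachability would keep, which is what makes this style of diagnosis fast enough for real time.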
Debating Sex: Education Films and Sexual Morality for the Young in Post-War Germany, 1945-1955.
Winkler, Anita
2015-01-01
After 1945 rapidly climbing figures of venereal disease infections menaced the health of the war-ridden German population. Physicians sought to gain control over this epidemic and initiated large-scale sex education campaigns to inform people about identification, causes and treatment of VD and advised them on appropriate moral sexual behaviour as a prophylactic measure. Film played a crucial role in these campaigns. As mass medium it was believed film could reach out to large parts of society and quickly disseminate sexual knowledge and moral codes of conduct amongst the population. This essay discusses the transition of the initial central role of sex education films in the fight against venereal disease in the immediate post-war years towards a more critical stance as to the effects of cinematographic education of the young in an East and West German context.
Advances in the Biology and Chemistry of Sialic Acids
Chen, Xi; Varki, Ajit
2010-01-01
Sialic acids are a subset of nonulosonic acids, which are nine-carbon alpha-keto aldonic acids. Naturally occurring sialic acid-containing structures occur in different sialic acid forms, in various sialyl linkages, and on diverse underlying glycans. They play important roles in biological, pathological, and immunological processes. Sialobiology has been a challenging and yet attractive research area. Recent advances in chemical and chemoenzymatic synthesis, as well as large-scale E. coli cell-based production, have provided a large library of sialoside standards and derivatives in amounts sufficient for structure-activity relationship studies. Sialoglycan microarrays provide an efficient platform for quick identification of preferred ligands for sialic acid-binding proteins. Future research on sialic acid will continue to be at the interface of chemistry and biology. Research efforts will not only lead to a better understanding of the biological and pathological importance of sialic acids and their diversity, but could also lead to the development of therapeutics. PMID:20020717
Model verification of large structural systems. [space shuttle model response
NASA Technical Reports Server (NTRS)
Lee, L. T.; Hasselman, T. K.
1978-01-01
A computer program for the application of parameter identification to the structural dynamic models of the space shuttle and other large models with hundreds of degrees of freedom is described. Finite element, dynamic, analytic, and modal models are used to represent the structural system. The interface with math models is such that output from any structural analysis program applied to any structural configuration can be used directly. Processed data from either sine-sweep tests or resonant dwell tests are directly usable. The program uses measured modal data to condition the prior analytic model so as to improve the frequency match between model and test. A Bayesian estimator generates an improved analytical model, and a linear estimator is used in an iterative fashion on highly nonlinear equations. Mass and stiffness scaling parameters are generated for an improved finite element model, and the optimum set of parameters is obtained in one step.
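The "linear estimator iterated on nonlinear equations" idea can be illustrated on a single-degree-of-freedom toy model: update stiffness and mass scaling parameters so the model frequency matches a measured one, with a prior pulling toward the nominal values. All numbers are invented, and this generic Gauss-Newton update with a Bayesian prior is only a sketch of the scheme's flavor, not the program's actual algorithm:

```python
import numpy as np

# single-DOF toy model: omega = sqrt(a*k / (b*m)) with stiffness/mass scales a, b
k, m = 100.0, 1.0               # nominal stiffness and mass (invented values)
omega_test = 9.0                # "measured" modal frequency, rad/s

theta0 = np.array([1.0, 1.0])               # prior scaling parameters (a, b)
P0inv = np.linalg.inv(np.diag([0.1, 0.1]))  # inverse prior covariance
R = 1e-4                                    # measurement variance

def omega(th):
    return np.sqrt(th[0] * k / (th[1] * m))

theta = theta0.copy()
for _ in range(10):   # linear (Bayesian) estimator iterated on the nonlinear model
    w = omega(theta)
    J = np.array([w / (2 * theta[0]), -w / (2 * theta[1])])  # d(omega)/d(theta)
    A = np.outer(J, J) / R + P0inv
    g = J * (omega_test - w) / R - P0inv @ (theta - theta0)
    theta = theta + np.linalg.solve(A, g)
```

With one measurement and two parameters the problem is underdetermined; the prior covariance resolves the ambiguity, which is exactly why a Bayesian estimator is attractive for models with hundreds of degrees of freedom and sparse modal data.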
Long-term effects of flipper bands on penguins
Gauthier-Clerc, M.; Gendner, J.-P.; Ribic, C.A.; Fraser, William R.; Woehler, Eric J.; Descamps, S.; Gilly, C.; Le, Bohec C.; Le, Maho Y.
2004-01-01
Changes in seabird populations, and particularly of penguins, offer a unique opportunity for investigating the impact of fisheries and climatic variations on marine resources. Such investigations often require large-scale banding to identify individual birds, but the significance of the data relies on the assumption that no bias is introduced in this type of long-term monitoring. After 5 years of using an automated system of identification of king penguins implanted with electronic tags (100 adult king penguins were implanted with a transponder tag, 50 of which were also flipper banded), we can report that banding results in later arrival at the colony for courtship in some years, lower breeding probability and lower chick production. We also found that the survival rate of unbanded, electronically tagged king penguin chicks after 2-3 years is approximately twice as large as that reported in the literature for banded chicks. © 2004 The Royal Society.
Benchmarking Atomic Data for Astrophysics: Be-like Ions between B II and Ne VII
NASA Astrophysics Data System (ADS)
Wang, Kai; Chen, Zhan Bin; Zhang, Chun Yu; Si, Ran; Jönsson, Per; Hartman, Henrik; Gu, Ming Feng; Chen, Chong Yang; Yan, Jun
2018-02-01
Large-scale self-consistent multiconfiguration Dirac–Hartree–Fock and relativistic configuration interaction calculations are reported for the n ≤ 6 levels in Be-like ions from B II to Ne VII. Effects from electron correlation are taken into account by means of large expansions in terms of a basis of configuration state functions, and a complete and accurate data set of excitation energies; lifetimes; wavelengths; electric dipole, magnetic dipole, electric quadrupole, and magnetic quadrupole line strengths; transition rates; and oscillator strengths for these levels is provided for each ion. Comparisons are made with available experimental and theoretical results. The uncertainty of excitation energies is assessed to be 0.01% on average, which makes it possible to find and rule out misidentifications and aid new line identifications involving high-lying levels in astrophysical spectra. The complete data set is also useful for modeling and diagnosing astrophysical plasmas.
Cytogenetic biodosimetry: what it is and how we do it.
Wong, K F; Siu, Lisa L P; Ainsbury, E; Moquet, J
2013-04-01
The dicentric assay is the international gold standard for cytogenetic biodosimetry after radiation exposure, despite being very labour-intensive, time-consuming, and highly expertise-dependent. It involves the identification of centromeres and the structure of solid-stained chromosomes and the enumeration of dicentric chromosomes in a large number of first-division metaphases of cultured T lymphocytes. The dicentric yield is used to estimate the radiation exposure dose according to a statistically derived, predetermined dose-response curve. It can be used for population triage after large-scale accidental over-exposure to ionising radiation or with a view to making clinical decisions for individual patients receiving substantial radiation. In this report, we describe our experience in the establishment of a cytogenetic biodosimetry laboratory in Queen Elizabeth Hospital, Hong Kong. This was part of the contingency plan for emergency measures against radiation accidents at nuclear power stations.
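Dose estimation from the dicentric yield amounts to inverting a fitted linear-quadratic calibration curve. A minimal sketch; the coefficients below are illustrative placeholders, since real curves are fitted per laboratory and per radiation quality:

```python
import math

# illustrative linear-quadratic coefficients (dicentrics per cell);
# real values come from each laboratory's own calibration
C, ALPHA, BETA = 0.001, 0.03, 0.06

def expected_yield(dose_gy):
    """Dicentrics per cell predicted by the calibration curve Y = C + a*D + b*D^2."""
    return C + ALPHA * dose_gy + BETA * dose_gy ** 2

def estimate_dose(dicentrics, cells_scored):
    """Invert the quadratic for the absorbed dose D (Gy), taking the positive root."""
    y = dicentrics / cells_scored
    disc = ALPHA ** 2 + 4 * BETA * (y - C)
    return (-ALPHA + math.sqrt(disc)) / (2 * BETA)

# round trip: a 2 Gy exposure with 500 metaphases scored
dicentrics_observed = expected_yield(2.0) * 500
dose = estimate_dose(dicentrics_observed, 500)
```

In triage practice a confidence interval on the yield (the counts are roughly Poisson) would be propagated through the same inversion to bracket the dose.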
Advances in segmentation modeling for health communication and social marketing campaigns.
Albrecht, T L; Bryant, C
1996-01-01
Large-scale communication campaigns for health promotion and disease prevention involve analysis of audience demographic and psychographic factors for effective message targeting. A variety of segmentation modeling techniques, including tree-based methods such as Chi-squared Automatic Interaction Detection and logistic regression, are used to identify meaningful target groups within a large sample or population (N = 750-1,000+). Such groups are based on statistically significant combinations of factors (e.g., gender, marital status, and personality predispositions). The identification of groups or clusters facilitates message design in order to address the particular needs, attention patterns, and concerns of audience members within each group. We review current segmentation techniques, their contributions to conceptual development, and cost-effective decision making. Examples from a major study in which these strategies were used are provided from the Texas Women, Infants and Children Program's Comprehensive Social Marketing Program.
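The core computation behind a CHAID-style segmentation split is a chi-squared test of independence on a candidate cross-tabulation. A minimal sketch with invented counts (the variables and numbers are illustrative, not from the Texas WIC study):

```python
# chi-squared test of independence for a 2x2 audience segmentation table,
# the statistic CHAID uses to decide whether a split is worth making
def chi_squared(table):
    row_totals = [sum(r) for r in table]
    col_totals = [sum(c) for c in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            stat += (observed - expected) ** 2 / expected
    return stat

# e.g. message recall (yes / no) split by a candidate demographic factor
table = [[30, 70],   # segment 1: 30 recalled the message, 70 did not
         [60, 40]]   # segment 2: 60 recalled, 40 did not
stat = chi_squared(table)
# well above the 3.84 critical value at p = 0.05 with 1 degree of freedom,
# so this factor defines a statistically meaningful split
```

CHAID simply repeats this test across all candidate factors and category merges, growing the tree on the most significant splits.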
Liu, Jun-Jun; Xiang, Yu
2011-01-01
WRKY transcription factors are key regulators of numerous biological processes in plant growth and development, as well as plant responses to abiotic and biotic stresses. Research on biological functions of plant WRKY genes has focused in the past on model plant species or species with largely characterized transcriptomes. However, a variety of non-model plants, such as forest conifers, are essential as feed, biofuel, and wood or for sustainable ecosystems. Identification of WRKY genes in these non-model plants is equally important for understanding the evolutionary and function-adaptive processes of this transcription factor family. Because of limited genomic information, the rarity of regulatory gene mRNAs in transcriptomes, and the sequence divergence to model organism genes, identification of transcription factors in non-model plants using methods similar to those generally used for model plants is difficult. This chapter describes a gene family discovery strategy for identification of WRKY transcription factors in conifers by a combination of in silico-based prediction and PCR-based experimental approaches. Compared to traditional cDNA library screening or EST sequencing at transcriptome scales, this integrated gene discovery strategy provides fast, simple, reliable, and specific methods to unveil the WRKY gene family at both genome and transcriptome levels in non-model plants.
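An in silico first pass of the kind described often reduces to scanning translated sequences for the conserved WRKY domain signature (the canonical heptapeptide WRKYGQK, with WRKYGKK as a known variant). A minimal sketch on toy sequences; real pipelines would also use profile HMMs and handle frame translation:

```python
import re

# conserved WRKY domain signature: canonical WRKYGQK plus the WRKYGKK variant
WRKY_MOTIF = re.compile(r"WRKYG[QK]K")

def find_wrky(seqs):
    """Return {seq_id: [motif start positions]} for sequences carrying the domain."""
    hits = {}
    for seq_id, seq in seqs.items():
        positions = [m.start() for m in WRKY_MOTIF.finditer(seq)]
        if positions:
            hits[seq_id] = positions
    return hits

# toy translated contigs (invented); only contig1 carries the signature
seqs = {
    "contig1": "MDDWMRKAADWRKYGQKPIKGSPYPR",
    "contig2": "MKLSAAGGNNPQRSTVVAY",
}
hits = find_wrky(seqs)
```

Candidates flagged this way would then feed the PCR-based experimental confirmation step that the chapter describes.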
Identifying the ichthyoplankton of a coral reef using DNA barcodes.
Hubert, Nicolas; Espiau, Benoit; Meyer, Christopher; Planes, Serge
2015-01-01
Marine fishes exhibit spectacular phenotypic changes during their ontogeny, and the identification of their early stages is challenging due to the paucity of diagnostic morphological characters at the species level. Meanwhile, the importance of early life stages in dispersal and connectivity has recently experienced an increasing interest in conservation programmes for coral reef fishes. This study aims at assessing the effectiveness of DNA barcoding for the automated identification of coral reef fish larvae through large-scale ecosystemic sampling. Fish larvae were mainly collected using bongo nets and light traps around Moorea between September 2008 and August 2010 in 10 sites distributed in open waters. Fish larvae ranged from 2 to 100 mm of total length, with the most abundant individuals being <5 mm. Among the 505 individuals DNA barcoded, 373 larvae (i.e. 75%) were identified to the species level. A total of 106 species were detected, among which 11 corresponded to pelagic and bathypelagic species, while 95 corresponded to species observed at the adult stage on neighbouring reefs. This study highlights the benefits and pitfalls of using standardized molecular systems for species identification and illustrates the new possibilities enabled by DNA barcoding for future work on coral reef fish larval ecology. © 2014 John Wiley & Sons Ltd.
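Automated barcode assignment of a larva reduces to nearest-neighbour matching against a reference library. A minimal sketch; the aligned fragments are invented toy sequences, and the 2% divergence ceiling is an assumed, commonly quoted threshold rather than a value from this study:

```python
def p_distance(a, b):
    """Proportion of mismatched sites between two aligned, equal-length sequences."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

def assign(query, references, max_dist=0.02):
    """Nearest-neighbour species assignment with an (assumed) 2% divergence ceiling."""
    best = min(references, key=lambda sp: p_distance(query, references[sp]))
    d = p_distance(query, references[best])
    return (best, d) if d <= max_dist else (None, d)

# toy aligned barcode fragments (50 bp) for two reef species
references = {
    "Chromis viridis":   "ACGTACGTACGGTACGATCGATCGGCTAGCTAGGATCGATCGTAGCTAGC",
    "Dascyllus aruanus": "ACGTTCGTACGGTACGATCGATCGGCTAGCTAGGATCGATTGTAGCTAGA",
}
# larval query differing from the first reference at a single site
query = "ACGTACGTACGGTACGATCGATCGGCTAGCTAGGATCGATCGTAGCTAGA"
species, dist = assign(query, references)
```

Real barcoding uses full-length COI fragments, alignment, and library-wide distance distributions, but the decision rule has this shape.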
TipMT: Identification of PCR-based taxon-specific markers.
Rodrigues-Luiz, Gabriela F; Cardoso, Mariana S; Valdivia, Hugo O; Ayala, Edward V; Gontijo, Célia M F; Rodrigues, Thiago de S; Fujiwara, Ricardo T; Lopes, Robson S; Bartholomeu, Daniella C
2017-02-11
Molecular genetic markers are one of the most informative and widely used genome features in clinical and environmental diagnostic studies. A polymerase chain reaction (PCR)-based molecular marker is very attractive because it is suitable for high-throughput automation and confers high specificity. However, the design of taxon-specific primers may be difficult and time-consuming due to the need to identify appropriate genomic regions for annealing primers and to evaluate primer specificity. Here, we report the development of a Tool for Identification of Primers for Multiple Taxa (TipMT), a web application to search for and design primers for genotyping based on genomic data. The tool identifies and targets simple sequence repeats (SSRs) or orthologous/taxon-specific genes for genotyping using multiplex PCR. This pipeline was applied to the genomes of four species of Leishmania (L. amazonensis, L. braziliensis, L. infantum and L. major) and validated by PCR using artificial genomic DNA mixtures of the Leishmania species as templates. This experimental validation demonstrates the reliability of TipMT, because the amplification profiles discriminated genomic DNA samples from the different Leishmania species. The TipMT web tool allows for large-scale identification and design of taxon-specific primers and is freely available to the scientific community at http://200.131.37.155/tipMT/.
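The SSR-targeting step of such a pipeline starts with locating tandem repeats, which a single backreference regex can do. A minimal sketch (the motif-length and repeat-count cutoffs are illustrative choices, not TipMT's actual parameters):

```python
import re

# microsatellite (SSR) finder: a 2-6 bp motif repeated at least 4 times in
# tandem; cutoffs here are illustrative, not TipMT's settings
SSR = re.compile(r"([ACGT]{2,6})\1{3,}")

def find_ssrs(seq):
    """Return (start, motif, repeat count) for each tandem repeat found."""
    return [(m.start(), m.group(1), len(m.group(0)) // len(m.group(1)))
            for m in SSR.finditer(seq)]

# toy genomic fragment containing an (AG)6 and a (TCA)5 repeat
seq = "TTAGAGAGAGAGAGCCGGTCATCATCATCATCAGG"
ssrs = find_ssrs(seq)
```

In a full pipeline the flanking sequences of each repeat would then be screened for conserved, taxon-discriminating primer-annealing sites.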
Use of Sequenom Sample ID Plus® SNP Genotyping in Identification of FFPE Tumor Samples
Miller, Jessica K.; Buchner, Nicholas; Timms, Lee; Tam, Shirley; Luo, Xuemei; Brown, Andrew M. K.; Pasternack, Danielle; Bristow, Robert G.; Fraser, Michael; Boutros, Paul C.; McPherson, John D.
2014-01-01
Short tandem repeat (STR) analysis, such as the AmpFlSTR® Identifiler® Plus kit, is a standard, PCR-based human genotyping method used in the field of forensics. Misidentification of cell line and tissue DNA can be costly if not detected early; therefore it is necessary to have quality control measures such as STR profiling in place. A major issue in large-scale research studies involving archival formalin-fixed paraffin embedded (FFPE) tissues is that varying levels of DNA degradation can result in failure to correctly identify samples using STR genotyping. PCR amplification of STRs of several hundred base pairs is not always possible when DNA is degraded. The Sample ID Plus® panel from Sequenom allows for human DNA identification and authentication using SNP genotyping. In comparison to lengthy STR amplicons, this multiplexing PCR assay requires amplification of only 76–139 base pairs, and utilizes 47 SNPs to discriminate between individual samples. In this study, we evaluated both STR and SNP genotyping methods of sample identification, with a focus on paired FFPE tumor/normal DNA samples intended for next-generation sequencing (NGS). The ability to successfully validate the identity of FFPE samples can enable cost savings by reducing rework. PMID:24551080
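SNP-based sample authentication of the kind described reduces to comparing genotype calls between two profiles and thresholding the concordance. A minimal sketch with invented rs identifiers and genotypes; the 0.9 cutoff is an assumption for illustration, not the panel's published criterion:

```python
def concordance(profile_a, profile_b):
    """Fraction of shared SNP positions with identical genotype calls."""
    shared = [rs for rs in profile_a if rs in profile_b]
    if not shared:
        return 0.0
    same = sum(profile_a[rs] == profile_b[rs] for rs in shared)
    return same / len(shared)

def same_individual(a, b, threshold=0.9):
    # illustrative cutoff: with 40+ informative SNPs, profiles from unrelated
    # individuals score far below any reasonable threshold
    return concordance(a, b) >= threshold

# toy 5-SNP profiles (a real panel uses 47 SNPs)
tumor  = {"rs1": "AG", "rs2": "CC", "rs3": "TT", "rs4": "AG", "rs5": "GG"}
normal = {"rs1": "AG", "rs2": "CC", "rs3": "TT", "rs4": "AG", "rs5": "GG"}
other  = {"rs1": "AA", "rs2": "CT", "rs3": "TT", "rs4": "GG", "rs5": "AG"}
```

Short amplicons make such SNP calls recoverable even from degraded FFPE DNA, which is the advantage over several-hundred-base-pair STR amplicons.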