Science.gov

Sample records for improved large-scale proteomics

  1. Analysis of tandem mass spectra by FTMS for improved large-scale proteomics with superior protein quantification

    PubMed Central

    McAlister, Graeme C.; Phanstiel, Doug; Wenger, Craig D.; Lee, M. Violet; Coon, Joshua J.

    2009-01-01

    Using a newly developed dual-cell quadrupole linear ion trap-orbitrap hybrid mass spectrometer (dcQLT-orbitrap), we demonstrate the utility of collecting high-resolution tandem mass spectral data for large-scale shotgun proteomics. Multiple nanoLC-MS/MS experiments on both an older generation quadrupole linear ion trap-orbitrap hybrid (QLT-orbitrap) and the dcQLT-orbitrap, using both resonant-excitation CAD and beam-type CAD (HCD), were performed. Resulting from various technological advances (e.g., a stacked ring ion guide AP inlet and a dual-cell QLT), the dcQLT-orbitrap exhibited increased duty cycle (~1.5–2×) and sensitivity for both CAD (ion trap detection) and HCD (orbitrap detection) methods. As compared to the older system, the dcQLT-orbitrap produced significantly more unique peptide identifications for both methods (~30% improvement for CAD and ~115% improvement for HCD). With this sizeable improvement, the HCD method on the dcQLT-orbitrap system outperforms the current standard method of CAD with ion trap detection for large-scale analysis. Finally, we demonstrate that the increased HCD performance translates to a direct and substantial improvement in protein quantitation precision using isobaric tags. PMID:19938823

  2. Detecting differential protein expression in large-scale population proteomics

    SciTech Connect

    Ryu, Soyoung; Qian, Weijun; Camp, David G.; Smith, Richard D.; Tompkins, Ronald G.; Davis, Ronald W.; Xiao, Wenzhong

    2014-06-17

    Mass spectrometry-based high-throughput quantitative proteomics shows great potential in clinical biomarker studies, identifying and quantifying thousands of proteins in biological samples. However, methods are needed to appropriately handle issues unique to mass spectrometry data in order to detect as many biomarker proteins as possible. One issue is that different mass spectrometry experiments generate quite different total numbers of quantified peptides, which can result in more missing peptide abundances in an experiment with a smaller total number of quantified peptides. Another issue is that quantification is sometimes absent, especially for less abundant peptides, and such missing values carry information about peptide abundance. Here, we propose a Significance Analysis for Large-scale Proteomics Studies (SALPS) that handles missing peptide intensity values caused by the two mechanisms mentioned above. Our model shows robust performance on both simulated data and proteomics data from a large clinical study. Because variation in patients' sample quality and instrument performance is unavoidable in clinical studies performed over the course of several years, we believe that our approach will be useful for analyzing large-scale clinical proteomics data.
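
    The abstract does not spell out the SALPS model itself. Purely as an illustration of the general idea of treating missing peptide intensities as informative (left-censored) rather than ignoring them, a minimal sketch follows; the data layout, the imputation-at-a-low-quantile step, and the per-protein Welch test are assumptions for illustration, not the authors' method.

```python
# Illustrative sketch only (NOT the SALPS model): treat missing intensities as
# left-censored, impute them at a low per-sample quantile, then test per protein.
import pandas as pd
from scipy import stats

def differential_proteins(log2_intensities: pd.DataFrame, groups: pd.Series,
                          censor_quantile: float = 0.01) -> pd.DataFrame:
    """log2_intensities: proteins x samples, NaN where not quantified (assumed layout).
    groups: sample label -> 'case' or 'control'."""
    floor = log2_intensities.quantile(censor_quantile)   # per-sample censoring floor
    filled = log2_intensities.fillna(floor)              # absence implies low abundance

    case = filled.loc[:, groups == "case"]
    ctrl = filled.loc[:, groups == "control"]
    t, p = stats.ttest_ind(case, ctrl, axis=1, equal_var=False)
    out = pd.DataFrame({"t": t, "p_value": p}, index=filled.index)
    return out.sort_values("p_value")
```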

  3. HiQuant: Rapid Postquantification Analysis of Large-Scale MS-Generated Proteomics Data.

    PubMed

    Bryan, Kenneth; Jarboui, Mohamed-Ali; Raso, Cinzia; Bernal-Llinares, Manuel; McCann, Brendan; Rauch, Jens; Boldt, Karsten; Lynn, David J

    2016-06-01

    Recent advances in mass-spectrometry-based proteomics are now facilitating ambitious large-scale investigations of the spatial and temporal dynamics of the proteome; however, the increasing size and complexity of these data sets is overwhelming current downstream computational methods, specifically those that support the postquantification analysis pipeline. Here we present HiQuant, a novel application that enables the design and execution of a postquantification workflow, including common data-processing steps, such as assay normalization and grouping, and experimental replicate quality control and statistical analysis. HiQuant also enables the interpretation of results generated from large-scale data sets by supporting interactive heatmap analysis and also the direct export to Cytoscape and Gephi, two leading network analysis platforms. HiQuant may be run via a user-friendly graphical interface and also supports complete one-touch automation via a command-line mode. We evaluate HiQuant's performance by analyzing a large-scale, complex interactome mapping data set and demonstrate a 200-fold improvement in the execution time over current methods. We also demonstrate HiQuant's general utility by analyzing proteome-wide quantification data generated from both a large-scale public tyrosine kinase siRNA knock-down study and an in-house investigation into the temporal dynamics of the KSR1 and KSR2 interactomes. Download HiQuant, sample data sets, and supporting documentation at http://hiquant.primesdb.eu . PMID:27086506
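
    HiQuant itself is a packaged application with a graphical interface and a command-line mode; the sketch below is only meant to make the postquantification steps it describes concrete (normalization, replicate quality control, statistics, and export of an edge list that Cytoscape or Gephi can read). Column names, thresholds, and the placeholder bait label are assumptions, not HiQuant's actual interface.

```python
# Minimal post-quantification sketch (not HiQuant code): median normalization,
# replicate CV filtering, a Welch t-test, and a network-tool-friendly edge list.
import numpy as np
import pandas as pd
from scipy import stats

def postquant(df: pd.DataFrame, case_cols: list, ctrl_cols: list, cv_max: float = 0.3):
    """df: proteins x samples matrix of raw intensities (assumed layout)."""
    raw = df[case_cols + ctrl_cols]

    # Replicate quality control: drop proteins with a high coefficient of variation.
    cv = lambda block: block.std(axis=1) / block.mean(axis=1)
    keep = (cv(raw[case_cols]) < cv_max) & (cv(raw[ctrl_cols]) < cv_max)

    log2 = np.log2(raw[keep])
    norm = log2 - log2.median()                        # per-sample median normalization

    t, p = stats.ttest_ind(norm[case_cols], norm[ctrl_cols], axis=1, equal_var=False)
    result = pd.DataFrame({
        "log2_fc": norm[case_cols].mean(axis=1) - norm[ctrl_cols].mean(axis=1),
        "p_value": p}, index=norm.index)

    # Significant prey proteins as a simple bait-prey edge list (CSV importable
    # into Cytoscape or Gephi); 'BAIT' is a placeholder name.
    edges = result.query("p_value < 0.05").rename_axis("prey").reset_index()
    edges.insert(0, "bait", "BAIT")
    return result, edges
```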

  4. HiQuant: Rapid Postquantification Analysis of Large-Scale MS-Generated Proteomics Data.

    PubMed

    Bryan, Kenneth; Jarboui, Mohamed-Ali; Raso, Cinzia; Bernal-Llinares, Manuel; McCann, Brendan; Rauch, Jens; Boldt, Karsten; Lynn, David J

    2016-06-01

    Recent advances in mass-spectrometry-based proteomics are now facilitating ambitious large-scale investigations of the spatial and temporal dynamics of the proteome; however, the increasing size and complexity of these data sets is overwhelming current downstream computational methods, specifically those that support the postquantification analysis pipeline. Here we present HiQuant, a novel application that enables the design and execution of a postquantification workflow, including common data-processing steps, such as assay normalization and grouping, and experimental replicate quality control and statistical analysis. HiQuant also enables the interpretation of results generated from large-scale data sets by supporting interactive heatmap analysis and also the direct export to Cytoscape and Gephi, two leading network analysis platforms. HiQuant may be run via a user-friendly graphical interface and also supports complete one-touch automation via a command-line mode. We evaluate HiQuant's performance by analyzing a large-scale, complex interactome mapping data set and demonstrate a 200-fold improvement in the execution time over current methods. We also demonstrate HiQuant's general utility by analyzing proteome-wide quantification data generated from both a large-scale public tyrosine kinase siRNA knock-down study and an in-house investigation into the temporal dynamics of the KSR1 and KSR2 interactomes. Download HiQuant, sample data sets, and supporting documentation at http://hiquant.primesdb.eu .

  5. Trans-Proteomic Pipeline, a standardized data processing pipeline for large-scale reproducible proteomics informatics

    PubMed Central

    Deutsch, Eric W.; Mendoza, Luis; Shteynberg, David; Slagel, Joseph; Sun, Zhi; Moritz, Robert L.

    2015-01-01

    Democratization of genomics technologies has enabled the rapid determination of genotypes. More recently, the democratization of comprehensive proteomics technologies is enabling the determination of the cellular phenotype and the molecular events that define its dynamic state. Core proteomic technologies include mass spectrometry to define protein sequence, protein:protein interactions, and protein post-translational modifications. Key enabling technologies for proteomics are bioinformatic pipelines to identify, quantitate, and summarize these events. The Trans-Proteomic Pipeline (TPP) is a robust open-source standardized data processing pipeline for large-scale reproducible quantitative mass spectrometry proteomics. It supports all major operating systems and instrument vendors via open data formats. Here we provide a review of the overall proteomics workflow supported by the TPP, its major tools, and how it can be used in its various modes from desktop to cloud computing. We describe new features for the TPP, including data visualization functionality. We conclude by describing some common perils that affect the analysis of tandem mass spectrometry datasets, as well as some major upcoming features. PMID:25631240

  6. Gas-phase purification enables accurate, large-scale, multiplexed proteome quantification with isobaric tagging

    PubMed Central

    Wenger, Craig D; Lee, M Violet; Hebert, Alexander S; McAlister, Graeme C; Phanstiel, Douglas H; Westphall, Michael S; Coon, Joshua J

    2011-01-01

    We describe a mass spectrometry method, QuantMode, which improves the accuracy of isobaric tag–based quantification by alleviating the pervasive problem of precursor interference—co-isolation of impurities—through gas-phase purification. QuantMode analysis of a yeast sample ‘contaminated’ with interfering human peptides showed substantially improved quantitative accuracy compared to a standard scan, with a small loss of spectral identifications. This technique will allow large-scale, multiplexed quantitative proteomics analyses using isobaric tagging. PMID:21963608

  7. Large-scale proteomic analysis of membrane proteins.

    PubMed

    Ahram, Mamoun; Springer, David L

    2004-10-01

    Proteomic analysis of membrane proteins is a promising approach for the identification of novel drug targets and/or disease biomarkers. Despite notable technological developments, obstacles related to the extraction and solubilization of membrane proteins are encountered. A critical discussion of the different preparative methods for membrane proteins is offered in relation to downstream proteomic applications, mainly gel-based analyses and mass spectrometry. Frequently, unknown proteins are identified by high-throughput profiling of membrane proteins. In the search for novel membrane proteins, analysis of protein sequences using computational tools is performed to predict the presence of transmembrane domains. This review also presents these bioinformatic tools with the human proteome as a case study. Along with technological innovations, advancements in the areas of sample preparation and computational prediction of membrane proteins will lead to exciting discoveries.
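
    As a toy illustration of the kind of transmembrane-domain prediction the review discusses, the sketch below scans a sequence with a Kyte-Doolittle hydropathy window; real predictors use far more sophisticated models, so this only shows the underlying idea. The test sequence and cutoff are illustrative assumptions.

```python
# Toy transmembrane-segment scan using the Kyte-Doolittle hydropathy scale:
# a ~19-residue window with a high average hydropathy suggests a candidate TM helix.
KD = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "Q": -3.5, "E": -3.5,
      "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9, "M": 1.9, "F": 2.8,
      "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9, "Y": -1.3, "V": 4.2}

def tm_windows(seq: str, window: int = 19, cutoff: float = 1.6):
    """Yield (start_position, mean_hydropathy) for windows above the cutoff."""
    seq = seq.upper()
    for i in range(len(seq) - window + 1):
        scores = [KD[aa] for aa in seq[i:i + window] if aa in KD]
        mean = sum(scores) / len(scores)
        if mean >= cutoff:
            yield i, round(mean, 2)

# Illustrative hydrophobic test fragment; real input would be a full protein sequence.
print(list(tm_windows("MSNITLIIFGVMAGVIGTILLISYGIRRLIKKS")))
```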

  8. The Revolution and Evolution of Shotgun Proteomics for Large-Scale Proteome Analysis

    PubMed Central

    Yates, John R.

    2013-01-01

    Mass spectrometry has evolved at an exponential rate over the last 100 years. Innovations in the development of mass spectrometers have created powerful instruments capable of analyzing a wide range of targets, from rare atoms and molecules to very large molecules such as proteins, protein complexes, and DNA. These performance gains have been driven by sustaining innovations, punctuated by the occasional disruptive innovation. The use of mass spectrometry for proteome analysis was driven by disruptive innovations that created a capability for large-scale analysis of proteins and modifications. PMID:23294060

  9. The Challenge of Large-Scale Literacy Improvement

    ERIC Educational Resources Information Center

    Levin, Ben

    2010-01-01

    This paper discusses the challenge of making large-scale improvements in literacy in schools across an entire education system. Despite growing interest and rhetoric, there are very few examples of sustained, large-scale change efforts around school-age literacy. The paper reviews 2 instances of such efforts, in England and Ontario. After…

  10. Application of de Novo Sequencing to Large-Scale Complex Proteomics Data Sets.

    PubMed

    Devabhaktuni, Arun; Elias, Joshua E

    2016-03-01

    Dependent on concise, predefined protein sequence databases, traditional search algorithms perform poorly when analyzing mass spectra derived from wholly uncharacterized protein products. Conversely, de novo peptide sequencing algorithms can interpret mass spectra without relying on reference databases. However, such algorithms have been difficult to apply to complex protein mixtures, in part due to a lack of methods for automatically validating de novo sequencing results. Here, we present novel metrics for benchmarking de novo sequencing algorithm performance on large-scale proteomics data sets and present a method for accurately calibrating false discovery rates on de novo results. We also present a novel algorithm (LADS) that leverages experimentally disambiguated fragmentation spectra to boost sequencing accuracy and sensitivity. LADS improves sequencing accuracy on longer peptides relative to that of other algorithms and improves discriminability of correct and incorrect sequences. Using these advancements, we demonstrate accurate de novo identification of peptide sequences not identifiable using database search-based approaches. PMID:26743026
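
    The abstract mentions calibrating false discovery rates on de novo results; as a generic illustration of that idea (not the LADS algorithm or the paper's benchmarking metrics), the sketch below estimates a score cutoff from a set of spectra whose sequences are independently confirmed by database search. Column names are assumptions.

```python
# Generic illustration (not the LADS method): estimate a score threshold for a
# target FDR by comparing de novo calls against database-search-confirmed spectra.
import pandas as pd

def score_threshold_for_fdr(denovo: pd.DataFrame, target_fdr: float = 0.01) -> float:
    """denovo columns (assumed): 'score', and 'correct' (True when the de novo
    sequence matches the database-confirmed sequence for the same spectrum)."""
    ranked = denovo.sort_values("score", ascending=False)
    cum_wrong = (~ranked["correct"]).cumsum()
    cum_total = pd.Series(range(1, len(ranked) + 1), index=ranked.index)
    fdr = cum_wrong / cum_total                       # empirical FDR at each cutoff
    passing = ranked.loc[fdr <= target_fdr, "score"]
    return float(passing.min()) if not passing.empty else float("inf")
```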

  11. Evaluating the use of HILIC in large-scale, multi dimensional proteomics: Horses for courses?

    PubMed Central

    Bensaddek, Dalila; Nicolas, Armel; Lamond, Angus I.

    2015-01-01

    Despite many recent advances in instrumentation, the sheer complexity of biological samples remains a major challenge in large-scale proteomics experiments, reflecting both the large number of protein isoforms and the wide dynamic range of their expression levels. However, while the dynamic range of expression levels for different components of the proteome is estimated to be ∼10^7–10^8, the equivalent dynamic range of LC–MS is currently limited to ∼10^6. Sample pre-fractionation has therefore become routinely used in large-scale proteomics to reduce sample complexity during MS analysis and thus alleviate the problem of ion suppression and undersampling. There is currently a wide range of chromatographic techniques that can be applied as a first dimension separation. Here, we systematically evaluated the use of hydrophilic interaction liquid chromatography (HILIC), in comparison with hSAX, as a first dimension for peptide fractionation in a bottom-up proteomics workflow. The data indicate that in addition to its role as a useful pre-enrichment method for PTM analysis, HILIC can provide a robust, orthogonal and high-resolution method for increasing the depth of proteome coverage in large-scale proteomics experiments. The data also indicate that the choice of using either HILIC, hSAX, or other methods, is best made taking into account the specific types of biological analyses being performed. PMID:26869852

  12. PROTEOME-3D: an interactive bioinformatics tool for large-scale data exploration and knowledge discovery.

    PubMed

    Lundgren, Deborah H; Eng, Jimmy; Wright, Michael E; Han, David K

    2003-11-01

    Comprehensive understanding of biological systems requires efficient and systematic assimilation of high-throughput datasets in the context of the existing knowledge base. A major limitation in the field of proteomics is the lack of an appropriate software platform that can synthesize a large number of experimental datasets in the context of the existing knowledge base. Here, we describe a software platform, termed PROTEOME-3D, that utilizes three essential features for systematic analysis of proteomics data: creation of a scalable, queryable, customized database for identified proteins from published literature; graphical tools for displaying proteome landscapes and trends from multiple large-scale experiments; and interactive data analysis that facilitates identification of crucial networks and pathways. Thus, PROTEOME-3D offers a standardized platform to analyze high-throughput experimental datasets for the identification of crucial players in co-regulated pathways and cellular processes. PMID:12960178

  13. PROTEOME-3D: An Interactive Bioinformatics Tool for Large-Scale Data Exploration and Knowledge Discovery*

    PubMed Central

    Lundgren, Deborah H.; Eng, Jimmy; Wright, Michael E.; Han, David K.

    2006-01-01

    Comprehensive understanding of biological systems requires efficient and systematic assimilation of high-throughput datasets in the context of the existing knowledge base. A major limitation in the field of proteomics is the lack of an appropriate software platform that can synthesize a large number of experimental datasets in the context of the existing knowledge base. Here, we describe a software platform, termed PROTEOME-3D, that utilizes three essential features for systematic analysis of proteomics data: creation of a scalable, queryable, customized database for identified proteins from published literature; graphical tools for displaying proteome landscapes and trends from multiple large-scale experiments; and interactive data analysis that facilitates identification of crucial networks and pathways. Thus, PROTEOME-3D offers a standardized platform to analyze high-throughput experimental datasets for the identification of crucial players in co-regulated pathways and cellular processes. PMID:12960178
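
    The first of the three features listed, a scalable, queryable database of identified proteins across experiments, can be illustrated with a minimal SQLite schema; this is purely a sketch of the concept and not PROTEOME-3D's actual design.

```python
# Minimal queryable store for identified proteins across experiments
# (illustrative schema only; not PROTEOME-3D's actual design).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE experiment (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE protein (accession TEXT, experiment_id INTEGER,
                      spectral_count INTEGER,
                      FOREIGN KEY (experiment_id) REFERENCES experiment(id));
""")
conn.executemany("INSERT INTO experiment VALUES (?, ?)", [(1, "ctrl"), (2, "treated")])
conn.executemany("INSERT INTO protein VALUES (?, ?, ?)",
                 [("P12345", 1, 12), ("P12345", 2, 40), ("Q99999", 2, 7)])

# Query a 'proteome landscape': which proteins appear in which experiments?
for row in conn.execute("""
    SELECT p.accession, e.name, p.spectral_count
    FROM protein p JOIN experiment e ON e.id = p.experiment_id
    ORDER BY p.accession"""):
    print(row)
```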

  14. Improving the Utility of Large-Scale Assessments in Canada

    ERIC Educational Resources Information Center

    Rogers, W. Todd

    2014-01-01

    Principals and teachers do not use large-scale assessment results because the lack of distinct and reliable subtests prevents identifying strengths and weaknesses of students and instruction, the results arrive too late to be used, and principals and teachers need assistance to use the results to improve instruction so as to improve student…

  15. Expediting SRM assay development for large-scale targeted proteomics experiments

    DOE PAGES

    Wu, Chaochao; Shi, Tujin; Brown, Joseph N.; He, Jintang; Gao, Yuqian; Fillmore, Thomas L.; Shukla, Anil K.; Moore, Ronald J.; Camp, David G.; Rodland, Karin D.; et al

    2014-08-22

    Due to their high sensitivity and specificity, targeted proteomics measurements, e.g. selected reaction monitoring (SRM), are becoming increasingly popular for biological and translational applications. Selection of optimal transitions and optimization of collision energy (CE) are important assay development steps for achieving sensitive detection and accurate quantification; however, these steps can be labor-intensive, especially for large-scale applications. Herein, we explored several options for accelerating SRM assay development, evaluated in the context of a relatively large set of 215 synthetic peptide targets. We first showed that HCD fragmentation is very similar to CID in triple quadrupole (QQQ) instrumentation, and that by selecting the top six y fragment ions from HCD spectra, >86% of the top transitions optimized from direct infusion on a QQQ instrument are covered. We also demonstrated that the CE calculated by existing prediction tools was less accurate for +3 precursors, and that a significant increase in intensity for transitions could be obtained using a new CE prediction equation constructed from the present experimental data. Overall, our study illustrates the feasibility of expediting the development of larger numbers of high-sensitivity SRM assays through automation of transition selection and accurate prediction of optimal CE to improve both SRM throughput and measurement quality.

  16. Expediting SRM assay development for large-scale targeted proteomics experiments

    SciTech Connect

    Wu, Chaochao; Shi, Tujin; Brown, Joseph N.; He, Jintang; Gao, Yuqian; Fillmore, Thomas L.; Shukla, Anil K.; Moore, Ronald J.; Camp, David G.; Rodland, Karin D.; Qian, Weijun; Liu, Tao; Smith, Richard D.

    2014-08-22

    Due to their high sensitivity and specificity, targeted proteomics measurements, e.g. selected reaction monitoring (SRM), are becoming increasingly popular for biological and translational applications. Selection of optimal transitions and optimization of collision energy (CE) are important assay development steps for achieving sensitive detection and accurate quantification; however, these steps can be labor-intensive, especially for large-scale applications. Herein, we explored several options for accelerating SRM assay development, evaluated in the context of a relatively large set of 215 synthetic peptide targets. We first showed that HCD fragmentation is very similar to CID in triple quadrupole (QQQ) instrumentation, and that by selecting the top six y fragment ions from HCD spectra, >86% of the top transitions optimized from direct infusion on a QQQ instrument are covered. We also demonstrated that the CE calculated by existing prediction tools was less accurate for +3 precursors, and that a significant increase in intensity for transitions could be obtained using a new CE prediction equation constructed from the present experimental data. Overall, our study illustrates the feasibility of expediting the development of larger numbers of high-sensitivity SRM assays through automation of transition selection and accurate prediction of optimal CE to improve both SRM throughput and measurement quality.
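
    The two steps being automated here, picking the most intense y fragment ions from an HCD spectrum as transitions and predicting collision energy from precursor m/z with a charge-dependent linear equation, can be sketched as below. The slopes and intercept shown are placeholders, not the fitted equation from this study, and the peak list is made up.

```python
# Sketch of transition selection and CE prediction as described above
# (slope/intercept values are placeholders, not the paper's fitted equation).
SLOPES = {2: 0.034, 3: 0.044}   # assumed charge-dependent slopes, illustrative only

def top_y_transitions(hcd_peaks, n=6):
    """hcd_peaks: list of (ion_label, mz, intensity) tuples, e.g. ('y7', 845.4, 1.2e6).
    Return the n most intense y-ion transitions."""
    y_ions = [p for p in hcd_peaks if p[0].startswith("y")]
    return sorted(y_ions, key=lambda p: p[2], reverse=True)[:n]

def predict_ce(precursor_mz, charge, intercept=3.0):
    """Linear CE prediction of the form CE = slope(z) * (m/z) + intercept."""
    return SLOPES.get(charge, 0.04) * precursor_mz + intercept

peaks = [("y4", 505.3, 8.1e5), ("b3", 312.2, 2.0e5), ("y7", 845.4, 1.2e6),
         ("y5", 618.4, 9.9e5), ("y6", 731.4, 4.0e5), ("y8", 944.5, 3.3e5),
         ("y9", 1057.6, 2.5e5), ("y3", 404.2, 1.1e5)]
print(top_y_transitions(peaks))          # six most intense y ions
print(round(predict_ce(785.8, 2), 1))    # ~29.7 with these placeholder coefficients
```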

  17. Determination of burn patient outcome by large-scale quantitative discovery proteomics

    PubMed Central

    Finnerty, Celeste C.; Jeschke, Marc G.; Qian, Wei-Jun; Kaushal, Amit; Xiao, Wenzhong; Liu, Tao; Gritsenko, Marina A.; Moore, Ronald J.; Camp, David G.; Moldawer, Lyle L.; Elson, Constance; Schoenfeld, David; Gamelli, Richard; Gibran, Nicole; Klein, Matthew; Arnoldo, Brett; Remick, Daniel; Smith, Richard D.; Davis, Ronald; Tompkins, Ronald G.; Herndon, David N.

    2013-01-01

    Objective: Emerging proteomics techniques can be used to establish proteomic outcome signatures and to identify candidate biomarkers for survival following traumatic injury. We applied high-resolution liquid chromatography-mass spectrometry (LC-MS) and multiplex cytokine analysis to profile the plasma proteome of survivors and non-survivors of massive burn injury to determine the proteomic survival signature following a major burn injury. Design: Proteomic discovery study. Setting: Five burn hospitals across the U.S. Patients: Thirty-two burn patients (16 non-survivors and 16 survivors), 19–89 years of age, were admitted within 96 h of injury to the participating hospitals with burns covering >20% of the total body surface area and required at least one surgical intervention. Interventions: None. Measurements and Main Results: We found differences in circulating levels of 43 proteins involved in the acute phase response, hepatic signaling, the complement cascade, inflammation, and insulin resistance. Thirty-two of the proteins identified were not previously known to play a role in the response to burn. IL-4, IL-8, GM-CSF, MCP-1, and β2-microglobulin correlated well with survival and may serve as clinical biomarkers. Conclusions: These results demonstrate the utility of these techniques for establishing proteomic survival signatures and for use as a discovery tool to identify candidate biomarkers for survival. This is the first clinical application of a high-throughput, large-scale LC-MS-based quantitative plasma proteomic approach for biomarker discovery for the prediction of patient outcome following burn, trauma or critical illness. PMID:23507713

  18. Improving Design Efficiency for Large-Scale Heterogeneous Circuits

    NASA Astrophysics Data System (ADS)

    Gregerson, Anthony

    Despite increases in logic density, many Big Data applications must still be partitioned across multiple computing devices in order to meet their strict performance requirements. Among the most demanding of these applications is high-energy physics (HEP), which uses complex computing systems consisting of thousands of FPGAs and ASICs to process the sensor data created by experiments at particle accelerators such as the Large Hadron Collider (LHC). Designing such computing systems is challenging due to the scale of the systems, the exceptionally high-throughput and low-latency performance constraints that necessitate application-specific hardware implementations, the requirement that algorithms are efficiently partitioned across many devices, and the possible need to update the implemented algorithms during the lifetime of the system. In this work, we describe our research to develop flexible architectures for implementing such large-scale circuits on FPGAs. In particular, this work is motivated by (but not limited in scope to) high-energy physics algorithms for the Compact Muon Solenoid (CMS) experiment at the LHC. To make efficient use of logic resources in multi-FPGA systems, we introduce Multi-Personality Partitioning, a novel form of the graph partitioning problem, and present partitioning algorithms that can significantly improve resource utilization on heterogeneous devices while also reducing inter-chip connections. To reduce the high communication costs of Big Data applications, we also introduce Information-Aware Partitioning, a partitioning method that analyzes the data content of application-specific circuits, characterizes their entropy, and selects circuit partitions that enable efficient compression of data between chips. We employ our information-aware partitioning method to improve the performance of the hardware validation platform for evaluating new algorithms for the CMS experiment. Together, these research efforts help to improve the efficiency

  19. Large-Scale Label-Free Quantitative Proteomics of the Pea aphid-Buchnera Symbiosis*

    PubMed Central

    Poliakov, Anton; Russell, Calum W.; Ponnala, Lalit; Hoops, Harold J.; Sun, Qi; Douglas, Angela E.; van Wijk, Klaas J.

    2011-01-01

    Many insects are nutritionally dependent on symbiotic microorganisms that have tiny genomes and are housed in specialized host cells called bacteriocytes. The obligate symbiosis between the pea aphid Acyrthosiphon pisum and the γ-proteobacterium Buchnera aphidicola (only 584 predicted proteins) is particularly amenable for molecular analysis because the genomes of both partners have been sequenced. To better define the symbiotic relationship between this aphid and Buchnera, we used large-scale, high accuracy tandem mass spectrometry (nanoLC-LTQ-Orbitrap) to identify aphid and Buchnera proteins in the whole aphid body, purified bacteriocytes, isolated Buchnera cells and the residual bacteriocyte fraction. More than 1900 aphid and 400 Buchnera proteins were identified. All enzymes in amino acid metabolism annotated in the Buchnera genome were detected, reflecting the high (68%) coverage of the proteome and supporting the core function of Buchnera in the aphid symbiosis. Transporters mediating the transport of predicted metabolites were present in the bacteriocyte. Label-free spectral counting combined with hierarchical clustering allowed the quantitative distribution of a subset of these proteins across both symbiotic partners to be defined, yielding no evidence for the selective transfer of proteins between the partners in either direction. This is the first quantitative proteome analysis of bacteriocyte symbiosis, providing a wealth of information about the molecular function of both the host cell and the bacterial symbiont. PMID:21421797
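
    As a rough illustration of label-free spectral counting followed by hierarchical clustering (broadly the kind of analysis described above, with made-up toy numbers and an NSAF-style length normalization that is a common choice rather than necessarily the study's exact procedure):

```python
# Illustration of spectral counting (NSAF) plus hierarchical clustering;
# toy data, not the study's pipeline.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Assumed toy data: spectral counts (proteins x fractions) and protein lengths.
counts = np.array([[120, 10,  0,  5],
                   [ 80, 70, 60, 65],
                   [  2,  0, 90, 85]], dtype=float)
lengths = np.array([450.0, 300.0, 200.0])

saf = counts / lengths[:, None]          # spectral abundance factor: counts / length
nsaf = saf / saf.sum(axis=0)             # normalize per fraction (columns sum to 1)

# Cluster proteins by their distribution across fractions.
Z = linkage(nsaf, method="average", metric="correlation")
print(fcluster(Z, t=2, criterion="maxclust"))
```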

  20. Large scale systematic proteomic quantification from non-metastatic to metastatic colorectal cancer

    NASA Astrophysics Data System (ADS)

    Yin, Xuefei; Zhang, Yang; Guo, Shaowen; Jin, Hong; Wang, Wenhai; Yang, Pengyuan

    2015-07-01

    A systematic proteomic quantification of formalin-fixed, paraffin-embedded (FFPE) colorectal cancer tissues from stage I to stage IIIC was performed in large scale. 1017 proteins were identified, with 338 proteins showing quantitative changes by the label-free method, while 341 proteins were quantified with significant expression changes among 6294 proteins by the iTRAQ method. We found that the expression of proteins related to migration increased and that of proteins related to binding and adherence decreased during colorectal cancer development, according to gene ontology (GO) annotation and ingenuity pathway analysis (IPA). We focused on integrin alpha 5 (ITA5) in the integrin family, which is consistent with the metastasis-related pathway. The expression level of ITA5 decreased in metastasis tissues and the result was further verified by Western blotting. Another two cell-migration-related proteins, vitronectin (VTN) and actin-related protein (ARP3), were also shown to be up-regulated by both mass spectrometry (MS)-based quantification results and Western blotting. To date, our result constitutes one of the largest datasets in colorectal cancer proteomics research. Our strategy reveals a disease-driven omics pattern for metastatic colorectal cancer.

  1. Large scale multiplex PCR improves pathogen detection by DNA microarrays

    PubMed Central

    2009-01-01

    Background: Medium-density DNA microchips that carry a collection of probes for a broad spectrum of pathogens have the potential to be powerful tools for simultaneous species identification, detection of virulence factors and antimicrobial resistance determinants. However, their widespread use in microbiological diagnostics is limited by the low pathogen numbers in clinical specimens, which yield relatively low amounts of pathogen DNA. Results: To increase the detection power of a fluorescence-based prototype microarray designed to identify pathogenic microorganisms involved in sepsis, we propose a large-scale multiplex PCR (LSplex PCR) for amplification of several dozen gene segments of 9 pathogenic species. This protocol employs a large set of primer pairs, potentially able to amplify 800 different gene segments that correspond to the capture probes spotted on the microarray. The LSplex protocol is shown to selectively amplify only the gene segments corresponding to the specific pathogen present in the analyte. Application of LSplex increases the microarray detection of target templates by a factor of 100 to 1000. Conclusion: Our data provide a proof of principle for the improvement of detection of pathogen DNA by microarray hybridization using LSplex PCR. PMID:19121223

  2. Large-Scale Proteome Comparative Analysis of Developing Rhizomes of the Ancient Vascular Plant Equisetum Hyemale

    PubMed Central

    Balbuena, Tiago Santana; He, Ruifeng; Salvato, Fernanda; Gang, David R.; Thelen, Jay J.

    2012-01-01

    Horsetail (Equisetum hyemale) is a widespread vascular plant species, whose reproduction is mainly dependent on the growth and development of the rhizomes. Due to its key evolutionary position, the identification of factors that could be involved in the existence of the rhizomatous trait may contribute to a better understanding of the role of this underground organ for the successful propagation of this and other plant species. In the present work, we characterized the proteome of E. hyemale rhizomes using a GeLC-MS spectral-counting proteomics strategy. A total of 1,911 and 1,860 non-redundant proteins were identified in the rhizome apical tip and elongation zone, respectively. Rhizome-characteristic proteins were determined by comparisons of the developing rhizome tissues to developing roots. A total of 87 proteins were found to be up-regulated in both horsetail rhizome tissues in relation to developing roots. Hierarchical clustering indicated a vast dynamic range in the regulation of the 87 characteristic proteins and revealed, based on the regulation profile, the existence of nine major protein groups. Gene ontology analyses suggested an over-representation of terms involved in macromolecular and protein biosynthetic processes, gene expression, and nucleotide and protein binding functions. Spatial difference analysis between the rhizome apical tip and the elongation zone revealed that only eight proteins were up-regulated in the apical tip, including RNA-binding proteins and an acyl carrier protein, as well as a KH domain protein and a T-complex subunit, while only seven proteins were up-regulated in the elongation zone, including phosphomannomutase, galactomannan galactosyltransferase, endoglucanase 10 and 25, and mannose-1-phosphate guanyltransferase subunits alpha and beta. This is the first large-scale characterization of the proteome of a plant rhizome. Implications of the findings were discussed in relation to other underground organs and related

  3. Toward Improved Support for Loosely Coupled Large Scale Simulation Workflows

    SciTech Connect

    Boehm, Swen; Elwasif, Wael R; Naughton, III, Thomas J; Vallee, Geoffroy R

    2014-01-01

    High-performance computing (HPC) workloads are increasingly leveraging loosely coupled large scale simulations. Unfortunately, most large-scale HPC platforms, including Cray/ALPS environments, are designed for the execution of long-running jobs based on coarse-grained launch capabilities (e.g., one MPI rank per core on all allocated compute nodes). This assumption limits capability-class workload campaigns that require large numbers of discrete or loosely coupled simulations, and where time-to-solution is an untenable pacing issue. This paper describes the challenges related to the support of fine-grained launch capabilities that are necessary for the execution of loosely coupled large scale simulations on Cray/ALPS platforms. More precisely, we present the details of an enhanced runtime system to support this use case, and report on initial results from early testing on systems at Oak Ridge National Laboratory.

  4. Large-scale metabolome analysis and quantitative integration with genomics and proteomics data in Mycoplasma pneumoniae.

    PubMed

    Maier, Tobias; Marcos, Josep; Wodke, Judith A H; Paetzold, Bernhard; Liebeke, Manuel; Gutiérrez-Gallego, Ricardo; Serrano, Luis

    2013-07-01

    Systems metabolomics, the identification and quantification of cellular metabolites and their integration with genomics and proteomics data, promises valuable functional insights into cellular biology. However, technical constraints, sample complexity issues and the lack of suitable complementary quantitative data sets prevented accomplishing such studies in the past. Here, we present an integrative metabolomics study of the genome-reduced bacterium Mycoplasma pneumoniae. We experimentally analysed its metabolome using a cross-platform approach. We explain intracellular metabolite homeostasis by quantitatively integrating our results with the cellular inventory of proteins, DNA and other macromolecules, as well as with available building blocks from the growth medium. We calculated in vivo catalytic parameters of glycolytic enzymes, making use of measured reaction velocities, as well as enzyme and metabolite pool sizes. A quantitative, inter-species comparison of absolute and relative metabolite abundances indicated that metabolic pathways are regulated as functional units, thereby simplifying adaptive responses. Our analysis demonstrates the potential for new scientific insight by integrating different types of large-scale experimental data from a single biological source.
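
    The in vivo catalytic parameters mentioned above are, at their simplest, apparent turnover numbers obtained by dividing a measured reaction velocity by the enzyme pool size; a back-of-envelope version of that calculation, with purely illustrative numbers rather than values from the study, is:

```python
# Back-of-envelope apparent turnover number: k_cat,app = v / [E].
# All numbers below are illustrative placeholders, not values from the study.
AVOGADRO = 6.022e23

flux_mol_per_s_per_cell = 1.0e-19        # measured pathway flux through one enzyme
enzyme_copies_per_cell = 500             # from quantitative proteomics
enzyme_mol_per_cell = enzyme_copies_per_cell / AVOGADRO

k_cat_app = flux_mol_per_s_per_cell / enzyme_mol_per_cell   # in 1/s
print(f"apparent k_cat ≈ {k_cat_app:.1f} s^-1")
```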

  5. Large-Scale and Deep Quantitative Proteome Profiling Using Isobaric Labeling Coupled with Two-Dimensional LC-MS/MS.

    PubMed

    Gritsenko, Marina A; Xu, Zhe; Liu, Tao; Smith, Richard D

    2016-01-01

    Comprehensive, quantitative information on abundances of proteins and their posttranslational modifications (PTMs) can potentially provide novel biological insights into disease pathogenesis and therapeutic intervention. Herein, we introduce a quantitative strategy utilizing isobaric stable isotope-labeling techniques combined with two-dimensional liquid chromatography-tandem mass spectrometry (2D-LC-MS/MS) for large-scale, deep quantitative proteome profiling of biological samples or clinical specimens such as tumor tissues. The workflow includes isobaric labeling of tryptic peptides for multiplexed and accurate quantitative analysis, basic reversed-phase LC fractionation and concatenation for reduced sample complexity, and nano-LC coupled to high resolution and high mass accuracy MS analysis for high confidence identification and quantification of proteins. This proteomic analysis strategy has been successfully applied for in-depth quantitative proteomic analysis of tumor samples and can also be used for integrated proteome and PTM characterization, as well as comprehensive quantitative proteomic analysis across samples from large clinical cohorts.
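
    The fractionation-and-concatenation step in this workflow combines early, middle, and late basic reversed-phase fractions into each pooled fraction; a minimal sketch of that interleaving follows, where the fraction and pool counts are just example values, not the protocol's prescribed numbers.

```python
# Sketch of basic reversed-phase fraction concatenation: fraction i goes into
# pool i mod n_pools, so each pool mixes early, middle, and late eluting fractions.
def concatenate_fractions(n_fractions: int, n_pools: int):
    pools = {p: [] for p in range(n_pools)}
    for frac in range(n_fractions):
        pools[frac % n_pools].append(frac)
    return pools

# e.g. 96 collected fractions concatenated into 12 pools for nano-LC-MS/MS
for pool, members in concatenate_fractions(96, 12).items():
    print(pool, members[:4], "...")
```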

  6. Colloquium on Large Scale Improvement: Implications for AISI

    ERIC Educational Resources Information Center

    McEwen, Nelly, Ed.

    2008-01-01

    The Alberta Initiative for School Improvement (AISI) is a province-wide partnership program whose goal is to improve student learning and performance by fostering initiatives that reflect the unique needs and circumstances of each school authority. It is currently ending its third cycle and ninth year of implementation. "The Colloquium on Large…

  7. Large-Scale and Deep Quantitative Proteome Profiling Using Isobaric Labeling Coupled with Two-Dimensional LC-MS/MS.

    PubMed

    Gritsenko, Marina A; Xu, Zhe; Liu, Tao; Smith, Richard D

    2016-01-01

    Comprehensive, quantitative information on abundances of proteins and their posttranslational modifications (PTMs) can potentially provide novel biological insights into disease pathogenesis and therapeutic intervention. Herein, we introduce a quantitative strategy utilizing isobaric stable isotope-labeling techniques combined with two-dimensional liquid chromatography-tandem mass spectrometry (2D-LC-MS/MS) for large-scale, deep quantitative proteome profiling of biological samples or clinical specimens such as tumor tissues. The workflow includes isobaric labeling of tryptic peptides for multiplexed and accurate quantitative analysis, basic reversed-phase LC fractionation and concatenation for reduced sample complexity, and nano-LC coupled to high resolution and high mass accuracy MS analysis for high confidence identification and quantification of proteins. This proteomic analysis strategy has been successfully applied for in-depth quantitative proteomic analysis of tumor samples and can also be used for integrated proteome and PTM characterization, as well as comprehensive quantitative proteomic analysis across samples from large clinical cohorts. PMID:26867748

  8. Large-scale and high-confidence proteomic analysis of human seminal plasma

    PubMed Central

    Pilch, Bartosz; Mann, Matthias

    2006-01-01

    Background: The development of mass spectrometric (MS) techniques now allows the investigation of very complex protein mixtures ranging from subcellular structures to tissues. Body fluids are also popular targets of proteomic analysis because of their potential for biomarker discovery. Seminal plasma has not yet received much attention from the proteomics community but its characterization could provide a future reference for virtually all studies involving human sperm. The fluid is essential for the survival of spermatozoa and their successful journey through the female reproductive tract. Results: Here we report the high-confidence identification of 923 proteins in seminal fluid from a single individual. Fourier transform MS enabled parts per million mass accuracy, and two consecutive stages of MS fragmentation allowed confident identification of proteins even by single peptides. Analysis with GoMiner annotated two-thirds of the seminal fluid proteome and revealed a large number of extracellular proteins including many proteases. Other proteins originated from male accessory glands and have important roles in spermatozoan survival. Conclusion: This high-confidence characterization of seminal plasma content provides an inventory of proteins with potential roles in fertilization. When combined with quantitative proteomics methodologies, it should be useful for studies of fertilization, male infertility, and prostatic and testicular cancers. PMID:16709260

  9. From Peptidome to PRIDE: public proteomics data migration at a large scale.

    PubMed

    Csordas, Attila; Wang, Rui; Ríos, Daniel; Reisinger, Florian; Foster, Joseph M; Slotta, Douglas J; Vizcaíno, Juan Antonio; Hermjakob, Henning

    2013-05-01

    The PRIDE database, developed and maintained at the European Bioinformatics Institute (EBI), is one of the most prominent data repositories dedicated to high throughput MS-based proteomics data. Peptidome, developed by the National Center for Biotechnology Information (NCBI) as a sibling resource to PRIDE, was discontinued due to funding constraints in April 2011. A joint effort between the two teams was started soon after the Peptidome closure to ensure that data were not "lost" to the wider proteomics community by exporting it to PRIDE. As a result, data in the low terabyte range have been migrated from Peptidome to PRIDE and made publicly available under experiment accessions 17900-18271, representing 54 projects, ~53 million mass spectra, ~10 million peptide identifications, ~650,000 protein identifications, ~1.1 million biologically relevant protein modifications, and 28 species, from more than 30 different labs.

  10. Ursgal, Universal Python Module Combining Common Bottom-Up Proteomics Tools for Large-Scale Analysis.

    PubMed

    Kremer, Lukas P M; Leufken, Johannes; Oyunchimeg, Purevdulam; Schulze, Stefan; Fufezan, Christian

    2016-03-01

    Proteomics data integration has become a broad field with a variety of programs offering innovative algorithms to analyze increasing amounts of data. Unfortunately, this software diversity leads to many problems as soon as the data is analyzed using more than one algorithm for the same task. Although it was shown that the combination of multiple peptide identification algorithms yields more robust results, it is only recently that unified approaches are emerging; however, workflows that, for example, aim to optimize search parameters or that employ cascaded style searches can only be made accessible if data analysis becomes not only unified but also, most importantly, scriptable. Here we introduce Ursgal, a Python interface to many commonly used bottom-up proteomics tools and to additional auxiliary programs. Complex workflows can thus be composed using the Python scripting language using a few lines of code. Ursgal is easily extensible, and we have made several database search engines (X!Tandem, OMSSA, MS-GF+, Myrimatch, MS Amanda), statistical postprocessing algorithms (qvality, Percolator), and one algorithm that combines statistically postprocessed outputs from multiple search engines ("combined FDR") accessible as an interface in Python. Furthermore, we have implemented a new algorithm ("combined PEP") that combines multiple search engines employing elements of "combined FDR", PeptideShaker, and Bayes' theorem. PMID:26709623

  11. Ursgal, Universal Python Module Combining Common Bottom-Up Proteomics Tools for Large-Scale Analysis.

    PubMed

    Kremer, Lukas P M; Leufken, Johannes; Oyunchimeg, Purevdulam; Schulze, Stefan; Fufezan, Christian

    2016-03-01

    Proteomics data integration has become a broad field with a variety of programs offering innovative algorithms to analyze increasing amounts of data. Unfortunately, this software diversity leads to many problems as soon as the data is analyzed using more than one algorithm for the same task. Although it was shown that the combination of multiple peptide identification algorithms yields more robust results, it is only recently that unified approaches are emerging; however, workflows that, for example, aim to optimize search parameters or that employ cascaded style searches can only be made accessible if data analysis becomes not only unified but also, most importantly, scriptable. Here we introduce Ursgal, a Python interface to many commonly used bottom-up proteomics tools and to additional auxiliary programs. Complex workflows can thus be composed using the Python scripting language using a few lines of code. Ursgal is easily extensible, and we have made several database search engines (X!Tandem, OMSSA, MS-GF+, Myrimatch, MS Amanda), statistical postprocessing algorithms (qvality, Percolator), and one algorithm that combines statistically postprocessed outputs from multiple search engines ("combined FDR") accessible as an interface in Python. Furthermore, we have implemented a new algorithm ("combined PEP") that combines multiple search engines employing elements of "combined FDR", PeptideShaker, and Bayes' theorem.
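
    Without relying on Ursgal's actual API, the underlying idea of combining statistically post-processed outputs from several search engines can be sketched as below; the naive averaging of posterior error probabilities over shared spectrum identifications is only an illustration and is not the "combined PEP" or "combined FDR" algorithm implemented in Ursgal.

```python
# Naive illustration of merging per-engine results on spectrum ID and combining
# posterior error probabilities; NOT Ursgal's actual 'combined PEP' algorithm.
from collections import defaultdict

def combine_engines(results):
    """results: {engine_name: [(spectrum_id, peptide, pep), ...]} (assumed layout)."""
    merged = defaultdict(list)
    for engine, rows in results.items():
        for spectrum_id, peptide, pep in rows:
            merged[(spectrum_id, peptide)].append(pep)
    # Keep identifications seen by >= 2 engines and average their PEPs.
    return {key: sum(peps) / len(peps)
            for key, peps in merged.items() if len(peps) >= 2}

results = {
    "xtandem": [("s1", "PEPTIDEK", 0.01), ("s2", "ANOTHERK", 0.20)],
    "omssa":   [("s1", "PEPTIDEK", 0.03), ("s2", "DIFFERNTK", 0.15)],
    "msgf":    [("s1", "PEPTIDEK", 0.02)],
}
print(combine_engines(results))
```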

  12. DecoyPyrat: Fast Non-redundant Hybrid Decoy Sequence Generation for Large Scale Proteomics

    PubMed Central

    Wright, James C; Choudhary, Jyoti S

    2016-01-01

    Accurate statistical evaluation of sequence database peptide identifications from tandem mass spectra is essential in mass spectrometry based proteomics experiments. These statistics are dependent on accurately modelling random identifications. The target-decoy approach has risen to become the de facto approach to calculating FDR in proteomic datasets. The main principle of this approach is to search a set of decoy protein sequences that emulate the size and composition of the target protein sequences searched whilst not matching real proteins in the sample. To do this, it is commonplace to reverse or shuffle the proteins and peptides in the target database. However, these approaches have their drawbacks and limitations. A key confounding issue is the peptide redundancy between target and decoy databases leading to inaccurate FDR estimation. This inaccuracy is further amplified at the protein level and when searching large sequence databases such as those used for proteogenomics. Here, we present a unifying hybrid method to quickly and efficiently generate decoy sequences with minimal overlap between target and decoy peptides. We show that applying a reversed decoy approach can produce up to 5% peptide redundancy and that many more peptides will have the exact same precursor mass as a target peptide. Our hybrid method addresses both these issues by first switching proteolytic cleavage sites with the preceding amino acid, reversing the database, and then shuffling any redundant sequences. This flexible hybrid method reduces the peptide overlap between target and decoy peptides to about 1% of peptides, making a more robust decoy model suitable for large search spaces. We also demonstrate the anti-conservative effect of redundant peptides on the calculation of q-values in mouse brain tissue data. PMID:27418748
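
    A simplified reading of the hybrid scheme described above (swap each cleavage site with its preceding residue, reverse, then shuffle any decoy peptide that still matches a target) might look like the sketch below; this is not DecoyPyrat's code, and tryptic cleavage after every K/R without exceptions is an assumption made for brevity.

```python
# Rough sketch of a hybrid decoy generator (simplified; not DecoyPyrat's code).
# Tryptic cleavage after K/R, with no missed-cleavage or proline rules, is assumed.
import random

def hybrid_decoy(protein: str, target_peptides: set, seed: int = 1) -> str:
    rng = random.Random(seed)
    # 1) Swap each K/R cleavage site with the preceding amino acid.
    chars = list(protein)
    for i in range(1, len(chars)):
        if chars[i] in "KR":
            chars[i - 1], chars[i] = chars[i], chars[i - 1]
    # 2) Reverse the whole sequence.
    decoy = "".join(chars)[::-1]
    # 3) Re-digest the decoy and shuffle any peptide that still matches a target.
    peptides, current = [], ""
    for aa in decoy:
        current += aa
        if aa in "KR":
            peptides.append(current)
            current = ""
    if current:
        peptides.append(current)
    fixed = []
    for pep in peptides:
        attempts = 0
        while pep in target_peptides and attempts < 10:
            body = list(pep[:-1])
            rng.shuffle(body)              # shuffle all but the C-terminal residue
            pep = "".join(body) + pep[-1]
            attempts += 1
        fixed.append(pep)
    return "".join(fixed)

# Toy example with an assumed target peptide set.
print(hybrid_decoy("MKWVTFISLLFLFSSAYSRGVFRR", {"GVFR"}))
```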

  13. Evaluation of Large Scale Quantitative Proteomic Assay Development Using Peptide Affinity-based Mass Spectrometry*

    PubMed Central

    Whiteaker, Jeffrey R.; Zhao, Lei; Abbatiello, Susan E.; Burgess, Michael; Kuhn, Eric; Lin, ChenWei; Pope, Matthew E.; Razavi, Morteza; Anderson, N. Leigh; Pearson, Terry W.; Carr, Steven A.; Paulovich, Amanda G.

    2011-01-01

    Stable isotope standards and capture by antipeptide antibodies (SISCAPA) couples affinity enrichment of peptides with stable isotope dilution and detection by multiple reaction monitoring mass spectrometry to provide quantitative measurement of peptides as surrogates for their respective proteins. In this report, we describe a feasibility study to determine the success rate for production of suitable antibodies for SISCAPA assays in order to inform strategies for large-scale assay development. A workflow was designed that included a multiplex immunization strategy in which up to five proteotypic peptides from a single protein target were used to immunize individual rabbits. A total of 403 proteotypic tryptic peptides representing 89 protein targets were used as immunogens. Antipeptide antibody titers were measured by ELISA and 220 antipeptide antibodies representing 89 proteins were chosen for affinity purification. These antibodies were characterized with respect to their performance in SISCAPA-multiple reaction monitoring assays using trypsin-digested human plasma matrix. More than half of the assays generated were capable of detecting the target peptide at concentrations of less than 0.5 fmol/μl in human plasma, corresponding to protein concentrations of less than 100 ng/ml. The strategy of multiplexing five peptide immunogens was successful in generating a working assay for 100% of the targeted proteins in this evaluation study. These results indicate it is feasible for a single laboratory to develop hundreds of assays per year and allow planning for cost-effective generation of SISCAPA assays. PMID:21245105

  14. Large-Scale High School Reform through School Improvement Networks: Exploring Possibilities for "Developmental Evaluation"

    ERIC Educational Resources Information Center

    Peurach, Donald J.; Lenhoff, Sarah Winchell; Glazer, Joshua L.

    2016-01-01

    Recognizing school improvement networks as a leading strategy for large-scale high school reform, this analysis examines developmental evaluation as an approach to examining school improvement networks as "learning systems" able to produce, use, and refine practical knowledge in large numbers of schools. Through a case study of one…

  15. Review of Software Tools for Design and Analysis of Large scale MRM Proteomic Datasets

    PubMed Central

    Colangelo, Christopher M.; Chung, Lisa; Bruce, Can; Cheung, Kei-Hoi

    2013-01-01

    Selected or Multiple Reaction Monitoring (SRM/MRM) is a liquid chromatography (LC)/tandem mass spectrometry (MS/MS) method that enables the quantitation of specific proteins in a sample by analyzing precursor ions and the fragment ions of their selected tryptic peptides. Instrumentation software has advanced to the point that thousands of transitions (pairs of primary and secondary m/z values) can be measured in a triple quadrupole instrument coupled to an LC, by a well-designed scheduling and selection of m/z windows. The design of a good MRM assay relies on the availability of peptide spectra from previous discovery-phase LC-MS/MS studies. The tedious aspect of manually developing and processing MRM assays involving thousands of transitions has spurred the development of software tools to automate this process. Software packages have been developed for project management, assay development, assay validation, data export, peak integration, quality assessment, and biostatistical analysis. No single tool provides a complete end-to-end solution; thus, this article reviews the current state and discusses future directions of these software tools in order to enable researchers to combine them for a comprehensive targeted proteomics workflow. PMID:23702368

  16. Psychology in an Interdisciplinary Setting: A Large-Scale Project to Improve University Teaching

    ERIC Educational Resources Information Center

    Koch, Franziska D.; Vogt, Joachim

    2015-01-01

    At a German university of technology, a large-scale project was funded as a part of the "Quality Pact for Teaching", a programme launched by the German Federal Ministry of Education and Research to improve the quality of university teaching and study conditions. The project aims at intensifying interdisciplinary networking in teaching,…

  17. Infrastructure for Large-Scale Quality-Improvement Projects: Early Lessons from North Carolina Improving Performance in Practice

    ERIC Educational Resources Information Center

    Newton, Warren P.; Lefebvre, Ann; Donahue, Katrina E.; Bacon, Thomas; Dobson, Allen

    2010-01-01

    Introduction: Little is known regarding how to accomplish large-scale health care improvement. Our goal is to improve the quality of chronic disease care in all primary care practices throughout North Carolina. Methods: Methods for improvement include (1) common quality measures and shared data system; (2) rapid cycle improvement principles; (3)…

  18. RichMind: A Tool for Improved Inference from Large-Scale Neuroimaging Results

    PubMed Central

    Maron-Katz, Adi; Amar, David; Simon, Eti Ben; Hendler, Talma; Shamir, Ron

    2016-01-01

    As the use of large-scale data-driven analysis becomes increasingly common, the need for robust methods for interpreting a large number of results increases. To date, neuroimaging attempts to interpret large-scale activity or connectivity results often turn to existing neural mapping based on previous literature. In the case of a large number of results, manual selection or percent of overlap with existing maps is frequently used to facilitate interpretation, often without a clear statistical justification. Such methodology holds the risk of reporting false positive results and overlooking additional results. Here, we propose using enrichment analysis for improving the interpretation of large-scale neuroimaging results. We focus on two possible cases: position group analysis, where the identified results are a set of neural positions; and connection group analysis, where the identified results are a set of neural position-pairs (i.e. neural connections). We explore different models for detecting significant overrepresentation of known functional brain annotations using simulated and real data. We implemented our methods in a tool called RichMind, which provides both statistical significance reports and brain visualization. We demonstrate the abilities of RichMind by revisiting two previous fMRI studies. In both studies RichMind automatically highlighted most of the findings that were reported in the original studies as well as several additional findings that were overlooked. Hence, RichMind is a valuable new tool for rigorous inference from neuroimaging results. PMID:27455041
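
    Enrichment of an identified set of neural positions against a known functional annotation is typically assessed with a hypergeometric test; a minimal version of that "position group analysis" idea (not RichMind's exact models, and with made-up counts) is:

```python
# Minimal hypergeometric enrichment test for an identified set of positions
# against a known functional annotation (not RichMind's exact models).
from scipy.stats import hypergeom

def enrichment_p(n_background, n_annotated, n_identified, n_overlap):
    """P(overlap >= observed) when drawing n_identified positions at random
    from a background of n_background, of which n_annotated carry the label."""
    return hypergeom.sf(n_overlap - 1, n_background, n_annotated, n_identified)

# Example: 40 of 500 identified positions fall in an annotation covering
# 2,000 of 50,000 background positions (expected overlap is 20).
print(enrichment_p(50_000, 2_000, 500, 40))
```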

  19. Large-scale multi-configuration electromagnetic induction: a promising tool to improve hydrological models

    NASA Astrophysics Data System (ADS)

    von Hebel, Christian; Rudolph, Sebastian; Mester, Achim; Huisman, Johan A.; Montzka, Carsten; Weihermüller, Lutz; Vereecken, Harry; van der Kruk, Jan

    2015-04-01

    Large-scale multi-configuration electromagnetic induction (EMI) uses different coil configurations, i.e., coil offsets and coil orientations, to sense coil-specific depth volumes. The obtained apparent electrical conductivity (ECa) maps can be related to some soil properties such as clay content, soil water content, and pore water conductivity, which are important characteristics that influence hydrological processes. Here, we use large-scale EMI measurements to investigate changes in soil texture that drive the available water supply, causing the crop development patterns that were observed in leaf area index (LAI) maps obtained from RapidEye satellite images taken after a drought period. The 20 ha test site is situated within the Ellebach catchment (Germany) and consists of a sand-and-gravel dominated upper terrace (UT) and a loamy lower terrace (LT). The large-scale multi-configuration EMI measurements were calibrated using electrical resistivity tomography (ERT) measurements at selected transects, and soil samples were taken at representative locations where changes in the electrical conductivity were observed and therefore changing soil properties were expected. By analyzing all the data, the observed LAI patterns could be attributed to buried paleo-river channel systems that contained a higher silt and clay content and provided a higher water holding capacity than the surrounding coarser material. Moreover, the measured EMI data showed the highest correlation with LAI for the deepest sensing coil offset (up to 1.9 m), which indicates that the deeper subsoil is responsible for root water uptake especially under drought conditions. To obtain a layered subsurface electrical conductivity model that shows the subsurface structures more clearly, a novel EMI inversion scheme was applied to the field data. The obtained electrical conductivity distributions were validated with soil probes and ERT transects that confirmed the inverted lateral and vertical large-scale electrical

  20. Large-scale proteomic analysis of the grapevine leaf apoplastic fluid reveals mainly stress-related proteins and cell wall modifying enzymes

    PubMed Central

    2013-01-01

    Background: The extracellular space or apoplast forms a path through the whole plant and acts as an interface with the environment. The apoplast is composed of the plant cell wall and space within which apoplastic fluid provides a means of delivering molecules and facilitates intercellular communications. However, apoplastic fluid extraction from in planta systems remains challenging, and this is particularly true for grapevine (Vitis vinifera L.), a worldwide-cultivated fruit plant. Large-scale proteomic analysis reveals the protein content of the grapevine leaf apoplastic fluid, and the free interactive proteome map considerably facilitates the study of the grapevine proteome. Results: To obtain a snapshot of the grapevine apoplastic fluid proteome, a vacuum-infiltration-centrifugation method was optimized to collect the apoplastic fluid from non-challenged grapevine leaves. Soluble apoplastic protein patterns were then compared to whole leaf soluble protein profiles by 2D-PAGE analyses. Subsequent MALDI-TOF/TOF mass spectrometry of tryptically digested protein spots was used to identify proteins. This large-scale proteomic analysis established a well-defined proteomic map of whole leaf and leaf apoplastic soluble proteins, with 223 and 177 analyzed spots, respectively. All data arising from proteomic, MS and MS/MS analyses were deposited in the public database world-2DPAGE. Prediction tools revealed a high proportion of (i) classical secreted proteins but also of non-classical secreted proteins, namely Leaderless Secreted Proteins (LSPs), in the apoplastic protein content and (ii) proteins potentially involved in stress reactions and/or in cell wall metabolism. Conclusions: This approach provides free online interactive reference maps annotating a large number of soluble proteins of the whole leaf and the apoplastic fluid of grapevine leaf. To our knowledge, this is the first detailed proteome study of grapevine apoplastic fluid providing a comprehensive overview of

  1. Enablers and Barriers to Large-Scale Uptake of Improved Solid Fuel Stoves: A Systematic Review

    PubMed Central

    Puzzolo, Elisa; Stanistreet, Debbi; Pope, Daniel; Bruce, Nigel G.

    2013-01-01

    Background: Globally, 2.8 billion people rely on household solid fuels. Reducing the resulting adverse health, environmental, and development consequences will involve transitioning through a mix of clean fuels and improved solid fuel stoves (IS) of demonstrable effectiveness. To date, achieving uptake of IS has presented significant challenges. Objectives: We performed a systematic review of factors that enable or limit large-scale uptake of IS in low- and middle-income countries. Methods: We conducted systematic searches of multidisciplinary databases and specialist websites and consulted experts. The review drew on qualitative, quantitative, and case studies and used standardized methods for screening, data extraction, critical appraisal, and synthesis. We summarized our findings as “factors” relating to one of seven domains (fuel and technology characteristics; household and setting characteristics; knowledge and perceptions; finance, tax, and subsidy aspects; market development; regulation, legislation, and standards; programmatic and policy mechanisms) and also recorded issues that impacted equity. Results: We identified 31 factors influencing uptake from 57 studies conducted in Asia, Africa, and Latin America. All domains matter. Although factors such as offering technologies that meet household needs and save fuel, user training and support, effective financing, and facilitative government action appear to be critical, none guarantees success: all factors can be influential, depending on context. The nature of available evidence did not permit further prioritization. Conclusions: Achieving adoption and sustained use of IS at a large scale requires that all factors, spanning household/community and program/societal levels, be assessed and supported by policy. We propose a planning tool that would aid this process and suggest further research to incorporate an evaluation of effectiveness. Citation: Rehfuess EA, Puzzolo E, Stanistreet D, Pope D, Bruce

  2. Improving Large-scale Storage System Performance via Topology-aware and Balanced Data Placement

    SciTech Connect

    Wang, Feiyi; Oral, H Sarp; Vazhkudai, Sudharshan S

    2014-01-01

    With the advent of big data, the I/O subsystems of large-scale compute clusters are becoming a center of focus, with more applications putting greater demands on end-to-end I/O performance. These subsystems are often complex in design. They comprise multiple hardware and software layers to cope with the increasing capacity, capability, and scalability requirements of data-intensive applications. The shared nature of storage resources and the intrinsic interactions across these layers make it a great challenge to realize user-level, end-to-end performance gains. We propose a topology-aware resource load balancing strategy to improve per-application I/O performance. We demonstrate the effectiveness of our algorithm on an extreme-scale compute cluster, Titan, at the Oak Ridge Leadership Computing Facility (OLCF). Our experiments with both synthetic benchmarks and a real-world application show that, even under congestion, our proposed algorithm can improve large-scale application I/O performance significantly, reducing application run times and enabling higher-resolution simulation runs.
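    To make the placement idea above concrete, the sketch below illustrates topology-aware, load-balanced selection of storage targets for a file's stripes. It is only a rough illustration under assumed data structures (a target list tagged with a network "zone" and a shared load table); the names and logic are hypothetical, not the authors' implementation on Titan.

```python
# Illustrative sketch of topology-aware, load-balanced selection of storage targets.
# The data structures (target list with a "zone" tag, a shared load table) are hypothetical
# and only meant to convey the idea of preferring nearby, lightly loaded targets.
from collections import defaultdict

def choose_targets(targets, current_load, client_zone, n_stripes):
    """Rank targets so that same-zone and lightly loaded targets come first,
    pick n_stripes of them for the file's stripes, and update the load table."""
    ranked = sorted(targets,
                    key=lambda t: (t["zone"] != client_zone, current_load[t["id"]]))
    chosen = [t["id"] for t in ranked[:n_stripes]]
    for target_id in chosen:
        current_load[target_id] += 1
    return chosen

targets = [{"id": f"ost{i:02d}", "zone": i % 4} for i in range(32)]
load = defaultdict(int)
print(choose_targets(targets, load, client_zone=2, n_stripes=8))
```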

  3. Highly Multiplexed and Reproducible Ion-Current-Based Strategy for Large-Scale Quantitative Proteomics and the Application to Protein Expression Dynamics Induced by Methylprednisolone in 60 Rats

    PubMed Central

    2015-01-01

    A proteome-level time-series study of drug effects (i.e., pharmacodynamics) is critical for understanding mechanisms of action and systems pharmacology, but is challenging because it requires a proteomics method capable of reliable quantification across many biological samples. Here, we describe a highly reproducible strategy, enabling a global, large-scale investigation of the expression dynamics of corticosteroid-regulated proteins in livers from adrenalectomized rats over 11 time points after drug dosing (0.5–66 h, N = 5/point). The analytical advances include (i) exhaustive tissue extraction with a Polytron/sonication procedure in a detergent cocktail buffer, and a cleanup/digestion procedure providing very consistent protein yields (relative standard deviation (RSD) of 2.7%–6.4%) and peptide recoveries (RSD of 4.1%–9.0%) across the 60 animals; (ii) an ultrahigh-pressure nano-LC setup with substantially improved temperature stabilization, pump-noise suppression, and programmed interface cleaning, enabling excellent reproducibility for continuous analyses of numerous samples; (iii) separation on a 100-cm-long column (2-μm particles) with high reproducibility for days to enable both in-depth profiling and accurate peptide ion-current match; and (iv) well-controlled ion-current-based quantification. To obtain high-quality quantitative data necessary to describe the protein expression temporal profiles across the 11 time points, strict criteria were used to define “quantifiable proteins”. A total of 323 drug-responsive proteins were revealed with confidence, and the time profiles of these proteins provided new insights into the diverse temporal changes of biological cascades associated with hepatic metabolism, response to hormone stimuli, gluconeogenesis, inflammatory responses, and protein translation processes. Most profile changes persisted well after the drug was eliminated. The developed strategy can also be broadly applied in preclinical and clinical research, where
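    The "quantifiable protein" criteria mentioned above hinge on reproducibility measures such as the relative standard deviation (RSD) across replicates. The sketch below shows one hedged way such an RSD filter could be expressed; the 25% threshold and the data layout are assumptions for the example, not the paper's exact criteria.

```python
# Illustrative sketch of an RSD-based "quantifiable protein" filter; the 25% threshold and
# the data layout are assumptions for the example, not the paper's exact criteria.
import numpy as np

def rsd_percent(values):
    """Relative standard deviation (%) of replicate ion-current intensities."""
    values = np.asarray(values, dtype=float)
    return 100.0 * values.std(ddof=1) / values.mean()

# protein -> replicate intensities within one time point (N = 5 animals)
intensities = {
    "P02761": [1.02e7, 9.8e6, 1.05e7, 9.9e6, 1.01e7],
    "P04639": [4.1e5, 9.5e5, 2.0e5, 7.7e5, 1.2e6],
}
quantifiable = {p: v for p, v in intensities.items() if rsd_percent(v) < 25.0}
print(sorted(quantifiable))   # proteins passing the reproducibility criterion
```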

  4. Using MPI File Caching to Improve Parallel Write Performance for Large-Scale Scientific Applications

    SciTech Connect

    Liao, Wei-keng; Ching, Avery; Coloma, Kenin; Nisar, Arifa; Choudhary, Alok; Chen, Jackie; Sankaran, Ramanan; Klasky, Scott A

    2007-01-01

    Typical large-scale scientific applications periodically write checkpoint files to save the computational state throughout execution. Existing parallel file systems improve such write-only I/O patterns through the use of client-side file caching and write-behind strategies. In distributed environments where files are rarely accessed by more than one client concurrently, file caching has achieved significant success; however, in parallel applications where multiple clients manipulate a shared file, cache coherence control can serialize I/O. We have designed a thread-based caching layer for the MPI I/O library, which adds a portable caching system closer to user applications so more information about the application's I/O patterns is available for better coherence control. We demonstrate the impact of our caching solution on parallel write performance with a comprehensive evaluation that includes a set of widely used I/O benchmarks and production application I/O kernels.

  5. Using MPI file caching to improve parallel write performance for large-scale scientific applications

    SciTech Connect

    Sankaran, Ramanan; Liao, Wei-Keng; Chen, Jacqueline H; Klasky, Scott A; Choudhary, Alok

    2007-01-01

    Typical large-scale scientific applications periodically write checkpoint files to save the computational state throughout execution. Existing parallel file systems improve such write-only I/O patterns through the use of client-side file caching and write-behind strategies. In distributed environments where files are rarely accessed by more than one client concurrently, file caching has achieved significant success; however, in parallel applications where multiple clients manipulate a shared file, cache coherence control can serialize I/O. We have designed a thread-based caching layer for the MPI I/O library, which adds a portable caching system closer to user applications so more information about the application's I/O patterns is available for better coherence control. We demonstrate the impact of our caching solution on parallel write performance with a comprehensive evaluation that includes a set of widely used I/O benchmarks and production application I/O kernels.

  6. Improved methods for GRACE-derived groundwater storage change estimation in large-scale agroecosystems

    NASA Astrophysics Data System (ADS)

    Brena, A.; Kendall, A. D.; Hyndman, D. W.

    2013-12-01

    Large-scale agroecosystems are major providers of agricultural commodities and an important component of the world's food supply. In agroecosystems that depend mainly on groundwater, it is well known that long-term sustainability can be at risk because of water management strategies and climatic trends. The water balance of groundwater-dependent agroecosystems such as the High Plains aquifer (HPA) is often dominated by pumping and irrigation, which enhance hydrological processes such as evapotranspiration, return flow and recharge in cropland areas. This work provides and validates new quantitative groundwater estimation methods for the HPA that combine satellite-based estimates of terrestrial water storage (GRACE), hydrological data assimilation products (NLDAS-2) and in situ measurements of groundwater levels and irrigation rates. The combined data can be used to elucidate the controls of irrigation on the water balance components of agroecosystems, such as crop evapotranspiration, soil moisture deficit and recharge. Our work covers a decade of continuous observations and model estimates from 2003 to 2013, which includes a significant drought beginning in 2011. This study aims to: (1) test the sensitivity of groundwater storage to soil moisture and irrigation, (2) improve estimates of irrigation and soil moisture deficits, and (3) infer mean values of groundwater recharge across the HPA. The results show (1) significant improvements in GRACE-derived aquifer storage changes using methods that incorporate irrigation and soil moisture deficit data, (2) an acceptable correlation between the observed and estimated aquifer storage time series for the analyzed period, and (3) empirically estimated annual rates of groundwater recharge that are consistent with previous geochemical and modeling studies. We suggest testing these correction methods in other large-scale agroecosystems with intensive groundwater pumping and irrigation rates.
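    For orientation, the sketch below shows the basic water-balance disaggregation commonly used to isolate groundwater storage change from GRACE terrestrial water storage; the paper's corrections using irrigation and soil moisture deficit data are more involved, so this is only an assumed baseline form with made-up values.

```python
# Illustrative sketch of the standard water-balance disaggregation used to isolate groundwater
# storage change from GRACE terrestrial water storage; the corrections for irrigation and soil
# moisture deficit described above are more involved, so this is only the assumed basic form.
import numpy as np

def groundwater_anomaly(tws_grace, soil_moisture, snow_water, surface_water=0.0):
    """dGW = dTWS - (dSM + dSWE + dSW), all as anomalies in mm of equivalent water height."""
    return (np.asarray(tws_grace)
            - np.asarray(soil_moisture)
            - np.asarray(snow_water)
            - np.asarray(surface_water))

# Monthly anomalies (mm equivalent water height); values are made up for illustration.
dgw = groundwater_anomaly(tws_grace=[-12.0, -20.5, -31.0],
                          soil_moisture=[-4.0, -7.5, -9.0],
                          snow_water=[0.5, 0.0, 0.0])
print(dgw)   # residual attributed to groundwater storage change
```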

  7. Automatic large-scale classification of bird sounds is strongly improved by unsupervised feature learning.

    PubMed

    Stowell, Dan; Plumbley, Mark D

    2014-01-01

    Automatic species classification of birds from their sound is a computational tool of increasing importance in ecology, conservation monitoring and vocal communication studies. To make classification useful in practice, it is crucial to improve its accuracy while ensuring that it can run at big data scales. Many approaches use acoustic measures based on spectrogram-type data, such as the Mel-frequency cepstral coefficient (MFCC) features which represent a manually-designed summary of spectral information. However, recent work in machine learning has demonstrated that features learnt automatically from data can often outperform manually-designed feature transforms. Feature learning can be performed at large scale and "unsupervised", meaning it requires no manual data labelling, yet it can improve performance on "supervised" tasks such as classification. In this work we introduce a technique for feature learning from large volumes of bird sound recordings, inspired by techniques that have proven useful in other domains. We experimentally compare twelve different feature representations derived from the Mel spectrum (of which six use this technique), using four large and diverse databases of bird vocalisations, classified using a random forest classifier. We demonstrate that in our classification tasks, MFCCs can often lead to worse performance than the raw Mel spectral data from which they are derived. Conversely, we demonstrate that unsupervised feature learning provides a substantial boost over MFCCs and Mel spectra without adding computational complexity after the model has been trained. The boost is particularly notable for single-label classification tasks at large scale. The spectro-temporal activations learned through our procedure resemble spectro-temporal receptive fields calculated from avian primary auditory forebrain. However, for one of our datasets, which contains substantial audio data but few annotations, increased performance is not discernible. We
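    The sketch below illustrates the general flavor of unsupervised feature learning from Mel spectrograms: sample spectro-temporal patches, learn a codebook with k-means, and pool soft activations into a fixed-length feature vector for a classifier. The parameters, the "triangle" encoding, and the use of scikit-learn are assumptions for the example, not the published configuration.

```python
# Illustrative sketch of unsupervised feature learning from Mel spectrograms: sample random
# spectro-temporal patches, learn a codebook by k-means, and pool soft activations.
# Parameters and encoding are assumptions, not the published configuration.
import numpy as np
from sklearn.cluster import MiniBatchKMeans

def learn_codebook(mel_spectrograms, n_atoms=100, patch_frames=4, patches_per_clip=200, seed=0):
    """Cluster random spectro-temporal patches to learn a feature codebook."""
    rng = np.random.default_rng(seed)
    patches = []
    for S in mel_spectrograms:                       # S: (n_mels, n_frames) array
        for _ in range(patches_per_clip):
            t = rng.integers(0, S.shape[1] - patch_frames)
            patches.append(S[:, t:t + patch_frames].ravel())
    return MiniBatchKMeans(n_clusters=n_atoms, random_state=seed, n_init=3).fit(np.array(patches))

def encode(S, codebook, patch_frames=4):
    """Represent a recording by pooled soft activations of the learned atoms."""
    frames = np.array([S[:, t:t + patch_frames].ravel()
                       for t in range(S.shape[1] - patch_frames)])
    dists = codebook.transform(frames)               # distance of each frame to each atom
    activations = np.maximum(0.0, dists.mean() - dists)   # simple "triangle" soft encoding
    return activations.mean(axis=0)                  # fixed-length vector for a classifier
```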

  8. Large-scale inference of protein tissue origin in gram-positive sepsis plasma using quantitative targeted proteomics.

    PubMed

    Malmström, Erik; Kilsgård, Ola; Hauri, Simon; Smeds, Emanuel; Herwald, Heiko; Malmström, Lars; Malmström, Johan

    2016-01-06

    The plasma proteome is highly dynamic and variable, composed of proteins derived from surrounding tissues and cells. To investigate the complex processes that control the composition of the plasma proteome, we developed a mass spectrometry-based proteomics strategy to infer the origin of proteins detected in murine plasma. The strategy relies on the construction of a comprehensive protein tissue atlas from cells and highly vascularized organs using shotgun mass spectrometry. The protein tissue atlas was transformed into a spectral library for highly reproducible quantification of tissue-specific proteins directly in plasma using SWATH-like data-independent mass spectrometry analysis. We show that the method can determine drastic changes in tissue-specific protein profiles in blood plasma from mouse models of sepsis. The strategy can be extended to several other species, advancing our understanding of the complex processes that contribute to plasma proteome dynamics.

  9. Large-Scale Proteomics of the Cassava Storage Root and Identification of a Target Gene to Reduce Postharvest Deterioration.

    PubMed

    Vanderschuren, Hervé; Nyaboga, Evans; Poon, Jacquelyne S; Baerenfaller, Katja; Grossmann, Jonas; Hirsch-Hoffmann, Matthias; Kirchgessner, Norbert; Nanni, Paolo; Gruissem, Wilhelm

    2014-05-29

    Cassava (Manihot esculenta) is the most important root crop in the tropics, but rapid postharvest physiological deterioration (PPD) of the root is a major constraint to commercial cassava production. We established a reliable method for image-based PPD symptom quantification and used label-free quantitative proteomics to generate an extensive cassava root and PPD proteome. Over 2600 unique proteins were identified in the cassava root, and nearly 300 proteins showed significant abundance regulation during PPD. We identified protein abundance modulation in pathways associated with oxidative stress, phenylpropanoid biosynthesis (including scopoletin), the glutathione cycle, fatty acid α-oxidation, folate transformation, and the sulfate reduction II pathway. Increasing protein abundances and enzymatic activities of glutathione-associated enzymes, including glutathione reductases, glutaredoxins, and glutathione S-transferases, indicated a key role for ascorbate/glutathione cycles. Based on combined proteomics data, enzymatic activities, and lipid peroxidation assays, we identified glutathione peroxidase as a candidate for reducing PPD. Transgenic cassava overexpressing a cytosolic glutathione peroxidase in storage roots showed delayed PPD and reduced lipid peroxidation as well as decreased H2O2 accumulation. Quantitative proteomics data from ethene and phenylpropanoid pathways indicate additional gene candidates to further delay PPD. Cassava root proteomics data are available at www.pep2pro.ethz.ch for easy access and comparison with other proteomics data. PMID:24876255

  10. Designing a Large-Scale Multilevel Improvement Initiative: The Improving Performance in Practice Program

    ERIC Educational Resources Information Center

    Margolis, Peter A.; DeWalt, Darren A.; Simon, Janet E.; Horowitz, Sheldon; Scoville, Richard; Kahn, Norman; Perelman, Robert; Bagley, Bruce; Miles, Paul

    2010-01-01

    Improving Performance in Practice (IPIP) is a large system intervention designed to align efforts and motivate the creation of a tiered system of improvement at the national, state, practice, and patient levels, assisting primary-care physicians and their practice teams to assess and measurably improve the quality of care for chronic illness and…

  11. Molecular signatures of sanguinarine in human pancreatic cancer cells: A large scale label-free comparative proteomics approach

    PubMed Central

    George, Jasmine; Nihal, Minakshi; Hahn, Molly C. Pellitteri; Scarlett, Cameron O.; Ahmad, Nihal

    2015-01-01

    Pancreatic cancer remains one of the most lethal of all human malignancies, with its incidence nearly equaling its mortality rate. Therefore, it is crucial to identify newer mechanism-based agents and targets to effectively manage pancreatic cancer. Plant-derived agents/drugs have historically been useful in cancer therapeutics. Sanguinarine is a plant alkaloid with anti-proliferative effects against cancers, including pancreatic cancer. This study was designed to determine the mechanism of sanguinarine's effects in pancreatic cancer, with the hope of obtaining useful information to improve the therapeutic options for the management of this neoplasm. We employed a quantitative proteomics approach to define the mechanism of sanguinarine's effects in human pancreatic cancer cells. Proteins from control and sanguinarine-treated pancreatic cancer cells were digested with trypsin, run by nano-LC/MS/MS, and identified with the help of the Swiss-Prot database. Results from replicate injections were processed with the SIEVE software to identify proteins with differential expression. We identified 37 differentially expressed proteins (from a total of 3107), which are known to be involved in a variety of cellular processes. Four of these proteins (IL33, CUL5, GPS1 and DUSP4) appear to occupy regulatory nodes in key pathways. Further validation by qRT-PCR and immunoblot analyses demonstrated that the dual specificity phosphatase-4 (DUSP4) was significantly upregulated by sanguinarine in BxPC-3 and MIA PaCa-2 cells. Sanguinarine treatment also caused down-regulation of HIF1α and PCNA, and increased cleavage of PARP and Caspase-7. Taken together, sanguinarine appears to have pleiotropic effects, as it modulates multiple key signaling pathways, supporting the potential usefulness of sanguinarine against pancreatic cancer. PMID:25929337

  12. Profiling and Improving I/O Performance of a Large-Scale Climate Scientific Application

    NASA Technical Reports Server (NTRS)

    Liu, Zhuo; Wang, Bin; Wang, Teng; Tian, Yuan; Xu, Cong; Wang, Yandong; Yu, Weikuan; Cruz, Carlos A.; Zhou, Shujia; Clune, Tom; Klasky, Scott

    2013-01-01

    Exascale computing systems are soon to emerge, and they will pose great challenges owing to the huge gap between computing and I/O performance. Many large-scale scientific applications play an important role in our daily life. The huge amounts of data generated by such applications require highly parallel and efficient I/O management policies. In this paper, we adopt a mission-critical scientific application, GEOS-5, as a case to profile and analyze the communication and I/O issues that are preventing applications from fully utilizing the underlying parallel storage systems. Through detailed architectural and experimental characterization, we observe that current legacy I/O schemes incur significant network communication overheads and are unable to fully parallelize the data access, thus degrading applications' I/O performance and scalability. To address these inefficiencies, we redesign the GEOS-5 I/O framework along with a set of parallel I/O techniques to achieve high scalability and performance. Evaluation results on the NASA Discover cluster show that our optimization of GEOS-5 with ADIOS has led to significant performance improvements compared to the original GEOS-5 implementation.

  13. Vehicle impoundments improve drinking and driving licence suspension outcomes: Large-scale evidence from Ontario.

    PubMed

    Byrne, Patrick A; Ma, Tracey; Elzohairy, Yoassry

    2016-10-01

    Although vehicle impoundment has become a common sanction for various driving offences, large-scale evaluations of its effectiveness in preventing drinking and driving recidivism are almost non-existent in the peer-reviewed literature. One reason is that impoundment programs have typically been introduced simultaneously with other countermeasures, rendering it difficult to disentangle any observed effects. Previous studies of impoundment effectiveness conducted when such programs were implemented in isolation have typically been restricted to small jurisdictions, making high-quality evaluation difficult. In contrast, Ontario's "long-term" and "seven-day" impoundment programs were implemented in relative isolation, but with tight relationships to already existing drinking and driving suspensions. In this work, we used offence data produced by Ontario's population of over 9 million licensed drivers to perform interrupted time series analysis on drinking and driving recidivism and on rates of driving while suspended for drinking and driving. Our results demonstrate two key findings: (1) impoundment, or its threat, improves compliance with drinking and driving licence suspensions; and (2) addition of impoundment to suspension reduces drinking and driving recidivism, possibly through enhanced suspension compliance.
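    The analysis described above rests on interrupted time series methods. As a hedged illustration only, the sketch below fits a simple segmented regression with a level change and a slope change at the intervention month; the plain OLS error model, the data, and the variable names are assumptions, not the authors' specification for the Ontario data.

```python
# Illustrative sketch of an interrupted time series (segmented regression) model of monthly
# offence rates, with a level change and a slope change at the intervention month. The data,
# variable names, and the plain OLS error model are assumptions for the example only.
import numpy as np
import statsmodels.api as sm

def fit_interrupted_series(rates, intervention_index):
    """rate ~ intercept + time + step(intervention) + time_since_intervention."""
    time = np.arange(len(rates))
    step = (time >= intervention_index).astype(float)           # immediate level change
    time_after = np.maximum(0, time - intervention_index)       # post-intervention slope change
    X = sm.add_constant(np.column_stack([time, step, time_after]))
    return sm.OLS(np.asarray(rates, dtype=float), X).fit()

# e.g.: res = fit_interrupted_series(monthly_recidivism_rate, intervention_index=36)
#       print(res.params)   # [intercept, baseline trend, level change, trend change]
```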

  14. Large Scale Data Mining to Improve Usability of Data: An Intelligent Archive Testbed

    NASA Technical Reports Server (NTRS)

    Ramapriyan, Hampapuram; Isaac, David; Yang, Wenli; Morse, Steve

    2005-01-01

    Research in certain scientific disciplines - including Earth science, particle physics, and astrophysics - continually faces the challenge that the volume of data needed to perform valid scientific research can at times overwhelm even a sizable research community. The desire to improve utilization of this data gave rise to the Intelligent Archives project, which seeks to make data archives active participants in a knowledge building system capable of discovering events or patterns that represent new information or knowledge. Data mining can automatically discover patterns and events, but it is generally viewed as unsuited for large-scale use in disciplines like Earth science that routinely involve very high data volumes. Dozens of research projects have shown promising uses of data mining in Earth science, but all of these are based on experiments with data subsets of a few gigabytes or less, rather than the terabytes or petabytes typically encountered in operational systems. To bridge this gap, the Intelligent Archives project is establishing a testbed with the goal of demonstrating the use of data mining techniques in an operationally-relevant environment. This paper discusses the goals of the testbed and the design choices surrounding critical issues that arose during testbed implementation.

  15. Improvement of Baltic proper water quality using large-scale ecological engineering.

    PubMed

    Stigebrandt, Anders; Gustafsson, Bo G

    2007-04-01

    Eutrophication of the Baltic proper has led to impaired water quality, demonstrated by, e.g., extensive blooming of cyanobacteria during the premium summer holiday season and severe oxygen deficit in the deepwater. Sustainable improvements in water quality by the reduction of phosphorus (P) supplies will take several decades before giving full effects because of large P storages both in soils in the watershed and in the water column and bottom sediments of the Baltic proper. In this article it is shown that drastically improved water quality may be obtained within a few years using large-scale ecological engineering methods. Natural variations in the Baltic proper during the last decades have demonstrated how rapid improvements may be achieved. The present article describes the basic dynamics of P, organic matter, and oxygen in the Baltic proper. It also briefly discusses the advantages and disadvantages of different classes of methods of ecological engineering aimed at restoring the Baltic proper from eutrophication effects. Preliminary computations show that the P content might be halved within a few years if about 100 kg O2 s^-1 are supplied to the upper deepwater. This would require 100 pump stations, each transporting about 100 m^3 s^-1 of oxygen-rich so-called winter water from about 50 to 125 m depth where the water is released as a buoyant jet. Each pump station needs a power supply of 0.6 MW. Offshore wind power technology seems mature enough to provide the power needed by the pump stations. The cost to install 100 wind-powered pump stations, each with 0.6 MW power, at about 125-m depth is about 200 million Euros. PMID:17520945

  16. Improvement of Baltic proper water quality using large-scale ecological engineering.

    PubMed

    Stigebrandt, Anders; Gustafsson, Bo G

    2007-04-01

    Eutrophication of the Baltic proper has led to impaired water quality, demonstrated by, e.g., extensive blooming of cyanobacteria during the premium summer holiday season and severe oxygen deficit in the deepwater. Sustainable improvements in water quality by the reduction of phosphorus (P) supplies will take several decades before giving full effects because of large P storages both in soils in the watershed and in the water column and bottom sediments of the Baltic proper. In this article it is shown that drastically improved water quality may be obtained within a few years using large-scale ecological engineering methods. Natural variations in the Baltic proper during the last decades have demonstrated how rapid improvements may be achieved. The present article describes the basic dynamics of P, organic matter, and oxygen in the Baltic proper. It also briefly discusses the advantages and disadvantages of different classes of methods of ecological engineering aimed at restoring the Baltic proper from eutrophication effects. Preliminary computations show that the P content might be halved within a few years if about 100 kg O2 s^-1 are supplied to the upper deepwater. This would require 100 pump stations, each transporting about 100 m^3 s^-1 of oxygen-rich so-called winter water from about 50 to 125 m depth where the water is released as a buoyant jet. Each pump station needs a power supply of 0.6 MW. Offshore wind power technology seems mature enough to provide the power needed by the pump stations. The cost to install 100 wind-powered pump stations, each with 0.6 MW power, at about 125-m depth is about 200 million Euros.

  17. Improving urban streamflow forecasting using a high-resolution large scale modeling framework

    NASA Astrophysics Data System (ADS)

    Read, Laura; Hogue, Terri; Gochis, David; Salas, Fernando

    2016-04-01

    Urban flood forecasting is a critical component in effective water management, emergency response, regional planning, and disaster mitigation. As populations across the world continue to move to cities (~1.8% growth per year), and studies indicate that significant flood damages are occurring outside the floodplain in urban areas, the ability to model and forecast flow over the urban landscape becomes critical to maintaining infrastructure and society. In this work, we use the Weather Research and Forecasting-Hydrological (WRF-Hydro) modeling framework as a platform for testing improvements to the representation of urban land cover, impervious surfaces, and urban infrastructure. The three improvements we evaluate are: updating the land cover to the latest 30-meter National Land Cover Dataset, routing flow over a high-resolution 30-meter grid, and testing a methodology for integrating an urban drainage network into the routing regime. We evaluate the performance of these improvements in the WRF-Hydro model for specific flood events in the Denver-Metro Colorado domain, comparing against historic gaged streamflow for retrospective forecasts. Denver-Metro provides an interesting case study as it is a rapidly growing urban/peri-urban region with an active history of flooding events that have caused significant loss of life and property. Considering that the WRF-Hydro model will soon be implemented nationally in the U.S. to provide flow forecasts on the National Hydrography Dataset Plus river reaches (increasing capability from 3,600 forecast points to 2.7 million), we anticipate that this work will support validation of this service in urban areas for operational forecasting. Broadly, this research aims to provide guidance for integrating complex urban infrastructure with a large-scale, high-resolution coupled land-surface and distributed hydrologic model.

  18. The Proteomic Landscape of the Suprachiasmatic Nucleus Clock Reveals Large-Scale Coordination of Key Biological Processes

    PubMed Central

    Chiang, Cheng-Kang; Mehta, Neel; Patel, Abhilasha; Zhang, Peng; Ning, Zhibin; Mayne, Janice; Sun, Warren Y. L.

    2014-01-01

    The suprachiasmatic nucleus (SCN) acts as the central clock to coordinate circadian oscillations in mammalian behavior, physiology and gene expression. Despite our knowledge of the circadian transcriptome of the SCN, how it impacts genome-wide protein expression is not well understood. Here, we interrogated the murine SCN proteome across the circadian cycle using SILAC-based quantitative mass spectrometry. Of the 2112 proteins that were accurately quantified, 20% (421 proteins) displayed a time-of-day-dependent expression profile. Within this time-of-day proteome, 11% (48 proteins) were further defined as circadian based on a sinusoidal expression pattern with a ∼24 h period. Nine circadianly expressed proteins exhibited 24 h rhythms at the transcript level, with an average time lag that exceeded 8 h. A substantial proportion of the time-of-day proteome exhibited abrupt fluctuations at the anticipated light-to-dark and dark-to-light transitions, and was enriched for proteins involved in several key biological pathways, most notably, mitochondrial oxidative phosphorylation. Additionally, predicted targets of miR-133ab were enriched in specific hierarchical clusters and were inversely correlated with miR-133ab expression in the SCN. These insights into the proteomic landscape of the SCN will facilitate a more integrative understanding of cellular control within the SCN clock. PMID:25330117
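    A sinusoidal expression pattern with a roughly 24 h period can be screened for with a fixed-period cosinor fit. The sketch below is a minimal illustration of that idea; the amplitude threshold, the least-squares fit, and the toy data are assumptions for the example, not the study's statistical criteria.

```python
# Illustrative sketch of flagging a protein time course as circadian by fitting a fixed-period
# (24 h) sinusoid; the amplitude threshold and simple least-squares fit are assumptions.
import numpy as np
from scipy.optimize import curve_fit

def cosinor(t, mesor, amplitude, acrophase, period=24.0):
    return mesor + amplitude * np.cos(2.0 * np.pi * (t - acrophase) / period)

def looks_circadian(times_h, abundance, min_relative_amplitude=0.1):
    """Fit mesor, amplitude, and acrophase with the period fixed at 24 h."""
    p0 = [np.mean(abundance), np.ptp(abundance) / 2.0, 0.0]
    params, _ = curve_fit(cosinor, times_h, abundance, p0=p0)
    mesor, amplitude, _ = params
    return abs(amplitude) / abs(mesor) >= min_relative_amplitude, params

times = np.arange(0.0, 48.0, 4.0)   # two cycles sampled every 4 h (toy data)
signal = 1.0 + 0.3 * np.cos(2.0 * np.pi * times / 24.0) + 0.02 * np.random.randn(times.size)
print(looks_circadian(times, signal))
```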

  19. School Improvement Networks as a Strategy for Large-Scale Education Reform: The Role of Educational Environments

    ERIC Educational Resources Information Center

    Glazer, Joshua L.; Peurach, Donald J.

    2013-01-01

    The development and scale-up of school improvement networks is among the most important educational innovations of the last decade, and current federal, state, and district efforts attempt to use school improvement networks as a mechanism for supporting large-scale change. The potential of improvement networks, however, rests on the extent to…

  20. Improvement of methods for large scale sequencing; application to human Xq28

    SciTech Connect

    Gibbs, R.A.; Andersson, B.; Wentland, M.A.

    1994-09-01

    Sequencing of a one-megabase region of Xq28, spanning the FRAXA and IDS loci, has been undertaken in order to investigate the practicality of the shotgun approach for large-scale sequencing and as a platform to develop improved methods. The efficiency of several steps in the shotgun sequencing strategy has been increased using PCR-based approaches. An improved method for preparation of M13 libraries has been developed. This protocol combines a previously described adaptor-based protocol with the uracil DNA glycosylase (UDG)-cloning procedure. The efficiency of this procedure has been found to be up to 100-fold higher than that of previously used protocols. In addition, the novel protocol is more reliable and thus easy to establish in a laboratory. The method has also been adapted for the simultaneous shotgun sequencing of multiple short fragments by concentrating them before library construction. This protocol is suitable for rapid characterization of cDNA clones. A library was constructed from 15 PCR-amplified and concentrated human cDNA inserts; the insert sequences could easily be identified as separate contigs during the assembly process, and the sequence coverage was even along each fragment. Using this strategy, the fine structures of the FRAXA and IDS loci have been revealed and several EST homologies indicating novel expressed sequences have been identified. Use of PCR to close repetitive regions that are difficult to clone was tested by determining the sequence of a cosmid mapping to DXS455 in Xq28, which contains a polymorphic VNTR. The region containing the VNTR was not represented in the shotgun library, but by designing PCR primers in the sequences flanking the gap and by cloning and sequencing the PCR product, the fine structure of the VNTR has been determined. It was found to be an AT-rich VNTR with a repeated 25-mer at the center.

  1. Improving the local relevance of large scale water demand predictions: the way forward

    NASA Astrophysics Data System (ADS)

    Bernhard, Jeroen; Reynaud, Arnaud; de Roo, Ad

    2016-04-01

    use and water prices. Subsequently, econometric estimates allow us to make a monetary valuation of water and to identify the dominant drivers of domestic and industrial water demand per country. Combined with socio-economic, demographic, and climate scenarios, we made predictions for future Europe. Since this is a first attempt, we obtained mixed results across countries with respect to data availability and therefore model uncertainty. For some countries we were able to develop robust predictions based on vast amounts of data, while other countries proved more challenging. We do feel, however, that large-scale predictions based on regional data are the way forward for providing relevant scientific policy support. To improve on our work, it is imperative to further expand our database of consistent regional data. We look forward to any kind of input and would be very interested in sharing our data to collaborate towards a better understanding of the water use system.

  2. Developmental and Subcellular Organization of Single-Cell C₄ Photosynthesis in Bienertia sinuspersici Determined by Large-Scale Proteomics and cDNA Assembly from 454 DNA Sequencing.

    PubMed

    Offermann, Sascha; Friso, Giulia; Doroshenk, Kelly A; Sun, Qi; Sharpe, Richard M; Okita, Thomas W; Wimmer, Diana; Edwards, Gerald E; van Wijk, Klaas J

    2015-05-01

    Kranz C4 species strictly depend on separation of primary and secondary carbon fixation reactions in different cell types. In contrast, the single-cell C4 (SCC4) species Bienertia sinuspersici utilizes intracellular compartmentation including two physiologically and biochemically different chloroplast types; however, information on identity, localization, and induction of proteins required for this SCC4 system is currently very limited. In this study, we determined the distribution of photosynthesis-related proteins and the induction of the C4 system during development by label-free proteomics of subcellular fractions and leaves of different developmental stages. This was enabled by inferring a protein sequence database from 454 sequencing of Bienertia cDNAs. Large-scale proteome rearrangements were observed as C4 photosynthesis developed during leaf maturation. The proteomes of the two chloroplasts are different with differential accumulation of linear and cyclic electron transport components, primary and secondary carbon fixation reactions, and a triose-phosphate shuttle that is shared between the two chloroplast types. This differential protein distribution pattern suggests the presence of a mRNA or protein-sorting mechanism for nuclear-encoded, chloroplast-targeted proteins in SCC4 species. The combined information was used to provide a comprehensive model for NAD-ME type carbon fixation in SCC4 species.

  3. Fast and Accurate Protein False Discovery Rates on Large-Scale Proteomics Data Sets with Percolator 3.0

    NASA Astrophysics Data System (ADS)

    The, Matthew; MacCoss, Michael J.; Noble, William S.; Käll, Lukas

    2016-08-01

    Percolator is a widely used software tool that increases yield in shotgun proteomics experiments and assigns reliable statistical confidence measures, such as q values and posterior error probabilities, to peptides and peptide-spectrum matches (PSMs) from such experiments. Percolator's processing speed has been sufficient for typical data sets consisting of hundreds of thousands of PSMs. With our new scalable approach, we can now also analyze millions of PSMs in a matter of minutes on a commodity computer. Furthermore, with the increasing awareness of the need for reliable statistics at the protein level, we compared several easy-to-understand protein inference methods and implemented the best-performing method, which groups proteins by their corresponding sets of theoretical peptides and then considers only the best-scoring peptide for each protein, in the Percolator package. We used Percolator 3.0 to analyze the data from a recent study of the draft human proteome containing 25 million spectra (PM:24870542). The source code and Ubuntu, Windows, MacOS, and Fedora binary packages are available from http://percolator.ms/ under an Apache 2.0 license.
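    The protein inference method described above, grouping proteins by their sets of theoretical peptides and scoring each group by its best-scoring peptide, can be illustrated with a small sketch. The data structures below are hypothetical and only convey the idea, not Percolator's internal implementation.

```python
# Illustrative sketch: group proteins that share the same set of theoretical peptides,
# then score each group by its best-scoring observed peptide. Data structures are hypothetical.
from collections import defaultdict

def infer_protein_groups(protein_to_peptides, peptide_scores):
    """protein_to_peptides: {protein_id: set_of_theoretical_peptides}
       peptide_scores: {peptide: best_search_score_observed}"""
    groups = defaultdict(list)
    for protein, peptides in protein_to_peptides.items():
        groups[frozenset(peptides)].append(protein)       # identical peptide sets -> one group
    scored = []
    for peptides, proteins in groups.items():
        observed = [peptide_scores[p] for p in peptides if p in peptide_scores]
        if observed:
            scored.append((max(observed), sorted(proteins)))   # best-scoring peptide as group score
    return sorted(scored, reverse=True)

print(infer_protein_groups(
    {"ProtA": {"PEPTIDEK", "LSSIR"}, "ProtA_iso": {"PEPTIDEK", "LSSIR"}, "ProtB": {"VATVSLPR"}},
    {"PEPTIDEK": 3.2, "VATVSLPR": 1.1}))
```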

  4. Fast and Accurate Protein False Discovery Rates on Large-Scale Proteomics Data Sets with Percolator 3.0

    NASA Astrophysics Data System (ADS)

    The, Matthew; MacCoss, Michael J.; Noble, William S.; Käll, Lukas

    2016-11-01

    Percolator is a widely used software tool that increases yield in shotgun proteomics experiments and assigns reliable statistical confidence measures, such as q values and posterior error probabilities, to peptides and peptide-spectrum matches (PSMs) from such experiments. Percolator's processing speed has been sufficient for typical data sets consisting of hundreds of thousands of PSMs. With our new scalable approach, we can now also analyze millions of PSMs in a matter of minutes on a commodity computer. Furthermore, with the increasing awareness of the need for reliable statistics at the protein level, we compared several easy-to-understand protein inference methods and implemented the best-performing method, which groups proteins by their corresponding sets of theoretical peptides and then considers only the best-scoring peptide for each protein, in the Percolator package. We used Percolator 3.0 to analyze the data from a recent study of the draft human proteome containing 25 million spectra (PM:24870542). The source code and Ubuntu, Windows, MacOS, and Fedora binary packages are available from http://percolator.ms/ under an Apache 2.0 license.

  5. Transcriptomic and proteomic responses of Serratia marcescens to spaceflight conditions involve large-scale changes in metabolic pathways

    NASA Astrophysics Data System (ADS)

    Wang, Yajuan; Yuan, Yanting; Liu, Jinwen; Su, Longxiang; Chang, De; Guo, Yinghua; Chen, Zhenhong; Fang, Xiangqun; Wang, Junfeng; Li, Tianzhi; Zhou, Lisha; Fang, Chengxiang; Yang, Ruifu; Liu, Changting

    2014-04-01

    The microgravity environment of spaceflight expeditions has been associated with altered microbial responses. This study explores the characterization of Serratia marcescens grown in a spaceflight environment at the phenotypic, transcriptomic and proteomic levels. From November 1, 2011 to November 17, 2011, a strain of S. marcescens was sent into space for 398 h on the Shenzhou VIII spacecraft, and ground simulation was performed as a control (LCT-SM213). After the flight, two mutant strains (LCT-SM166 and LCT-SM262) were selected for further analysis. Although no changes in the morphology, post-culture growth kinetics, hemolysis or antibiotic sensitivity were observed, the two mutant strains exhibited significant changes in their metabolic profiles after exposure to spaceflight. Enrichment analysis of the transcriptome showed that the differentially expressed genes of the two spaceflight strains and the ground control strain mainly included those involved in metabolism and degradation. The proteome revealed that changes at the protein level were also associated with metabolic functions, such as glycolysis/gluconeogenesis, pyruvate metabolism, arginine and proline metabolism and the degradation of valine, leucine and isoleucine. In summary, S. marcescens showed alterations primarily in genes and proteins that were associated with metabolism under spaceflight conditions, which gave us valuable clues for future research.

  6. Comparison of the large-scale periplasmic proteomes of the Escherichia coli K-12 and B strains.

    PubMed

    Han, Mee-Jung; Kim, Jin Young; Kim, Jung A

    2014-04-01

    Escherichia coli typically secretes many proteins into the periplasmic space, and the periplasmic proteins have been used for the secretory production of various proteins by the biotechnology industry. However, the identity of all of the E. coli periplasmic proteins remains unknown. Here, high-resolution periplasmic proteome reference maps of the E. coli K-12 and B strains were constructed and compared. Of the 145 proteins identified by tandem mass spectrometry, 61 proteins were conserved in the two strains, whereas 11 and 12 strain-specific proteins were identified for the E. coli K-12 and B strains, respectively. In addition, 27 proteins exhibited differences in intensities greater than 2-fold between the K-12 and B strains. The periplasmic proteins MalE and OppA were the most abundant proteins in the two E. coli strains. Distinctive differences between the two strains included several proteins that were caused by genetic variations, such as CybC, FliC, FliY, KpsD, MglB, ModA, and Ybl119, hydrolytic enzymes, particularly phosphatases, glycosylases, and proteases, and many uncharacterized proteins. Compared to previous studies, the localization of many proteins, including 30 proteins for the K-12 strain and 53 proteins for the B strain, was newly identified as periplasmic. This study identifies the largest number of proteins in the E. coli periplasm as well as the dynamics of these proteins. Additionally, these findings are summarized as reference proteome maps that will be useful for studying protein secretion and may provide new strategies for the enhanced secretory production of recombinant proteins.

  7. Maximizing the sensitivity and reliability of peptide identification in large-scale proteomic experiments by harnessing multiple search engines.

    PubMed

    Yu, Wen; Taylor, J Alex; Davis, Michael T; Bonilla, Leo E; Lee, Kimberly A; Auger, Paul L; Farnsworth, Chris C; Welcher, Andrew A; Patterson, Scott D

    2010-03-01

    Despite recent advances in qualitative proteomics, the automatic identification of peptides with optimal sensitivity and accuracy remains a difficult goal. To address this deficiency, a novel algorithm, Multiple Search Engines, Normalization and Consensus, is described. The method employs six search engines and a re-scoring engine to search MS/MS spectra against protein and decoy sequences. After the peptide hits from each engine are normalized to error rates estimated from the decoy hits, peptide assignments are then deduced using a minimum consensus model. These assignments are produced in a series of progressively relaxed false-discovery rates, thus enabling a comprehensive interpretation of the data set. Additionally, the estimated false-discovery rate was found to have good concordance with the observed false-positive rate calculated from known identities. Benchmarking against standard protein data sets (ISBv1, sPRG2006) and their published analyses demonstrated that the Multiple Search Engines, Normalization and Consensus algorithm consistently achieved significantly higher sensitivity in peptide identifications, which led to increased or more robust protein identifications in all data sets compared with prior methods. The sensitivity and the false-positive rate of peptide identification exhibit an inversely proportional and linear relationship with the number of participating search engines.
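    The core idea, normalizing each engine's hits to decoy-estimated error rates and then requiring a minimum consensus across engines, can be sketched as follows. The simple FDR estimate, the thresholds, and the data structures are assumptions for illustration, not the published algorithm.

```python
# Illustrative sketch: normalize each engine's peptide hits to a decoy-estimated error rate,
# then accept peptides supported by a minimum number of engines. Thresholds are assumptions.
def decoy_fdr_threshold(hits, max_fdr):
    """hits: list of (score, is_decoy), higher score = better.
    Return the lowest score cutoff at which the decoy-estimated FDR still stays below max_fdr."""
    hits = sorted(hits, key=lambda h: h[0], reverse=True)
    decoys = 0
    cutoff = float("inf")
    for i, (score, is_decoy) in enumerate(hits, start=1):
        decoys += is_decoy
        if decoys / i > max_fdr:                 # crude decoy-based FDR estimate
            break
        cutoff = score
    return cutoff

def consensus_peptides(engine_hits, max_fdr=0.01, min_engines=2):
    """engine_hits: {engine: {peptide: (score, is_decoy)}}. Keep target peptides that pass
    the per-engine FDR cutoff in at least min_engines engines."""
    votes = {}
    for engine, peptides in engine_hits.items():
        cutoff = decoy_fdr_threshold(list(peptides.values()), max_fdr)
        for pep, (score, is_decoy) in peptides.items():
            if not is_decoy and score >= cutoff:
                votes[pep] = votes.get(pep, 0) + 1
    return {pep for pep, n in votes.items() if n >= min_engines}
```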

  8. DeepMeSH: deep semantic representation for improving large-scale MeSH indexing

    PubMed Central

    Peng, Shengwen; You, Ronghui; Wang, Hongning; Zhai, Chengxiang; Mamitsuka, Hiroshi; Zhu, Shanfeng

    2016-01-01

    Motivation: Medical Subject Headings (MeSH) indexing, which is to assign a set of MeSH main headings to citations, is crucial for many important tasks in biomedical text mining and information retrieval. Large-scale MeSH indexing has two challenging aspects: the citation side and the MeSH side. For the citation side, all existing methods, including the Medical Text Indexer (MTI) by the National Library of Medicine and the state-of-the-art method, MeSHLabeler, treat text as a bag-of-words, which cannot capture semantic and context-dependent information well. Methods: We propose DeepMeSH, which incorporates deep semantic information for large-scale MeSH indexing. It addresses the two challenges on both the citation and MeSH sides. The citation side challenge is solved by a new deep semantic representation, D2V-TFIDF, which concatenates both sparse and dense semantic representations. The MeSH side challenge is solved by using the ‘learning to rank’ framework of MeSHLabeler, which integrates various types of evidence generated from the new semantic representation. Results: DeepMeSH achieved a Micro F-measure of 0.6323, 2% higher than the 0.6218 of MeSHLabeler and 12% higher than the 0.5637 of MTI, for BioASQ3 challenge data with 6000 citations. Availability and Implementation: The software is available upon request. Contact: zhusf@fudan.edu.cn Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27307646
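    The D2V-TFIDF representation concatenates a dense document embedding with a sparse TF-IDF vector. The sketch below shows that concatenation in the same spirit, assuming gensim (4.x) and scikit-learn are available; the parameters and toy data are illustrative, not those used for DeepMeSH.

```python
# Illustrative sketch of a combined dense + sparse citation representation in the spirit of
# D2V-TFIDF: a doc2vec embedding concatenated with a TF-IDF vector. Assumes gensim >= 4.x and
# scikit-learn; parameters and example text are illustrative only.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

def d2v_tfidf_features(citations, dense_dim=100):
    """citations: list of abstract strings. Returns an (n_docs, dense_dim + vocab_size) matrix."""
    tagged = [TaggedDocument(text.lower().split(), [i]) for i, text in enumerate(citations)]
    d2v = Doc2Vec(tagged, vector_size=dense_dim, min_count=1, epochs=20, seed=0)
    dense = np.vstack([d2v.dv[i] for i in range(len(citations))])      # dense semantic part
    sparse = TfidfVectorizer().fit_transform(citations).toarray()      # sparse lexical part
    return np.hstack([dense, sparse])                                  # concatenated representation

X = d2v_tfidf_features(["large scale mesh indexing of biomedical citations",
                        "deep semantic representation improves indexing"])
print(X.shape)
```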

  9. The relationship between large-scale and convective states in the tropics - Towards an improved representation of convection in large-scale models

    SciTech Connect

    Jakob, Christian

    2015-02-26

    This report summarises an investigation into the relationship of tropical thunderstorms to the atmospheric conditions they are embedded in. The study is based on the use of radar observations at the Atmospheric Radiation Measurement site in Darwin run under the auspices of the DOE Atmospheric Systems Research program. Linking the larger scales of the atmosphere with the smaller scales of thunderstorms is crucial for the development of the representation of thunderstorms in weather and climate models, which is carried out by a process termed parametrisation. Through the analysis of radar and wind profiler observations the project made several fundamental discoveries about tropical storms and quantified the relationship of the occurrence and intensity of these storms to the large-scale atmosphere. We were able to show that the rainfall averaged over an area the size of a typical climate model grid-box is largely controlled by the number of storms in the area, and less so by the storm intensity. This allows us to completely rethink the way we represent such storms in climate models. We also found that storms occur in three distinct categories based on their depth and that the transition between these categories is strongly related to the larger scale dynamical features of the atmosphere more so than its thermodynamic state. Finally, we used our observational findings to test and refine a new approach to cumulus parametrisation which relies on the stochastic modelling of the area covered by different convective cloud types.

  10. Improving irrigation efficiency in Italian apple orchards: A large-scale approach

    NASA Astrophysics Data System (ADS)

    Della Chiesa, Stefano; la Cecilia, Daniele; Niedrist, Georg; Hafner, Hansjörg; Thalheimer, Martin; Tappeiner, Ulrike

    2016-04-01

    The northern Italian region of South Tyrol is Europe's largest apple-growing area. To achieve economically relevant fruit quality and quantity, the relatively dry climate of the region (450-700 mm per year) is compensated by large-scale irrigation management, which until now has followed old, traditional rights. Due to ongoing climatic changes and rising public sensitivity toward the sustainable use of water resources, irrigation practices are increasingly discussed critically. In order to establish an objective and quantitative basis of information to optimise irrigation practice, 17 existing microclimatic stations were upgraded with soil moisture and soil water potential sensors. As a second information layer, a data set of 20,000 soil analyses has been geo-referenced and spatialized using a modern geostatistical method. Finally, to assess whether zones with a shallow aquifer influence soil water availability, data from 70 groundwater depth measuring stations were retrieved. The preliminary results highlight that in many locations, in particular in the valley bottoms, irrigation largely exceeds plant water needs, because either the shallow aquifer provides sufficient water supply by capillary rise into the root zone or irrigation is applied without accounting for the specific soil properties.

  11. Large-Scale Multiplexed Quantitative Discovery Proteomics Enabled by the Use of an O-18-Labeled “Universal” Reference Sample

    SciTech Connect

    Qian, Weijun; Liu, Tao; Petyuk, Vladislav A.; Gritsenko, Marina A.; Petritis, Brianne O.; Polpitiya, Ashoka D.; Kaushal, Amit; Xiao, Wenzhong; Finnerty, Celeste C.; Jeschke, Marc G.; Jaitly, Navdeep; Monroe, Matthew E.; Moore, Ronald J.; Moldawer, Lyle L.; Davis, Ronald W.; Tompkins, Ronald G.; Herndon, David N.; Camp, David G.; Smith, Richard D.

    2009-01-01

    Quantitative comparison of protein abundances across a relatively large number of patient samples is an important challenge for clinical proteomic applications. Herein we describe a dual-quantitation strategy that allows the simultaneous integration of complementary label-free and stable-isotope-labeling-based approaches without increasing the number of LC-MS analyses. The approach utilizes a stable isotope 18O-labeled “universal” reference sample as a comprehensive set of internal standards spiked into each individually processed unlabeled patient sample. The quantitative data are based on both the direct 16O-MS intensities for label-free quantitation and the 16O/18O isotopic peptide pair ratios that compare each patient sample to the identical labeled reference. The effectiveness of this dual-quantitation approach for large-scale quantitative proteomics is demonstrated by its application to a set of 38 clinical plasma samples from surviving and non-surviving severe burn patients. With the coupling of immunoaffinity depletion, cysteinyl-peptide enrichment-based fractionation, high-resolution LC-MS measurements, and the dual-quantitation approach, a total of 318 proteins were confidently quantified with at least two peptides, and 263 proteins were quantified by both approaches. The strategy also enabled a direct comparison between the two approaches, with the labeling approach showing significantly better precision in quantitation, while the label-free approach resulted in more protein identifications. The relative abundance differences determined by the two approaches also showed strong correlation. Finally, the dual-quantitation strategy allowed us to identify more candidate protein biomarkers, illustrating the complementary nature of the two quantitative methods.
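    The dual-quantitation idea, reporting for each peptide both a label-free 16O intensity and a 16O/18O ratio against the spiked labeled reference, can be sketched in a few lines. The data layout below is an assumption for illustration, not the published processing pipeline.

```python
# Illustrative sketch of dual quantitation: for each peptide, report (i) the label-free 16O
# intensity and (ii) the 16O/18O ratio against the spiked 18O-labeled universal reference.
# The data layout and peptide names are assumptions for the example.
import math

def dual_quant(peptide_pairs):
    """peptide_pairs: {peptide: (intensity_16O_sample, intensity_18O_reference)}.
    Returns per-peptide log10 label-free intensity and log2 sample/reference ratio."""
    results = {}
    for pep, (i16, i18) in peptide_pairs.items():
        label_free = math.log10(i16) if i16 > 0 else float("nan")
        ratio = math.log2(i16 / i18) if i16 > 0 and i18 > 0 else float("nan")
        results[pep] = {"log10_intensity": label_free, "log2_ratio_vs_reference": ratio}
    return results

print(dual_quant({"ELVISLIVESK": (2.4e7, 1.8e7), "PEPTIDER": (5.0e5, 9.0e5)}))
```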

  12. Leveraging Technology to Improve Developmental Mathematics Course Completion: Evaluation of a Large-Scale Intervention

    ERIC Educational Resources Information Center

    Wladis, Claire; Offenholley, Kathleen; George, Michael

    2014-01-01

    This study hypothesizes that course passing rates in remedial mathematics classes can be improved through early identification of at-risk students using a department-wide midterm, followed by a mandated set of online intervention assignments incorporating immediate and elaborate feedback for all students identified as "at-risk" by their…

  13. A practical improvement, enhancing the large-scale synthesis of (+)-discodermolide: a third-generation approach.

    PubMed

    Smith, Amos B; Freeze, B Scott; Brouard, Ignacio; Hirose, Tomoyasu

    2003-11-13

    A significant improvement to the Penn one-gram synthesis of (+)-discodermolide (1) has been achieved. Specifically, reduction of the steric bulk of the C(11) hydroxyl protecting group permits formation of the requisite AB Wittig salt at the expense of the undesired intramolecular cyclization upon treatment with PPh(3) at ambient pressure.

  14. Improving Large-Scale Testing Capability by Modifying the 40- by 80-ft Wind Tunnel

    NASA Technical Reports Server (NTRS)

    Mort, Kenneth W.; Soderman, Paul T.; Eckert, William T.

    1979-01-01

    Interagency studies conducted during the last several years have indicated the need to improve full-scale testing capabilities. The studies showed that the most effective trade between test capability and facility cost was provided by re-powering the existing Ames Research Center 40- by 80-ft Wind Tunnel to increase the maximum speed from about 100 m/s (200 knots) to about 150 m/s (300 knots) and by adding a new 24- by 37-m (80- by 120-ft) test section powered for about a 50-m/s (100-knot) maximum speed. This paper reviews the design of the facility, a few of its capabilities, and some of its unique features.

  15. Hydrological improvements for nutrient and pollutant emission modeling in large scale catchments

    NASA Astrophysics Data System (ADS)

    Höllering, S.; Ihringer, J.

    2012-04-01

    An estimation of the emissions and loads of nutrients and pollutants into European water bodies with as much accuracy as possible depends largely on knowledge of the spatially and temporally distributed hydrological runoff patterns. An improved hydrological water balance model for the pollutant emission model MoRE (Modeling of Regionalized Emissions) (IWG, 2011) has been introduced that can form an adequate basis for simulating discharge in a hydrologically differentiated, land-use-based way and subsequently providing the required distributed discharge components. First of all, the hydrological model had to meet requirements of both space and time in order to calculate the water balance with sufficient precision at the catchment scale, spatially distributed in sub-catchments and with a higher temporal resolution. Aiming to reproduce the seasonal dynamics and the characteristic hydrological regimes of river catchments, a daily (instead of a yearly) time increment was applied, allowing for a more process-oriented simulation of discharge dynamics, volume, and therefore water balance. The enhancement of the hydrological model also became necessary to account for the hydrological functioning of catchments under scenarios of, e.g., a changing climate or alterations of land use. The Precipitation Runoff Modeling System (PRMS) (USGS, 2009), a deterministic, partly physically based, conceptual hydrological watershed and water balance model, was selected to improve the hydrological input for MoRE. In PRMS, the spatial discretization is implemented with sub-catchments and so-called hydrologic response units (HRUs), which are distributed, finite modeling entities, each having a homogeneous runoff reaction to hydro-meteorological events. Spatial structures and heterogeneities in sub-catchments, e.g., urbanity, land use and soil types, were identified to derive hydrological similarities and to classify them into different urban and rural HRUs. In this way the

  16. Social Aspects of Photobooks: Improving Photobook Authoring from Large-Scale Multimedia Analysis

    NASA Astrophysics Data System (ADS)

    Sandhaus, Philipp; Boll, Susanne

    With photo albums we aim to capture personal events such as weddings, vacations, and parties of family and friends. By arranging photo prints, captions and paper souvenirs such as tickets over the pages of a photobook we tell a story to capture and share our memories. The photo memories captured in such a photobook tell us much about the content and the relevance of the photos for the user. The way in which we select photos and arrange them in the photo album reveals a lot about the events, persons and places on the photos: captions describe content, while the closeness and arrangement of photos express relations between the photos and their content, and especially the social relations of the author and the persons present in the album. Nowadays the process of photo album authoring has become digital: photos and texts can be arranged and laid out with the help of authoring tools in a digital photo album, which can be printed as a physical photobook. In this chapter we present results of the analysis of a large repository of digitally mastered photobooks to learn about their social aspects. We explore to what degree a social aspect can be identified and how expressive and vivid different classes of photobooks are. The photobooks are anonymized, real-world photobooks from customers of our industry partner CeWe Color. The knowledge gained from this social photobook analysis is meant both to better understand how people author their photobooks and to improve the automatic selection and layout of photobooks.

  17. Large scale analysis of the mutational landscape in HT-SELEX improves aptamer discovery

    PubMed Central

    Hoinka, Jan; Berezhnoy, Alexey; Dao, Phuong; Sauna, Zuben E.; Gilboa, Eli; Przytycka, Teresa M.

    2015-01-01

    High-Throughput (HT) SELEX combines SELEX (Systematic Evolution of Ligands by EXponential Enrichment), a method for aptamer discovery, with massively parallel sequencing technologies. This emerging technology provides data for a global analysis of the selection process and for simultaneous discovery of a large number of candidates, but it currently lacks dedicated computational approaches for the analysis of these data. To close this gap, we developed novel in silico methods to analyze HT-SELEX data and utilized them to study the emergence of polymerase errors during HT-SELEX. Rather than considering these errors as a nuisance, we demonstrated their utility for guiding aptamer discovery. Our approach builds on two main advancements in aptamer analysis: AptaMut, a novel technique allowing for the identification of polymerase errors conferring an improved binding affinity relative to the ‘parent’ sequence, and AptaCluster, an aptamer clustering algorithm which is, to the best of our knowledge, the only currently available tool capable of efficiently clustering entire aptamer pools. We applied these methods to an HT-SELEX experiment developing aptamers against Interleukin 10 receptor alpha chain (IL-10RA) and experimentally confirmed our predictions, thus validating our computational methods. PMID:25870409
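
    The round-to-round enrichment comparison underlying AptaMut can be sketched as follows; this toy version uses simple pseudo-counts rather than the statistical model of the paper, and the sequences and function names are invented for the example.

    ```python
    from collections import Counter

    def candidate_mutants(round_k, round_k_plus_1, parent):
        """Single-mismatch variants of `parent` whose round-to-round enrichment
        exceeds that of the parent itself (sketch of the AptaMut intuition)."""
        ca, cb = Counter(round_k), Counter(round_k_plus_1)
        enrich = lambda s: (cb[s] + 1) / (ca[s] + 1)   # pseudo-counts avoid division by zero
        hits = []
        for seq in set(ca) | set(cb):
            if (len(seq) == len(parent) and seq != parent
                    and sum(a != b for a, b in zip(seq, parent)) == 1
                    and enrich(seq) > enrich(parent)):
                hits.append((seq, round(enrich(seq), 2)))
        return sorted(hits, key=lambda t: -t[1])

    pool_k  = ["ACGTACGT"] * 50 + ["ACGTACGA"] * 2
    pool_k1 = ["ACGTACGT"] * 60 + ["ACGTACGA"] * 10
    print(candidate_mutants(pool_k, pool_k1, parent="ACGTACGT"))
    ```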

  18. Multimethod study of a large-scale programme to improve patient safety using a harm-free care approach

    PubMed Central

    Power, Maxine; Brewster, Liz; Parry, Gareth; Brotherton, Ailsa; Minion, Joel; Ozieranski, Piotr; McNicol, Sarah; Harrison, Abigail; Dixon-Woods, Mary

    2016-01-01

    Objectives We aimed to evaluate whether a large-scale two-phase quality improvement programme achieved its aims and to characterise the influences on achievement. Setting National Health Service (NHS) in England. Participants NHS staff. Interventions The programme sought to (1) develop a shared national, regional and locally aligned safety focus for 4 high-cost, high-volume harms; (2) establish a new measurement system based on a composite measure of ‘harm-free’ care and (3) deliver improved outcomes. Phase I involved a quality improvement collaborative intended to involve 100 organisations; phase II used financial incentives for data collection. Measures Multimethod evaluation of the programme. In phase I, analysis of regional plans and of rates of data submission and clinical outcomes reported to the programme. A concurrent process evaluation was conducted of phase I, but only data on submission rates and clinical outcomes were available for phase II. Results A context of extreme policy-related structural turbulence impacted strongly on phase I. Most regions' plans did not demonstrate full alignment with the national programme; most fell short of recruitment targets, and attrition in attendance at the collaborative meetings occurred over time. Though collaborative participants saw the principles underlying the programme as attractive, useful and innovative, they often struggled to convert enthusiasm into change. Developing the measurement system was arduous, and it continued to be met with controversy. Data submission rates remained patchy throughout phase I but improved in reach and consistency in phase II in response to financial incentives. Some evidence of improvement in clinical outcomes over time could be detected but was hard to interpret owing to variability in the denominators. Conclusions These findings offer important lessons for large-scale improvement programmes, particularly when they seek to develop novel concepts and measures. External contexts may

  19. An improved method to characterise the modulation of small-scale turbulence by large-scale structures

    NASA Astrophysics Data System (ADS)

    Agostini, Lionel; Leschziner, Michael; Gaitonde, Datta

    2015-11-01

    A key aspect of turbulent boundary layer dynamics is "modulation," which refers to the degree to which the intensity of coherent large-scale structures (LS) amplifies or attenuates the intensity of the small-scale structures (SS) through large-scale linkage. In order to identify the variation of the amplitude of the SS motion, the envelope of the fluctuations needs to be determined. Mathis et al. (2009) proposed to define this envelope by low-pass filtering the modulus of the analytic signal built from the Hilbert transform of the SS. The validity of this definition, as a basis for quantifying the modulated SS signal, is re-examined on the basis of DNS data for a channel flow. The analysis shows that the modulus of the analytic signal is very sensitive to the skewness of its PDF, which depends, in turn, on the sign of the LS fluctuation and thus on whether these fluctuations are associated with sweeps or ejections. The conclusion is that generating an envelope by means of a low-pass filtering step leads to a significant loss of information associated with the effects of the local skewness of the PDF of the SS on the modulation process. An improved Hilbert-transform-based method is proposed to characterize the modulation of SS turbulence by LS structures.
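
    The envelope definition under discussion can be reproduced in a few lines; the sketch below implements the Hilbert-transform modulus followed by the low-pass filtering step (the baseline procedure, not the improved method proposed in the abstract), with the cutoff frequency and filter order chosen arbitrarily.

    ```python
    import numpy as np
    from scipy.signal import hilbert, butter, filtfilt

    def smallscale_envelope(u_ss, fs, cutoff_hz):
        """Envelope of a small-scale (SS) fluctuation signal sampled at fs [Hz]."""
        modulus = np.abs(hilbert(u_ss))          # modulus of the analytic signal
        b, a = butter(4, cutoff_hz / (fs / 2))   # 4th-order low-pass Butterworth filter
        return filtfilt(b, a, modulus)           # zero-phase low-pass filtered envelope
    ```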

  20. Mutual coupling of hydrologic and hydrodynamic models - a viable approach for improved large-scale inundation estimates?

    NASA Astrophysics Data System (ADS)

    Hoch, Jannis; Winsemius, Hessel; van Beek, Ludovicus; Haag, Arjen; Bierkens, Marc

    2016-04-01

    Due to their increasing occurrence rate and associated economic costs, fluvial floods are large-scale and cross-border phenomena that need to be well understood. Sound information about temporal and spatial variations of flood hazard is essential for adequate flood risk management and climate change adaptation measures. While progress has been made in assessments of flood hazard and risk on the global scale, studies to date have made compromises between spatial resolution on the one hand and the local detail that influences temporal characteristics (rate of rise, duration) on the other. Moreover, global models cannot realistically model flood wave propagation due to a lack of detail in channel and floodplain geometry and in the representation of hydrologic processes influencing the surface water balance, such as open-water evaporation from inundated areas and re-infiltration of water into river banks. To overcome these restrictions and to obtain a better understanding of flood propagation, including its spatio-temporal variations at the large scale yet at a sufficiently high resolution, the present study aims to develop a large-scale modeling tool by coupling the global hydrologic model PCR-GLOBWB and the recently developed hydrodynamic model DELFT3D-FM. The former computes surface water volumes which are routed by the latter, solving the full Saint-Venant equations. Because DELFT3D-FM is capable of representing the model domain as a flexible mesh, model accuracy is improved only at relevant locations (the river and adjacent floodplain) and the computation time is not unnecessarily increased. This efficiency is very advantageous for large-scale modelling approaches. The model domain is schematized with 2D floodplains derived from global data sets (HydroSHEDS and G3WBM, respectively). Since a previous study with one-way coupling showed good model performance (J.M. Hoch et al., in prep.), this approach was extended to two-way coupling to fully represent evaporation
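
    Conceptually, the coupling exchanges runoff volumes from the hydrologic model to the hydrodynamic router every time step. The loop below is only a schematic of that exchange; the class and method names are placeholders, not the PCR-GLOBWB or DELFT3D-FM interfaces.

    ```python
    # Schematic one-way coupling loop (placeholder objects, not the real model APIs).
    def run_coupled(hydrology, hydrodynamics, n_days):
        for day in range(n_days):
            hydrology.update()                         # land-surface water balance
            volumes = hydrology.runoff_volumes()       # m3 generated per coupled cell
            hydrodynamics.add_lateral_inflow(volumes)  # hand volumes to the 1D/2D mesh
            hydrodynamics.update()                     # route them (Saint-Venant equations)
            # a two-way scheme would also feed the hydrodynamic state back, e.g.
            # hydrology.set_floodplain_state(hydrodynamics.water_depths())
    ```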

  1. Improving the Communication Pattern in Matrix-Vector Operations for Large Scale-Free Graphs by Disaggregation

    SciTech Connect

    Kuhlemann, Verena; Vassilevski, Panayot S.

    2013-10-28

    Matrix-vector multiplication is the key operation in any Krylov-subspace iteration method. We are interested in Krylov methods applied to problems associated with the graph Laplacian arising from large scale-free graphs. Computations with graphs of this type on parallel distributed-memory computers are challenging because scale-free graphs have a degree distribution that follows a power law, and currently available graph partitioners are not efficient for such an irregular degree distribution. The lack of a good partitioning leads to excessive interprocessor communication requirements during every matrix-vector product. Here, we present an approach to alleviate this problem based on embedding the original irregular graph into a more regular one by disaggregating (splitting up) vertices in the original graph. The matrix-vector operations for the original graph are then performed via a factored triple matrix-vector product involving the embedding graph. Even though the latter graph is larger, we are able to decrease the communication requirements considerably and improve the performance of the matrix-vector product.
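
    The factored triple matrix-vector product can be illustrated with sparse matrices: if the original operator is (hypothetically) expressible as R^T B R, where R maps original vertices to their disaggregated copies and B acts on the larger but more regular embedding graph, then y = A x is computed without ever forming A. The matrices below are random toy data, not the paper's construction.

    ```python
    import numpy as np
    import scipy.sparse as sp

    n, m = 4, 6                                   # original and embedding sizes (toy values)
    copies = [0, 0, 1, 2, 3, 3]                   # embedding vertex -> original vertex
    R = sp.csr_matrix((np.ones(m), (range(m), copies)), shape=(m, n))
    B = sp.random(m, m, density=0.3, format="csr", random_state=0)
    x = np.ones(n)

    y = R.T @ (B @ (R @ x))                       # factored triple product, A never assembled
    ```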

  2. Large-Scale Proteomics of the Cassava Storage Root and Identification of a Target Gene to Reduce Postharvest Deterioration

    PubMed Central

    Vanderschuren, Hervé; Nyaboga, Evans; Poon, Jacquelyne S.; Baerenfaller, Katja; Grossmann, Jonas; Hirsch-Hoffmann, Matthias; Kirchgessner, Norbert; Nanni, Paolo; Gruissem, Wilhelm

    2014-01-01

    Cassava (Manihot esculenta) is the most important root crop in the tropics, but rapid postharvest physiological deterioration (PPD) of the root is a major constraint to commercial cassava production. We established a reliable method for image-based PPD symptom quantification and used label-free quantitative proteomics to generate an extensive cassava root and PPD proteome. Over 2600 unique proteins were identified in the cassava root, and nearly 300 proteins showed significant abundance regulation during PPD. We identified protein abundance modulation in pathways associated with oxidative stress, phenylpropanoid biosynthesis (including scopoletin), the glutathione cycle, fatty acid α-oxidation, folate transformation, and the sulfate reduction II pathway. Increasing protein abundances and enzymatic activities of glutathione-associated enzymes, including glutathione reductases, glutaredoxins, and glutathione S-transferases, indicated a key role for ascorbate/glutathione cycles. Based on combined proteomics data, enzymatic activities, and lipid peroxidation assays, we identified glutathione peroxidase as a candidate for reducing PPD. Transgenic cassava overexpressing a cytosolic glutathione peroxidase in storage roots showed delayed PPD and reduced lipid peroxidation as well as decreased H2O2 accumulation. Quantitative proteomics data from ethene and phenylpropanoid pathways indicate additional gene candidates to further delay PPD. Cassava root proteomics data are available at www.pep2pro.ethz.ch for easy access and comparison with other proteomics data. PMID:24876255

  3. NAEP Validity Studies: Improving the Information Value of Performance Items in Large Scale Assessments. Working Paper No. 2003-08

    ERIC Educational Resources Information Center

    Pearson, P. David; Garavaglia, Diane R.

    2003-01-01

    The purpose of this essay is to explore both what is known and what needs to be learned about the information value of performance items "when they are used in large scale assessments." Within the context of the National Assessment of Educational Progress (NAEP), there is substantial motivation for answering these questions. Over the…

  4. Improving predictions of large scale soil carbon dynamics: Integration of fine-scale hydrological and biogeochemical processes, scaling, and benchmarking

    NASA Astrophysics Data System (ADS)

    Riley, W. J.; Dwivedi, D.; Ghimire, B.; Hoffman, F. M.; Pau, G. S. H.; Randerson, J. T.; Shen, C.; Tang, J.; Zhu, Q.

    2015-12-01

    Numerical model representations of decadal- to centennial-scale soil-carbon dynamics are a dominant cause of uncertainty in climate change predictions. Recent attempts by some Earth System Model (ESM) teams to integrate previously unrepresented soil processes (e.g., explicit microbial processes, abiotic interactions with mineral surfaces, vertical transport), poor performance of many ESM land models against large-scale and experimental-manipulation observations, and complexities associated with spatial heterogeneity highlight the nascent nature of our community's ability to accurately predict future soil carbon dynamics. I will present recent work from our group to develop a modeling framework that integrates pore-, column-, watershed-, and global-scale soil process representations into an ESM (ACME), and to apply the International Land Model Benchmarking (ILAMB) package for evaluation. At the column scale and across a wide range of sites, observed depth-resolved carbon stocks and their 14C-derived turnover times can be explained by a model with explicit representation of two microbial populations, a simple representation of mineralogy, and vertical transport. Integrating soil and plant dynamics requires a 'process-scaling' approach, since not all aspects of the multi-nutrient system can be explicitly resolved at ESM scales. I will show that one approach, the Equilibrium Chemistry Approximation, improves predictions of forest nitrogen and phosphorus experimental manipulations and leads to very different global soil carbon predictions. Translating model representations from the site to the ESM scale requires a spatial scaling approach that either explicitly resolves the relevant processes or, more practically, accounts for fine-resolution dynamics at coarser scales. To that end, I will present recent watershed-scale modeling work that applies reduced-order model methods to accurately scale fine-resolution soil carbon dynamics to coarse-resolution simulations. Finally, we
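
    As a toy counterpart to the explicit-microbe column models mentioned above, the sketch below steps a minimal substrate/microbial-biomass pair forward in time. All parameter values are illustrative placeholders, and the code is not related to the ACME or ILAMB implementations.

    ```python
    def step(C, B, litter_in, dt=1.0, vmax=0.02, km=200.0, cue=0.4, kb=0.005):
        """One daily step of a two-pool model: substrate C and microbial biomass B (gC/m2)."""
        uptake = vmax * B * C / (km + C)   # Michaelis-Menten decomposition of substrate
        dC = litter_in - uptake + kb * B   # substrate gains litter input and dead microbes
        dB = cue * uptake - kb * B         # microbial growth at a fixed carbon-use efficiency
        return C + dt * dC, B + dt * dB

    C, B = 1000.0, 20.0                    # initial stocks
    for _ in range(365):                   # spin forward one year
        C, B = step(C, B, litter_in=1.5)
    ```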

  5. Proteomics: a biotechnology tool for crop improvement

    PubMed Central

    Eldakak, Moustafa; Milad, Sanaa I. M.; Nawar, Ali I.; Rohila, Jai S.

    2013-01-01

    A sharp decline in the availability of arable land and of a sufficient supply of irrigation water, along with a continuous steep increase in food demand, has put pressure on farmers to produce more with fewer resources. A viable way to relieve this pressure is to speed up the plant breeding process by employing biotechnology in breeding programs. The majority of biotechnological applications rely on information generated from various -omic technologies. The latest improvements in proteomic platforms, together with related advances in plant biotechnology techniques, offer new ways to encourage the use of these technologies by plant scientists for crop improvement programs. A combinatorial approach of accelerated gene discovery through genomics, proteomics, and other associated -omic branches of biotechnology is proving to be an effective way to speed up crop improvement programs worldwide. In the near future, swift improvements in -omic databases will become critical and demand immediate attention for the effective utilization of these techniques to produce next-generation crops for progressive farmers. Here, we review recent advances in proteomics, as tools of biotechnology, which offer great promise and lead the path toward crop improvement for sustainable agriculture. PMID:23450788

  6. Toward system-level understanding of baculovirus–host cell interactions: from molecular fundamental studies to large-scale proteomics approaches

    PubMed Central

    Monteiro, Francisca; Carinhas, Nuno; Carrondo, Manuel J. T.; Bernal, Vicente; Alves, Paula M.

    2012-01-01

    Baculoviruses are insect viruses extensively exploited as eukaryotic protein expression vectors. Molecular biology studies have provided exciting discoveries on virus–host interactions, but the application of omic high-throughput techniques on the baculovirus–insect cell system has been hampered by the lack of host genome sequencing. While a broader, systems-level analysis of biological responses to infection is urgently needed, recent advances on proteomic studies have yielded new insights on the impact of infection on the host cell. These works are reviewed and critically assessed in the light of current biological knowledge of the molecular biology of baculoviruses and insect cells. PMID:23162544

  7. Large scale tracking algorithms.

    SciTech Connect

    Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett; Karelitz, David B.; Pitts, Todd Alan; Zollweg, Joshua David; Anderson, Dylan Z.; Nandy, Prabal; Whitlow, Gary L.; Bender, Daniel A.; Byrne, Raymond Harry

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but will unfortunately also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low-resolution sensors, "blob" tracking is the norm. For higher-resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.
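
    One building block shared by most trackers is frame-to-frame data association. The sketch below solves a single global-nearest-neighbour assignment between predicted tracks and detections; it illustrates that one step only, not the multi-hypothesis trackers discussed in the report, and the coordinates are invented.

    ```python
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    tracks = np.array([[10.0, 12.0], [40.0, 41.0]])                   # predicted track positions
    detections = np.array([[11.0, 13.0], [39.0, 40.0], [80.0, 5.0]])  # current-frame detections

    cost = ((tracks[:, None, :] - detections[None, :, :]) ** 2).sum(-1)  # squared distances
    track_idx, det_idx = linear_sum_assignment(cost)                  # optimal track-detection pairs
    ```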

  8. Leveraging Genomics Software to Improve Proteomics Results

    SciTech Connect

    Fodor, I K; Nelson, D O

    2005-09-06

    Rigorous data analysis techniques are essential in quantifying the differential expression of proteins in biological samples of interest. Statistical methods from the microarray literature were applied to the analysis of two-dimensional difference gel electrophoresis (2-D DIGE) proteomics experiments, in the context of technical variability studies involving human plasma. Protein expression measurements were corrected to account for observed intensity-dependent biases within gels, and normalized to mitigate observed gel to gel variations. The methods improved upon the results achieved using the best currently available 2-D DIGE proteomics software. The spot-wise protein variance was reduced by 10% and the number of apparently differentially expressed proteins was reduced by over 50%.
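
    The kind of intensity-dependent bias correction borrowed from the microarray literature can be sketched with a lowess fit on spot-wise log-ratios versus mean log-intensity, plus simple median centring between gels. This is a generic MA-plot-style sketch, not the exact procedure or software used in the report.

    ```python
    import numpy as np
    from statsmodels.nonparametric.smoothers_lowess import lowess

    def correct_intensity_bias(log_ratio, mean_log_intensity, frac=0.4):
        """Subtract the intensity-dependent trend from spot-wise log-ratios."""
        trend = lowess(log_ratio, mean_log_intensity, frac=frac, return_sorted=False)
        return log_ratio - trend

    def median_center(gel_log_intensities):
        """Mitigate gel-to-gel variation by centring each gel on its median."""
        x = np.asarray(gel_log_intensities, dtype=float)
        return x - np.nanmedian(x)
    ```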

  9. Large scale dynamic systems

    NASA Technical Reports Server (NTRS)

    Doolin, B. F.

    1975-01-01

    Classes of large scale dynamic systems were discussed in the context of modern control theory. Specific examples discussed were in the technical fields of aeronautics, water resources and electric power.

  10. Atomistic Origin of Brittle Failure of Boron Carbide from Large-Scale Reactive Dynamics Simulations: Suggestions toward Improved Ductility.

    PubMed

    An, Qi; Goddard, William A

    2015-09-01

    Ceramics are strong, but their low fracture toughness prevents extended engineering applications. In particular, boron carbide (B(4)C), the third hardest material in nature, has not been incorporated into many commercial applications because it exhibits anomalous failure when subjected to hypervelocity impact. To determine the atomistic origin of this brittle failure, we performed large-scale (∼200,000  atoms/cell) reactive-molecular-dynamics simulations of shear deformations of B(4)C, using the quantum-mechanics-derived reactive force field simulation. We examined the (0001)/⟨101̅0⟩ slip system related to deformation twinning and the (011̅1̅)/⟨1̅101⟩ slip system related to amorphous band formation. We find that brittle failure in B(4)C arises from formation of higher density amorphous bands due to fracture of the icosahedra, a unique feature of these boron based materials. This leads to negative pressure and cavitation resulting in crack opening. Thus, to design ductile materials based on B(4)C we propose alloying aimed at promoting shear relaxation through intericosahedral slip that avoids icosahedral fracture.

  11. Atomistic Origin of Brittle Failure of Boron Carbide from Large-Scale Reactive Dynamics Simulations: Suggestions toward Improved Ductility

    NASA Astrophysics Data System (ADS)

    An, Qi; Goddard, William A.

    2015-09-01

    Ceramics are strong, but their low fracture toughness prevents extended engineering applications. In particular, boron carbide (B4C), the third hardest material in nature, has not been incorporated into many commercial applications because it exhibits anomalous failure when subjected to hypervelocity impact. To determine the atomistic origin of this brittle failure, we performed large-scale (∼200,000 atoms/cell) reactive-molecular-dynamics simulations of shear deformations of B4C, using the quantum-mechanics-derived reactive force field simulation. We examined the (0001)/⟨101̅0⟩ slip system related to deformation twinning and the (011̅1̅)/⟨1̅101⟩ slip system related to amorphous band formation. We find that brittle failure in B4C arises from formation of higher density amorphous bands due to fracture of the icosahedra, a unique feature of these boron based materials. This leads to negative pressure and cavitation resulting in crack opening. Thus, to design ductile materials based on B4C we propose alloying aimed at promoting shear relaxation through intericosahedral slip that avoids icosahedral fracture.

  12. Large-scale proteome analysis of abscisic acid and ABSCISIC ACID INSENSITIVE3-dependent proteins related to desiccation tolerance in Physcomitrella patens.

    PubMed

    Yotsui, Izumi; Serada, Satoshi; Naka, Tetsuji; Saruhashi, Masashi; Taji, Teruaki; Hayashi, Takahisa; Quatrano, Ralph S; Sakata, Yoichi

    2016-03-18

    Desiccation tolerance is an ancestral feature of land plants and is still retained in non-vascular plants such as bryophytes and in some vascular plants. However, except for seeds and spores, this trait is absent in the vegetative tissues of vascular plants. Although many studies have focused on understanding the molecular basis underlying desiccation tolerance using transcriptome and proteome approaches, the critical molecular differences between desiccation-tolerant and non-tolerant plants are still not clear. The moss Physcomitrella patens cannot survive rapid desiccation under laboratory conditions, but if cells of the protonemata are treated with the phytohormone abscisic acid (ABA) prior to desiccation, it can survive 24 h of exposure to desiccation and regrow after rehydration. The desiccation tolerance induced by ABA (AiDT) is specific to this hormone, but also depends on the plant transcription factor ABSCISIC ACID INSENSITIVE3 (ABI3). Here we report the comparative proteomic analysis of AiDT between wild type and an ABI3 deletion mutant (Δabi3) of P. patens using iTRAQ (Isobaric Tags for Relative and Absolute Quantification). Of a total of 1980 unique proteins that we identified, only 16 proteins were significantly altered in Δabi3 compared to wild type after desiccation following ABA treatment. Among this group, three of the four proteins that were severely affected in Δabi3 tissue were orthologs of Arabidopsis genes that are expressed in maturing seeds under the regulation of ABI3. These included a Group 1 late embryogenesis abundant (LEA) protein, a short-chain dehydrogenase, and a desiccation-related protein. Our results suggest that at least three of these proteins expressed in desiccation-tolerant cells of both Arabidopsis and the moss are very likely to play important roles in the acquisition of desiccation tolerance in land plants. Furthermore, our results suggest that the regulatory machinery of ABA- and ABI3-mediated gene expression for desiccation

  13. Large-scale proteome analysis of abscisic acid and ABSCISIC ACID INSENSITIVE3-dependent proteins related to desiccation tolerance in Physcomitrella patens.

    PubMed

    Yotsui, Izumi; Serada, Satoshi; Naka, Tetsuji; Saruhashi, Masashi; Taji, Teruaki; Hayashi, Takahisa; Quatrano, Ralph S; Sakata, Yoichi

    2016-03-18

    Desiccation tolerance is an ancestral feature of land plants and is still retained in non-vascular plants such as bryophytes and in some vascular plants. However, except for seeds and spores, this trait is absent in the vegetative tissues of vascular plants. Although many studies have focused on understanding the molecular basis underlying desiccation tolerance using transcriptome and proteome approaches, the critical molecular differences between desiccation-tolerant and non-tolerant plants are still not clear. The moss Physcomitrella patens cannot survive rapid desiccation under laboratory conditions, but if cells of the protonemata are treated with the phytohormone abscisic acid (ABA) prior to desiccation, it can survive 24 h of exposure to desiccation and regrow after rehydration. The desiccation tolerance induced by ABA (AiDT) is specific to this hormone, but also depends on the plant transcription factor ABSCISIC ACID INSENSITIVE3 (ABI3). Here we report the comparative proteomic analysis of AiDT between wild type and an ABI3 deletion mutant (Δabi3) of P. patens using iTRAQ (Isobaric Tags for Relative and Absolute Quantification). Of a total of 1980 unique proteins that we identified, only 16 proteins were significantly altered in Δabi3 compared to wild type after desiccation following ABA treatment. Among this group, three of the four proteins that were severely affected in Δabi3 tissue were orthologs of Arabidopsis genes that are expressed in maturing seeds under the regulation of ABI3. These included a Group 1 late embryogenesis abundant (LEA) protein, a short-chain dehydrogenase, and a desiccation-related protein. Our results suggest that at least three of these proteins expressed in desiccation-tolerant cells of both Arabidopsis and the moss are very likely to play important roles in the acquisition of desiccation tolerance in land plants. Furthermore, our results suggest that the regulatory machinery of ABA- and ABI3-mediated gene expression for desiccation

  14. Improving Large-scale Biomass Burning Carbon Consumption and Emissions Estimates in the Former Soviet Union based on Fire Weather

    NASA Astrophysics Data System (ADS)

    Westberg, D. J.; Soja, A. J.; Tchebakova, N.; Parfenova, E. I.; Kukavskaya, E.; de Groot, B.; McRae, D.; Conard, S. G.; Stackhouse, P. W., Jr.

    2012-12-01

    Estimating the amount of biomass burned during fire events is challenging, particularly in remote and diverse regions like those of the Former Soviet Union (FSU). Historically, we have typically assumed that 25 tons of carbon per hectare (tC/ha) is emitted; however, depending on the ecosystem and fire severity, biomass burning emissions can range from 2 to 75 tC/ha. Ecosystems in the FSU span from the tundra through the taiga to the forest-steppe, steppe and deserts, and include the extensive West Siberian lowlands, permafrost-underlain forests and agricultural lands. Ignoring this landscape disparity results in inaccurate emissions estimates and incorrect assumptions about the transport of these emissions. In this work, we present emissions based on a hybrid ecosystem map and explicit estimates of fuel that consider the depth of burning based on the Canadian Forest Fire Weather Index System. Specifically, the ecosystem map is a fusion of satellite-based data, a detailed ecosystem map and Alexeyev and Birdsey carbon storage data, which is used to build carbon databases that include the forest overstory and understory, litter, peatlands and soil organic material for the FSU. We provide a range of potential carbon consumption estimates for low- to high-severity fires across the FSU that can be used with fire weather indices to more accurately estimate fire emissions. These data can be incorporated at ecoregion and administrative-territory scales and are optimized for use in large-scale Chemical Transport Models. Additionally, paired with future climate scenarios and ecoregion cover, these carbon consumption data can be used to estimate potential emissions.
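
    The underlying arithmetic is the classic burned-area times fuel-load times combustion-completeness product (in the style of Seiler and Crutzen); the numbers below are illustrative placeholders, not values from the study, and only show how severity-dependent combustion completeness moves the estimate.

    ```python
    def fire_carbon_emission(area_ha, fuel_tC_per_ha, combustion_completeness):
        """Emitted carbon (tC) = burned area x available fuel x fraction actually combusted."""
        return area_ha * fuel_tC_per_ha * combustion_completeness

    low_severity  = fire_carbon_emission(10_000, 45.0, 0.15)   #  67,500 tC
    high_severity = fire_carbon_emission(10_000, 45.0, 0.60)   # 270,000 tC
    ```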

  15. Improved large-scale prediction of growth inhibition patterns using the NCI60 cancer cell line panel

    PubMed Central

    Cortés-Ciriano, Isidro; van Westen, Gerard J. P.; Bouvier, Guillaume; Nilges, Michael; Overington, John P.; Bender, Andreas; Malliavin, Thérèse E.

    2016-01-01

    Motivation: Recent large-scale omics initiatives have catalogued the somatic alterations of cancer cell line panels along with their pharmacological response to hundreds of compounds. In this study, we have explored these data to advance computational approaches that enable more effective and targeted use of current and future anticancer therapeutics. Results: We modelled the 50% growth inhibition bioassay end-point (GI50) of 17 142 compounds screened against 59 cancer cell lines from the NCI60 panel (941 831 data-points, matrix 93.08% complete) by integrating the chemical and biological (cell line) information. We determine that the protein, gene transcript and miRNA abundance provide the highest predictive signal when modelling the GI50 endpoint, which significantly outperformed the DNA copy-number variation or exome sequencing data (Tukey’s Honestly Significant Difference, P <0.05). We demonstrate that, within the limits of the data, our approach exhibits the ability to both interpolate and extrapolate compound bioactivities to new cell lines and tissues and, although to a lesser extent, to dissimilar compounds. Moreover, our approach outperforms previous models generated on the GDSC dataset. Finally, we determine that in the cases investigated in more detail, the predicted drug-pathway associations and growth inhibition patterns are mostly consistent with the experimental data, which also suggests the possibility of identifying genomic markers of drug sensitivity for novel compounds on novel cell lines. Contact: terez@pasteur.fr; ab454@ac.cam.uk Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26351271
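
    The modelling idea, pairing a compound descriptor vector with a cell-line profile so that a single learner predicts the GI50 end-point across both dimensions, can be sketched as below. Random arrays stand in for the real fingerprints and omics features, and a random forest is used purely for illustration rather than the authors' exact models.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    compound_fp  = rng.integers(0, 2, size=(200, 128))    # e.g. binary substructure fingerprints
    cell_profile = rng.normal(size=(200, 60))             # e.g. protein/transcript abundances
    X = np.hstack([compound_fp, cell_profile])            # one row = one compound-cell-line pair
    y = rng.normal(size=200)                              # pGI50 values (synthetic here)

    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
    ```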

  16. 30 Days Wild: Development and Evaluation of a Large-Scale Nature Engagement Campaign to Improve Well-Being

    PubMed Central

    Richardson, Miles; Cormack, Adam; McRobert, Lucy; Underhill, Ralph

    2016-01-01

    There is a need to increase people’s engagement with and connection to nature, both for human well-being and for the conservation of nature itself. In order to suggest ways for people to engage with nature and to create a wider social context that normalises nature engagement, The Wildlife Trusts developed a mass engagement campaign, 30 Days Wild. The campaign asked people to engage with nature every day for a month. 12,400 people signed up for 30 Days Wild via an online sign-up, with an estimated 18,500 taking part overall, resulting in an estimated 300,000 engagements with nature by participants. Samples of those taking part were found to have sustained increases in happiness, health, connection to nature and pro-nature behaviours. The improvement in health was predicted by the improvement in happiness, and this relationship was mediated by the change in connection to nature. PMID:26890891

  17. 30 Days Wild: Development and Evaluation of a Large-Scale Nature Engagement Campaign to Improve Well-Being.

    PubMed

    Richardson, Miles; Cormack, Adam; McRobert, Lucy; Underhill, Ralph

    2016-01-01

    There is a need to increase people's engagement with and connection to nature, both for human well-being and for the conservation of nature itself. In order to suggest ways for people to engage with nature and to create a wider social context that normalises nature engagement, The Wildlife Trusts developed a mass engagement campaign, 30 Days Wild. The campaign asked people to engage with nature every day for a month. 12,400 people signed up for 30 Days Wild via an online sign-up, with an estimated 18,500 taking part overall, resulting in an estimated 300,000 engagements with nature by participants. Samples of those taking part were found to have sustained increases in happiness, health, connection to nature and pro-nature behaviours. The improvement in health was predicted by the improvement in happiness, and this relationship was mediated by the change in connection to nature. PMID:26890891

  18. 30 Days Wild: Development and Evaluation of a Large-Scale Nature Engagement Campaign to Improve Well-Being.

    PubMed

    Richardson, Miles; Cormack, Adam; McRobert, Lucy; Underhill, Ralph

    2016-01-01

    There is a need to increase people's engagement with and connection to nature, both for human well-being and for the conservation of nature itself. In order to suggest ways for people to engage with nature and to create a wider social context that normalises nature engagement, The Wildlife Trusts developed a mass engagement campaign, 30 Days Wild. The campaign asked people to engage with nature every day for a month. 12,400 people signed up for 30 Days Wild via an online sign-up, with an estimated 18,500 taking part overall, resulting in an estimated 300,000 engagements with nature by participants. Samples of those taking part were found to have sustained increases in happiness, health, connection to nature and pro-nature behaviours. The improvement in health was predicted by the improvement in happiness, and this relationship was mediated by the change in connection to nature.

  19. Improved large-scale hydrological modelling through the assimilation of streamflow and downscaled satellite soil moisture observations

    NASA Astrophysics Data System (ADS)

    López López, Patricia; Wanders, Niko; Schellekens, Jaap; Renzullo, Luigi J.; Sutanudjaja, Edwin H.; Bierkens, Marc F. P.

    2016-07-01

    The coarse spatial resolution of global hydrological models (typically > 0.25°) limits their ability to resolve key water balance processes for many river basins and thus compromises their suitability for water resources management, especially when compared to locally tuned river models. A possible solution to the problem may be to drive the coarse-resolution models with locally available high-spatial-resolution meteorological data as well as to assimilate ground-based and remotely sensed observations of key water cycle variables. While this would improve the resolution of the global model, the impact on prediction accuracy remains largely an open question. In this study, we investigate the impact of assimilating streamflow and satellite soil moisture observations on the accuracy of global hydrological model estimates, when driven by either coarse- or high-resolution meteorological observations in the Murrumbidgee River basin in Australia. To this end, a 0.08° resolution version of the PCR-GLOBWB global hydrological model is forced with downscaled global meteorological data (downscaled from 0.5° to 0.08° resolution) obtained from the WATCH Forcing Data methodology applied to ERA-Interim (WFDEI) and with a local high-resolution, gauging-station-based gridded data set (0.05°). Downscaled satellite-derived soil moisture (downscaled from ~0.5° to 0.08° resolution) from the remote observation system AMSR-E and streamflow observations collected from 23 gauging stations are assimilated using an ensemble Kalman filter. Several scenarios are analysed to explore the added value of data assimilation considering both local and global meteorological data. Results show that the assimilation of soil moisture observations results in the largest improvement of the model estimates of streamflow. The joint assimilation of both streamflow and downscaled soil moisture observations leads to further improvement in streamflow simulations (20 % reduction in RMSE
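
    The assimilation step itself is a standard stochastic ensemble Kalman filter update, sketched generically below; this is a textbook formulation, not the specific configuration of the PCR-GLOBWB system used in the study.

    ```python
    import numpy as np

    def enkf_update(ensemble, obs, H, obs_error_var):
        """Stochastic EnKF update for an ensemble of shape (n_members, n_state),
        observations obs (n_obs,), linear observation operator H (n_obs, n_state)."""
        n = ensemble.shape[0]
        hx = ensemble @ H.T                                   # predicted observations
        Xp = ensemble - ensemble.mean(0)                      # state anomalies
        Yp = hx - hx.mean(0)                                  # observation-space anomalies
        Pxy = Xp.T @ Yp / (n - 1)
        Pyy = Yp.T @ Yp / (n - 1) + np.diag(obs_error_var)
        K = Pxy @ np.linalg.inv(Pyy)                          # Kalman gain
        perturbed_obs = obs + np.random.normal(0.0, np.sqrt(obs_error_var), size=hx.shape)
        return ensemble + (perturbed_obs - hx) @ K.T
    ```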

  20. Improved Redirection with Distractors: A Large-Scale-Real-Walking Locomotion Interface and its Effect on Navigation in Virtual Environments

    PubMed Central

    Peck, Tabitha C.; Fuchs, Henry; Whitton, Mary C.

    2014-01-01

    Users in virtual environments often find navigation more difficult than in the real world. Our new locomotion interface, Improved Redirection with Distractors (IRD), enables users to walk in larger-than-tracked space VEs without predefined waypoints. We compared IRD to the current best interface, really walking, by conducting a user study measuring navigational ability. Our results show that IRD users can really walk through VEs that are larger than the tracked space and can point to targets and complete maps of VEs no worse than when really walking. PMID:25429369

  1. Large-scale Manufacturing of Nanoparticulate-based Lubrication Additives for Improved Energy Efficiency and Reduced Emissions

    SciTech Connect

    Erdemir, Ali

    2013-09-26

    This project was funded under the Department of Energy (DOE) Lab Call on Nanomanufacturing for Energy Efficiency and was directed toward the development of novel boron-based nanocolloidal lubrication additives for improving the friction and wear performance of machine components in a wide range of industrial and transportation applications. Argonne's research team concentrated on the scientific and technical aspects of the project, using a range of state-of-the-art analytical and tribological test facilities. Argonne has extensive past experience and expertise in working with boron-based solid and liquid lubrication additives, and has intellectual property ownership of several. There were two industrial collaborators in this project: Ashland Oil (represented by its Valvoline subsidiary) and Primet Precision Materials, Inc. (a leading nanomaterials company). There was also a sub-contract with the University of Arkansas. The major objectives of the project were to develop novel boron-based nanocolloidal lubrication additives and to optimize and verify their performance under boundary-lubricated sliding conditions. The project also tackled problems related to colloidal dispersion, larger-scale manufacturing and blending of nano-additives with base carrier oils. Other important issues dealt with in the project were determination of the optimum size and concentration of the particles and compatibility with various base fluids and/or additives. Boron-based particulate additives considered in this project included boric acid (H3BO3), hexagonal boron nitride (h-BN), boron oxide, and borax. As part of this project, we also explored a hybrid MoS2 + boric acid formulation approach for more effective lubrication and reported the results. The major motivation behind this work was to reduce energy losses related to friction and wear in a wide spectrum of mechanical systems and thereby reduce our dependence on imported oil. Growing concern over greenhouse gas

  2. Improved Large-Scale Inundation Modelling by 1D-2D Coupling and Consideration of Hydrologic and Hydrodynamic Processes - a Case Study in the Amazon

    NASA Astrophysics Data System (ADS)

    Hoch, J. M.; Bierkens, M. F.; Van Beek, R.; Winsemius, H.; Haag, A.

    2015-12-01

    Understanding the dynamics of fluvial floods is paramount to accurate flood hazard and risk modeling. Currently, economic losses due to flooding constitute about one third of all damage resulting from natural hazards. Given future projections of climate change, the anticipated increase in the world's population and the associated implications, sound knowledge of flood hazard and related risk is crucial. Fluvial floods are cross-border phenomena that need to be addressed accordingly. Yet only a few studies model floods at the large scale, which is preferable to tiling the output of small-scale models. Most models cannot realistically simulate flood wave propagation due to either a lack of detailed channel and floodplain geometry or the absence of hydrologic processes. This study aims to develop a large-scale modeling tool that accounts for both hydrologic and hydrodynamic processes, to find and understand possible sources of errors and improvements, and to assess how the added hydrodynamics affect flood wave propagation. Flood wave propagation is simulated by DELFT3D-FM (FM), a hydrodynamic model using a flexible mesh to schematize the study area. It is coupled to PCR-GLOBWB (PCR), a macro-scale hydrological model that has its own simpler 1D routing scheme (DynRout), which has already been used for global inundation modeling and flood risk assessments (GLOFRIS; Winsemius et al., 2013). A number of model set-ups are compared and benchmarked for the simulation period 1986-1996: (0) PCR with DynRout; (1) an FM 2D flexible mesh forced with PCR output; (2) as in (1) but discriminating between 1D channels and 2D floodplains; and, for comparison, (3) and (4) the same set-ups as (1) and (2) but forced with observed GRDC discharge values. Outputs are subsequently validated against observed GRDC data at Óbidos and flood extent maps from the Dartmouth Flood Observatory. The present research constitutes a first step toward a globally applicable approach to fully couple

  3. Conformational and thermal stability improvements for the large-scale production of yeast-derived rabbit hemorrhagic disease virus-like particles as multipurpose vaccine.

    PubMed

    Fernández, Erlinda; Toledo, Jorge R; Méndez, Lídice; González, Nemecio; Parra, Francisco; Martín-Alonso, José M; Limonta, Miladys; Sánchez, Kosara; Cabrales, Ania; Estrada, Mario P; Rodríguez-Mallón, Alina; Farnós, Omar

    2013-01-01

    Recombinant virus-like particles (VLP) antigenically similar to rabbit hemorrhagic disease virus (RHDV) were recently expressed at high levels inside Pichia pastoris cells. Based on the potential of RHDV VLP as a platform for diverse vaccination purposes, we undertook the design, development and scale-up of a production process. Conformational and stability issues were addressed to improve process control and optimization. Analyses of the structure, morphology and antigenicity of these multimers were carried out at different pH values during cell disruption and purification by size-exclusion chromatography. Process steps and environmental stresses in which aggregation or conformational instability can be detected were included. These analyses revealed higher stability and recoveries of properly assembled high-purity capsids at acidic and neutral pH in phosphate buffer. The use of stabilizers during long-term storage in solution showed that sucrose, sorbitol, trehalose and glycerol acted as useful aggregation-reducing agents. The VLP emulsified in an oil-based adjuvant were subjected to accelerated thermal stress treatments. No or only slight variations were detected in the stability of the formulations and in the structure of the recovered capsids. A comprehensive analysis of scale-up strategies was accomplished and a nine-step large-scale production process was established. VLP produced after chromatographic separation protected rabbits against a lethal challenge, and the minimum protective dose was identified. Stabilized particles were ultimately assayed as carriers of a foreign viral epitope from another pathogen affecting a larger animal species. For that purpose, a linear protective B-cell epitope from the Classical Swine Fever Virus (CSFV) E2 envelope protein was chemically coupled to RHDV VLP. The conjugates were able to present the E2 peptide fragment for immune recognition and significantly enhanced the peptide-specific antibody response in vaccinated pigs. Overall these results

  4. Large-scale combining signals from both biomedical literature and the FDA Adverse Event Reporting System (FAERS) to improve post-marketing drug safety signal detection

    PubMed Central

    2014-01-01

    Background Independent data sources can be used to augment post-marketing drug safety signal detection. The vast amount of publicly available biomedical literature contains rich side effect information for drugs at all clinical stages. In this study, we present a large-scale signal boosting approach that combines over 4 million records in the US Food and Drug Administration (FDA) Adverse Event Reporting System (FAERS) and over 21 million biomedical articles. Results The datasets comprise 4,285,097 records from FAERS and 21,354,075 MEDLINE articles. We first extracted all drug-side effect (SE) pairs from FAERS. Our study implemented a total of seven signal ranking algorithms. We then compared these ranking algorithms before and after they were boosted with signals from MEDLINE sentences or abstracts. Finally, we manually curated all drug-cardiovascular (CV) pairs that appeared in both data sources and investigated whether our approach can detect many true signals that have not been included in FDA drug labels. We extracted a total of 2,787,797 drug-SE pairs from FAERS with a low initial precision of 0.025. The ranking algorithm combining signals from both FAERS and MEDLINE significantly improved the precision from 0.025 to 0.371 for top-ranked pairs, representing a 13.8-fold elevation in precision. We showed by manual curation that drug-SE pairs that appeared in both data sources were highly enriched with true signals, many of which have not yet been included in FDA drug labels. Conclusions We have developed an efficient and effective drug safety signal ranking and strengthening approach. We demonstrate that combining information from FAERS and the biomedical literature at large scale can significantly contribute to drug safety surveillance. PMID:24428898
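
    A minimal version of the disproportionality scoring that such ranking algorithms start from is the proportional reporting ratio (PRR), optionally up-weighted by literature co-occurrence. The weighting scheme and the counts below are invented for illustration; the study's seven algorithms are more involved.

    ```python
    def prr(a, b, c, d):
        """Proportional reporting ratio: a = reports with drug & SE, b = drug without SE,
        c = other drugs with SE, d = other drugs without SE."""
        return (a / (a + b)) / (c / (c + d))

    def boosted_score(prr_value, medline_cooccurrences, weight=0.1):
        # toy boost: up-weight FAERS signals that also appear in the literature
        return prr_value * (1.0 + weight * medline_cooccurrences)

    score = boosted_score(prr(40, 960, 200, 98_800), medline_cooccurrences=12)
    ```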

  5. Evaluating a Large-Scale Community-Based Intervention to Improve Pregnancy and Newborn Health Among the Rural Poor in India.

    PubMed

    Acharya, Arnab; Lalwani, Tanya; Dutta, Rahul; Rajaratnam, Julie Knoll; Ruducha, Jenny; Varkey, Leila Caleb; Wunnava, Sita; Menezes, Lysander; Taylor, Catharine; Bernson, Jeff

    2015-01-01

    Objectives. We evaluated the effectiveness of the Sure Start project, which was implemented in 7 districts of Uttar Pradesh, India, to improve maternal and newborn health. Methods. Interventions were implemented at 2 randomly assigned levels of intensity. Forty percent of the areas received a more intense intervention, including community-level meetings with expectant mothers. A baseline survey consisted of 12 000 women who completed pregnancy in 2007; a follow-up survey was conducted for women in 2010 in the same villages. Our quantitative analyses provide an account of the project's impact. Results. We observed significant health improvements in both intervention areas over time; in the more intensive intervention areas, we found greater improvements in care-seeking and healthy behaviors. The more intensive intervention areas did not experience a significantly greater decline in neonatal mortality. Conclusions. This study demonstrates that community-based efforts, especially mothers' group meetings designed to increase care-seeking and healthy behaviors, are effective and can be implemented at large scale. PMID:25393175

  6. Health risks from large-scale water pollution: Current trends and implications for improving drinking water quality in the lower Amu Darya drainage basin, Uzbekistan

    NASA Astrophysics Data System (ADS)

    Törnqvist, Rebecka; Jarsjö, Jerker

    2010-05-01

    Safe drinking water is a primary prerequisite to human health, well-being and development. Yet there are roughly one billion people around the world who lack access to a safe drinking water supply. Health risk assessments are effective for evaluating the suitability of using various water sources as drinking water supply. Additionally, knowledge of pollutant transport processes on relatively large scales is needed to identify effective management strategies for improving water resources of poor quality. The lower Amu Darya drainage basin close to the Aral Sea in Uzbekistan suffers from physical water scarcity and poor water quality. This is mainly due to the intensive agricultural production in the region, which requires extensive freshwater withdrawals and the use of fertilizers and pesticides. In addition, recurrent droughts in the region affect the surface water availability. On average, 20% of the population in rural areas of Uzbekistan lack access to improved drinking water sources, and the situation is even more severe in the lower Amu Darya basin. In this study, we consider health risks related to water-borne contaminants by dividing measured substance concentrations by health-risk-based guideline values from the World Health Organisation (WHO). In particular, we analyse novel results of water quality measurements performed in 2007 and 2008 in the Mejdurechye Reservoir (located in the downstream part of the Amu Darya river basin). We furthermore identify large-scale trends by comparing the Mejdurechye results to reported water quality results from a considerable stretch of the Amu Darya river basin, including drainage water, river water and groundwater. The results show that concentrations of cadmium and nitrite exceed the WHO health-risk-based guideline values in Mejdurechye Reservoir. Furthermore, concentrations of the long-banned and highly toxic pesticides dichlorodiphenyltrichloroethane (DDT) and γ-hexachlorocyclohexane (γ-HCH) were detected in
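
    The screening calculation described above reduces to a ratio of measured concentration to the WHO health-based guideline value, with ratios above 1 flagged as a potential concern. The concentrations and guideline values below are illustrative placeholders, not the measured Mejdurechye results.

    ```python
    def hazard_quotient(measured_conc, guideline_value):
        """Ratio of measured concentration to the guideline value (same units)."""
        return measured_conc / guideline_value

    # illustrative concentrations and guideline values only
    ratios = {"cadmium": hazard_quotient(4.2, 3.0),
              "nitrite": hazard_quotient(3.9, 3.0)}
    flagged = [substance for substance, hq in ratios.items() if hq > 1.0]
    ```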

  7. Exposure to Large-Scale Social and Behavior Change Communication Interventions Is Associated with Improvements in Infant and Young Child Feeding Practices in Ethiopia

    PubMed Central

    Rawat, Rahul; Mwangi, Edina M.; Tesfaye, Roman; Abebe, Yewelsew; Baker, Jean; Frongillo, Edward A.; Ruel, Marie T.; Menon, Purnima

    2016-01-01

    Optimal breastfeeding (BF) practices in Ethiopia are far below the government’s targets, and complementary feeding practices are poor. The Alive & Thrive initiative aimed to improve infant and young child feeding (IYCF) practices through large-scale implementation of social and behavior change communication interventions in four regions of Ethiopia. The study assessed the effects of the interventions on IYCF practices and anthropometry over time in two regions–Southern Nations, Nationalities and Peoples Region and Tigray. A pre- and post-intervention adequacy evaluation design was used; repeated cross-sectional surveys of households with children aged 0–23.9 mo (n = 1481 and n = 1494) and with children aged 24–59.9 mo (n = 1481 and n = 1475) were conducted at baseline (2010) and endline (2014), respectively. Differences in outcomes over time were estimated using regression models, accounting for clustering and covariates. Plausibility analyses included tracing recall of key messages and promoted foods and dose-response analyses. We observed improvements in most WHO-recommended IYCF indicators. Early BF initiation and exclusive BF increased by 13.7 and 9.4 percentage points (pp), respectively. Differences for timely introduction of complementary foods, minimum dietary diversity (MDD), minimum meal frequency (MMF), minimum acceptable diet (MAD), and consumption of iron-rich foods were 22.2, 3.3, 26.2, 3.5, and 2.7 pp, respectively. Timely introduction and intake of foods promoted by the interventions improved significantly, but anthropometric outcomes did not. We also observed a dose-response association between health post visits and early initiation of BF (OR: 1.8); higher numbers of home visits by community volunteers and key messages recalled were associated with 1.8–4.4 times greater odds of achieving MDD, MMF, and MAD, and higher numbers of radio spots heard were associated with 3 times greater odds of achieving MDD and MAD. The interventions were

  8. Development of a large-scale chemogenomics database to improve drug candidate selection and to understand mechanisms of chemical toxicity and action.

    PubMed

    Ganter, Brigitte; Tugendreich, Stuart; Pearson, Cecelia I; Ayanoglu, Eser; Baumhueter, Susanne; Bostian, Keith A; Brady, Lindsay; Browne, Leslie J; Calvin, John T; Day, Gwo-Jen; Breckenridge, Naiomi; Dunlea, Shane; Eynon, Barrett P; Furness, L Mike; Ferng, Joe; Fielden, Mark R; Fujimoto, Susan Y; Gong, Li; Hu, Christopher; Idury, Radha; Judo, Michael S B; Kolaja, Kyle L; Lee, May D; McSorley, Christopher; Minor, James M; Nair, Ramesh V; Natsoulis, Georges; Nguyen, Peter; Nicholson, Simone M; Pham, Hang; Roter, Alan H; Sun, Dongxu; Tan, Siqi; Thode, Silke; Tolley, Alexander M; Vladimirova, Antoaneta; Yang, Jian; Zhou, Zhiming; Jarnagin, Kurt

    2005-09-29

    Successful drug discovery requires accurate decision making in order to advance the best candidates from initial lead identification to final approval. Chemogenomics, the use of genomic tools in pharmacology and toxicology, offers a promising enhancement to traditional methods of target identification/validation, lead identification, efficacy evaluation, and toxicity assessment. To realize the value of chemogenomics information, a contextual database is needed to relate the physiological outcomes induced by diverse compounds to the gene expression patterns measured in the same animals. Massively parallel gene expression characterization coupled with traditional assessments of drug candidates provides additional, important mechanistic information, and therefore a means to increase the accuracy of critical decisions. A large-scale chemogenomics database developed from in vivo treated rats provides the context and supporting data to enhance and accelerate accurate interpretation of mechanisms of toxicity and pharmacology of chemicals and drugs. To date, approximately 600 different compounds, including more than 400 FDA approved drugs, 60 drugs approved in Europe and Japan, 25 withdrawn drugs, and 100 toxicants, have been profiled in up to 7 different tissues of rats (representing over 3200 different drug-dose-time-tissue combinations). Accomplishing this task required evaluating and improving a number of in vivo and microarray protocols, including over 80 rigorous quality control steps. The utility of pairing clinical pathology assessments with gene expression data is illustrated using three anti-neoplastic drugs: carmustine, methotrexate, and thioguanine, which had similar effects on the blood compartment, but diverse effects on hepatotoxicity. We will demonstrate that gene expression events monitored in the liver can be used to predict pathological events occurring in that tissue as well as in hematopoietic tissues.

  9. Immunological metagene signatures derived from immunogenic cancer cell death associate with improved survival of patients with lung, breast or ovarian malignancies: A large-scale meta-analysis

    PubMed Central

    Garg, Abhishek D.; De Ruysscher, Dirk; Agostinis, Patrizia

    2016-01-01

    The emerging role of the cancer cell-immune cell interface in shaping tumorigenesis and anticancer immunotherapy has increased the need to identify prognostic biomarkers. Hence, our primary aim was to identify the immunogenic cell death (ICD)-derived metagene signatures in breast, lung and ovarian cancer that associate with improved patient survival. To this end, we analyzed the prognostic impact of differential gene expression of 33 pre-clinically validated ICD parameters through a large-scale meta-analysis involving 3,983 patients (the ‘discovery’ dataset) across lung (1,432), breast (1,115) and ovarian (1,436) malignancies. The main results were also substantiated in ‘validation’ datasets consisting of 818 patients of the same cancer types (i.e. 285 breast/274 lung/259 ovarian). The ICD-associated parameters exhibited a highly clustered and largely cancer type-specific prognostic impact. Interestingly, we delineated ICD-derived consensus-metagene signatures that exhibited a positive prognostic impact that was either cancer type-independent or specific. Importantly, most of these ICD-derived consensus-metagenes acted as attractor-metagenes and thereby ‘attracted’ highly co-expressing sets of genes, or convergent-metagenes. These convergent-metagenes also exhibited a positive prognostic impact in the respective cancer types. Remarkably, we found that the cancer type-independent consensus-metagene acted as an ‘attractor’ for cancer-specific convergent-metagenes. This reaffirms that the immunological prognostic landscape of cancer tends to segregate between cancer-independent and cancer type-specific gene signatures. Moreover, this prognostic landscape was largely dominated by classical T cell activity/infiltration/function-related biomarkers. Interestingly, each cancer type tended to associate with biomarkers representing a specific T cell activity or function rather than pan-T cell biomarkers. Thus, our analysis confirms that ICD can serve as a

  10. The Use of Qualitative Methods in Large-Scale Evaluation: Improving the Quality of the Evaluation and the Meaningfulness of the Findings

    ERIC Educational Resources Information Center

    Slayton, Julie; Llosa, Lorena

    2005-01-01

    In light of the current debate over the meaning of "scientifically based research", we argue that qualitative methods should be an essential part of large-scale program evaluations if program effectiveness is to be determined and understood. This article chronicles the challenges involved in incorporating qualitative methods into the large-scale…

  11. Improved parameterization of marine ice dynamics and flow instabilities for simulation of the Austfonna ice cap using a large-scale ice sheet model

    NASA Astrophysics Data System (ADS)

    Dunse, T.; Greve, R.; Schuler, T.; Hagen, J. M.; Navarro, F.; Vasilenko, E.; Reijmer, C.

    2009-12-01

    The Austfonna ice cap covers an area of 8120 km2 and is by far the largest glacier on Svalbard. Almost 30% of the entire area is grounded below sea-level, while the figure is as large as 57% for the known surge-type basins in particular. Marine ice dynamics and flow instabilities presumably control the flow regime, form and evolution of Austfonna. These issues are our focus in numerical simulations of the ice cap. We employ the thermodynamic, large-scale ice sheet model SICOPOLIS (http://sicopolis.greveweb.net/), which is based on the shallow-ice approximation. We present improved parameterizations of (a) the marine extent and calving and (b) processes that may initiate flow instabilities, such as switches from cold to temperate basal conditions, surface steepening and hence increases in driving stress, enhanced sliding or deformation of unconsolidated marine sediments, and diminishing ice thicknesses towards flotation thickness. Space-borne interferometric snapshots of Austfonna revealed the velocity structure of a slow-moving polar ice cap (< 10 m/a) interrupted by distinct fast-flow units with velocities in excess of 100 m/a. However, observations of flow variability are scarce. In spring 2008, we established a series of stakes along the centrelines of two fast-flowing units. Repeated DGPS and continuous GPS measurements of the stake positions give insight into the temporal flow variability of these units and provide constraints on the modeled surface velocity field. Austfonna’s thermal structure is described as polythermal. However, direct measurements of the temperature distribution are available only from a single borehole in the summit area. The vertical temperature profile shows that the bulk of the 567 m thick ice column is cold, underlain only by a thin temperate basal layer of approximately 20 m. To acquire a spatially extended picture of the thermal structure (and bed topography), we used low-frequency (20 MHz) GPR profiling across the ice cap and the

  12. Assessment and improvement of statistical tools for comparative proteomics analysis of sparse data sets with few experimental replicates.

    PubMed

    Schwämmle, Veit; León, Ileana Rodríguez; Jensen, Ole Nørregaard

    2013-09-01

    Large-scale quantitative analyses of biological systems are often performed with few replicate experiments, leading to multiple nonidentical data sets due to missing values. For example, mass spectrometry driven proteomics experiments are frequently performed with few biological or technical replicates due to sample scarcity, duty-cycle or sensitivity constraints, or limited capacity of the available instrumentation, leading to incomplete results where detection of significant feature changes becomes a challenge. This problem is further exacerbated for the detection of significant changes on the peptide level, for example, in phospho-proteomics experiments. In order to assess the extent of this problem and the implications for large-scale proteome analysis, we investigated and optimized the performance of three statistical approaches by using simulated and experimental data sets with varying numbers of missing values. We applied three tools, the standard t test, the moderated t test (also known as limma), and rank products, for the detection of significantly changing features in simulated and experimental proteomics data sets with missing values. The rank product method was improved to work with data sets containing missing values. Extensive analysis of simulated and experimental data sets revealed that the performance of the statistical analysis tools depended on simple properties of the data sets. High-confidence results were obtained by using the limma and rank products methods for analyses of triplicate data sets that exhibited more than 1000 features and more than 50% missing values. The maximum number of differentially represented features was identified by using limma and rank products methods in a complementary manner. We therefore recommend combined usage of these methods as a novel and optimal way to detect significantly changing features in these data sets. This approach is suitable for large quantitative data sets from stable isotope labeling
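    As a concrete illustration of the kind of missing-value-tolerant test discussed above, the sketch below scores features by a simplified rank product computed only over the replicates in which each feature was observed. It is an illustrative stand-in, not the modified rank-product implementation described in the abstract, and the simulated matrix, missingness rate and rank normalization are assumptions made for the example.

```python
# Minimal sketch: rank-product scoring of features (e.g., peptides) across
# replicate log2 fold-change columns containing missing values (NaN).
# Simplified illustration only; not the modified rank-product implementation
# described in the abstract.
import numpy as np

def rank_product(log_ratios: np.ndarray) -> np.ndarray:
    """log_ratios: features x replicates matrix of log2 fold changes, NaN allowed.
    Returns a per-feature geometric mean of normalized per-replicate ranks,
    computed only over the replicates in which the feature was observed."""
    n_feat, n_rep = log_ratios.shape
    ranks = np.full_like(log_ratios, np.nan, dtype=float)
    for j in range(n_rep):
        col = log_ratios[:, j]
        observed = ~np.isnan(col)
        order = np.argsort(-col[observed])          # rank 1 = strongest up-regulation
        r = np.empty(order.size)
        r[order] = np.arange(1, order.size + 1)
        ranks[observed, j] = r / order.size         # normalize ranks to (0, 1]
    valid = ~np.isnan(ranks)
    n_obs = valid.sum(axis=1)
    log_sum = np.where(valid, np.log(np.where(valid, ranks, 1.0)), 0.0).sum(axis=1)
    rp = np.exp(log_sum / np.maximum(n_obs, 1))
    rp[n_obs == 0] = np.nan                         # feature never observed
    return rp

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = rng.normal(0.0, 1.0, size=(1000, 3))
    data[:20] += 2.0                                # 20 truly up-regulated features
    data[rng.random(data.shape) < 0.4] = np.nan     # ~40% missing values
    print("top-ranked features:", np.argsort(rank_product(data))[:10])
```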

  13. Large Scale Magnetostrictive Valve Actuator

    NASA Technical Reports Server (NTRS)

    Richard, James A.; Holleman, Elizabeth; Eddleman, David

    2008-01-01

    Marshall Space Flight Center's Valves, Actuators and Ducts Design and Development Branch developed a large scale magnetostrictive valve actuator. The potential advantages of this technology are faster, more efficient valve actuators that consume less power, provide precise position control, and deliver higher flow rates than conventional solenoid valves. Magnetostrictive materials change dimensions when a magnetic field is applied; this property is referred to as magnetostriction. Magnetostriction is caused by the alignment of the magnetic domains in the material's crystalline structure with the applied magnetic field lines. Typically, the material changes shape by elongating in the axial direction and constricting in the radial direction, resulting in no net change in volume. All hardware and testing is complete. This paper discusses the potential applications of the technology, gives an overview of the as-built actuator design, describes problems that were uncovered during development testing, reviews test data and evaluates weaknesses of the design, and identifies areas for improvement for future work. This actuator holds promise as a low-power, high-load, proportionally controlled actuator for valves requiring 440 to 1500 newtons load.

  14. Improving HIV proteome annotation: new features of BioAfrica HIV Proteomics Resource

    PubMed Central

    Druce, Megan; Hulo, Chantal; Masson, Patrick; Sommer, Paula; Xenarios, Ioannis; Le Mercier, Philippe; De Oliveira, Tulio

    2016-01-01

    The Human Immunodeficiency Virus (HIV) is one of the pathogens that cause the greatest global concern, with approximately 35 million people currently infected with HIV. Extensive HIV research has been performed, generating a large amount of HIV and host genomic data. However, no effective vaccine that protects the host from HIV infection is available and HIV is still spreading at an alarming rate, despite effective antiretroviral (ARV) treatment. In order to develop effective therapies, we need to expand our knowledge of the interaction between HIV and host proteins. In contrast to virus proteins, which often rapidly evolve drug resistance mutations, the host proteins are essentially invariant within all humans. Thus, if we can identify the host proteins needed for virus replication, such as those involved in transporting viral proteins to the cell surface, we have a chance of interrupting viral replication. There is no proteome resource that summarizes this interaction, making research on this subject a difficult enterprise. In order to fill this gap in knowledge, we curated a resource that presents detailed annotation on the interaction between the HIV proteome and host proteins. Our resource was produced in collaboration with ViralZone and used manual curation techniques developed by UniProtKB/Swiss-Prot. Our new website also used previous annotations of the BioAfrica HIV-1 Proteome Resource, which has been accessed by approximately 10 000 unique users a year since its inception in 2005. The novel features include a dedicated new page for each HIV protein, a graphic display of its function and a section on its interaction with host proteins. Our new webpages also add information on the genomic location of each HIV protein and the position of ARV drug resistance mutations. Our improved BioAfrica HIV-1 Proteome Resource fills a gap in the current knowledge of biocuration. Database URL: http://www.bioafrica.net/proteomics/HIVproteome.html PMID:27087306

  15. Improving HIV proteome annotation: new features of BioAfrica HIV Proteomics Resource.

    PubMed

    Druce, Megan; Hulo, Chantal; Masson, Patrick; Sommer, Paula; Xenarios, Ioannis; Le Mercier, Philippe; De Oliveira, Tulio

    2016-01-01

    The Human Immunodeficiency Virus (HIV) is one of the pathogens that cause the greatest global concern, with approximately 35 million people currently infected with HIV. Extensive HIV research has been performed, generating a large amount of HIV and host genomic data. However, no effective vaccine that protects the host from HIV infection is available and HIV is still spreading at an alarming rate, despite effective antiretroviral (ARV) treatment. In order to develop effective therapies, we need to expand our knowledge of the interaction between HIV and host proteins. In contrast to virus proteins, which often rapidly evolve drug resistance mutations, the host proteins are essentially invariant within all humans. Thus, if we can identify the host proteins needed for virus replication, such as those involved in transporting viral proteins to the cell surface, we have a chance of interrupting viral replication. There is no proteome resource that summarizes this interaction, making research on this subject a difficult enterprise. In order to fill this gap in knowledge, we curated a resource that presents detailed annotation on the interaction between the HIV proteome and host proteins. Our resource was produced in collaboration with ViralZone and used manual curation techniques developed by UniProtKB/Swiss-Prot. Our new website also used previous annotations of the BioAfrica HIV-1 Proteome Resource, which has been accessed by approximately 10 000 unique users a year since its inception in 2005. The novel features include a dedicated new page for each HIV protein, a graphic display of its function and a section on its interaction with host proteins. Our new webpages also add information on the genomic location of each HIV protein and the position of ARV drug resistance mutations. Our improved BioAfrica HIV-1 Proteome Resource fills a gap in the current knowledge of biocuration. Database URL: http://www.bioafrica.net/proteomics/HIVproteome.html PMID:27087306

  16. Very Large Scale Integration (VLSI).

    ERIC Educational Resources Information Center

    Yeaman, Andrew R. J.

    Very Large Scale Integration (VLSI), the state-of-the-art production techniques for computer chips, promises such powerful, inexpensive computing that, in the future, people will be able to communicate with computer devices in natural language or even speech. However, before full-scale VLSI implementation can occur, certain salient factors must be…

  17. Galaxy clustering on large scales.

    PubMed

    Efstathiou, G

    1993-06-01

    I describe some recent observations of large-scale structure in the galaxy distribution. The best constraints come from two-dimensional galaxy surveys and studies of angular correlation functions. Results from galaxy redshift surveys are much less precise but are consistent with the angular correlations, provided the distortions in mapping between real-space and redshift-space are relatively weak. The galaxy two-point correlation function, rich-cluster two-point correlation function, and galaxy-cluster cross-correlation function are all well described on large scales (≳ 20 h⁻¹ Mpc, where the Hubble constant H0 = 100 h km s⁻¹ Mpc⁻¹; 1 pc = 3.09 × 10¹⁶ m) by the power spectrum of an initially scale-invariant, adiabatic, cold-dark-matter Universe with Γ = Ωh ≈ 0.2. I discuss how this fits in with the Cosmic Background Explorer (COBE) satellite detection of large-scale anisotropies in the microwave background radiation and other measures of large-scale structure in the Universe.
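    For readers unfamiliar with how such correlation functions are estimated, the following toy sketch implements the widely used Landy-Szalay pair-count estimator on a small synthetic 3-D point set. The box size, point counts and separation bins are arbitrary placeholders; this is not the survey analysis summarized in the review.

```python
# Schematic Landy-Szalay estimate of the two-point correlation function xi(r)
# for a small 3-D point set; illustrative only.
import numpy as np

def pair_counts(a: np.ndarray, b: np.ndarray, edges: np.ndarray, cross: bool) -> np.ndarray:
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    if not cross:                      # exclude self-pairs, count each pair once
        d = d[np.triu_indices_from(d, k=1)]
    return np.histogram(d.ravel(), bins=edges)[0].astype(float)

def landy_szalay(data: np.ndarray, rand: np.ndarray, edges: np.ndarray) -> np.ndarray:
    nd, nr = len(data), len(rand)
    dd = pair_counts(data, data, edges, cross=False) / (nd * (nd - 1) / 2)
    rr = pair_counts(rand, rand, edges, cross=False) / (nr * (nr - 1) / 2)
    dr = pair_counts(data, rand, edges, cross=True) / (nd * nr)
    return (dd - 2 * dr + rr) / rr

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    box = 100.0                                    # arbitrary units, e.g. h^-1 Mpc
    data = rng.uniform(0, box, size=(500, 3))      # unclustered toy "galaxies"
    rand = rng.uniform(0, box, size=(2000, 3))
    edges = np.linspace(1, 30, 16)
    print(np.round(landy_szalay(data, rand, edges), 3))   # roughly 0 for a random field
```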

  18. Large Scale Chemical Cross-linking Mass Spectrometry Perspectives.

    PubMed

    Zybailov, Boris L; Glazko, Galina V; Jaiswal, Mihir; Raney, Kevin D

    2013-02-01

    The spectacular heterogeneity of a complex protein mixture from biological samples becomes even more difficult to tackle when one's attention is shifted towards different protein complex topologies, transient interactions, or localization of PPIs. Meticulous protein-by-protein affinity pull-downs and yeast two-hybrid screens are the two approaches currently used to decipher proteome-wide interaction networks. Another method is to employ chemical cross-linking, which gives not only the identities of interactors, but can also provide information on the sites of interactions and interaction interfaces. Despite significant advances in mass spectrometry instrumentation over the last decade, mapping protein-protein interactions (PPIs) using chemical cross-linking remains time consuming and requires substantial expertise, even in the simplest of systems. While robust methodologies and software exist for the analysis of binary PPIs and for single-protein structure refinement using cross-linking-derived constraints, undertaking a proteome-wide cross-linking study is highly complex. Difficulties include i) identifying cross-linkers of the right length and selectivity that could capture interactions of interest; ii) enrichment of the cross-linked species; iii) identification and validation of the cross-linked peptides and cross-linked sites. In this review we examine existing literature aimed at large-scale protein cross-linking and discuss possible paths for improvement. We also discuss short-length cross-linkers of broad specificity such as formaldehyde and diazirine-based photo-cross-linkers. These cross-linkers could potentially capture many types of interactions, without a strict requirement for a particular amino acid to be present at a given protein-protein interface. How can these short-length, broad-specificity cross-linkers be applied to proteome-wide studies? We will suggest specific advances in methodology, instrumentation and software that are needed to make

  19. USING PROTEOMICS TO IMPROVE RISK ASSESSMENT OF HUMAN EXPOSURE TO ENVIRONMENTAL AGENTS

    EPA Science Inventory

    Using Proteomics to Improve Risk Assessment of Human Exposure to Environmental Agents.
    Authors: Witold M. Winnik
    Key Words (4): Proteomics, LC/MS, Western Blots, 1D and 2D gel electrophoresis, toxicity

    The goal of this project is to use proteomics for the character...

  20. Proteomics and Metabolomics: Two Emerging Areas for Legume Improvement.

    PubMed

    Ramalingam, Abirami; Kudapa, Himabindu; Pazhamala, Lekha T; Weckwerth, Wolfram; Varshney, Rajeev K

    2015-01-01

    The crop legumes such as chickpea, common bean, cowpea, peanut, pigeonpea, soybean, etc. are important sources of nutrition and contribute to a significant amount of biological nitrogen fixation (>20 million tons of fixed nitrogen) in agriculture. However, the production of legumes is constrained due to abiotic and biotic stresses. It is therefore imperative to understand the molecular mechanisms of plant response to different stresses and identify key candidate genes regulating tolerance which can be deployed in breeding programs. The information obtained from transcriptomics has facilitated the identification of candidate genes for the given trait of interest and utilizing them in crop breeding programs to improve stress tolerance. However, the mechanisms of stress tolerance are complex due to the influence of multi-genes and post-transcriptional regulations. Furthermore, stress conditions greatly affect gene expression which in turn causes modifications in the composition of plant proteomes and metabolomes. Therefore, functional genomics involving various proteomics and metabolomics approaches have been obligatory for understanding plant stress tolerance. These approaches have also been found useful to unravel different pathways related to plant and seed development as well as symbiosis. Proteome and metabolome profiling using high-throughput based systems have been extensively applied in the model legume species, Medicago truncatula and Lotus japonicus, as well as in the model crop legume, soybean, to examine stress signaling pathways, cellular and developmental processes and nodule symbiosis. Moreover, the availability of protein reference maps as well as proteomics and metabolomics databases greatly support research and understanding of various biological processes in legumes. Protein-protein interaction techniques, particularly the yeast two-hybrid system have been advantageous for studying symbiosis and stress signaling in legumes. In this review, several

  1. Proteomics and Metabolomics: Two Emerging Areas for Legume Improvement

    PubMed Central

    Ramalingam, Abirami; Kudapa, Himabindu; Pazhamala, Lekha T.; Weckwerth, Wolfram; Varshney, Rajeev K.

    2015-01-01

    The crop legumes such as chickpea, common bean, cowpea, peanut, pigeonpea, soybean, etc. are important sources of nutrition and contribute to a significant amount of biological nitrogen fixation (>20 million tons of fixed nitrogen) in agriculture. However, the production of legumes is constrained due to abiotic and biotic stresses. It is therefore imperative to understand the molecular mechanisms of plant response to different stresses and identify key candidate genes regulating tolerance which can be deployed in breeding programs. The information obtained from transcriptomics has facilitated the identification of candidate genes for the given trait of interest and utilizing them in crop breeding programs to improve stress tolerance. However, the mechanisms of stress tolerance are complex due to the influence of multi-genes and post-transcriptional regulations. Furthermore, stress conditions greatly affect gene expression which in turn causes modifications in the composition of plant proteomes and metabolomes. Therefore, functional genomics involving various proteomics and metabolomics approaches have been obligatory for understanding plant stress tolerance. These approaches have also been found useful to unravel different pathways related to plant and seed development as well as symbiosis. Proteome and metabolome profiling using high-throughput based systems have been extensively applied in the model legume species, Medicago truncatula and Lotus japonicus, as well as in the model crop legume, soybean, to examine stress signaling pathways, cellular and developmental processes and nodule symbiosis. Moreover, the availability of protein reference maps as well as proteomics and metabolomics databases greatly support research and understanding of various biological processes in legumes. Protein-protein interaction techniques, particularly the yeast two-hybrid system have been advantageous for studying symbiosis and stress signaling in legumes. In this review, several

  2. Improved enrichment and proteomic identification of outer membrane proteins from a Gram-negative bacterium: focus on Caulobacter crescentus.

    PubMed

    Cao, Yuan; Johnson, Helen M; Bazemore-Walker, Carthene R

    2012-01-01

    Efforts to characterize proteins found in the outer membrane (OM) of Gram-negative bacteria have been steadily increasing due to the promise of expanding our understanding of fundamental bacterial processes such as cell adhesion or cell wall biogenesis as well as the promise of finding potential vaccine- or drug-targets for virulent bacteria. We have developed a mass spectrometry-compatible experimental strategy that resulted in increased coverage of the OM proteome of a model organism, Caulobacter crescentus. The specificity of the OM enrichment step was improved by using detergent solubilization of the protein pellet, low-density cell culture conditions, and a surface-layer deficient cell line. Additionally, efficient gel-assisted digestion, high-resolution RP/RP-MS/MS, and rigorous bioinformatic analysis led to the identification of 234 proteins using strict identification criteria (≥ two unique peptides per protein; peptide false discovery rate <2%). Eighty-four of the detected proteins were predicted to localize to the OM or extracellular space. These results represent ~70% coverage of the predicted OM/extracellular proteome of C. crescentus. This analytical approach, which considers important experimental variables not previously explored in published OM protein studies, can be applied to other OM proteomic endeavors "as is" or with slight modification and should improve the large-scale study of this especially challenging subproteome.

  3. Optimization of filtering criterion for SEQUEST database searching to improve proteome coverage in shotgun proteomics

    PubMed Central

    Jiang, Xinning; Jiang, Xiaogang; Han, Guanghui; Ye, Mingliang; Zou, Hanfa

    2007-01-01

    Background In proteomic analysis, MS/MS spectra acquired by a mass spectrometer are assigned to peptides by database searching algorithms such as SEQUEST. The assignment of peptides to MS/MS spectra by the SEQUEST searching algorithm is scored by several measures, including Xcorr, ΔCn, Sp, Rsp, and matched ion count. A filtering criterion combining several of these scores is used to separate correct identifications from random assignments. However, such filtering criteria had not previously been systematically optimized. Results In this study, we implemented a machine learning approach known as a predictive genetic algorithm (GA) for the optimization of filtering criteria to maximize the number of identified peptides at a fixed false-discovery rate (FDR) for SEQUEST database searching. As the FDR was determined directly by the decoy database search scheme, the GA-based optimization approach did not require any prior knowledge of the characteristics of the data set, which represents a significant advantage over statistical approaches such as PeptideProphet. Compared with PeptideProphet, the GA-based approach can achieve similar performance in distinguishing true from false assignments with only 1/10 of the processing time. Moreover, the GA-based approach can be easily extended to process other database search results as it does not rely on any assumption about the data. Conclusion Our results indicated that filtering criteria should be optimized individually for different samples. The newly developed software using the GA provides a convenient and fast way to create tailored optimal criteria for different proteome samples to improve proteome coverage. PMID:17761002
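    The optimization target described above can be made concrete with a small sketch: estimate the FDR from decoy hits and search for score cutoffs that maximize accepted target identifications at a fixed FDR. The abstract's method uses a genetic algorithm over more scores; the exhaustive two-score grid search below, with a commonly used decoy-based FDR estimate and synthetic score distributions, is only meant to illustrate the objective being optimized.

```python
# Minimal sketch of decoy-based threshold tuning: pick (Xcorr, deltaCn) cutoffs
# that maximize accepted target PSMs while keeping the estimated FDR below a
# target level. Illustrative grid search, not the GA of the paper.
import itertools
import numpy as np

def estimated_fdr(n_target: int, n_decoy: int) -> float:
    # One common estimate for a concatenated target-decoy search.
    return 2.0 * n_decoy / max(n_target + n_decoy, 1)

def best_cutoffs(xcorr, delta_cn, is_decoy, fdr_max=0.01):
    xcorr, delta_cn, is_decoy = map(np.asarray, (xcorr, delta_cn, is_decoy))
    best = (0, None)
    for xc, dc in itertools.product(np.arange(1.0, 4.0, 0.1), np.arange(0.0, 0.3, 0.02)):
        keep = (xcorr >= xc) & (delta_cn >= dc)
        n_t, n_d = int((keep & ~is_decoy).sum()), int((keep & is_decoy).sum())
        if estimated_fdr(n_t, n_d) <= fdr_max and n_t > best[0]:
            best = (n_t, (round(float(xc), 2), round(float(dc), 2)))
    return best   # (accepted target PSMs, (Xcorr cutoff, deltaCn cutoff))

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    n = 5000
    decoy = rng.random(n) < 0.5                    # synthetic PSM labels
    xcorr = np.where(decoy, rng.normal(1.5, 0.5, n), rng.normal(2.5, 0.8, n))
    dcn = np.where(decoy, rng.beta(1, 8, n), rng.beta(2, 6, n))
    print(best_cutoffs(xcorr, dcn, decoy, fdr_max=0.01))
```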

  4. Reconstruction of Metabolic Pathways, Protein Expression, and Homeostasis Machineries across Maize Bundle Sheath and Mesophyll Chloroplasts: Large-Scale Quantitative Proteomics Using the First Maize Genome Assembly

    PubMed Central

    Friso, Giulia; Majeran, Wojciech; Huang, Mingshu; Sun, Qi; van Wijk, Klaas J.

    2010-01-01

    Chloroplasts in differentiated bundle sheath (BS) and mesophyll (M) cells of maize (Zea mays) leaves are specialized to accommodate C4 photosynthesis. This study provides a reconstruction of how metabolic pathways, protein expression, and homeostasis functions are quantitatively distributed across BS and M chloroplasts. This yielded new insights into cellular specialization. The experimental analysis was based on high-accuracy mass spectrometry, protein quantification by spectral counting, and the first maize genome assembly. A bioinformatics workflow was developed to deal with gene models, protein families, and gene duplications related to the polyploidy of maize; this avoided overidentification of proteins and resulted in more accurate protein quantification. A total of 1,105 proteins were assigned as potential chloroplast proteins, annotated for function, and quantified. Nearly complete coverage of primary carbon, starch, and tetrapyrrole metabolism, as well as excellent coverage for fatty acid synthesis, isoprenoid, sulfur, nitrogen, and amino acid metabolism, was obtained. This showed, for example, quantitative and qualitative cell type-specific specialization in starch biosynthesis, arginine synthesis, nitrogen assimilation, and initial steps in sulfur assimilation. An extensive overview of BS and M chloroplast protein expression and homeostasis machineries (more than 200 proteins) demonstrated qualitative and quantitative differences between M and BS chloroplasts and BS-enhanced levels of the specialized chaperones ClpB3 and HSP90 that suggest active remodeling of the BS proteome. The reconstructed pathways are presented as detailed flow diagrams including annotation, relative protein abundance, and cell-specific expression pattern. Protein annotation and identification data, and projection of matched peptides on the protein models, are available online through the Plant Proteome Database. PMID:20089766
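    Spectral counting, mentioned above as the quantification strategy, can be normalized in several ways; one widely used scheme is the normalized spectral abundance factor (NSAF), shown below purely as an illustration. The protein names, counts and lengths are hypothetical, and NSAF is offered as an example of the technique class, not as the exact quantification workflow of the maize study.

```python
# Normalized spectral abundance factor (NSAF): spectral counts scaled by
# protein length and then by the run total. Illustration of spectral counting
# in general, not the study's own quantification scheme.
def nsaf(spectral_counts: dict[str, int], lengths: dict[str, int]) -> dict[str, float]:
    saf = {p: spectral_counts[p] / lengths[p] for p in spectral_counts}
    total = sum(saf.values())
    return {p: v / total for p, v in saf.items()}

if __name__ == "__main__":
    counts = {"RBCL": 120, "CLPB3": 15, "HSP90": 22}      # hypothetical proteins
    lengths = {"RBCL": 475, "CLPB3": 968, "HSP90": 780}   # residues (placeholders)
    print({p: round(v, 3) for p, v in nsaf(counts, lengths).items()})
```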

  5. Proteomics

    SciTech Connect

    Hixson, Kim K.; Lopez-Ferrer, Daniel; Robinson, Errol W.; Pasa-Tolic, Ljiljana

    2010-02-01

    Proteomics aims to characterize the spatial distribution and temporal dynamics of proteins in biological systems, the protein response to environmental stimuli, and the differences in protein states between diseased and control biological systems. Mass spectrometry (MS) plays a crucial role in enabling the analysis of proteomes and typically is the method of choice for identifying proteins present in biological systems. Peptide (and consequently protein) identifications are made by comparing measured masses to calculated values obtained from genome data. Several methodologies based on MS have been developed for the analysis of proteomes. The complexity of the biological systems requires that the proteome be separated prior to analysis. Both gel based and liquid chromatography based separations have proven very useful in this regard. Typically, separated proteins are analyzed with MS either intact (top-down proteomics) or are digested into peptides (bottom-up) prior to MS analysis. Additionally, several procedures, with and without stable isotopic labeling, have been introduced to facilitate protein quantitation (e.g. characterize changes in protein abundances between given biological states).
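    The mass-matching idea described above (comparing measured masses to values calculated from genome-derived sequences) can be illustrated in a few lines. The residue masses below are standard monoisotopic values; a real search engine additionally handles charge states, modifications, isotopes and fragment ions, none of which are shown here.

```python
# Compute a peptide's neutral monoisotopic mass from its sequence and test
# whether a measured mass matches it within a ppm tolerance.
MONO = {"G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276,
        "V": 99.06841, "T": 101.04768, "C": 103.00919, "L": 113.08406,
        "I": 113.08406, "N": 114.04293, "D": 115.02694, "Q": 128.05858,
        "K": 128.09496, "E": 129.04259, "M": 131.04049, "H": 137.05891,
        "F": 147.06841, "R": 156.10111, "Y": 163.06333, "W": 186.07931}
WATER = 18.010565

def peptide_mass(seq: str) -> float:
    return sum(MONO[aa] for aa in seq) + WATER      # neutral monoisotopic mass

def matches(measured: float, seq: str, tol_ppm: float = 10.0) -> bool:
    theo = peptide_mass(seq)
    return abs(measured - theo) / theo * 1e6 <= tol_ppm

if __name__ == "__main__":
    print(round(peptide_mass("PEPTIDEK"), 4))        # ~927.455
    print(matches(927.45, "PEPTIDEK", tol_ppm=20))   # True, within 20 ppm
```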

  6. Dynameomics: data-driven methods and models for utilizing large-scale protein structure repositories for improving fragment-based loop prediction.

    PubMed

    Rysavy, Steven J; Beck, David A C; Daggett, Valerie

    2014-11-01

    Protein function is intimately linked to protein structure and dynamics, yet experimentally determined structures frequently omit regions within a protein due to indeterminate data, often a consequence of protein dynamics. We propose that atomistic molecular dynamics simulations provide a diverse sampling of biologically relevant structures for these missing segments (and beyond) to improve structural modeling and structure prediction. Here we make use of the Dynameomics data warehouse, which contains simulations of representatives of essentially all known protein folds. We developed novel computational methods to efficiently identify, rank and retrieve small peptide structures, or fragments, from this database. We also created a novel data model to analyze and compare large repositories of structural data, such as contained within the Protein Data Bank and the Dynameomics data warehouse. Our evaluation compares these structural repositories for improving loop predictions and analyzes the utility of our methods and models. Using a standard set of loop structures, containing 510 loops, 30 for each loop length from 4 to 20 residues, we find that the inclusion of Dynameomics structures in fragment-based methods improves the quality of the loop predictions without being dependent on sequence homology. Depending on loop length, ∼25-75% of the best predictions came from the Dynameomics set, resulting in lower main chain root-mean-square deviations for all fragment lengths using the combined fragment library. We also provide specific cases where Dynameomics fragments provide better predictions for NMR loop structures than fragments from crystal structures. Online access to these fragment libraries is available at http://www.dynameomics.org/fragments.
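    The loop comparisons above rest on main-chain root-mean-square deviation after optimal superposition. The sketch below computes that metric with the Kabsch algorithm on synthetic coordinates; fragment retrieval from the Dynameomics warehouse and the prediction pipeline itself are not reproduced.

```python
# Backbone RMSD after optimal superposition (Kabsch algorithm).
import numpy as np

def kabsch_rmsd(p: np.ndarray, q: np.ndarray) -> float:
    """p, q: (n_atoms, 3) coordinates of equivalent atoms (e.g., loop main chain)."""
    p = p - p.mean(axis=0)
    q = q - q.mean(axis=0)
    u, s, vt = np.linalg.svd(p.T @ q)
    d = np.sign(np.linalg.det(u @ vt))          # avoid improper rotation (reflection)
    rot = u @ np.diag([1.0, 1.0, d]) @ vt
    diff = p @ rot - q
    return float(np.sqrt((diff ** 2).sum() / len(p)))

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    loop = rng.normal(size=(20, 3))             # synthetic 20-atom loop
    theta = 0.7
    rotz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                     [np.sin(theta),  np.cos(theta), 0.0],
                     [0.0, 0.0, 1.0]])
    moved = loop @ rotz.T + np.array([5.0, -2.0, 1.0])
    print(round(kabsch_rmsd(loop, moved), 6))   # ~0: same structure, different frame
```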

  7. Neutrinos and large-scale structure

    SciTech Connect

    Eisenstein, Daniel J.

    2015-07-15

    I review the use of cosmological large-scale structure to measure properties of neutrinos and other relic populations of light relativistic particles. With experiments that measure the anisotropies of the cosmic microwave background and the clustering of matter at low redshift, we have now securely measured a relativistic background with a density appropriate to the cosmic neutrino background. Our limits on the mass of the neutrino continue to shrink. Experiments coming in the next decade will greatly improve the available precision on searches for the energy density of novel relativistic backgrounds and the mass of neutrinos.

  8. Large scale cluster computing workshop

    SciTech Connect

    Dane Skow; Alan Silverman

    2002-12-23

    Recent revolutions in computer hardware and software technologies have paved the way for the large-scale deployment of clusters of commodity computers to address problems heretofore the domain of tightly coupled SMP processors. Near-term projects within High Energy Physics and other computing communities will deploy clusters of thousands of processors that will be used by hundreds to thousands of independent users. This will expand the reach in both dimensions by an order of magnitude from the current successful production facilities. The goals of this workshop were: (1) to determine what tools exist which can scale up to the cluster sizes foreseen for the next generation of HENP experiments (several thousand nodes) and by implication to identify areas where some investment of money or effort is likely to be needed. (2) To compare and record experiences gained with such tools. (3) To produce a practical guide to all stages of planning, installing, building and operating a large computing cluster in HENP. (4) To identify and connect groups with similar interests within HENP and the larger clustering community.

  9. A Large Scale Automatic Earthquake Location Catalog in the San Jacinto Fault Zone Area Using An Improved Shear-Wave Detection Algorithm

    NASA Astrophysics Data System (ADS)

    White, M. C. A.; Ross, Z.; Vernon, F.; Ben-Zion, Y.

    2015-12-01

    UC San Diego's ANZA network began archiving event-triggered data in 1982. As a result of improved recording technology, continuous waveform data archives are available starting in 1998. This continuous dataset, from 1998 to present, represents a wealth of potential insight into spatio-temporal seismicity patterns, earthquake physics and mechanics of the San Jacinto Fault Zone. However, the volume of data renders manual analysis costly. In order to investigate the characteristics of the data in space and time, an automatic earthquake location catalog is needed. To this end, we apply standard earthquake signal processing techniques to the continuous data to detect first-arriving P-waves in combination with a recently developed S-wave detection algorithm. The resulting dataset of arrival time observations is processed using a grid association algorithm to produce initial absolute locations, which are refined using a location inversion method that accounts for 3-D velocity heterogeneities. Precise relative locations are then derived from the refined absolute locations using the HypoDD double-difference algorithm. Moment magnitudes for the events are estimated from multi-taper spectral analysis. A >650% increase in the S:P pick ratio is achieved using the updated S-wave detection algorithm, when compared to the currently available catalog for the ANZA network. The increased number of S-wave observations leads to improved earthquake location accuracy and reliability (i.e., fewer false event detections). Various aspects of spatio-temporal seismicity patterns and size distributions are investigated. Updated results will be presented at the meeting.
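    The abstract refers to standard signal-processing detection of first arrivals without naming a specific detector, so the sketch below uses a generic short-term-average/long-term-average (STA/LTA) trigger, a common choice, purely to illustrate the idea of flagging an onset when short-term energy rises against the background. The sampling rate, window lengths and threshold are arbitrary assumptions, not the parameters of the catalog described here.

```python
# Generic STA/LTA trigger on a synthetic trace; illustrative only.
import numpy as np

def sta_lta(trace: np.ndarray, fs: float, sta_s: float = 0.5, lta_s: float = 10.0) -> np.ndarray:
    energy = trace.astype(float) ** 2
    n_sta, n_lta = int(sta_s * fs), int(lta_s * fs)
    kernel = lambda n: np.ones(n) / n
    sta = np.convolve(energy, kernel(n_sta), mode="same")   # short-term average
    lta = np.convolve(energy, kernel(n_lta), mode="same")   # long-term average
    return sta / np.maximum(lta, 1e-12)

if __name__ == "__main__":
    fs = 100.0                                     # samples per second
    rng = np.random.default_rng(4)
    trace = rng.normal(0, 1, int(60 * fs))         # 60 s of noise
    trace[3000:3200] += 8 * rng.normal(0, 1, 200)  # synthetic arrival at t = 30 s
    ratio = sta_lta(trace, fs)
    print(f"first trigger at ~{np.argmax(ratio > 3.0) / fs:.1f} s")
```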

  10. Integrating Remote Sensing Information Into A Distributed Hydrological Model for Improving Water Budget Predictions in Large-scale Basins through Data Assimilation

    PubMed Central

    Qin, Changbo; Jia, Yangwen; Su, Z.(Bob); Zhou, Zuhao; Qiu, Yaqin; Suhui, Shen

    2008-01-01

    This paper investigates whether remote sensing evapotranspiration estimates can be integrated by means of data assimilation into a distributed hydrological model for improving the predictions of spatial water distribution over a large river basin with an area of 317,800 km2. A series of available MODIS satellite images over the Haihe River basin in China are used for the year 2005. Evapotranspiration is retrieved from these 1×1 km resolution images using the SEBS (Surface Energy Balance System) algorithm. The physically-based distributed model WEP-L (Water and Energy transfer Process in Large river basins) is used to compute the water balance of the Haihe River basin in the same year. Comparison between model-derived and remote-sensing-retrieved basin-averaged evapotranspiration estimates shows a good piecewise linear relationship, but their spatial distributions within the Haihe basin differ. The remote sensing derived evapotranspiration shows variability at finer scales. An extended Kalman filter (EKF) data assimilation algorithm, suitable for non-linear problems, is used. Assimilation results indicate that remote sensing observations have a potentially important role in providing spatial information to the assimilation system for the spatially optimal hydrological parameterization of the model. This is especially important for large basins, such as the Haihe River basin in this study. Combining and integrating the capabilities of and information from model simulation and remote sensing techniques may provide the best spatial and temporal characteristics for hydrological states/fluxes, and would be both appealing and necessary for improving our knowledge of fundamental hydrological processes and for addressing important water resource management problems.
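    To make the assimilation step concrete, the sketch below shows a generic extended-Kalman-filter update in which a single remotely sensed observation corrects a model state vector. The two-state example, the linear toy observation operator and all numbers are placeholders, not the WEP-L/SEBS configuration used in the study.

```python
# Generic EKF measurement update; placeholder model and observation operator.
import numpy as np

def ekf_update(x, P, z, h, H_jac, R):
    """x: state estimate, P: state covariance, z: observation vector,
    h: observation operator, H_jac: its Jacobian at x, R: observation noise cov."""
    H = H_jac(x)
    y = z - h(x)                               # innovation
    S = H @ P @ H.T + R                        # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

if __name__ == "__main__":
    # Hypothetical 2-state example: [soil moisture, groundwater storage];
    # the "sensor" observes ET, modelled here as a linear function of soil moisture.
    x = np.array([0.25, 120.0])
    P = np.diag([0.01, 25.0])
    h = lambda s: np.array([4.0 * s[0]])       # toy ET operator (mm/day)
    H_jac = lambda s: np.array([[4.0, 0.0]])
    R = np.array([[0.25]])
    x, P = ekf_update(x, P, np.array([1.2]), h, H_jac, R)
    print(np.round(x, 3))                      # soil moisture nudged toward the obs
```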

  11. The utilization of gum tragacanth to improve the growth of Rhodotorula aurantiaca and the production of gamma-decalactone in large scale.

    PubMed

    Alchihab, Mohamed; Destain, Jacqueline; Aguedo, Mario; Wathelet, Jean-Paul; Thonart, Philippe

    2010-09-01

    The production of gamma-decalactone and 4-hydroxydecanoic acid by the psychrophilic yeast R. aurantiaca was studied. The effect of both compounds on the growth of R. aurantiaca was also investigated, and our results show that gamma-decalactone must be one of the limiting factors for its own production. The addition of gum tragacanth to the medium at concentrations of 3 and 4 g/l appears to be an adequate strategy to enhance gamma-decalactone production and to reduce its toxicity towards the cells. The production of gamma-decalactone and 4-hydroxydecanoic acid was significantly higher in the 20-l bioreactor than in the 100-l bioreactor. By using 20 g/l of castor oil, 6.5 and 4.5 g/l of gamma-decalactone were extracted after acidification at pH 2.0 and distillation at 100 degrees C for 45 min in the 20- and 100-l bioreactors, respectively. We propose a process at industrial scale using a psychrophilic yeast to produce natural gamma-decalactone from castor oil, which also acts as a detoxifying agent; moreover, the process was improved by adding a natural gum.

  12. Improving AFLP analysis of large-scale patterns of genetic variation--a case study with the Central African lianas Haumania spp (Marantaceae) showing interspecific gene flow.

    PubMed

    Ley, A C; Hardy, O J

    2013-04-01

    AFLP markers are often used to study patterns of population genetic variation and gene flow because they offer a good coverage of the nuclear genome, but the reliability of AFLP scoring is critical. To assess interspecific gene flow in two African rainforest liana species (Haumania danckelmaniana, H. liebrechtsiana) where previous evidence of chloroplast captures questioned the importance of hybridization and species boundaries, we developed new AFLP markers and a novel approach to select reliable bands from their degree of reproducibility. The latter is based on the estimation of the broad-sense heritability of AFLP phenotypes, an improvement over classical scoring error rates, which showed that the polymorphism of most AFLP bands was affected by a substantial nongenetic component. Therefore, using a quantitative genetics framework, we also modified an existing estimator of pairwise kinship coefficient between individuals correcting for the limited heritability of markers. Bayesian clustering confirms the recognition of the two Haumania species. Nevertheless, the decay of the relatedness between individuals of distinct species with geographic distance demonstrates that hybridization affects the nuclear genome. In conclusion, although we showed that AFLP markers might be substantially affected by nongenetic factors, their analysis using the new methods developed considerably advanced our understanding of the pattern of gene flow in our model species. PMID:23398575
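    The reproducibility-based band selection described above can be illustrated with a simple per-band repeatability (intraclass correlation) computed from duplicated AFLP profiles, as sketched below. This captures the general idea of down-weighting bands dominated by non-genetic noise, but it is not the heritability estimator or the kinship correction developed in the paper, and the simulated genotypes and error rates are invented for the example.

```python
# Per-band repeatability (intraclass correlation from a one-way ANOVA) from
# replicated 0/1 AFLP scorings; bands driven mostly by scoring noise score near 0.
import numpy as np

def band_repeatability(scores: np.ndarray) -> np.ndarray:
    """scores: (n_individuals, n_replicates, n_bands) 0/1 AFLP phenotypes."""
    n_ind, k, _ = scores.shape
    grand = scores.mean(axis=(0, 1))
    ind_mean = scores.mean(axis=1)                       # (n_ind, n_bands)
    ms_among = k * ((ind_mean - grand) ** 2).sum(axis=0) / (n_ind - 1)
    ms_within = ((scores - ind_mean[:, None, :]) ** 2).sum(axis=(0, 1)) / (n_ind * (k - 1))
    s2_among = (ms_among - ms_within) / k                # among-individual variance
    return np.clip(s2_among / np.maximum(s2_among + ms_within, 1e-12), 0.0, 1.0)

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    n_ind, k, n_bands = 40, 2, 6
    genotype = rng.random((n_ind, 1, n_bands)) < 0.5     # "true" band presence
    err_rate = np.array([0.0, 0.05, 0.1, 0.2, 0.4, 0.5]) # per-band scoring error
    noise = rng.random((n_ind, k, n_bands)) < err_rate
    scores = (genotype ^ noise).astype(float)            # scoring errors flip bands
    print(np.round(band_repeatability(scores), 2))       # decreases with error rate
```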

  13. Improving AFLP analysis of large-scale patterns of genetic variation--a case study with the Central African lianas Haumania spp (Marantaceae) showing interspecific gene flow.

    PubMed

    Ley, A C; Hardy, O J

    2013-04-01

    AFLP markers are often used to study patterns of population genetic variation and gene flow because they offer a good coverage of the nuclear genome, but the reliability of AFLP scoring is critical. To assess interspecific gene flow in two African rainforest liana species (Haumania danckelmaniana, H. liebrechtsiana) where previous evidence of chloroplast captures questioned the importance of hybridization and species boundaries, we developed new AFLP markers and a novel approach to select reliable bands from their degree of reproducibility. The latter is based on the estimation of the broad-sense heritability of AFLP phenotypes, an improvement over classical scoring error rates, which showed that the polymorphism of most AFLP bands was affected by a substantial nongenetic component. Therefore, using a quantitative genetics framework, we also modified an existing estimator of pairwise kinship coefficient between individuals correcting for the limited heritability of markers. Bayesian clustering confirms the recognition of the two Haumania species. Nevertheless, the decay of the relatedness between individuals of distinct species with geographic distance demonstrates that hybridization affects the nuclear genome. In conclusion, although we showed that AFLP markers might be substantially affected by nongenetic factors, their analysis using the new methods developed considerably advanced our understanding of the pattern of gene flow in our model species.

  14. Large-scale hydrological modeling for calculating water stress indices: implications of improved spatiotemporal resolution, surface-groundwater differentiation, and uncertainty characterization.

    PubMed

    Scherer, Laura; Venkatesh, Aranya; Karuppiah, Ramkumar; Pfister, Stephan

    2015-04-21

    Physical water scarcities can be described by water stress indices. These are often determined at an annual scale and a watershed level; however, such scales mask seasonal fluctuations and spatial heterogeneity within a watershed. In order to account for this level of detail, first and foremost, water availability estimates must be improved and refined. State-of-the-art global hydrological models such as WaterGAP and UNH/GRDC have previously been unable to reliably reflect water availability at the subbasin scale. In this study, the Soil and Water Assessment Tool (SWAT) was tested as an alternative to global models, using the case study of the Mississippi watershed. While SWAT clearly outperformed the global models at the scale of a large watershed, it was judged to be unsuitable for global scale simulations due to the high calibration effort required. The results obtained in this study show that global assessments miss out on key aspects related to upstream/downstream relations and monthly fluctuations, which are important both for the characterization of water scarcity in the Mississippi watershed and for water footprints. Especially in arid regions, where scarcity is high, these models provide unsatisfactory results.

  15. Large Scale Nanolaminate Deformable Mirror

    SciTech Connect

    Papavasiliou, A; Olivier, S; Barbee, T; Miles, R; Chang, K

    2005-11-30

    This work concerns the development of a technology that uses nanolaminate foils to form light-weight, deformable mirrors that are scalable over a wide range of mirror sizes. While MEMS-based deformable mirrors and spatial light modulators have considerably reduced the cost and increased the capabilities of adaptive optic systems, there has not been a way to utilize the advantages of lithography and batch-fabrication to produce large-scale deformable mirrors. This technology is made scalable by using fabrication techniques and lithography that are not limited to the sizes of conventional MEMS devices. Like many MEMS devices, these mirrors use parallel plate electrostatic actuators. This technology replicates that functionality by suspending a horizontal piece of nanolaminate foil over an electrode by electroplated nickel posts. This actuator is attached, with another post, to another nanolaminate foil that acts as the mirror surface. Most MEMS devices are produced with integrated circuit lithography techniques that are capable of very small line widths, but are not scalable to large sizes. This technology is very tolerant of lithography errors and can use coarser, printed circuit board lithography techniques that can be scaled to very large sizes. These mirrors use small, lithographically defined actuators and thin nanolaminate foils, allowing them to produce deformations over a large area while minimizing weight. This paper describes a staged program to develop this technology. First-principles models were developed to determine design parameters. Three stages of fabrication are described, starting with a 3 x 3 device using conventional metal foils and epoxy and ending with a 10-across all-metal device with nanolaminate mirror surfaces.
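    Since the abstract explains that the mirrors rely on parallel-plate electrostatic actuation, the following back-of-the-envelope sketch computes the attractive force and the classic pull-in voltage limit for a spring-suspended plate. All dimensions, the stiffness value and the drive voltage are illustrative placeholders, not the parameters of the actual device.

```python
# Parallel-plate electrostatics for a spring-suspended actuator pad.
import math

EPS0 = 8.854e-12                     # vacuum permittivity, F/m

def electrostatic_force(voltage: float, area: float, gap: float) -> float:
    """Attractive force (N) between parallel plates of given area (m^2) and gap (m)."""
    return EPS0 * area * voltage**2 / (2.0 * gap**2)

def pull_in_voltage(stiffness: float, area: float, gap0: float) -> float:
    """Classic pull-in limit: beyond ~1/3 of the initial gap the electrostatic
    force overwhelms the linear restoring spring."""
    return math.sqrt(8.0 * stiffness * gap0**3 / (27.0 * EPS0 * area))

if __name__ == "__main__":
    area = (200e-6) ** 2             # 200 um x 200 um actuator pad (placeholder)
    gap0 = 5e-6                      # 5 um initial gap (placeholder)
    k = 30.0                         # N/m effective foil/post stiffness (placeholder)
    print(f"force at 50 V: {electrostatic_force(50.0, area, gap0) * 1e6:.2f} uN")
    print(f"pull-in voltage: {pull_in_voltage(k, area, gap0):.1f} V")
```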

  16. Large-Scale Information Systems

    SciTech Connect

    D. M. Nicol; H. R. Ammerlahn; M. E. Goldsby; M. M. Johnson; D. E. Rhodes; A. S. Yoshimura

    2000-12-01

    Large enterprises are ever more dependent on their Large-Scale Information Systems (LSIS), computer systems that are distinguished architecturally by distributed components--data sources, networks, computing engines, simulations, human-in-the-loop control and remote access stations. These systems provide such capabilities as workflow, data fusion and distributed database access. The Nuclear Weapons Complex (NWC) contains many examples of LSIS components, a fact that motivates this research. However, most LSIS in use grew up from collections of separate subsystems that were not designed to be components of an integrated system. For this reason, they are often difficult to analyze and control. The problem is made more difficult by the size of a typical system, its diversity of information sources, and the institutional complexities associated with its geographic distribution across the enterprise. Moreover, there is no integrated approach for analyzing or managing such systems. Indeed, integrated development of LSIS is an active area of academic research. This work developed such an approach by simulating the various components of the LSIS and allowing the simulated components to interact with real LSIS subsystems. This research demonstrated two benefits. First, applying it to a particular LSIS provided a thorough understanding of the interfaces between the system's components. Second, it demonstrated how more rapid and detailed answers could be obtained to questions significant to the enterprise by interacting with the relevant LSIS subsystems through simulated components designed with those questions in mind. In a final, added phase of the project, investigations were made on extending this research to wireless communication networks in support of telemetry applications.

  17. Bayesian Proteoform Modeling Improves Protein Quantification of Global Proteomic Measurements

    SciTech Connect

    Webb-Robertson, Bobbie-Jo M.; Matzke, Melissa M.; Datta, Susmita; Payne, Samuel H.; Kang, Jiyun; Bramer, Lisa M.; Nicora, Carrie D.; Shukla, Anil K.; Metz, Thomas O.; Rodland, Karin D.; Smith, Richard D.; Tardiff, Mark F.; McDermott, Jason E.; Pounds, Joel G.; Waters, Katrina M.

    2014-12-01

    As the capability of mass spectrometry-based proteomics has matured, tens of thousands of peptides can be measured simultaneously, which has the benefit of offering a systems view of protein expression. However, a major challenge is that with an increase in throughput, protein quantification estimation from the native measured peptides has become a computational task. A limitation of existing computationally-driven protein quantification methods is that most ignore protein variation, such as alternate splicing of the RNA transcript and post-translational modifications or other possible proteoforms, which will affect a significant fraction of the proteome. The consequence of this assumption is that statistical inference at the protein level, and consequently downstream analyses, such as network and pathway modeling, have only limited power for biomarker discovery. Here, we describe a Bayesian model (BP-Quant) that uses statistically derived peptide signatures to identify peptides that fall outside the dominant pattern, or to detect the existence of multiple over-expressed patterns, in order to improve relative protein abundance estimates. It is a research-driven approach that utilizes the objectives of the experiment, defined in the context of a standard statistical hypothesis, to identify a set of peptides exhibiting similar statistical behavior relating to a protein. This approach infers that changes in relative protein abundance can be used as a surrogate for changes in function, without necessarily taking into account the effect of differential post-translational modifications, processing, or splicing in altering protein function. We verify the approach using a dilution study from mouse plasma samples and demonstrate that BP-Quant achieves similar accuracy as the current state-of-the-art methods at proteoform identification with significantly better specificity. BP-Quant is available as MATLAB and R packages at https://github.com/PNNL-Comp-Mass-Spec/BP-Quant.
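    As a drastically simplified stand-in for the peptide-signature idea behind BP-Quant, the sketch below labels each peptide of a protein by the sign of its significant change, keeps the dominant signature and rolls those peptides up into the protein estimate. The real method is a Bayesian model with formal signature inference; see the package linked in the abstract. The simulated intensities and the plain t-test are assumptions made for the illustration.

```python
# Toy peptide-signature rollup: not BP-Quant itself, only the underlying idea.
from collections import Counter
import numpy as np
from scipy import stats

def peptide_signature(case: np.ndarray, ctrl: np.ndarray, alpha: float = 0.05) -> int:
    t, p = stats.ttest_ind(case, ctrl, nan_policy="omit")
    return 0 if p >= alpha else (1 if t > 0 else -1)     # -1 / 0 / +1 signature

def rollup(case: np.ndarray, ctrl: np.ndarray) -> float:
    """case/ctrl: (n_peptides, n_samples) log2 intensities for one protein.
    Returns the protein log2 fold change from peptides in the dominant signature."""
    sigs = [peptide_signature(c, k) for c, k in zip(case, ctrl)]
    dominant, _ = Counter(sigs).most_common(1)[0]
    keep = [i for i, s in enumerate(sigs) if s == dominant]
    return float(np.nanmean(case[keep]) - np.nanmean(ctrl[keep]))

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    ctrl = rng.normal(20, 0.3, size=(5, 6))
    case = ctrl + rng.normal(0, 0.3, size=(5, 6)) + 1.0
    case[4] -= 3.0            # one discordant peptide (e.g., a distinct proteoform)
    print(round(rollup(case, ctrl), 2))   # ~ +1, driven by the 4 concordant peptides
```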

  18. Rice proteomics: a model system for crop improvement and food security.

    PubMed

    Kim, Sun Tae; Kim, Sang Gon; Agrawal, Ganesh Kumar; Kikuchi, Shoshi; Rakwal, Randeep

    2014-03-01

    Rice proteomics has progressed at a tremendous pace since the year 2000, and that has resulted in establishing and understanding the proteomes of tissues, organs, and organelles under both normal and abnormal (adverse) environmental conditions. Established proteomes have also helped in re-annotating the rice genome and revealing new roles of previously known proteins. This progress has established rice proteomics as a cornerstone for cereal crops, and rice remains a model system for crop proteomics research. Proteomics-based discoveries in rice are likely to be translated into improvements of crop plants, and vice versa, against ever-changing environmental factors. This review comprehensively covers rice proteomics studies from August 2010 to July 2013, with a major focus on rice responses to diverse abiotic (drought, salt, oxidative, temperature, nutrient, hormone, metal ions, UV radiation, and ozone) as well as various biotic stresses, especially rice-pathogen interactions. The differentially regulated proteins in response to various abiotic stresses in different tissues have also been summarized, indicating key metabolic and regulatory pathways. We envision a significant role of rice proteomics in addressing the global ground-level problem of food security, to meet the demands of the human population which is expected to reach six to nine billion by 2040.

  19. Quality assurance and quality improvement using supportive supervision in a large-scale STI intervention with sex workers, men who have sex with men/transgenders and injecting-drug users in India

    PubMed Central

    Wi, Teodora C; Das, Anjana; Kane, Sumit; Singh, Aman Kumar; George, Bitra; Steen, Richard

    2010-01-01

    Background Documentation of the long-term impact of supportive supervision using a monitoring tool in STI interventions with sex workers, men who have sex with men and injecting-drug users is limited. The authors report methods and results of continued quality monitoring in large-scale STI services provided as part of a broader HIV-prevention package in six Indian states under Avahan, the India AIDS Initiative. Methodology Guidelines and standards for STI services, and a supportive supervisory tool to monitor quality, were developed to provide technical support to the STI component of a large-scale HIV-prevention intervention through 372 project-supported STI clinics. The tool contained 80 questions to track the quality of STI services provided, scored on a five-point scale in five performance areas: coverage, quality of clinic and services, referral networks, community involvement and technical support. Results The tool was applied to different STI clinics during supportive supervision visits conducted once every 3 months to assess quality, give immediate feedback and develop a quality score. A total of 292 clinics managed by seven lead implementing partners in six Indian states were covered in 15 quarters over 45 months. Overall quality indicators for the five performance areas showed a three- to sevenfold improvement over the period. Conclusion It was possible to improve quality over the long term in STI interventions for sex workers, men who have sex with men and injecting-drug users using an interactive and comprehensive supportive supervision tool which gives on-the-spot feedback. However, such an effort is time-consuming and resource-intensive, and needs a structured approach. PMID:20167739

  20. Molecular Biologist's Guide to Proteomics

    PubMed Central

    Graves, Paul R.; Haystead, Timothy A. J.

    2002-01-01

    The emergence of proteomics, the large-scale analysis of proteins, has been inspired by the realization that the final product of a gene is inherently more complex and closer to function than the gene itself. Shortfalls in the ability of bioinformatics to predict both the existence and function of genes have also illustrated the need for protein analysis. Moreover, only through the study of proteins can posttranslational modifications be determined, which can profoundly affect protein function. Proteomics has been enabled by the accumulation of both DNA and protein sequence databases, improvements in mass spectrometry, and the development of computer algorithms for database searching. In this review, we describe why proteomics is important, how it is conducted, and how it can be applied to complement other existing technologies. We conclude that currently, the most practical application of proteomics is the analysis of target proteins as opposed to entire proteomes. This type of proteomics, referred to as functional proteomics, is always driven by a specific biological question. In this way, protein identification and characterization has a meaningful outcome. We discuss some of the advantages of a functional proteomics approach and provide examples of how different methodologies can be utilized to address a wide variety of biological problems. PMID:11875127

  1. Engineering management of large scale systems

    NASA Technical Reports Server (NTRS)

    Sanders, Serita; Gill, Tepper L.; Paul, Arthur S.

    1989-01-01

    The organization of high-technology and engineering problem solving has given rise to an emerging concept. Reasoning principles for integrating traditional engineering problem solving with system theory, management sciences, behavioral decision theory, and planning and design approaches can be incorporated into a methodological approach to solving problems with a long-range perspective. Long-range planning has great potential to improve productivity by using a systematic and organized approach. Thus, efficiency and cost effectiveness are the driving forces in promoting the organization of engineering problems. Aspects of systems engineering that provide an understanding of the management of large-scale systems are broadly covered here. Due to the focus and application of the research, other significant factors (e.g., human behavior, decision making, etc.) are not emphasized but are considered.

  2. Supporting large-scale computational science

    SciTech Connect

    Musick, R., LLNL

    1998-02-19

    Business needs have driven the development of commercial database systems since their inception. As a result, there has been a strong focus on supporting many users, minimizing the potential corruption or loss of data, and maximizing performance metrics like transactions per second, or TPC-C and TPC-D results. It turns out that these optimizations have little to do with the needs of the scientific community, and in particular have little impact on improving the management and use of large-scale high-dimensional data. At the same time, there is an unanswered need in the scientific community for many of the benefits offered by a robust DBMS. For example, tying an ad-hoc query language such as SQL together with a visualization toolkit would be a powerful enhancement to current capabilities. Unfortunately, there has been little emphasis or discussion in the VLDB community on this mismatch over the last decade. The goal of the paper is to identify the specific issues that need to be resolved before large-scale scientific applications can make use of DBMS products. This topic is addressed in the context of an evaluation of commercial DBMS technology applied to the exploration of data generated by the Department of Energy's Accelerated Strategic Computing Initiative (ASCI). The paper describes the data being generated for ASCI as well as current capabilities for interacting with and exploring this data. The attraction of applying standard DBMS technology to this domain is discussed, as well as the technical and business issues that currently make this an infeasible solution.

  3. Automating large-scale reactor systems

    SciTech Connect

    Kisner, R.A.

    1985-01-01

    This paper conveys a philosophy for developing automated large-scale control systems that behave in an integrated, intelligent, flexible manner. Methods for operating large-scale systems under varying degrees of equipment degradation are discussed, and a design approach that separates the effort into phases is suggested. 5 refs., 1 fig.

  4. Advances in plant proteomics toward improvement of crop productivity and stress resistance

    PubMed Central

    Hu, Junjie; Rampitsch, Christof; Bykova, Natalia V.

    2015-01-01

    Abiotic and biotic stresses constrain plant growth and development negatively impacting crop production. Plants have developed stress-specific adaptations as well as simultaneous responses to a combination of various abiotic stresses with pathogen infection. The efficiency of stress-induced adaptive responses is dependent on activation of molecular signaling pathways and intracellular networks by modulating expression, or abundance, and/or post-translational modification (PTM) of proteins primarily associated with defense mechanisms. In this review, we summarize and evaluate the contribution of proteomic studies to our understanding of stress response mechanisms in different plant organs and tissues. Advanced quantitative proteomic techniques have improved the coverage of total proteomes and sub-proteomes from small amounts of starting material, and characterized PTMs as well as protein–protein interactions at the cellular level, providing detailed information on organ- and tissue-specific regulatory mechanisms responding to a variety of individual stresses or stress combinations during plant life cycle. In particular, we address the tissue-specific signaling networks localized to various organelles that participate in stress-related physiological plasticity and adaptive mechanisms, such as photosynthetic efficiency, symbiotic nitrogen fixation, plant growth, tolerance and common responses to environmental stresses. We also provide an update on the progress of proteomics with major crop species and discuss the current challenges and limitations inherent to proteomics techniques and data interpretation for non-model organisms. Future directions in proteomics research toward crop improvement are further discussed. PMID:25926838

  5. Clinical proteomics and OMICS clues useful in translational medicine research

    PubMed Central

    2012-01-01

    Since the advent of the new proteomics era more than a decade ago, large-scale studies of protein profiling have been used to identify distinctive molecular signatures in a wide array of biological systems, spanning areas of basic biological research, clinical diagnostics, and biomarker discovery directed toward therapeutic applications. Recent advances in protein separation and identification techniques have significantly improved proteomic approaches, leading to enhancement of the depth and breadth of proteome coverage. Proteomic signatures, specific for multiple diseases, including cancer and pre-invasive lesions, are emerging. This article combines, in a simple manner, relevant proteomic and OMICS clues used in the discovery and development of diagnostic and prognostic biomarkers that are applicable to all clinical fields, thus helping to improve applications of clinical proteomic strategies for translational medicine research. PMID:22642823

  6. Improved recovery and identification of membrane proteins from rat hepatic cells using a centrifugal proteomic reactor.

    PubMed

    Zhou, Hu; Wang, Fangjun; Wang, Yuwei; Ning, Zhibin; Hou, Weimin; Wright, Theodore G; Sundaram, Meenakshi; Zhong, Shumei; Yao, Zemin; Figeys, Daniel

    2011-10-01

    Despite their importance in many biological processes, membrane proteins are underrepresented in proteomic analysis because of their poor solubility (hydrophobicity) and often low abundance. We describe a novel approach for the identification of plasma membrane proteins and intracellular microsomal proteins that combines membrane fractionation, a centrifugal proteomic reactor for streamlined protein extraction, protein digestion and fractionation by centrifugation, and high performance liquid chromatography-electrospray ionization-tandem MS. The performance of this approach was illustrated for the study of the proteome of ER and Golgi microsomal membranes in rat hepatic cells. The centrifugal proteomic reactor identified 945 plasma membrane proteins and 955 microsomal membrane proteins, of which 63 and 47% were predicted as bona fide membrane proteins, respectively. Among these proteins, >800 proteins were undetectable by the conventional in-gel digestion approach. The majority of the membrane proteins only identified by the centrifugal proteomic reactor were proteins with ≥ 2 transmembrane segments or proteins with high molecular mass (e.g. >150 kDa) and hydrophobicity. The improved proteomic reactor allowed the detection of a group of endocytic and/or signaling receptor proteins on the plasma membrane, as well as apolipoproteins and glycerolipid synthesis enzymes that play a role in the assembly and secretion of apolipoprotein B100-containing very low density lipoproteins. Thus, the centrifugal proteomic reactor offers a new analytical tool for structure and function studies of membrane proteins involved in lipid and lipoprotein metabolism.
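
    The enrichment the authors report, proteins with two or more transmembrane segments and high hydrophobicity, is the kind of property that a simple Kyte-Doolittle GRAVY score can flag in a first pass. The sketch below is a generic hydropathy calculation on a hypothetical sequence fragment, not the prediction pipeline used in the study.

        # Kyte-Doolittle GRAVY (grand average of hydropathy) score: a quick, rough
        # indicator of protein hydrophobicity; strongly positive values often flag
        # membrane-associated candidates. Generic sketch, not the authors' pipeline.
        KD = {
            'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5,
            'Q': -3.5, 'E': -3.5, 'G': -0.4, 'H': -3.2, 'I': 4.5,
            'L': 3.8, 'K': -3.9, 'M': 1.9, 'F': 2.8, 'P': -1.6,
            'S': -0.8, 'T': -0.7, 'W': -0.9, 'Y': -1.3, 'V': 4.2,
        }

        def gravy(sequence: str) -> float:
            residues = [aa for aa in sequence.upper() if aa in KD]
            return sum(KD[aa] for aa in residues) / len(residues)

        # Hypothetical example sequence fragment
        print(round(gravy("MLLAVLYCLAVFALS"), 2))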

  7. Is the universe homogeneous on large scale?

    NASA Astrophysics Data System (ADS)

    Zhu, Xingfen; Chu, Yaoquan

    Whether the distribution of matter in the universe is homogeneous or fractal on large scales has been vigorously debated in observational cosmology in recent years. Pietronero and his co-workers have strongly advocated that the fractal behaviour of the galaxy distribution extends to the largest scales observed (≈1000 h⁻¹ Mpc) with fractal dimension D ≈ 2. Most cosmologists who hold to the standard model, however, insist that the universe is homogeneous on large scales. The answer to whether the universe is homogeneous on large scales must await the results of the next generation of galaxy redshift surveys.
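
    The homogeneous-versus-fractal question in this abstract is usually cast in terms of counts in spheres; the standard relation, independent of any particular survey analysis, is summarized below.

        % Counts-in-spheres test for homogeneity: the mean number of galaxies
        % within radius r of a galaxy scales as a power of r,
        \[
          N(<r) \propto r^{D}, \qquad
          D = \frac{d\,\ln N(<r)}{d\,\ln r}.
        \]
        % D \simeq 2 (the value advocated by Pietronero and co-workers) indicates a
        % fractal distribution, whereas D \to 3 on large scales is the signature of
        % homogeneity.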

  8. Characterization of quinoa seed proteome combining different protein precipitation techniques: Improvement of knowledge of nonmodel plant proteomics.

    PubMed

    Capriotti, Anna Laura; Cavaliere, Chiara; Piovesana, Susy; Stampachiacchiere, Serena; Ventura, Salvatore; Zenezini Chiozzi, Riccardo; Laganà, Aldo

    2015-03-01

    A shotgun proteomics approach was used to characterize the quinoa seed proteome. To obtain comprehensive proteomic data from quinoa seeds, three different precipitation procedures were employed: MeOH/CHCl3/double-distilled H2O, and acetone either alone or with trichloroacetic acid; the isolated proteins were then in-solution digested and the resulting peptides were analyzed by nano-liquid chromatography coupled to tandem mass spectrometry. However, since quinoa is a nonmodel plant species, only a few protein sequences are included in the most widely known protein sequence databases. To improve the data reliability, a UniProt subdatabase containing only proteins of the Caryophyllales order was used. A total of 352 proteins were identified and evaluated both from a qualitative and quantitative point of view. This combined approach is certainly useful to increase the final number of identifications, but no particular class of proteins was extracted and identified in spite of the different chemistries and the different precipitation protocols. However, based on the relative quantitative analysis of spectral counts, the trichloroacetic acid/acetone protocol was the best of the three procedures for sample handling and quantitative protein extraction. This study could pave the way to further high-throughput studies on Chenopodium quinoa.
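
    The relative quantification the authors base on spectral counts is often normalized by protein length and total signal; one common form is the normalized spectral abundance factor (NSAF), sketched below on hypothetical counts. This is a generic illustration, not the exact calculation used in the study.

        # Normalized spectral abundance factor (NSAF): a common way to compare
        # spectral-count-based protein abundance across samples. Counts and protein
        # lengths below are hypothetical.
        def nsaf(spectral_counts, lengths):
            """spectral_counts, lengths: dicts keyed by protein accession."""
            saf = {p: spectral_counts[p] / lengths[p] for p in spectral_counts}
            total = sum(saf.values())
            return {p: v / total for p, v in saf.items()}

        counts = {"P1": 25, "P2": 8, "P3": 40}       # spectra matched per protein
        lengths = {"P1": 350, "P2": 120, "P3": 900}  # protein length in residues

        for protein, value in nsaf(counts, lengths).items():
            print(protein, round(value, 3))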

  10. Large-scale regions of antimatter

    SciTech Connect

    Grobov, A. V.; Rubin, S. G.

    2015-07-15

    A modified mechanism for the formation of large-scale antimatter regions is proposed. Antimatter appears owing to fluctuations of a complex scalar field that carries a baryon charge during the inflation era.

  11. Proteomics meets blood banking: identification of protein targets for the improvement of platelet quality.

    PubMed

    Schubert, Peter; Devine, Dana V

    2010-01-01

    Proteomics has brought new perspectives to the fields of hematology and transfusion medicine in the last decade. The steady improvement of proteomic technology is propelling novel discoveries of molecular mechanisms by studying protein expression, post-translational modifications and protein interactions. This review article focuses on the application of proteomics to the identification of molecular mechanisms leading to the deterioration of blood platelets during storage - a critical aspect in the provision of platelet transfusion products. Several proteomic approaches have been employed to analyse changes in the platelet protein profile during storage and the obtained data now need to be translated into platelet biochemistry in order to connect the results to platelet function. Targeted biochemical applications then allow the identification of points for intervention in signal transduction pathways. Once validated and placed in a transfusion context, these data will provide further understanding of the underlying molecular mechanisms leading to platelet storage lesion. Future aspects of proteomics in blood banking will aim to make use of protein markers identified for platelet storage lesion development to monitor proteome changes when alterations such as the use of additive solutions or pathogen reduction strategies are put in place in order to improve platelet quality for patients.

  12. Food appropriation through large scale land acquisitions

    NASA Astrophysics Data System (ADS)

    Rulli, Maria Cristina; D'Odorico, Paolo

    2014-05-01

    The increasing demand for agricultural products and the uncertainty of international food markets has recently drawn the attention of governments and agribusiness firms toward investments in productive agricultural land, mostly in the developing world. The targeted countries are typically located in regions that have remained only marginally utilized because of lack of modern technology. It is expected that in the long run large scale land acquisitions (LSLAs) for commercial farming will bring the technology required to close the existing crops yield gaps. While the extent of the acquired land and the associated appropriation of freshwater resources have been investigated in detail, the amount of food this land can produce and the number of people it could feed still need to be quantified. Here we use a unique dataset of land deals to provide a global quantitative assessment of the rates of crop and food appropriation potentially associated with LSLAs. We show how up to 300-550 million people could be fed by crops grown in the acquired land, should these investments in agriculture improve crop production and close the yield gap. In contrast, about 190-370 million people could be supported by this land without closing of the yield gap. These numbers raise some concern because the food produced in the acquired land is typically exported to other regions, while the target countries exhibit high levels of malnourishment. Conversely, if used for domestic consumption, the crops harvested in the acquired land could ensure food security to the local populations.

  13. Large-scale motions in the universe

    SciTech Connect

    Rubin, V.C.; Coyne, G.V.

    1988-01-01

    The present conference on the large-scale motions of the universe discusses topics on the problems of two-dimensional and three-dimensional structures, large-scale velocity fields, the motion of the local group, small-scale microwave fluctuations, ab initio and phenomenological theories, and properties of galaxies at high and low Z. Attention is given to the Pisces-Perseus supercluster, large-scale structure and motion traced by galaxy clusters, distances to galaxies in the field, the origin of the local flow of galaxies, the peculiar velocity field predicted by the distribution of IRAS galaxies, the effects of reionization on microwave background anisotropies, the theoretical implications of cosmological dipoles, and n-body simulations of universe dominated by cold dark matter.

  14. Survey on large scale system control methods

    NASA Technical Reports Server (NTRS)

    Mercadal, Mathieu

    1987-01-01

    The problems inherent to large-scale systems such as power networks, communication networks, and economic or ecological systems were studied. The increase in size and flexibility of future spacecraft has put those dynamical systems into the category of large scale systems, and tools specific to the class of large systems are being sought to design control systems that can guarantee more stability and better performance. Among several survey papers, reference was found to a thorough investigation on decentralized control methods. Especially helpful was the classification made of the different existing approaches to deal with large scale systems. A very similar classification is used, even though the papers surveyed are somewhat different from the ones reviewed in other papers. Special attention is given to the applicability of the existing methods to controlling large mechanical systems like large space structures. Some recent developments are added to this survey.

  15. Large-scale nanophotonic phased array.

    PubMed

    Sun, Jie; Timurdogan, Erman; Yaacobi, Ami; Hosseini, Ehsan Shah; Watts, Michael R

    2013-01-10

    Electromagnetic phased arrays at radio frequencies are well known and have enabled applications ranging from communications to radar, broadcasting and astronomy. The ability to generate arbitrary radiation patterns with large-scale phased arrays has long been pursued. Although it is extremely expensive and cumbersome to deploy large-scale radiofrequency phased arrays, optical phased arrays have a unique advantage in that the much shorter optical wavelength holds promise for large-scale integration. However, the short optical wavelength also imposes stringent requirements on fabrication. As a consequence, although optical phased arrays have been studied with various platforms and recently with chip-scale nanophotonics, all of the demonstrations so far are restricted to one-dimensional or small-scale two-dimensional arrays. Here we report the demonstration of a large-scale two-dimensional nanophotonic phased array (NPA), in which 64 × 64 (4,096) optical nanoantennas are densely integrated on a silicon chip within a footprint of 576 μm × 576 μm with all of the nanoantennas precisely balanced in power and aligned in phase to generate a designed, sophisticated radiation pattern in the far field. We also show that active phase tunability can be realized in the proposed NPA by demonstrating dynamic beam steering and shaping with an 8 × 8 array. This work demonstrates that a robust design, together with state-of-the-art complementary metal-oxide-semiconductor technology, allows large-scale NPAs to be implemented on compact and inexpensive nanophotonic chips. In turn, this enables arbitrary radiation pattern generation using NPAs and therefore extends the functionalities of phased arrays beyond conventional beam focusing and steering, opening up possibilities for large-scale deployment in applications such as communication, laser detection and ranging, three-dimensional holography and biomedical sciences, to name just a few.
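
    The phase alignment that steers and shapes the beam in such an array can be illustrated with the textbook array-factor calculation for a one-dimensional uniform array. The wavelength, element pitch, and steering angle below are assumed values chosen so the pattern is free of grating lobes; they are not the parameters of the chip described here.

        # Far-field array factor of a 1D uniform phased array, steered by a linear
        # phase ramp. Parameters are illustrative only (half-wavelength pitch assumed).
        import numpy as np

        wavelength = 1.55e-6           # optical wavelength (m), assumed
        pitch = wavelength / 2.0       # antenna spacing (m), assumed
        n_elements = 64
        steer_deg = 5.0                # desired beam direction

        k = 2 * np.pi / wavelength
        n = np.arange(n_elements)
        phase = -k * pitch * n * np.sin(np.radians(steer_deg))   # progressive phase shift

        theta = np.radians(np.linspace(-20, 20, 2001))
        # Sum the contributions of all emitters for each observation angle
        af = np.abs(np.exp(1j * (k * pitch * np.outer(np.sin(theta), n) + phase)).sum(axis=1))
        af_db = 20 * np.log10(af / af.max())

        peak = np.degrees(theta[np.argmax(af_db)])
        print(f"main lobe near {peak:.2f} degrees")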

  16. Sensitivity technologies for large scale simulation.

    SciTech Connect

    Collis, Samuel Scott; Bartlett, Roscoe Ainsworth; Smith, Thomas Michael; Heinkenschloss, Matthias; Wilcox, Lucas C.; Hill, Judith C.; Ghattas, Omar; Berggren, Martin Olof; Akcelik, Volkan; Ober, Curtis Curry; van Bloemen Waanders, Bart Gustaaf; Keiter, Eric Richard

    2005-01-01

    Sensitivity analysis is critically important to numerous analysis algorithms, including large scale optimization, uncertainty quantification, reduced order modeling, and error estimation. Our research focused on developing tools, algorithms and standard interfaces to facilitate the implementation of sensitivity type analysis into existing code and equally important, the work was focused on ways to increase the visibility of sensitivity analysis. We attempt to accomplish the first objective through the development of hybrid automatic differentiation tools, standard linear algebra interfaces for numerical algorithms, time domain decomposition algorithms and two level Newton methods. We attempt to accomplish the second goal by presenting the results of several case studies in which direct sensitivities and adjoint methods have been effectively applied, in addition to an investigation of h-p adaptivity using adjoint based a posteriori error estimation. A mathematical overview is provided of direct sensitivities and adjoint methods for both steady state and transient simulations. Two case studies are presented to demonstrate the utility of these methods. A direct sensitivity method is implemented to solve a source inversion problem for steady state internal flows subject to convection diffusion. Real time performance is achieved using novel decomposition into offline and online calculations. Adjoint methods are used to reconstruct initial conditions of a contamination event in an external flow. We demonstrate an adjoint based transient solution. In addition, we investigated time domain decomposition algorithms in an attempt to improve the efficiency of transient simulations. Because derivative calculations are at the root of sensitivity calculations, we have developed hybrid automatic differentiation methods and implemented this approach for shape optimization for gas dynamics using the Euler equations. The hybrid automatic differentiation method was applied to a first
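
    The distinction between direct and adjoint sensitivities that runs through this abstract can be seen on a small linear model: both give the same derivative, but the adjoint route needs only one extra solve regardless of the number of parameters. The matrices in the sketch are arbitrary illustrations, not the report's solvers.

        # Direct vs. adjoint sensitivity for a steady linear model K(p) u = f, with a
        # scalar objective J = c^T u. The matrices below are small, arbitrary examples.
        import numpy as np

        K = np.array([[4.0, 1.0], [1.0, 3.0]])
        f = np.array([1.0, 2.0])
        c = np.array([1.0, 0.0])
        dK_dp = np.array([[1.0, 0.0], [0.0, 0.0]])   # assumed dependence of K on a parameter p

        u = np.linalg.solve(K, f)

        # Direct (forward) sensitivity: solve K du/dp = -dK/dp u, then dJ/dp = c^T du/dp
        du_dp = np.linalg.solve(K, -dK_dp @ u)
        dJ_dp_direct = c @ du_dp

        # Adjoint sensitivity: solve K^T lam = c, then dJ/dp = -lam^T dK/dp u
        lam = np.linalg.solve(K.T, c)
        dJ_dp_adjoint = -lam @ (dK_dp @ u)

        print(dJ_dp_direct, dJ_dp_adjoint)   # the two values agree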

  17. Large Scale Shape Optimization for Accelerator Cavities

    SciTech Connect

    Akcelik, Volkan; Lee, Lie-Quan; Li, Zenghai; Ng, Cho; Xiao, Li-Ling; Ko, Kwok; /SLAC

    2011-12-06

    We present a shape optimization method for designing accelerator cavities with large scale computations. The objective is to find the best accelerator cavity shape with the desired spectral response, such as with the specified frequencies of resonant modes, field profiles, and external Q values. The forward problem is the large scale Maxwell equation in the frequency domain. The design parameters are the CAD parameters defining the cavity shape. We develop scalable algorithms with a discrete adjoint approach and use the quasi-Newton method to solve the nonlinear optimization problem. Two realistic accelerator cavity design examples are presented.
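
    The structure of the problem, a frequency-matching objective driven by a quasi-Newton optimizer, can be mimicked on a toy eigenvalue problem. The parameterization and target values below are invented for illustration, and SciPy's L-BFGS-B with numerical gradients stands in for the paper's discrete-adjoint, large-scale Maxwell solver.

        # Toy analogue of cavity shape optimization: tune parameters of a small
        # symmetric matrix so its lowest eigenvalues match target values, using a
        # quasi-Newton method. Purely illustrative.
        import numpy as np
        from scipy.optimize import minimize

        targets = np.array([1.0, 4.0])          # assumed target "resonant" eigenvalues

        def objective(p):
            # Hypothetical design parameters p enter the diagonal of a stiffness-like matrix
            A = np.diag(p) + 0.1 * np.ones((len(p), len(p)))
            eigvals = np.sort(np.linalg.eigvalsh(A))[:len(targets)]
            return np.sum((eigvals - targets) ** 2)

        result = minimize(objective, x0=np.array([2.0, 3.0, 6.0]), method="L-BFGS-B")
        print(result.x, result.fun)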

  18. International space station. Large scale integration approach

    NASA Astrophysics Data System (ADS)

    Cohen, Brad

    The International Space Station is the most complex large scale integration program in development today. The approach developed for specification, subsystem development, and verification lays a firm basis on which future programs of this nature can build. International Space Station is composed of many critical items, hardware and software, built by numerous International Partners, NASA Institutions, and U.S. Contractors and is launched over a period of five years. Each launch creates a unique configuration that must be safe, survivable, operable, and able to support ongoing assembly (assemblable) to arrive at the assembly complete configuration in 2003. Integrating each of the modules into a viable spacecraft while continuing the assembly is a challenge in itself. Added to this challenge are the severe schedule constraints and lack of an "Iron Bird", which prevents assembly and checkout of each on-orbit configuration prior to launch. This paper will focus on the following areas: 1) Specification development process explaining how the requirements and specifications were derived using a modular concept driven by launch vehicle capability. Each module is composed of components of subsystems versus completed subsystems. 2) Approach to stage (each stage consists of the launched module added to the current on-orbit spacecraft) specifications. Specifically, how each launched module and stage ensures support of the current and future elements of the assembly. 3) Verification approach, which, due to the schedule constraints, is primarily analysis supported by testing. Specifically, how the interfaces are ensured to mate and function on-orbit when they cannot be mated before launch. 4) Lessons learned. Where can we improve this complex system design and integration task?

  19. Improved False Discovery Rate Estimation Procedure for Shotgun Proteomics.

    PubMed

    Keich, Uri; Kertesz-Farkas, Attila; Noble, William Stafford

    2015-08-01

    Interpreting the potentially vast number of hypotheses generated by a shotgun proteomics experiment requires a valid and accurate procedure for assigning statistical confidence estimates to identified tandem mass spectra. Despite the crucial role such procedures play in most high-throughput proteomics experiments, the scientific literature has not reached a consensus about the best confidence estimation methodology. In this work, we evaluate, using theoretical and empirical analysis, four previously proposed protocols for estimating the false discovery rate (FDR) associated with a set of identified tandem mass spectra: two variants of the target-decoy competition protocol (TDC) of Elias and Gygi and two variants of the separate target-decoy search protocol of Käll et al. Our analysis reveals significant biases in the two separate target-decoy search protocols. Moreover, the one TDC protocol that provides an unbiased FDR estimate among the target PSMs does so at the cost of forfeiting a random subset of high-scoring spectrum identifications. We therefore propose the mix-max procedure to provide unbiased, accurate FDR estimates in the presence of well-calibrated scores. The method avoids biases associated with the two separate target-decoy search protocols and also avoids the propensity for target-decoy competition to discard a random subset of high-scoring target identifications.
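
    In its simplest form, the target-decoy competition estimate analyzed in this work is a counting rule over the competition winners. The sketch below uses hypothetical PSM scores and shows only that basic rule; the paper's mix-max refinement involves additional calibration steps not reproduced here.

        # Basic target-decoy competition (TDC) FDR estimate: for a score threshold t,
        # FDR is estimated as (decoys above t + 1) / (targets above t). Scores and
        # labels below are hypothetical; the mix-max refinement is not shown.
        def tdc_fdr(psms, threshold):
            """psms: list of (score, is_decoy) for competition winners."""
            targets = sum(1 for s, d in psms if s >= threshold and not d)
            decoys = sum(1 for s, d in psms if s >= threshold and d)
            return (decoys + 1) / max(targets, 1)

        psms = [(92.1, False), (88.4, False), (85.0, True), (83.7, False),
                (80.2, False), (77.9, True), (75.3, False)]

        for t in (90, 80, 75):
            print(t, round(tdc_fdr(psms, t), 3))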

  20. Sensitivity analysis for large-scale problems

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Whitworth, Sandra L.

    1987-01-01

    The development of efficient techniques for calculating sensitivity derivatives is studied. The objective is to present a computational procedure for calculating sensitivity derivatives as part of performing structural reanalysis for large-scale problems. The scope is limited to framed type structures. Both linear static analysis and free-vibration eigenvalue problems are considered.
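
    For the free-vibration case mentioned here, the textbook first-order sensitivity of a simple, mass-normalized eigenvalue is the usual starting point; it is summarized below and is not specific to the reanalysis procedure developed in the report.

        % Free-vibration problem (K - \lambda_i M)\,\phi_i = 0, with modes normalized
        % so that \phi_i^{T} M \phi_i = 1. For a design variable p, the first-order
        % sensitivity of a simple eigenvalue is
        \[
          \frac{\partial \lambda_i}{\partial p}
          = \phi_i^{T}\!\left(\frac{\partial K}{\partial p}
            - \lambda_i \frac{\partial M}{\partial p}\right)\phi_i .
        \]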

  1. ARPACK: Solving large scale eigenvalue problems

    NASA Astrophysics Data System (ADS)

    Lehoucq, Rich; Maschhoff, Kristi; Sorensen, Danny; Yang, Chao

    2013-11-01

    ARPACK is a collection of Fortran77 subroutines designed to solve large scale eigenvalue problems. The package is designed to compute a few eigenvalues and corresponding eigenvectors of a general n by n matrix A. It is most appropriate for large sparse or structured matrices A where structured means that a matrix-vector product w
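
    ARPACK's implicitly restarted Arnoldi/Lanczos routines are exposed through SciPy, which only needs matrix-vector products with the operator. The example below computes a few eigenpairs of a large sparse symmetric matrix; the 1D discrete Laplacian is an arbitrary test matrix chosen only for illustration.

        # Computing a few eigenpairs of a large sparse symmetric matrix with ARPACK,
        # via SciPy's eigsh wrapper. The 1D discrete Laplacian is used purely as a
        # convenient large sparse test matrix.
        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import eigsh

        n = 100_000
        main = 2.0 * np.ones(n)
        off = -1.0 * np.ones(n - 1)
        A = sp.diags([off, main, off], offsets=[-1, 0, 1], format="csr")

        # Six largest (algebraic) eigenvalues; ARPACK only needs matrix-vector products.
        vals, vecs = eigsh(A, k=6, which="LA")
        print(np.sort(vals))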

  2. A Large Scale Computer Terminal Output Controller.

    ERIC Educational Resources Information Center

    Tucker, Paul Thomas

    This paper describes the design and implementation of a large scale computer terminal output controller which supervises the transfer of information from a Control Data 6400 Computer to a PLATO IV data network. It discusses the cost considerations leading to the selection of educational television channels rather than telephone lines for…

  3. Management of large-scale technology

    NASA Technical Reports Server (NTRS)

    Levine, A.

    1985-01-01

    Two major themes are addressed in this assessment of the management of large-scale NASA programs: (1) how a high technology agency was managed through a decade marked by a rapid expansion of funds and manpower in the first half and an almost as rapid contraction in the second; and (2) how NASA combined central planning and control with decentralized project execution.

  4. Evaluating Large-Scale Interactive Radio Programmes

    ERIC Educational Resources Information Center

    Potter, Charles; Naidoo, Gordon

    2009-01-01

    This article focuses on the challenges involved in conducting evaluations of interactive radio programmes in South Africa with large numbers of schools, teachers, and learners. It focuses on the role such large-scale evaluation has played during the South African radio learning programme's development stage, as well as during its subsequent…

  5. Complete solubilization of formalin-fixed, paraffin-embedded tissue may improve proteomic studies.

    PubMed

    Shi, Shan-Rong; Taylor, Clive R; Fowler, Carol B; Mason, Jeffrey T

    2013-04-01

    Tissue-based proteomic approaches (tissue proteomics) are essential for discovering and evaluating biomarkers for personalized medicine. In any proteomics study, the most critical issue is sample extraction and preparation. This problem is especially difficult when recovering proteins from formalin-fixed, paraffin-embedded (FFPE) tissue sections. However, improving and standardizing protein extraction from FFPE tissue is a critical need because of the millions of archival FFPE tissues available in tissue banks worldwide. Recent progress in the application of heat-induced antigen retrieval principles for protein extraction from FFPE tissue has resulted in a number of published FFPE tissue proteomics studies. However, there is currently no consensus on the optimal protocol for protein extraction from FFPE tissue or accepted standards for quantitative evaluation of the extracts. Standardization is critical to ensure the accurate evaluation of FFPE protein extracts by proteomic methods such as reverse phase protein arrays, which is now in clinical use. In our view, complete solubilization of FFPE tissue samples is the best way to achieve the goal of standardizing the recovery of proteins from FFPE tissues. However, further studies are recommended to develop standardized protein extraction methods to ensure quantitative and qualitative reproducibility in the recovery of proteins from FFPE tissues.

  6. Improving Proteome Coverage on a LTQ-Orbitrap Using Design of Experiments

    NASA Astrophysics Data System (ADS)

    Andrews, Genna L.; Dean, Ralph A.; Hawkridge, Adam M.; Muddiman, David C.

    2011-04-01

    Design of experiments (DOE) was used to determine improved settings for an LTQ-Orbitrap XL to maximize proteome coverage of Saccharomyces cerevisiae. A total of nine instrument parameters were evaluated, with the best values affording an increase of approximately 60% in proteome coverage. Utilizing JMP software, two DOE screening design tables were generated and used to specify parameter values for instrument methods. DOE 1, a fractional factorial design, required 32 methods and fully resolved the investigation of six instrument parameters in only half the time necessary for a full factorial design of the same resolution. It was advantageous to complete a full factorial design for the analysis of three additional instrument parameters. Measured at a maximum of 1% false discovery rate, protein groups, unique peptides, and spectral counts gauged instrument performance. Randomized triplicate nanoLC-LTQ-Orbitrap XL MS/MS analysis of the S. cerevisiae digest demonstrated that the following five parameters significantly influenced proteome coverage of the sample: (1) maximum ion trap ionization time; (2) monoisotopic precursor selection; (3) number of MS/MS events; (4) capillary temperature; and (5) tube lens voltage. Minimal influence on the proteome coverage was observed for the remaining four parameters (dynamic exclusion duration, resolving power, minimum count threshold to trigger an MS/MS event, and normalized collision energy). The DOE approach represents a time- and cost-effective method for empirically optimizing MS-based proteomics workflows including sample preparation, LC conditions, and multiple instrument platforms.
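
    The layout described, six two-level factors resolved in 32 runs, corresponds to a 2^(6-1) fractional factorial: a full factorial on five factors plus a generator defining the sixth. The sketch below is a generic construction of such a design; the factor coding and generator choice are illustrative and do not reproduce the JMP tables or factor assignments used in the study.

        # Generating a 2^(6-1) two-level fractional factorial design: a full factorial
        # in factors A-E (32 runs) plus the generator F = ABCDE. Levels are coded -1/+1.
        # Generic construction; factor names and generator choice are illustrative.
        from itertools import product

        def fractional_factorial_2_6_1():
            runs = []
            for levels in product((-1, 1), repeat=5):     # full 2^5 design in A..E
                a, b, c, d, e = levels
                f = a * b * c * d * e                     # generator: F = ABCDE
                runs.append((a, b, c, d, e, f))
            return runs

        design = fractional_factorial_2_6_1()
        print(len(design), "runs")                        # 32 runs for 6 factors
        for run in design[:4]:
            print(run)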

  7. Proteomic revolution to improve tools for evaluating male fertility in animals.

    PubMed

    Park, Yoo-Jin; Kim, Jin; You, Young-Ah; Pang, Myung-Geol

    2013-11-01

    Artificial insemination has been used as a common breeding technique for the rapid dissemination of important genes to improve livestock quality. However, infertility or subfertility in the male leads to the disintegration of the breeding system and large economic losses. Therefore, the development of an accurate diagnostic protocol for male fertility is of critical importance. To this end, many basic laboratory assays have been developed on the basis of semen analysis. Although these assays may provide a preliminary estimate of male fertility, their accuracies are often unacceptably low. Therefore, it is vital to develop new semen analyses that are simple to use and accurate. Proteomic approaches will shed light on understanding sperm physiology and help in developing new diagnostic tools for male fertility. The aim of this study was to review the retrospective semen analyses and prospective proteomic studies of male fertility determination and usefulness of proteomic approaches in diagnosing male fertility potential in animal industry.

  8. Moon-based Earth Observation for Large Scale Geoscience Phenomena

    NASA Astrophysics Data System (ADS)

    Guo, Huadong; Liu, Guang; Ding, Yixing

    2016-07-01

    The capability of Earth observation for large, global-scale natural phenomena needs to be improved, and new observing platforms are expected. In recent years we have studied the concept of the Moon as an Earth observation platform. Compared with man-made satellite platforms, Moon-based Earth observation can obtain multi-spherical, full-band, active and passive information and offers the following advantages: a large observation range, variable view angles, long-term continuous observation, and an extra-long life cycle, with the characteristics of longevity, consistency, integrity, stability, and uniqueness. Moon-based Earth observation is suitable for monitoring large-scale geoscience phenomena, including large-scale atmospheric change, large-scale ocean change, large-scale land-surface dynamic change, and solid-Earth dynamic change. To establish a Moon-based Earth observation platform, we plan to study the following five aspects: mechanisms and models of Moon-based observation of macroscopic Earth-science phenomena; sensor parameter optimization and methods for Moon-based Earth observation; site selection and the environment of Moon-based Earth observation; the Moon-based Earth observation platform itself; and a fundamental scientific framework for Moon-based Earth observation.

  9. Over-driven control for large-scale MR dampers

    NASA Astrophysics Data System (ADS)

    Friedman, A. J.; Dyke, S. J.; Phillips, B. M.

    2013-04-01

    As semi-active electro-mechanical control devices increase in scale for use in real-world civil engineering applications, their dynamics become increasingly complicated. Control designs that are able to take these characteristics into account will be more effective in achieving good performance. Large-scale magnetorheological (MR) dampers exhibit a significant time lag in their force-response to voltage inputs, reducing the efficacy of typical controllers designed for smaller scale devices where the lag is negligible. A new control algorithm is presented for large-scale MR devices that uses over-driving and back-driving of the commands to overcome the challenges associated with the dynamics of these large-scale MR dampers. An illustrative numerical example is considered to demonstrate the controller performance. Via simulations of the structure using several seismic ground motions, the merits of the proposed control strategy to achieve reductions in various response parameters are examined and compared against several accepted control algorithms. Experimental evidence is provided to validate the improved capabilities of the proposed controller in achieving the desired control force levels. Through real-time hybrid simulation (RTHS), the proposed controllers are also examined and experimentally evaluated in terms of their efficacy and robust performance. The results demonstrate that the proposed control strategy has superior performance over typical control algorithms when paired with a large-scale MR damper, and is robust for structural control applications.

  10. Large-scale Advanced Propfan (LAP) program

    NASA Technical Reports Server (NTRS)

    Sagerser, D. A.; Ludemann, S. G.

    1985-01-01

    The propfan is an advanced propeller concept which maintains the high efficiencies traditionally associated with conventional propellers at the higher aircraft cruise speeds associated with jet transports. The large-scale advanced propfan (LAP) program extends the research done on 2 ft diameter propfan models to a 9 ft diameter article. The program includes design, fabrication, and testing of both an eight bladed, 9 ft diameter propfan, designated SR-7L, and a 2 ft diameter aeroelastically scaled model, SR-7A. The LAP program is complemented by the propfan test assessment (PTA) program, which takes the large-scale propfan and mates it with a gas generator and gearbox to form a propfan propulsion system and then flight tests this system on the wing of a Gulfstream 2 testbed aircraft.

  11. Fractals and cosmological large-scale structure

    NASA Technical Reports Server (NTRS)

    Luo, Xiaochun; Schramm, David N.

    1992-01-01

    Observations of galaxy-galaxy and cluster-cluster correlations as well as other large-scale structure can be fit with a 'limited' fractal with dimension D of about 1.2. This is not a 'pure' fractal out to the horizon: the distribution shifts from power law to random behavior at some large scale. If the observed patterns and structures are formed through an aggregation growth process, the fractal dimension D can serve as an interesting constraint on the properties of the stochastic motion responsible for limiting the fractal structure. In particular, it is found that the observed fractal should have grown from two-dimensional sheetlike objects such as pancakes, domain walls, or string wakes. This result is generic and does not depend on the details of the growth process.

  12. Condition Monitoring of Large-Scale Facilities

    NASA Technical Reports Server (NTRS)

    Hall, David L.

    1999-01-01

    This document provides a summary of the research conducted for the NASA Ames Research Center under grant NAG2-1182 (Condition-Based Monitoring of Large-Scale Facilities). The information includes copies of view graphs presented at NASA Ames in the final Workshop (held during December of 1998), as well as a copy of a technical report provided to the COTR (Dr. Anne Patterson-Hine) subsequent to the workshop. The material describes the experimental design, collection of data, and analysis results associated with monitoring the health of large-scale facilities. In addition to this material, a copy of the Pennsylvania State University Applied Research Laboratory data fusion visual programming tool kit was also provided to NASA Ames researchers.

  13. Large-scale instabilities of helical flows

    NASA Astrophysics Data System (ADS)

    Cameron, Alexandre; Alexakis, Alexandros; Brachet, Marc-Étienne

    2016-10-01

    Large-scale hydrodynamic instabilities of periodic helical flows of a given wave number K are investigated using three-dimensional Floquet numerical computations. In the Floquet formalism the unstable field is expanded in modes of different spatial periodicity. This allows us (i) to clearly distinguish large from small scale instabilities and (ii) to study modes of wave number q of arbitrarily large-scale separation q ≪ K. Different flows are examined, including flows that exhibit small-scale turbulence. The growth rate σ of the most unstable mode is measured as a function of the scale separation q/K ≪ 1 and the Reynolds number Re. It is shown that the growth rate follows the scaling σ ∝ q if an AKA effect [Frisch et al., Physica D: Nonlinear Phenomena 28, 382 (1987), 10.1016/0167-2789(87)90026-1] is present or a negative eddy viscosity scaling σ ∝ q² in its absence. This holds both for the Re ≪ 1 regime, where previously derived asymptotic results are verified, and for Re = O(1), which is beyond their range of validity. Furthermore, for values of Re above a critical value Re_Sc beyond which small-scale instabilities are present, the growth rate becomes independent of q and the energy of the perturbation at large scales decreases with scale separation. The nonlinear behavior of these large-scale instabilities is also examined in the nonlinear regime, where the largest scales of the system are found to be the most dominant energetically. These results are interpreted by low-order models.

  14. Large-scale fibre-array multiplexing

    SciTech Connect

    Cheremiskin, I V; Chekhlova, T K

    2001-05-31

    The possibility of creating a fibre multiplexer/demultiplexer with large-scale multiplexing without any basic restrictions on the number of channels and the spectral spacing between them is shown. The operating capacity of a fibre multiplexer based on a four-fibre array ensuring a spectral spacing of 0.7 pm (≈10 GHz) between channels is demonstrated. (laser applications and other topics in quantum electronics)

  15. Large-scale neuromorphic computing systems

    NASA Astrophysics Data System (ADS)

    Furber, Steve

    2016-10-01

    Neuromorphic computing covers a diverse range of approaches to information processing all of which demonstrate some degree of neurobiological inspiration that differentiates them from mainstream conventional computing systems. The philosophy behind neuromorphic computing has its origins in the seminal work carried out by Carver Mead at Caltech in the late 1980s. This early work influenced others to carry developments forward, and advances in VLSI technology supported steady growth in the scale and capability of neuromorphic devices. Recently, a number of large-scale neuromorphic projects have emerged, taking the approach to unprecedented scales and capabilities. These large-scale projects are associated with major new funding initiatives for brain-related research, creating a sense that the time and circumstances are right for progress in our understanding of information processing in the brain. In this review we present a brief history of neuromorphic engineering then focus on some of the principal current large-scale projects, their main features, how their approaches are complementary and distinct, their advantages and drawbacks, and highlight the sorts of capabilities that each can deliver to neural modellers.

  17. Large-Scale Visual Data Analysis

    NASA Astrophysics Data System (ADS)

    Johnson, Chris

    2014-04-01

    Modern high performance computers have speeds measured in petaflops and handle data set sizes measured in terabytes and petabytes. Although these machines offer enormous potential for solving very large-scale realistic computational problems, their effectiveness will hinge upon the ability of human experts to interact with their simulation results and extract useful information. One of the greatest scientific challenges of the 21st century is to effectively understand and make use of the vast amount of information being produced. Visual data analysis will be among our most important tools in helping to understand such large-scale information. Our research at the Scientific Computing and Imaging (SCI) Institute at the University of Utah has focused on innovative, scalable techniques for large-scale 3D visual data analysis. In this talk, I will present state-of-the-art visualization techniques, including scalable visualization algorithms and software, cluster-based visualization methods and innovative visualization techniques applied to problems in computational science, engineering, and medicine. I will conclude with an outline of future high performance visualization research challenges and opportunities.

  18. Large scale processes in the solar nebula.

    NASA Astrophysics Data System (ADS)

    Boss, A. P.

    Most proposed chondrule formation mechanisms involve processes occurring inside the solar nebula, so the large scale (roughly 1 to 10 AU) structure of the nebula is of general interest for any chondrule-forming mechanism. Chondrules and Ca, Al-rich inclusions (CAIs) might also have been formed as a direct result of the large scale structure of the nebula, such as passage of material through high temperature regions. While recent nebula models do predict the existence of relatively hot regions, the maximum temperatures in the inner planet region may not be high enough to account for chondrule or CAI thermal processing, unless the disk mass is considerably greater than the minimum mass necessary to restore the planets to solar composition. Furthermore, it does not seem to be possible to achieve both rapid heating and rapid cooling of grain assemblages in such a large scale furnace. However, if the accretion flow onto the nebula surface is clumpy, as suggested by observations of variability in young stars, then clump-disk impacts might be energetic enough to launch shock waves which could propagate through the nebula to the midplane, thermally processing any grain aggregates they encounter, and leaving behind a trail of chondrules.

  19. Toward Increasing Fairness in Score Scale Calibrations Employed in International Large-Scale Assessments

    ERIC Educational Resources Information Center

    Oliveri, Maria Elena; von Davier, Matthias

    2014-01-01

    In this article, we investigate the creation of comparable score scales across countries in international assessments. We examine potential improvements to current score scale calibration procedures used in international large-scale assessments. Our approach seeks to improve fairness in scoring international large-scale assessments, which often…

  20. Proteomic insights into floral biology.

    PubMed

    Li, Xiaobai; Jackson, Aaron; Xie, Ming; Wu, Dianxing; Tsai, Wen-Chieh; Zhang, Sheng

    2016-08-01

    The flower is the most important biological structure for ensuring angiosperms reproductive success. Not only does the flower contain critical reproductive organs, but the wide variation in morphology, color, and scent has evolved to entice specialized pollinators, and arguably mankind in many cases, to ensure the successful propagation of its species. Recent proteomic approaches have identified protein candidates related to these flower traits, which has shed light on a number of previously unknown mechanisms underlying these traits. This review article provides a comprehensive overview of the latest advances in proteomic research in floral biology according to the order of flower structure, from corolla to male and female reproductive organs. It summarizes mainstream proteomic methods for plant research and recent improvements on two dimensional gel electrophoresis and gel-free workflows for both peptide level and protein level analysis. The recent advances in sequencing technologies provide a new paradigm for the ever-increasing genome and transcriptome information on many organisms. It is now possible to integrate genomic and transcriptomic data with proteomic results for large-scale protein characterization, so that a global understanding of the complex molecular networks in flower biology can be readily achieved. This article is part of a Special Issue entitled: Plant Proteomics--a bridge between fundamental processes and crop production, edited by Dr. Hans-Peter Mock. PMID:26945514

  1. Large-scale brightenings associated with flares

    NASA Technical Reports Server (NTRS)

    Mandrini, Cristina H.; Machado, Marcos E.

    1992-01-01

    It is shown that large-scale brightenings (LSBs) associated with solar flares, similar to the 'giant arches' discovered by Svestka et al. (1982) in images obtained by the SMM HXIS hours after the onset of two-ribbon flares, can also occur in association with confined flares in complex active regions. For these events, a clear link between the LSB and the underlying flare is evident from the active-region magnetic field topology. The implications of these findings are discussed within the framework of the interacting loops of flares and the giant arch phenomenology.

  2. Large scale phononic metamaterials for seismic isolation

    SciTech Connect

    Aravantinos-Zafiris, N.; Sigalas, M. M.

    2015-08-14

    In this work, we numerically examine structures that could be characterized as large scale phononic metamaterials. These novel structures could have band gaps in the frequency spectrum of seismic waves when their dimensions are chosen appropriately, thus raising the belief that they could be serious candidates for seismic isolation structures. Different and easy to fabricate structures were examined, made from construction materials such as concrete and steel. The well-known finite-difference time-domain method is used to calculate the band structures of the proposed metamaterials.

  3. Large-scale dynamics and global warming

    SciTech Connect

    Held, I.M. )

    1993-02-01

    Predictions of future climate change raise a variety of issues in large-scale atmospheric and oceanic dynamics. Several of these are reviewed in this essay, including the sensitivity of the circulation of the Atlantic Ocean to increasing freshwater input at high latitudes; the possibility of greenhouse cooling in the southern oceans; the sensitivity of monsoonal circulations to differential warming of the two hemispheres; the response of midlatitude storms to changing temperature gradients and increasing water vapor in the atmosphere; and the possible importance of positive feedback between the mean winds and eddy-induced heating in the polar stratosphere.

  4. Experimental Simulations of Large-Scale Collisions

    NASA Technical Reports Server (NTRS)

    Housen, Kevin R.

    2002-01-01

    This report summarizes research on the effects of target porosity on the mechanics of impact cratering. Impact experiments conducted on a centrifuge provide direct simulations of large-scale cratering on porous asteroids. The experiments show that large craters in porous materials form mostly by compaction, with essentially no deposition of material into the ejecta blanket that is a signature of cratering in less-porous materials. The ratio of ejecta mass to crater mass is shown to decrease with increasing crater size or target porosity. These results are consistent with the observation that large closely-packed craters on asteroid Mathilde appear to have formed without degradation to earlier craters.

  5. Large-Scale PV Integration Study

    SciTech Connect

    Lu, Shuai; Etingov, Pavel V.; Diao, Ruisheng; Ma, Jian; Samaan, Nader A.; Makarov, Yuri V.; Guo, Xinxin; Hafen, Ryan P.; Jin, Chunlian; Kirkham, Harold; Shlatz, Eugene; Frantzis, Lisa; McClive, Timothy; Karlson, Gregory; Acharya, Dhruv; Ellis, Abraham; Stein, Joshua; Hansen, Clifford; Chadliev, Vladimir; Smart, Michael; Salgo, Richard; Sorensen, Rahn; Allen, Barbara; Idelchik, Boris

    2011-07-29

    This research effort evaluates the impact of large-scale photovoltaic (PV) and distributed generation (DG) output on NV Energy’s electric grid system in southern Nevada. It analyzes the ability of NV Energy’s generation to accommodate increasing amounts of utility-scale PV and DG, and the resulting cost of integrating variable renewable resources. The study was jointly funded by the United States Department of Energy and NV Energy, and conducted by a project team comprised of industry experts and research scientists from Navigant Consulting Inc., Sandia National Laboratories, Pacific Northwest National Laboratory and NV Energy.

  6. Real-time simulation of large-scale floods

    NASA Astrophysics Data System (ADS)

    Liu, Q.; Qin, Y.; Li, G. D.; Liu, Z.; Cheng, D. J.; Zhao, Y. H.

    2016-08-01

    Given the complexity of the real-time water situation, the real-time simulation of large-scale floods is very important for flood prevention practice. Model robustness and running efficiency are two critical factors in successful real-time flood simulation. This paper proposes a robust, two-dimensional, shallow water model based on the unstructured Godunov-type finite volume method. A robust wet/dry front method is used to enhance the numerical stability. An adaptive method is proposed to improve the running efficiency. The proposed model is used for large-scale flood simulation on real topography. Results compared to those of MIKE21 show the strong performance of the proposed model.
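
    As a much-simplified illustration of the finite-volume shallow-water machinery this abstract refers to, a 1D solver with a Lax-Friedrichs flux is sketched below. The real model is two-dimensional, unstructured, Godunov-type, and includes the wet/dry front treatment omitted here; the grid, initial condition, and CFL number are arbitrary.

        # Minimal 1D shallow-water finite-volume sketch (Lax-Friedrichs flux, flat bed,
        # no wet/dry handling). Illustrative only.
        import numpy as np

        g = 9.81
        nx, dx, t_end = 200, 1.0, 5.0
        h = np.where(np.arange(nx) < nx // 2, 2.0, 1.0)   # dam-break initial depth
        hu = np.zeros(nx)                                  # initial discharge

        def flux(h, hu):
            u = hu / h
            return np.array([hu, hu * u + 0.5 * g * h * h])

        t = 0.0
        while t < t_end:
            u = hu / h
            c = np.abs(u) + np.sqrt(g * h)
            dt = 0.4 * dx / c.max()                        # CFL time step

            q = np.array([h, hu])
            f = flux(h, hu)
            # Lax-Friedrichs numerical flux at interior cell interfaces
            f_iface = 0.5 * (f[:, :-1] + f[:, 1:]) - 0.5 * (dx / dt) * (q[:, 1:] - q[:, :-1])
            q[:, 1:-1] -= (dt / dx) * (f_iface[:, 1:] - f_iface[:, :-1])

            h, hu = q
            t += dt

        print("depth range after dam break:", h.min(), h.max())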

  7. Large Scale Deformation of the Western U.S. Cordillera

    NASA Technical Reports Server (NTRS)

    Bennett, Richard A.

    2002-01-01

    The overall objective of the work that was conducted was to understand the present-day large-scale deformations of the crust throughout the western United States and in so doing to improve our ability to assess the potential for seismic hazards in this region. To address this problem, we used a large collection of Global Positioning System (GPS) networks which spans the region to precisely quantify present-day large-scale crustal deformations in a single uniform reference frame. Our results can roughly be divided into an analysis of the GPS observations to infer the deformation field across and within the entire plate boundary zone and an investigation of the implications of this deformation field regarding plate boundary dynamics.

  8. MS1 Peptide Ion Intensity Chromatograms in MS2 (SWATH) Data Independent Acquisitions. Improving Post Acquisition Analysis of Proteomic Experiments.

    PubMed

    Rardin, Matthew J; Schilling, Birgit; Cheng, Lin-Yang; MacLean, Brendan X; Sorensen, Dylan J; Sahu, Alexandria K; MacCoss, Michael J; Vitek, Olga; Gibson, Bradford W

    2015-09-01

    Quantitative analysis of discovery-based proteomic workflows now relies on high-throughput large-scale methods for identification and quantitation of proteins and post-translational modifications. Advancements in label-free quantitative techniques, using either data-dependent or data-independent mass spectrometric acquisitions, have coincided with improved instrumentation featuring greater precision, increased mass accuracy, and faster scan speeds. We recently reported on a new quantitative method called MS1 Filtering (Schilling et al. (2012) Mol. Cell. Proteomics 11, 202-214) for processing data-independent MS1 ion intensity chromatograms from peptide analytes using the Skyline software platform. In contrast, data-independent acquisitions from MS2 scans, or SWATH, can quantify all fragment ion intensities when reference spectra are available. As each SWATH acquisition cycle typically contains an MS1 scan, these two independent label-free quantitative approaches can be acquired in a single experiment. Here, we have expanded the capability of Skyline to extract both MS1 and MS2 ion intensity chromatograms from a single SWATH data-independent acquisition in an Integrated Dual Scan Analysis approach. The performance of both MS1 and MS2 data was examined in simple and complex samples using standard concentration curves. Cases of interferences in MS1 and MS2 ion intensity data were assessed, as were the differentiation and quantitation of phosphopeptide isomers in MS2 scan data. In addition, we demonstrated an approach for optimization of SWATH m/z window sizes to reduce interferences using MS1 scans as a guide. Finally, a correlation analysis was performed on both MS1 and MS2 ion intensity data obtained from SWATH acquisitions on a complex mixture using a linear model that automatically removes signals containing interferences. This work demonstrates the practical advantages of properly acquiring and processing MS1 precursor data in addition to MS2 fragment ion
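
    The MS1 ion-intensity chromatograms described here are, at their core, extracted ion chromatograms built by summing intensity within a narrow m/z tolerance around the precursor in each MS1 scan. The sketch below operates on a hypothetical in-memory scan list; Skyline's actual implementation is far more sophisticated and reads vendor or open file formats directly.

        # Simplified extracted ion chromatogram (XIC) from MS1 scans: for each scan,
        # sum the intensity of peaks within a ppm tolerance of the target precursor m/z.
        # The scan list is a hypothetical in-memory structure, not a file-format reader.
        def extract_xic(scans, target_mz, tol_ppm=10.0):
            """scans: list of (retention_time, [(mz, intensity), ...]) MS1 scans."""
            tol = target_mz * tol_ppm * 1e-6
            xic = []
            for rt, peaks in scans:
                signal = sum(i for mz, i in peaks if abs(mz - target_mz) <= tol)
                xic.append((rt, signal))
            return xic

        scans = [
            (10.0, [(523.774, 1.2e5), (612.331, 3.0e4)]),
            (10.1, [(523.776, 4.8e5), (612.330, 2.9e4)]),
            (10.2, [(523.775, 2.1e5), (700.412, 1.0e4)]),
        ]

        for rt, intensity in extract_xic(scans, target_mz=523.775):
            print(rt, intensity)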

  9. Local gravity and large-scale structure

    NASA Technical Reports Server (NTRS)

    Juszkiewicz, Roman; Vittorio, Nicola; Wyse, Rosemary F. G.

    1990-01-01

    The magnitude and direction of the observed dipole anisotropy of the galaxy distribution can in principle constrain the amount of large-scale power present in the spectrum of primordial density fluctuations. This paper confronts the data, provided by a recent redshift survey of galaxies detected by the IRAS satellite, with the predictions of two cosmological models with very different levels of large-scale power: the biased Cold Dark Matter dominated model (CDM) and a baryon-dominated model (BDM) with isocurvature initial conditions. Model predictions are investigated for the Local Group peculiar velocity, v(R), induced by mass inhomogeneities distributed out to a given radius, R, for R less than about 10,000 km/s. Several convergence measures for v(R) are developed, which can become powerful cosmological tests when deep enough samples become available. For the present data sets, the CDM and BDM predictions are indistinguishable at the 2 sigma level and both are consistent with observations. A promising discriminant between cosmological models is the misalignment angle between v(R) and the apex of the dipole anisotropy of the microwave background.

  10. The Phoenix series large scale LNG pool fire experiments.

    SciTech Connect

    Simpson, Richard B.; Jensen, Richard Pearson; Demosthenous, Byron; Luketa, Anay Josephine; Ricks, Allen Joseph; Hightower, Marion Michael; Blanchat, Thomas K.; Helmick, Paul H.; Tieszen, Sheldon Robert; Deola, Regina Anne; Mercier, Jeffrey Alan; Suo-Anttila, Jill Marie; Miller, Timothy J.

    2010-12-01

    The increasing demand for natural gas could increase the number and frequency of Liquefied Natural Gas (LNG) tanker deliveries to ports across the United States. Because of the increasing number of shipments and the number of possible new facilities, concerns about the potential safety of the public and property from an accidental, and even more importantly intentional spills, have increased. While improvements have been made over the past decade in assessing hazards from LNG spills, the existing experimental data is much smaller in size and scale than many postulated large accidental and intentional spills. Since the physics and hazards from a fire change with fire size, there are concerns about the adequacy of current hazard prediction techniques for large LNG spills and fires. To address these concerns, Congress funded the Department of Energy (DOE) in 2008 to conduct a series of laboratory and large-scale LNG pool fire experiments at Sandia National Laboratories (Sandia) in Albuquerque, New Mexico. This report presents the test data and results of both sets of fire experiments. A series of five reduced-scale (gas burner) tests (yielding 27 sets of data) were conducted in 2007 and 2008 at Sandia's Thermal Test Complex (TTC) to assess flame height to fire diameter ratios as a function of nondimensional heat release rates for extrapolation to large-scale LNG fires. The large-scale LNG pool fire experiments were conducted in a 120 m diameter pond specially designed and constructed in Sandia's Area III large-scale test complex. Two fire tests of LNG spills of 21 and 81 m in diameter were conducted in 2009 to improve the understanding of flame height, smoke production, and burn rate and therefore the physics and hazards of large LNG spills and fires.

  11. Investigating the Role of Large-Scale Domain Dynamics in Protein-Protein Interactions

    PubMed Central

    Delaforge, Elise; Milles, Sigrid; Huang, Jie-rong; Bouvier, Denis; Jensen, Malene Ringkjøbing; Sattler, Michael; Hart, Darren J.; Blackledge, Martin

    2016-01-01

    Intrinsically disordered linkers provide multi-domain proteins with degrees of conformational freedom that are often essential for function. These highly dynamic assemblies represent a significant fraction of all proteomes, and deciphering the physical basis of their interactions represents a considerable challenge. Here we describe the difficulties associated with mapping the large-scale domain dynamics and describe two recent examples where solution state methods, in particular NMR spectroscopy, are used to investigate conformational exchange on very different timescales. PMID:27679800

  14. A Simplified and Rapid Method for the Isolation and Transfection of Arabidopsis Leaf Mesophyll Protoplasts for Large-Scale Applications.

    PubMed

    Schapire, Arnaldo L; Lois, L Maria

    2016-01-01

    Arabidopsis leaf mesophyll protoplasts constitute an important and versatile tool for conducting cell-based experiments to analyze the functions of distinct signaling pathways and cellular machineries using proteomic, biochemical, cellular, genetic, and genomic approaches. Thus, the methods for protoplast isolation and transfection have been gradually improved to achieve efficient expression of genes of interest. Although many well-established protocols have been extensively tested, their successful application is sometimes limited to researchers with a high degree of skill and experience in protoplast handling. Here we present a detailed method for the isolation and transfection of Arabidopsis mesophyll protoplasts, in which many of the time-consuming and critical steps present in the current protocols have been simplified. The method described is fast and simple, and leads to high yields of competent protoplasts, allowing large-scale applications.

  15. Geospatial Optimization of Siting Large-Scale Solar Projects

    SciTech Connect

    Macknick, J.; Quinby, T.; Caulfield, E.; Gerritsen, M.; Diffendorfer, J.; Haines, S.

    2014-03-01

    Recent policy and economic conditions have encouraged a renewed interest in developing large-scale solar projects in the U.S. Southwest. However, siting large-scale solar projects is complex. In addition to the quality of the solar resource, solar developers must take into consideration many environmental, social, and economic factors when evaluating a potential site. This report describes a proof-of-concept, Web-based Geographical Information Systems (GIS) tool that evaluates multiple user-defined criteria in an optimization algorithm to inform discussions and decisions regarding the locations of utility-scale solar projects. Existing siting recommendations for large-scale solar projects from governmental and non-governmental organizations are not consistent with each other, are often not transparent in methods, and do not take into consideration the differing priorities of stakeholders. The siting assistance GIS tool we have developed improves upon the existing siting guidelines by being user-driven, transparent, interactive, capable of incorporating multiple criteria, and flexible. This work provides the foundation for a dynamic siting assistance tool that can greatly facilitate siting decisions among multiple stakeholders.
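
    As an illustration of the weighted, multi-criteria scoring such a siting tool performs, the sketch below (Python/NumPy, synthetic rasters) normalizes a few hypothetical layers, applies user-defined weights, masks hard exclusions, and ranks cells; the layer names and weights are assumptions, not the tool's actual inputs.

        import numpy as np

        rng = np.random.default_rng(0)
        shape = (100, 100)                          # raster grid cells
        solar = rng.uniform(4, 8, shape)            # resource layer, kWh/m2/day
        slope = rng.uniform(0, 20, shape)           # percent slope
        dist_tx = rng.uniform(0, 50, shape)         # km to nearest transmission line
        protected = rng.random(shape) < 0.1         # exclusion layer (protected land)

        def normalize(x, higher_is_better=True):
            z = (x - x.min()) / (x.max() - x.min())
            return z if higher_is_better else 1.0 - z

        weights = {"solar": 0.5, "slope": 0.2, "dist_tx": 0.3}        # user-defined
        score = (weights["solar"] * normalize(solar)
                 + weights["slope"] * normalize(slope, higher_is_better=False)
                 + weights["dist_tx"] * normalize(dist_tx, higher_is_better=False))
        score[protected] = np.nan                   # hard exclusions drop out of the ranking

        best = np.unravel_index(np.nanargmax(score), shape)
        print("top-ranked cell:", best, "score:", round(float(score[best]), 3))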

  16. Robust regression for large-scale neuroimaging studies.

    PubMed

    Fritsch, Virgile; Da Mota, Benoit; Loth, Eva; Varoquaux, Gaël; Banaschewski, Tobias; Barker, Gareth J; Bokde, Arun L W; Brühl, Rüdiger; Butzek, Brigitte; Conrod, Patricia; Flor, Herta; Garavan, Hugh; Lemaitre, Hervé; Mann, Karl; Nees, Frauke; Paus, Tomas; Schad, Daniel J; Schümann, Gunter; Frouin, Vincent; Poline, Jean-Baptiste; Thirion, Bertrand

    2015-05-01

    Multi-subject datasets used in neuroimaging group studies have a complex structure, as they exhibit non-stationary statistical properties across regions and display various artifacts. While studies with small sample sizes can rarely be shown to deviate from standard hypotheses (such as the normality of the residuals) due to the poor sensitivity of normality tests with low degrees of freedom, large-scale studies (e.g. >100 subjects) exhibit more obvious deviations from these hypotheses and call for more refined models for statistical inference. Here, we demonstrate the benefits of robust regression as a tool for analyzing large neuroimaging cohorts. First, we use an analytic test based on robust parameter estimates; based on simulations, this procedure is shown to provide an accurate statistical control without resorting to permutations. Second, we show that robust regression yields more detections than standard algorithms using as an example an imaging genetics study with 392 subjects. Third, we show that robust regression can avoid false positives in a large-scale analysis of brain-behavior relationships with over 1500 subjects. Finally we embed robust regression in the Randomized Parcellation Based Inference (RPBI) method and demonstrate that this combination further improves the sensitivity of tests carried out across the whole brain. Altogether, our results show that robust procedures provide important advantages in large-scale neuroimaging group studies. PMID:25731989
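
    The sketch below (Python/NumPy, synthetic data) shows the core idea of robust regression -- iteratively reweighted least squares with Huber weights -- by contrasting it with ordinary least squares on data containing a handful of gross outliers; it is a minimal illustration, not the paper's RPBI pipeline or its analytic test.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 200
        x = rng.normal(size=n)
        y = 0.5 * x + rng.normal(scale=1.0, size=n)
        y[np.argsort(x)[-10:]] += 20.0                       # contaminate the largest-x cases

        X = np.column_stack([np.ones(n), x])

        def huber_irls(X, y, c=1.345, iters=50):
            beta = np.linalg.lstsq(X, y, rcond=None)[0]      # OLS start
            for _ in range(iters):
                r = y - X @ beta
                s = np.median(np.abs(r - np.median(r))) / 0.6745 + 1e-12   # robust scale (MAD)
                u = np.abs(r) / s
                w = np.where(u <= c, 1.0, c / u)             # Huber weights
                sw = np.sqrt(w)
                beta = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)[0]
            return beta

        print("OLS   [intercept, slope]:", np.round(np.linalg.lstsq(X, y, rcond=None)[0], 3))
        print("Huber [intercept, slope]:", np.round(huber_irls(X, y), 3))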

  18. Reliability assessment for components of large scale photovoltaic systems

    NASA Astrophysics Data System (ADS)

    Ahadi, Amir; Ghadimi, Noradin; Mirabbasi, Davar

    2014-10-01

    Photovoltaic (PV) systems have shifted significantly from independent power generation systems to large-scale grid-connected generation systems in recent years. The power output of PV systems is affected by the reliability of the various components in the system. This study proposes an analytical approach to evaluate the reliability of large-scale, grid-connected PV systems. The fault tree method with an exponential probability distribution function is used to analyze the components of large-scale PV systems. The system is analyzed across the various sequential and parallel fault combinations in order to find all realistic ways in which the top or undesired events can occur. Additionally, the analysis can identify areas on which planned maintenance should focus. By monitoring the critical components of a PV system, it is possible not only to improve the reliability of the system, but also to optimize the maintenance costs; the latter is achieved by informing the operators about the status of the system components. The flexibility of the approach in monitoring applications can also help ensure secure operation of the system. The implementation demonstrates that the proposed method is effective and efficient and can conveniently incorporate more system maintenance plans and diagnostic strategies.
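
    A minimal numeric sketch of the underlying arithmetic (Python; the failure rates and the series/parallel layout are illustrative assumptions, not the paper's data): exponential component reliabilities R(t) = exp(-λt) are combined through series blocks (every component on the energy path must survive) and a redundant parallel block.

        import math

        def reliability(failure_rate_per_hr, hours):
            return math.exp(-failure_rate_per_hr * hours)

        def series(*r):                    # every component must survive
            out = 1.0
            for ri in r:
                out *= ri
            return out

        def parallel(*r):                  # survives if any redundant unit survives
            q = 1.0
            for ri in r:
                q *= (1.0 - ri)
            return 1.0 - q

        t = 8760.0                         # one year of operation, hours
        r_strings = reliability(2e-6, t)
        r_inverter = reliability(1e-5, t)
        r_transformer = reliability(5e-7, t)

        # PV strings feeding two redundant inverters and one step-up transformer
        r_system = series(r_strings, parallel(r_inverter, r_inverter), r_transformer)
        print(f"1-year system reliability = {r_system:.4f}")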

  19. Multiresolution comparison of precipitation datasets for large-scale models

    NASA Astrophysics Data System (ADS)

    Chun, K. P.; Sapriza Azuri, G.; Davison, B.; DeBeer, C. M.; Wheater, H. S.

    2014-12-01

    Gridded precipitation datasets are crucial for driving large-scale models used in weather forecasting and climate research. However, the quality of precipitation products is usually validated individually. Comparisons between gridded precipitation products, along with ground observations, provide another avenue for investigating how precipitation uncertainty affects the performance of large-scale models. In this study, using data from a set of precipitation gauges over British Columbia and Alberta, we evaluate several widely used North American gridded products, including the Canadian Gridded Precipitation Anomalies (CANGRD), the National Center for Environmental Prediction (NCEP) reanalysis, the Water and Global Change (WATCH) project, the thin-plate spline smoothing algorithm (ANUSPLIN) and the Canadian Precipitation Analysis (CaPA). Based on verification criteria for various temporal and spatial scales, the results provide an assessment of possible applications for the various precipitation datasets. For long-term climate variation studies (~100 years), CANGRD, NCEP, WATCH and ANUSPLIN have different comparative advantages in terms of their resolution and accuracy. For synoptic and mesoscale precipitation patterns, CaPA shows appealing spatial coherence. In addition to the products comparison, various downscaling methods are also surveyed to explore new verification and bias-reduction methods for improving gridded precipitation outputs for large-scale models.
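
    The verification logic behind such comparisons can be sketched in a few lines (Python/NumPy, synthetic gauge and product series standing in for the real datasets): compute bias, RMSE and correlation against the gauges, then repeat at a coarser temporal aggregation to see how the assessment changes with scale.

        import numpy as np

        rng = np.random.default_rng(2)
        days = 365
        gauge = rng.gamma(0.5, 4.0, days)                        # synthetic daily gauge totals (mm)
        product = np.clip(gauge + rng.normal(0.0, 2.0, days), 0, None)   # a noisy gridded "product"

        def verify(obs, mod):
            bias = float(np.mean(mod - obs))
            rmse = float(np.sqrt(np.mean((mod - obs) ** 2)))
            corr = float(np.corrcoef(obs, mod)[0, 1])
            return bias, rmse, corr

        def monthly(x):                                          # crude 12-block aggregation
            return np.array([block.sum() for block in np.array_split(x, 12)])

        print("daily   bias/RMSE/corr:", [round(v, 2) for v in verify(gauge, product)])
        print("monthly bias/RMSE/corr:", [round(v, 2) for v in verify(monthly(gauge), monthly(product))])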

  20. Large scale study of tooth enamel

    SciTech Connect

    Bodart, F.; Deconninck, G.; Martin, M.Th.

    1981-04-01

    Human tooth enamel contains traces of foreign elements. The presence of these elements is related to the history and the environment of the human body and can be considered as the signature of perturbations which occur during the growth of a tooth. A map of the distribution of these traces on a large scale sample of the population will constitute a reference for further investigations of environmental effects. One hundred eighty samples of teeth were first analysed using PIXE, backscattering and nuclear reaction techniques. The results were analysed using statistical methods. Correlations between O, F, Na, P, Ca, Mn, Fe, Cu, Zn, Pb and Sr were observed and cluster analysis was in progress. The techniques described in the present work have been developed in order to establish a method for the exploration of very large samples of the Belgian population.
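
    The correlation and cluster analysis mentioned can be illustrated as follows (Python with NumPy/SciPy, synthetic concentrations standing in for the PIXE measurements): build the element-element correlation matrix and cluster elements on correlation distance with average linkage.

        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster
        from scipy.spatial.distance import squareform

        rng = np.random.default_rng(3)
        elements = ["O", "F", "Na", "P", "Ca", "Mn", "Fe", "Cu", "Zn", "Pb", "Sr"]
        conc = rng.lognormal(mean=0.0, sigma=0.5, size=(180, len(elements)))
        conc[:, 3] = 0.8 * conc[:, 4] + 0.2 * conc[:, 3]      # make P track Ca, as in enamel

        corr = np.corrcoef(conc, rowvar=False)                # 11 x 11 element correlations
        dist = squareform(1.0 - corr, checks=False)           # correlation distance, condensed form
        labels = fcluster(linkage(dist, method="average"), t=2, criterion="maxclust")
        for element, label in zip(elements, labels):
            print(element, "-> cluster", label)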

  1. Batteries for Large Scale Energy Storage

    SciTech Connect

    Soloveichik, Grigorii L.

    2011-07-15

    In recent years, with the deployment of renewable energy sources, advances in electrified transportation, and development in smart grids, the markets for large-scale stationary energy storage have grown rapidly. Electrochemical energy storage methods are strong candidate solutions due to their high energy density, flexibility, and scalability. This review provides an overview of mature and emerging technologies for secondary and redox flow batteries. New developments in the chemistry of secondary and flow batteries as well as regenerative fuel cells are also considered. Advantages and disadvantages of current and prospective electrochemical energy storage options are discussed. The most promising technologies in the short term are high-temperature sodium batteries with β″-alumina electrolyte, lithium-ion batteries, and flow batteries. Regenerative fuel cells and lithium metal batteries with high energy density require further research to become practical.

  2. Large-scale databases of proper names.

    PubMed

    Conley, P; Burgess, C; Hage, D

    1999-05-01

    Few tools for research in proper names have been available--specifically, there is no large-scale corpus of proper names. Two corpora of proper names were constructed, one based on U.S. phone book listings, the other derived from a database of Usenet text. Name frequencies from both corpora were compared with human subjects' reaction times (RTs) to the proper names in a naming task. Regression analysis showed that the Usenet frequencies contributed to predictions of human RT, whereas phone book frequencies did not. In addition, semantic neighborhood density measures derived from the HAL corpus were compared with the subjects' RTs and found to be a better predictor of RT than was frequency in either corpus. These new corpora are freely available on line for download. Potentials for these corpora range from using the names as stimuli in experiments to using the corpus data in software applications. PMID:10495803
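
    The regression reported can be sketched with ordinary least squares (Python/NumPy, synthetic stimuli; the coefficients are invented for illustration and do not reproduce the published fit): naming RT is modeled from log corpus frequencies and a neighborhood-density covariate, and the fitted coefficients and R² are inspected.

        import numpy as np

        rng = np.random.default_rng(4)
        n = 500
        log_usenet_freq = rng.normal(3.0, 1.0, n)
        log_phone_freq = rng.normal(3.0, 1.0, n)
        density = rng.normal(0.0, 1.0, n)
        rt = 650 - 12 * log_usenet_freq - 18 * density + rng.normal(0, 25, n)   # ms, synthetic

        X = np.column_stack([np.ones(n), log_usenet_freq, log_phone_freq, density])
        beta, *_ = np.linalg.lstsq(X, rt, rcond=None)
        pred = X @ beta
        r2 = 1 - np.sum((rt - pred) ** 2) / np.sum((rt - rt.mean()) ** 2)
        print("coefficients [intercept, usenet, phone, density]:", np.round(beta, 2))
        print("R^2:", round(float(r2), 3))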

  3. Large Scale Quantum Simulations of Nuclear Pasta

    NASA Astrophysics Data System (ADS)

    Fattoyev, Farrukh J.; Horowitz, Charles J.; Schuetrumpf, Bastian

    2016-03-01

    Complex and exotic nuclear geometries, collectively referred to as "nuclear pasta", are expected to naturally exist in the crust of neutron stars and in supernovae matter. Using a set of self-consistent microscopic nuclear energy density functionals, we present the first results of large scale quantum simulations of pasta phases at baryon densities 0.03 < ρ < 0.10 fm⁻³, proton fractions 0.05

  4. Large-scale simulations of reionization

    SciTech Connect

    Kohler, Katharina; Gnedin, Nickolay Y.; Hamilton, Andrew J.S.; /JILA, Boulder

    2005-11-01

    We use cosmological simulations to explore the large-scale effects of reionization. Since reionization is a process that involves a large dynamic range--from galaxies to rare bright quasars--we need to be able to cover a significant volume of the universe in our simulation without losing the important small scale effects from galaxies. Here we have taken an approach that uses clumping factors derived from small scale simulations to approximate the radiative transfer on the sub-cell scales. Using this technique, we can cover a simulation size up to 1280 h⁻¹ Mpc with 10 h⁻¹ Mpc cells. This allows us to construct synthetic spectra of quasars similar to observed spectra of SDSS quasars at high redshifts and compare them to the observational data. These spectra can then be analyzed for HII region sizes, the presence of the Gunn-Peterson trough, and the Lyman-α forest.

  5. Large scale water lens for solar concentration.

    PubMed

    Mondol, A S; Vogel, B; Bastian, G

    2015-06-01

    Properties of large scale water lenses for solar concentration were investigated. These lenses were built from readily available materials, normal tap water and hyper-elastic linear low density polyethylene foil. Exposed to sunlight, the focal lengths and light intensities in the focal spot were measured and calculated. Their optical properties were modeled with a raytracing software based on the lens shape. We have achieved a good match of experimental and theoretical data by considering wavelength dependent concentration factor, absorption and focal length. The change in light concentration as a function of water volume was examined via the resulting load on the foil and the corresponding change of shape. The latter was extracted from images and modeled by a finite element simulation. PMID:26072893

  6. Large scale structures in transitional pipe flow

    NASA Astrophysics Data System (ADS)

    Hellström, Leo; Ganapathisubramani, Bharathram; Smits, Alexander

    2015-11-01

    We present a dual-plane snapshot POD analysis of transitional pipe flow at a Reynolds number of 3440, based on the pipe diameter. The time-resolved high-speed PIV data were simultaneously acquired in two planes, a cross-stream plane (2D-3C) and a streamwise plane (2D-2C) on the pipe centerline. The two light sheets were orthogonally polarized, allowing particles situated in each plane to be viewed independently. In the snapshot POD analysis, the modal energy is based on the cross-stream plane, while the POD modes are calculated using the dual-plane data. We present results on the emergence and decay of the energetic large scale motions during transition to turbulence, and compare these motions to those observed in fully developed turbulent flow. Supported under ONR Grant N00014-13-1-0174 and ERC Grant No. 277472.
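
    The generic snapshot-POD step can be written compactly via the SVD (Python/NumPy, synthetic snapshots; this is the textbook method, not the dual-plane implementation described above): stack mean-subtracted snapshots as columns, take the SVD, and read modal energy fractions from the squared singular values.

        import numpy as np

        rng = np.random.default_rng(5)
        n_points, n_snapshots = 2000, 300
        t = np.linspace(0, 2 * np.pi, n_snapshots)
        phi1 = np.sin(np.linspace(0, 4 * np.pi, n_points))       # two synthetic coherent structures
        phi2 = np.cos(np.linspace(0, 8 * np.pi, n_points))
        S = (np.outer(phi1, np.sin(5 * t)) + 0.5 * np.outer(phi2, np.cos(9 * t))
             + 0.05 * rng.normal(size=(n_points, n_snapshots)))

        S = S - S.mean(axis=1, keepdims=True)                    # subtract the temporal mean
        U, sigma, Vt = np.linalg.svd(S, full_matrices=False)     # columns of U are POD modes
        energy = sigma**2 / np.sum(sigma**2)
        print("energy fraction of first 4 POD modes:", np.round(energy[:4], 3))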

  7. Challenges in large scale distributed computing: bioinformatics.

    SciTech Connect

    Disz, T.; Kubal, M.; Olson, R.; Overbeek, R.; Stevens, R.; Mathematics and Computer Science; Univ. of Chicago; The Fellowship for the Interpretation of Genomes

    2005-01-01

    The amount of genomic data available for study is increasing at a rate similar to that of Moore's law. This deluge of data is challenging bioinformaticians to develop newer, faster and better algorithms for analysis and examination of this data. The growing availability of large scale computing grids coupled with high-performance networking is challenging computer scientists to develop better, faster methods of exploiting parallelism in these biological computations and deploying them across computing grids. In this paper, we describe two computations that are required to be run frequently and which require large amounts of computing resources to complete in a reasonable time. The data for these computations are very large and the sequential computational time can exceed thousands of hours. We show the importance and relevance of these computations, the nature of the data and parallelism, and we show how we are meeting the challenge of efficiently distributing and managing these computations in the SEED project.

  8. The challenge of large-scale structure

    NASA Astrophysics Data System (ADS)

    Gregory, S. A.

    1996-03-01

    The tasks that I have assumed for myself in this presentation include three separate parts. The first, appropriate to the particular setting of this meeting, is to review the basic work of the founding of this field; the appropriateness comes from the fact that W. G. Tifft made immense contributions that are not often realized by the astronomical community. The second task is to outline the general tone of the observational evidence for large scale structures. (Here, in particular, I cannot claim to be complete. I beg forgiveness from any workers who are left out by my oversight for lack of space and time.) The third task is to point out some of the major aspects of the field that may represent the clues by which some brilliant sleuth will ultimately figure out how galaxies formed.

  9. Grid sensitivity capability for large scale structures

    NASA Technical Reports Server (NTRS)

    Nagendra, Gopal K.; Wallerstein, David V.

    1989-01-01

    The considerations and the resultant approach used to implement design sensitivity capability for grids into a large scale, general purpose finite element system (MSC/NASTRAN) are presented. The design variables are grid perturbations with a rather general linking capability. Moreover, shape and sizing variables may be linked together. The design is general enough to facilitate geometric modeling techniques for generating design variable linking schemes in an easy and straightforward manner. Test cases have been run and validated by comparison with the overall finite difference method. The linking of a design sensitivity capability for shape variables in MSC/NASTRAN with an optimizer would give designers a powerful, automated tool to carry out practical optimization design of real life, complicated structures.
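
    The "overall finite difference" check used for validation can be illustrated on a toy response (plain Python; the cantilever formula and numbers are assumptions chosen only to show the comparison, not anything taken from MSC/NASTRAN):

        def tip_displacement(width):
            # toy response: cantilever tip deflection, delta = P*L^3 / (3*E*I), with I = w*h^3/12
            load, length, height, e_mod = 1000.0, 2.0, 0.1, 2.1e11
            inertia = width * height**3 / 12.0
            return load * length**3 / (3.0 * e_mod * inertia)

        def analytic_sensitivity(width):
            # for this response, d(delta)/d(width) = -delta / width
            return -tip_displacement(width) / width

        w0, h = 0.05, 1e-6
        fd = (tip_displacement(w0 + h) - tip_displacement(w0 - h)) / (2 * h)
        print("analytic:", analytic_sensitivity(w0), " central finite difference:", fd)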

  10. Large-Scale Astrophysical Visualization on Smartphones

    NASA Astrophysics Data System (ADS)

    Becciani, U.; Massimino, P.; Costa, A.; Gheller, C.; Grillo, A.; Krokos, M.; Petta, C.

    2011-07-01

    Nowadays digital sky surveys and long-duration, high-resolution numerical simulations using high performance computing and grid systems produce multidimensional astrophysical datasets in the order of several Petabytes. Sharing visualizations of such datasets within communities and collaborating research groups is of paramount importance for disseminating results and advancing astrophysical research. Moreover educational and public outreach programs can benefit greatly from novel ways of presenting these datasets by promoting understanding of complex astrophysical processes, e.g., formation of stars and galaxies. We have previously developed VisIVO Server, a grid-enabled platform for high-performance large-scale astrophysical visualization. This article reviews the latest developments on VisIVO Web, a custom designed web portal wrapped around VisIVO Server, then introduces VisIVO Smartphone, a gateway connecting VisIVO Web and data repositories for mobile astrophysical visualization. We discuss current work and summarize future developments.

  12. The XMM Large Scale Structure Survey

    NASA Astrophysics Data System (ADS)

    Pierre, Marguerite

    2005-10-01

    We propose to complete, by an additional 5 deg2, the XMM-LSS Survey region overlying the Spitzer/SWIRE field. This field already has CFHTLS and Integral coverage, and will encompass about 10 deg2. The resulting multi-wavelength medium-depth survey, which complements XMM and Chandra deep surveys, will provide a unique view of large-scale structure over a wide range of redshift, and will show active galaxies in the full range of environments. The complete coverage by optical and IR surveys provides high-quality photometric redshifts, so that cosmological results can quickly be extracted. In the spirit of a Legacy survey, we will make the raw X-ray data immediately public. Multi-band catalogues and images will also be made available on short time scales.

  13. Large-scale sequential quadratic programming algorithms

    SciTech Connect

    Eldersveld, S.K.

    1992-09-01

    The problem addressed is the general nonlinear programming problem: finding a local minimizer for a nonlinear function subject to a mixture of nonlinear equality and inequality constraints. The methods studied are in the class of sequential quadratic programming (SQP) algorithms, which have previously proved successful for problems of moderate size. Our goal is to devise an SQP algorithm that is applicable to large-scale optimization problems, using sparse data structures and storing less curvature information but maintaining the property of superlinear convergence. The main features are: 1. The use of a quasi-Newton approximation to the reduced Hessian of the Lagrangian function. Only an estimate of the reduced Hessian matrix is required by our algorithm. The impact of not having available the full Hessian approximation is studied and alternative estimates are constructed. 2. The use of a transformation matrix Q. This allows the QP gradient to be computed easily when only the reduced Hessian approximation is maintained. 3. The use of a reduced-gradient form of the basis for the null space of the working set. This choice of basis is more practical than an orthogonal null-space basis for large-scale problems. The continuity condition for this choice is proven. 4. The use of incomplete solutions of quadratic programming subproblems. Certain iterates generated by an active-set method for the QP subproblem are used in place of the QP minimizer to define the search direction for the nonlinear problem. An implementation of the new algorithm has been obtained by modifying the code MINOS. Results and comparisons with MINOS and NPSOL are given for the new algorithm on a set of 92 test problems.
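
    For readers unfamiliar with SQP, the call below (Python/SciPy) solves a tiny constrained problem with SLSQP, a dense sequential quadratic programming routine; it only illustrates the problem class, not the sparse, reduced-Hessian, quasi-Newton machinery developed in this work.

        import numpy as np
        from scipy.optimize import minimize

        def objective(x):
            return (x[0] - 1.0) ** 2 + (x[1] - 2.5) ** 2

        constraints = [
            {"type": "eq",   "fun": lambda x: x[0] + x[1] - 3.0},   # equality constraint
            {"type": "ineq", "fun": lambda x: x[0] - 0.5},          # x0 >= 0.5
        ]

        res = minimize(objective, x0=np.array([0.0, 0.0]), method="SLSQP",
                       constraints=constraints)
        print("solution:", np.round(res.x, 4), "objective:", round(float(res.fun), 4))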

  14. Introducing Large-Scale Innovation in Schools

    NASA Astrophysics Data System (ADS)

    Sotiriou, Sofoklis; Riviou, Katherina; Cherouvis, Stephanos; Chelioti, Eleni; Bogner, Franz X.

    2016-08-01

    Education reform initiatives tend to promise higher effectiveness in classrooms especially when emphasis is given to e-learning and digital resources. Practical changes in classroom realities or school organization, however, are lacking. A major European initiative entitled Open Discovery Space (ODS) examined the challenge of modernizing school education via a large-scale implementation of an open-scale methodology in using technology-supported innovation. The present paper describes this innovation scheme which involved schools and teachers all over Europe, embedded technology-enhanced learning into wider school environments and provided training to teachers. Our implementation scheme consisted of three phases: (1) stimulating interest, (2) incorporating the innovation into school settings and (3) accelerating the implementation of the innovation. The scheme's impact was monitored for a school year using five indicators: leadership and vision building, ICT in the curriculum, development of ICT culture, professional development support, and school resources and infrastructure. Based on about 400 schools, our study produced four results: (1) The growth in digital maturity was substantial, even for previously high scoring schools. This was even more important for indicators such as "vision and leadership" and "professional development." (2) The evolution of networking is presented graphically, showing the gradual growth of connections achieved. (3) These communities became core nodes, involving numerous teachers in sharing educational content and experiences: One out of three registered users (36 %) has shared his/her educational resources in at least one community. (4) Satisfaction scores ranged from 76 % (offer of useful support through teacher academies) to 87 % (good environment to exchange best practices). Initiatives such as ODS add substantial value to schools on a large scale.

  15. Supporting large-scale computational science

    SciTech Connect

    Musick, R

    1998-10-01

    A study has been carried out to determine the feasibility of using commercial database management systems (DBMSs) to support large-scale computational science. Conventional wisdom in the past has been that DBMSs are too slow for such data. Several events over the past few years have muddied the clarity of this mindset: (1) several commercial DBMS systems have demonstrated storage and ad-hoc query access to Terabyte data sets; (2) several large-scale science teams, such as EOSDIS [NAS91], high energy physics [MM97] and human genome [Kin93], have adopted (or make frequent use of) commercial DBMS systems as the central part of their data management scheme; (3) several major DBMS vendors have introduced their first object-relational products (ORDBMSs), which have the potential to support large, array-oriented data; and (4) in some cases, performance is a moot issue, in particular if the performance of legacy applications is not reduced while new, albeit slow, capabilities are added to the system. The basic assessment is still that DBMSs do not scale to large computational data. However, many of the reasons have changed, and there is an expiration date attached to that prognosis. This document expands on this conclusion, identifies the advantages and disadvantages of various commercial approaches, and describes the studies carried out in exploring this area. The document is meant to be brief, technical and informative, rather than a motivational pitch. The conclusions within are very likely to become outdated within the next 5-7 years, as market forces will have a significant impact on the state of the art in scientific data management over the next decade.

  16. Nanoscale Proteomics

    SciTech Connect

    Shen, Yufeng; Tolic, Nikola; Masselon, Christophe D.; Pasa-Tolic, Liljiana; Camp, David G.; Anderson, Gordon A.; Smith, Richard D.; Lipton, Mary S.

    2004-02-01

    This paper describes efforts to develop a liquid chromatography (LC)/mass spectrometry (MS) technology for ultra-sensitive proteomics studies, i.e. nanoscale proteomics. The approach combines high-efficiency nano-scale LC with advanced MS, including high sensitivity and high resolution Fourier transform ion cyclotron resonance (FTICR) MS, to perform both single-stage MS and tandem MS (MS/MS) proteomic analyses. The technology developed enables large-scale protein identification from nanogram size proteomic samples and characterization of more abundant proteins from sub-picogram size complex samples. Protein identification in such studies using MS is feasible from <75 zeptomole of a protein, and the average proteome measurement throughput is >200 proteins/h and ~3 h/sample. Higher throughput (>1000 proteins/h) and more sensitive detection limits can be obtained using an “accurate mass and time” tag approach developed at our laboratory. These capabilities lay the foundation for studies from single or limited numbers of cells.
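
    As a back-of-envelope check on the stated sensitivity (Python; a straightforward unit conversion, nothing instrument-specific), 75 zeptomole corresponds to roughly 45,000 molecules:

        AVOGADRO = 6.02214076e23        # molecules per mole
        ZEPTO = 1e-21                   # zepto prefix
        molecules = 75 * ZEPTO * AVOGADRO
        print(f"75 zmol = {molecules:.0f} molecules")   # about 45,000 molecules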

  17. Integration of RNA-seq and proteomics data with genomics for improved genome annotation in Apicomplexan parasites.

    PubMed

    Silmon de Monerri, Natalie C; Weiss, Louis M

    2015-08-01

    While high quality genomic sequence data is available for many pathogenic organisms, the corresponding gene annotations are often plagued with inaccuracies that can hinder research that utilizes such genomic data. Experimental validation of gene models is clearly crucial in improving such gene annotations; the field of proteogenomics is an emerging area of research wherein proteomic data is applied to testing and improving genetic models. Krishna et al. [Proteomics 2015, 15, 2618-2628] investigated whether incorporation of RNA-seq data into proteogenomics analyses can contribute significantly to validation studies of genome annotation, in two important parasitic organisms Toxoplasma gondii and Neospora caninum. They applied a systematic approach to combine new and previously published proteomics data from T. gondii and N. caninum with transcriptomics data, leading to substantially improved gene models for these organisms. This study illustrates the importance of incorporating experimental data from both proteomics and RNA-seq studies into routine genome annotation protocols.

  18. Differential Proteomics Analysis of Bacillus amyloliquefaciens and Its Genome-Shuffled Mutant for Improving Surfactin Production

    PubMed Central

    Zhao, Junfeng; Cao, Lin; Zhang, Chong; Zhong, Lei; Lu, Jing; Lu, Zhaoxin

    2014-01-01

    Genome shuffling technology was used as a novel whole-genome engineering approach to rapidly improve the antimicrobial lipopeptide yield of Bacillus amyloliquefaciens. Comparative proteomic analysis of the parental ES-2-4 and genome-shuffled FMB38 strains was conducted to examine the differentially expressed proteins. The proteome was separated by 2-DE (two-dimensional electrophoresis) and analyzed by MS (mass spectrometry). In the shuffled strain FMB38, 51 differentially expressed protein spots with greater than two-fold differences in spot density were detected by gel image comparison. Forty-six protein spots were detectable by silver staining and further MS analysis. The results demonstrated that among the 46 protein spots particularly induced in the genome-shuffled mutant, 15 were related to metabolism, five to DNA replication, recombination and repair, six to translation and post-translational modifications, one to cell secretion and signal transduction mechanisms, three to surfactin synthesis, two to energy production and conversion, and 14 to others. All of this indicated that the metabolic capability of the mutant was improved by the genome shuffling. The study will enable future detailed investigation of gene expression and function linked with surfactin synthesis. The results of the proteome analysis may provide information for metabolic engineering of Bacillus amyloliquefaciens for overproduction of surfactin. PMID:25365175
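
    The two-fold screen on spot densities amounts to a simple signed fold-change filter, sketched below (Python; the spot identifiers and intensities are invented for illustration):

        parent =   {"spot_01": 1200.0, "spot_02": 300.0, "spot_03": 950.0, "spot_04": 80.0}
        shuffled = {"spot_01": 2600.0, "spot_02": 310.0, "spot_03": 410.0, "spot_04": 400.0}

        differential = {}
        for spot, p in parent.items():
            s = shuffled[spot]
            fold = s / p if s >= p else -(p / s)       # signed fold change
            if abs(fold) >= 2.0:
                differential[spot] = round(fold, 2)

        print(differential)    # spots passing the two-fold threshold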

  19. Systems proteomics of liver mitochondria function.

    PubMed

    Williams, Evan G; Wu, Yibo; Jha, Pooja; Dubuis, Sébastien; Blattmann, Peter; Argmann, Carmen A; Houten, Sander M; Amariuta, Tiffany; Wolski, Witold; Zamboni, Nicola; Aebersold, Ruedi; Auwerx, Johan

    2016-06-10

    Recent improvements in quantitative proteomics approaches, including Sequential Window Acquisition of all Theoretical Mass Spectra (SWATH-MS), permit reproducible large-scale protein measurements across diverse cohorts. Together with genomics, transcriptomics, and other technologies, transomic data sets can be generated that permit detailed analyses across broad molecular interaction networks. Here, we examine mitochondrial links to liver metabolism through the genome, transcriptome, proteome, and metabolome of 386 individuals in the BXD mouse reference population. Several links were validated from genetic variants to transcripts, proteins, metabolites, and phenotypes. Among these, sequence variants in Cox7a2l alter its protein's activity, which in turn leads to downstream differences in mitochondrial supercomplex formation. This data set demonstrates that the proteome can now be quantified comprehensively, serving as a key complement to transcriptomics, genomics, and metabolomics--a combination moving us forward in complex trait analysis. PMID:27284200

  1. Large-Scale Statistics for Cu Electromigration

    NASA Astrophysics Data System (ADS)

    Hauschildt, M.; Gall, M.; Hernandez, R.

    2009-06-01

    Even after the successful introduction of Cu-based metallization, the electromigration failure risk has remained one of the important reliability concerns for advanced process technologies. The observation of strong bimodality for the electron up-flow direction in dual-inlaid Cu interconnects has added complexity, but is now widely accepted. The failure voids can occur either within the via ("early" mode) or within the trench ("late" mode). More recently, bimodality has been reported also in down-flow electromigration, leading to very short lifetimes due to small, slit-shaped voids under vias. For a more thorough investigation of these early failure phenomena, specific test structures were designed based on the Wheatstone Bridge technique. The use of these structures enabled an increase of the tested sample size to close to 675,000, allowing a direct analysis of electromigration failure mechanisms at the single-digit ppm regime. Results indicate that down-flow electromigration exhibits bimodality at very small percentage levels, not readily identifiable with standard testing methods. The activation energy for the down-flow early failure mechanism was determined to be 0.83±0.02 eV. Within the small error bounds of this large-scale statistical experiment, this value is deemed to be significantly lower than the usually reported activation energy of 0.90 eV for electromigration-induced diffusion along Cu/SiCN interfaces. Due to the advantages of the Wheatstone Bridge technique, we were also able to expand the experimental temperature range down to 150 °C, coming quite close to typical operating conditions of up to 125 °C. As a result of the lowered activation energy, we conclude that the down-flow early failure mode may control the chip lifetime at operating conditions. The slit-like character of the early failure void morphology also raises concerns about the validity of the Blech effect for this mechanism. A very small amount of Cu depletion may cause failure even before a
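
    The practical meaning of the 0.83 eV activation energy can be shown with the Arrhenius term of Black's equation (Python; the current-density term is held fixed, and the temperatures are simply the stress and use conditions quoted above):

        import math

        K_B = 8.617e-5          # Boltzmann constant, eV/K
        E_A = 0.83              # eV, down-flow early-failure mode reported above

        def arrhenius_lifetime_ratio(t_use_c, t_stress_c, e_a=E_A):
            t_use, t_stress = t_use_c + 273.15, t_stress_c + 273.15
            return math.exp((e_a / K_B) * (1.0 / t_use - 1.0 / t_stress))

        # median-lifetime extrapolation from a 150 C stress test to 125 C operation
        print(f"lifetime ratio, 125 C use vs. 150 C stress: {arrhenius_lifetime_ratio(125.0, 150.0):.1f}x")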

  2. Effective Leveraging of Targeted Search Spaces for Improving Peptide Identification in Tandem Mass Spectrometry Based Proteomics.

    PubMed

    Shanmugam, Avinash K; Nesvizhskii, Alexey I

    2015-12-01

    In shotgun proteomics, peptides are typically identified using database searching, which involves scoring acquired tandem mass spectra against peptides derived from standard protein sequence databases such as Uniprot, Refseq, or Ensembl. In this strategy, the sensitivity of peptide identification is known to be affected by the size of the search space. Therefore, creating a targeted sequence database containing only peptides likely to be present in the analyzed sample can be a useful technique for improving the sensitivity of peptide identification. In this study, we describe how targeted peptide databases can be created based on the frequency of identification in the global proteome machine database (GPMDB), the largest publicly available repository of peptide and protein identification data. We demonstrate that targeted peptide databases can be easily integrated into existing proteome analysis workflows and describe a computational strategy for minimizing any loss of peptide identifications arising from potential search space incompleteness in the targeted search spaces. We demonstrate the performance of our workflow using several data sets of varying size and sample complexity.
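
    Building a targeted database reduces to filtering peptides by how often they have been observed, as in the sketch below (Python; the tab-separated input format, file names and observation-count threshold are assumptions for illustration, not GPMDB's actual export schema):

        import csv

        def build_targeted_db(counts_tsv, fasta_out, min_observations=10):
            """Keep peptides seen at least min_observations times; write a peptide FASTA."""
            kept = 0
            with open(counts_tsv) as fin, open(fasta_out, "w") as fout:
                for row in csv.reader(fin, delimiter="\t"):
                    peptide, count = row[0], int(row[1])
                    if count >= min_observations:
                        fout.write(f">pep_{kept}|obs={count}\n{peptide}\n")
                        kept += 1
            return kept

        # Hypothetical usage:
        # n = build_targeted_db("gpmdb_peptide_counts.tsv", "targeted_peptides.fasta")
        # print(n, "peptides retained for the targeted search space")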

  3. Improved metabolites of pharmaceutical ingredient grade Ginkgo biloba and the correlated proteomics analysis.

    PubMed

    Zheng, Wen; Li, Ximin; Zhang, Lin; Zhang, Yanzhen; Lu, Xiaoping; Tian, Jingkui

    2015-06-01

    Ginkgo biloba is an attractive and traditional medicinal plant, and has been widely used as a phytomedicine in the prevention and treatment of cardiovascular and cerebrovascular diseases. Flavonoids and terpene lactones are the major bioactive components of Ginkgo, whereas the ginkgolic acids (GAs) with strong allergenic properties are strictly controlled. In this study, we tested the content of flavonoids and GAs under ultraviolet-B (UV-B) treatment and performed comparative proteomic analyses to identify differentially expressed proteins upon UV-B radiation that might play a crucial role in producing flavonoids and GAs. Our phytochemical analyses demonstrated that UV-B irradiation significantly increased the content of active flavonoids and decreased the content of toxic GAs. We conducted comparative proteomic analysis of both whole-leaf and chloroplast proteins. In total, 27 differential proteins in the whole leaf and 43 differential proteins in the chloroplast were positively identified and functionally annotated. The proteomic data suggested that enhanced UV-B radiation exposure activated antioxidants and stress-responsive proteins as well as reduced the rate of photosynthesis. We demonstrate that UV-B irradiation pharmaceutically improved the metabolic ingredients of Ginkgo, particularly in terms of reducing GAs. With high UV absorption properties and antioxidant activities, the flavonoids were likely highly induced as protective molecules following UV-B irradiation. PMID:25604066

  6. A multilevel optimization of large-scale dynamic systems

    NASA Technical Reports Server (NTRS)

    Siljak, D. D.; Sundareshan, M. K.

    1976-01-01

    A multilevel feedback control scheme is proposed for optimization of large-scale systems composed of a number of (not necessarily weakly coupled) subsystems. Local controllers are used to optimize each subsystem, ignoring the interconnections. Then, a global controller may be applied to minimize the effect of interconnections and improve the performance of the overall system. At the cost of suboptimal performance, this optimization strategy ensures invariance of suboptimality and stability of the systems under structural perturbations whereby subsystems are disconnected and again connected during operation.

  7. Large scale digital atlases in neuroscience

    NASA Astrophysics Data System (ADS)

    Hawrylycz, M.; Feng, D.; Lau, C.; Kuan, C.; Miller, J.; Dang, C.; Ng, L.

    2014-03-01

    Imaging in neuroscience has revolutionized our current understanding of brain structure, architecture and increasingly its function. Many characteristics of morphology, cell type, and neuronal circuitry have been elucidated through methods of neuroimaging. Combining this data in a meaningful, standardized, and accessible manner is the scope and goal of the digital brain atlas. Digital brain atlases are used today in neuroscience to characterize the spatial organization of neuronal structures, for planning and guidance during neurosurgery, and as a reference for interpreting other data modalities such as gene expression and connectivity data. The field of digital atlases is extensive and, in addition to atlases of the human brain, includes high-quality brain atlases of the mouse, rat, rhesus macaque, and other model organisms. Using techniques based on histology, structural and functional magnetic resonance imaging as well as gene expression data, modern digital atlases use probabilistic and multimodal techniques, as well as sophisticated visualization software, to form an integrated product. Toward this goal, brain atlases form a common coordinate framework for summarizing, accessing, and organizing this knowledge and will undoubtedly remain a key technology in neuroscience in the future. Since the development of its flagship project of a genome-wide image-based atlas of the mouse brain, the Allen Institute for Brain Science has used imaging as a primary data modality for many of its large scale atlas projects. We present an overview of Allen Institute digital atlases in neuroscience, with a focus on the challenges and opportunities for image processing and computation.

  8. Large-scale carbon fiber tests

    NASA Technical Reports Server (NTRS)

    Pride, R. A.

    1980-01-01

    A realistic release of carbon fibers was established by burning a minimum of 45 kg of carbon fiber composite aircraft structural components in each of five large scale, outdoor aviation jet fuel fire tests. This release was quantified by several independent assessments with various instruments developed specifically for these tests. The most likely values for the mass of single carbon fibers released ranged from 0.2 percent of the initial mass of carbon fiber for the source tests (zero wind velocity) to a maximum of 0.6 percent of the initial carbon fiber mass for dissemination tests (5 to 6 m/s wind velocity). Mean fiber lengths for fibers greater than 1 mm in length ranged from 2.5 to 3.5 mm. Mean diameters ranged from 3.6 to 5.3 micrometers which was indicative of significant oxidation. Footprints of downwind dissemination of the fire released fibers were measured to 19.1 km from the fire.

  9. Large-scale clustering of cosmic voids

    NASA Astrophysics Data System (ADS)

    Chan, Kwan Chuen; Hamaus, Nico; Desjacques, Vincent

    2014-11-01

    We study the clustering of voids using N-body simulations and simple theoretical models. The excursion-set formalism describes fairly well the abundance of voids identified with the watershed algorithm, although the void formation threshold required is quite different from the spherical collapse value. The void cross bias b_c is measured and its large-scale value is found to be consistent with the peak background split results. A simple fitting formula for b_c is found. We model the void auto-power spectrum taking into account the void biasing and exclusion effect. A good fit to the simulation data is obtained for voids with radii ≳ 30 Mpc h⁻¹, especially when the void biasing model is extended to 1-loop order. However, the best-fit bias parameters do not agree well with the peak-background results. Being able to fit the void auto-power spectrum is particularly important not only because it is the direct observable in galaxy surveys, but also because our method enables us to treat the bias parameters as nuisance parameters, which are sensitive to the techniques used to identify voids.

  10. Simulations of Large Scale Structures in Cosmology

    NASA Astrophysics Data System (ADS)

    Liao, Shihong

    Large-scale structures are powerful probes for cosmology. Due to the long range and non-linear nature of gravity, the formation of cosmological structures is a very complicated problem. The only known viable solution is cosmological N-body simulations. In this thesis, we use cosmological N-body simulations to study structure formation, particularly dark matter haloes' angular momenta and dark matter velocity field. The origin and evolution of angular momenta is an important ingredient for the formation and evolution of haloes and galaxies. We study the time evolution of the empirical angular momentum - mass relation for haloes to offer a more complete picture about its origin, dependences on cosmological models and nonlinear evolutions. We also show that haloes follow a simple universal specific angular momentum profile, which is useful in modelling haloes' angular momenta. The dark matter velocity field will become a powerful cosmological probe in the coming decades. However, theoretical predictions of the velocity field rely on N-body simulations and thus may be affected by numerical artefacts (e.g. finite box size, softening length and initial conditions). We study how such numerical effects affect the predicted pairwise velocities, and we propose a theoretical framework to understand and correct them. Our results will be useful for accurately comparing N-body simulations to observational data of pairwise velocities.

  11. Curvature constraints from large scale structure

    NASA Astrophysics Data System (ADS)

    Di Dio, Enea; Montanari, Francesco; Raccanelli, Alvise; Durrer, Ruth; Kamionkowski, Marc; Lesgourgues, Julien

    2016-06-01

    We modified the CLASS code in order to include relativistic galaxy number counts in spatially curved geometries; we present the formalism and study the effect of relativistic corrections on spatial curvature. The new version of the code is now publicly available. Using a Fisher matrix analysis, we investigate how measurements of the spatial curvature parameter ΩK with future galaxy surveys are affected by relativistic effects, which influence observations of the large scale galaxy distribution. These effects include contributions from cosmic magnification, Doppler terms and terms involving the gravitational potential. As an application, we consider angle and redshift dependent power spectra, which are especially well suited for model independent cosmological constraints. We compute our results for a representative deep, wide and spectroscopic survey, and our results show the impact of relativistic corrections on spatial curvature parameter estimation. We show that constraints on the curvature parameter may be strongly biased if, in particular, cosmic magnification is not included in the analysis. Other relativistic effects turn out to be subdominant in the studied configuration. We analyze how the shift in the estimated best-fit value for the curvature and other cosmological parameters depends on the magnification bias parameter, and find that significant biases are to be expected if this term is not properly considered in the analysis.
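
    The Fisher-matrix logic behind such forecasts can be reduced to a toy example (Python/NumPy): numerical derivatives of a two-parameter model spectrum, a Fisher matrix summed over multipoles, and marginalized errors from its inverse; the model, errors and fiducial values are invented and are not the relativistic number-count spectra computed with CLASS in the paper.

        import numpy as np

        ells = np.arange(10, 500)

        def model_cl(amp, tilt):                      # toy binned power spectrum
            return amp * (ells / 100.0) ** tilt * 1e-9

        def fisher(params, sigma_cl, step=1e-4):
            derivs = []
            for i in range(len(params)):
                hi, lo = list(params), list(params)
                hi[i] += step * params[i]
                lo[i] -= step * params[i]
                derivs.append((model_cl(*hi) - model_cl(*lo)) / (2 * step * params[i]))
            f = np.zeros((len(params), len(params)))
            for i in range(len(params)):
                for j in range(len(params)):
                    f[i, j] = np.sum(derivs[i] * derivs[j] / sigma_cl**2)
            return f

        fiducial = [1.0, -1.5]
        sigma_cl = 0.1 * model_cl(*fiducial)          # crude 10% per-multipole errors
        cov = np.linalg.inv(fisher(fiducial, sigma_cl))
        print("marginalized 1-sigma errors on [amp, tilt]:", np.round(np.sqrt(np.diag(cov)), 4))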

  12. Backscatter in Large-Scale Flows

    NASA Astrophysics Data System (ADS)

    Nadiga, Balu

    2009-11-01

    Downgradient mixing of potential-vorticity and its variants are commonly employed to model the effects of unresolved geostrophic turbulence on resolved scales. This is motivated by the (inviscid and unforced) particle-wise conservation of potential-vorticity and the mean forward or down-scale cascade of potential enstrophy in geostrophic turbulence. By examining the statistical distribution of the transfer of potential enstrophy from mean or filtered motions to eddy or sub-filter motions, we find that the mean forward cascade results from the forward-scatter being only slightly greater than the backscatter. Downgradient mixing ideas do not recognize such equitable mean-eddy or large scale-small scale interactions and consequently model only the mean effect of forward cascade; the importance of capturing the effects of backscatter---the forcing of resolved scales by unresolved scales---is only beginning to be recognized. While recent attempts to model the effects of backscatter on resolved scales have taken a stochastic approach, our analysis suggests that these effects are amenable to being modeled deterministically.

  13. Large scale molecular simulations of nanotoxicity.

    PubMed

    Jimenez-Cruz, Camilo A; Kang, Seung-gu; Zhou, Ruhong

    2014-01-01

    The widespread use of nanomaterials in biomedical applications has been accompanied by an increasing interest in understanding their interactions with tissues, cells, and biomolecules, and in particular, in how they might affect the integrity of cell membranes and proteins. In this mini-review, we present a summary of some of the recent studies on this important subject, especially from the point of view of large scale molecular simulations. The carbon-based nanomaterials and noble metal nanoparticles are the main focus, with additional discussions on quantum dots and other nanoparticles as well. The driving forces for adsorption of fullerenes, carbon nanotubes, and graphene nanosheets onto proteins or cell membranes are found to be mainly hydrophobic interactions and the so-called π-π stacking (between aromatic rings), while for the noble metal nanoparticles the long-range electrostatic interactions play a bigger role. More interestingly, there is also growing evidence showing that nanotoxicity can have implications in the de novo design of nanomedicine. For example, the endohedral metallofullerenol Gd@C₈₂(OH)₂₂ is shown to inhibit tumor growth and metastasis by inhibiting the enzyme MMP-9, and graphene is illustrated to disrupt bacteria cell membranes by insertion/cutting as well as destructive extraction of lipid molecules. These recent findings have provided a better understanding of nanotoxicity at the molecular level and also suggested therapeutic potential by using the cytotoxicity of nanoparticles against cancer or bacteria cells.

  14. Large scale mechanical metamaterials as seismic shields

    NASA Astrophysics Data System (ADS)

    Miniaci, Marco; Krushynska, Anastasiia; Bosia, Federico; Pugno, Nicola M.

    2016-08-01

    Earthquakes represent one of the most catastrophic natural events affecting mankind. At present, a universally accepted risk mitigation strategy for seismic events remains to be proposed. Most approaches are based on vibration isolation of structures rather than on the remote shielding of incoming waves. In this work, we propose a novel approach to the problem and discuss the feasibility of a passive isolation strategy for seismic waves based on large-scale mechanical metamaterials, including, for the first time, numerical analysis of both surface and guided waves, soil dissipation effects, and full 3D simulations. The study focuses on realistic structures that can be effective in frequency ranges of interest for seismic waves, and optimal design criteria are provided, exploring different metamaterial configurations, combining phononic crystals and locally resonant structures with different ranges of mechanical properties. Dispersion analysis and full-scale 3D transient wave transmission simulations are carried out on finite size systems to assess the seismic wave amplitude attenuation in realistic conditions. Results reveal that both surface and bulk seismic waves can be considerably attenuated, making this strategy viable for the protection of civil structures against seismic risk. The proposed remote shielding approach could open up new perspectives in the field of seismology and in related areas of low-frequency vibration damping or blast protection.

  15. Large-scale wind turbine structures

    NASA Technical Reports Server (NTRS)

    Spera, David A.

    1988-01-01

    The purpose of this presentation is to show how structural technology was applied in the design of modern wind turbines, which were recently brought to an advanced stage of development as sources of renewable power. Wind turbine structures present many difficult problems because they are relatively slender and flexible; subject to vibration and aeroelastic instabilities; acted upon by loads which are often nondeterministic; operated continuously with little maintenance in all weather; and dominated by life-cycle cost considerations. Progress in horizontal-axis wind turbines (HAWT) development was paced by progress in the understanding of structural loads, modeling of structural dynamic response, and designing of innovative structural response. During the past 15 years a series of large HAWTs was developed. This has culminated in the recent completion of the world's largest operating wind turbine, the 3.2 MW Mod-5B power plant installed on the island of Oahu, Hawaii. Some of the applications of structures technology to wind turbines will be illustrated by referring to the Mod-5B design. First, a video overview will be presented to provide familiarization with the Mod-5B project and the important components of the wind turbine system. Next, the structural requirements for large-scale wind turbines will be discussed, emphasizing the difficult fatigue-life requirements. Finally, the procedures used to design the structure will be presented, including the use of the fracture mechanics approach for determining allowable fatigue stresses.

  16. Large-scale wind turbine structures

    NASA Astrophysics Data System (ADS)

    Spera, David A.

    1988-05-01

    The purpose of this presentation is to show how structural technology was applied in the design of modern wind turbines, which were recently brought to an advanced stage of development as sources of renewable power. Wind turbine structures present many difficult problems because they are relatively slender and flexible; subject to vibration and aeroelastic instabilities; acted upon by loads which are often nondeterministic; operated continuously with little maintenance in all weather; and dominated by life-cycle cost considerations. Progress in horizontal-axis wind turbines (HAWT) development was paced by progress in the understanding of structural loads, modeling of structural dynamic response, and designing of innovative structural response. During the past 15 years a series of large HAWTs was developed. This has culminated in the recent completion of the world's largest operating wind turbine, the 3.2 MW Mod-5B power plant installed on the island of Oahu, Hawaii. Some of the applications of structures technology to wind turbines will be illustrated by referring to the Mod-5B design. First, a video overview will be presented to provide familiarization with the Mod-5B project and the important components of the wind turbine system. Next, the structural requirements for large-scale wind turbines will be discussed, emphasizing the difficult fatigue-life requirements. Finally, the procedures used to design the structure will be presented, including the use of the fracture mechanics approach for determining allowable fatigue stresses.
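
    As a generic illustration of the fracture-mechanics approach mentioned above, the sketch below integrates the Paris crack-growth law da/dN = C(ΔK)^m to estimate a fatigue life for an assumed cyclic stress range; all constants are placeholders, not Mod-5B design values.

```python
import numpy as np

# Generic fracture-mechanics sketch: integrate the Paris law da/dN = C * (dK)^m
# to estimate fatigue life for a given stress range. All numbers are illustrative
# placeholders, not Mod-5B design values.
C, m = 1.0e-11, 3.0            # Paris constants (assumed units: m/cycle, MPa*sqrt(m))
Y = 1.12                       # geometry factor for a surface crack (assumed)
delta_sigma = 40.0             # cyclic stress range, MPa (assumed)
a, a_crit = 1.0e-3, 25.0e-3    # initial and critical crack lengths, m (assumed)

cycles, da_step = 0.0, 1.0e-5  # integrate crack growth in small length increments
while a < a_crit:
    delta_K = Y * delta_sigma * np.sqrt(np.pi * a)   # stress-intensity range
    cycles += da_step / (C * delta_K**m)             # cycles needed to grow by da_step
    a += da_step

print(f"estimated fatigue life: {cycles:.3e} cycles")
```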

  17. An informal paper on large-scale dynamic systems

    NASA Technical Reports Server (NTRS)

    Ho, Y. C.

    1975-01-01

    Large scale systems are defined as systems requiring more than one decision maker to control the system. Decentralized control and decomposition are discussed for large scale dynamic systems. Information and many-person decision problems are analyzed.

  18. Recent developments in quantitative proteomics.

    PubMed

    Becker, Christopher H; Bern, Marshall

    2011-06-17

    Proteomics is the study of proteins on a large scale, encompassing the many interests scientists and physicians have in their expression and physical properties. Proteomics continues to be a rapidly expanding field, with a wealth of reports regularly appearing on technology enhancements and scientific studies using these new tools. This review focuses primarily on the quantitative aspect of protein expression and the associated computational machinery for making large-scale identifications of proteins and their post-translational modifications. The primary emphasis is on the combination of liquid chromatography-mass spectrometry (LC-MS) methods and associated tandem mass spectrometry (LC-MS/MS). Tandem mass spectrometry, or MS/MS, involves a second analysis within the instrument after a molecular dissociative event in order to obtain structural information including but not limited to sequence information. This review further focuses primarily on the study of in vitro digested proteins known as bottom-up or shotgun proteomics. A brief discussion of recent instrumental improvements precedes a discussion on affinity enrichment and depletion of proteins, followed by a review of the major approaches (label-free and isotope-labeling) to making protein expression measurements quantitative, especially in the context of profiling large numbers of proteins. Then a discussion follows on the various computational techniques used to identify peptides and proteins from LC-MS/MS data. This review article then includes a short discussion of LC-MS approaches to three-dimensional structure determination and concludes with a section on statistics and data mining for proteomics, including comments on properly powering clinical studies and avoiding over-fitting with large data sets.

  19. Recent Developments in Quantitative Proteomics

    PubMed Central

    Becker, Christopher H.; Bern, Marshall

    2010-01-01

    Proteomics is the study of proteins on a large scale, encompassing the many interests scientists and physicians have in their expression and physical properties. Proteomics continues to be a rapidly expanding field, with a wealth of reports regularly appearing on technology enhancements and scientific studies using these new tools. This review focuses primarily on the quantitative aspect of protein expression and the associated computational machinery for making large-scale identifications of proteins and their post-translational modifications. The primary emphasis is on the combination of liquid chromatography-mass spectrometry (LC-MS) methods and associated tandem mass spectrometry (LC-MS/MS). Tandem mass spectrometry, or MS/MS, involves a second analysis within the instrument after a molecular dissociative event in order to obtain structural information including but not limited to sequence information. This review further focuses primarily on the study of in vitro digested proteins known as bottom-up or shotgun proteomics. A brief discussion of recent instrumental improvements precedes a discussion on affinity enrichment and depletion of proteins, followed by a review of the major approaches (label-free and isotope-labeling) to making protein expression measurements quantitative, especially in the context of profiling large numbers of proteins. Then a discussion follows on the various computational techniques used to identify peptides and proteins from LC-MS/MS data. This review article then includes a short discussion of LC-MS approaches to three-dimensional structure determination and concludes with a section on statistics and data mining for proteomics, including comments on properly powering clinical studies and avoiding over-fitting with large data sets. PMID:20620221
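
    As a toy illustration of one label-free quantification step discussed in these reviews, the sketch below median-normalizes per-run peptide intensities and computes log2 fold changes on a synthetic intensity matrix; the workflow details and all numbers are illustrative assumptions, not a prescribed pipeline.

```python
import numpy as np

# Toy sketch of a common label-free quantification step: median-normalize per-run
# peptide intensities, then compute log2 fold changes between two groups of runs.
# The intensity matrix below is synthetic, for illustration only.
rng = np.random.default_rng(2)
intensities = rng.lognormal(mean=20, sigma=1.0, size=(100, 6))  # 100 peptides x 6 runs
intensities[:, 3:] *= 1.4       # pretend the last 3 runs were loaded more heavily

log_i = np.log2(intensities)
log_i -= np.median(log_i, axis=0)          # per-run median normalization
fold_change = log_i[:, 3:].mean(axis=1) - log_i[:, :3].mean(axis=1)

print("per-run medians after normalization:", np.round(np.median(log_i, axis=0), 3))
print("mean log2 fold change (group 2 vs group 1):", round(float(fold_change.mean()), 3))
```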

  20. Analysis and Management of Large-Scale Activities Based on Interface

    NASA Astrophysics Data System (ADS)

    Yang, Shaofan; Ji, Jingwei; Lu, Ligang; Wang, Zhiyi

    Based on the concepts of system safety engineering, life cycle, and interface from the American system safety standard MIL-STD-882E, we apply these ideas to the risk analysis and management of large-scale activities. We identify the personnel, departments, funds, and other elements involved throughout the life cycle of large-scale activities, and we recognize and classify the ultimate risk sources of large-scale activities (people, objects, and environment) from the perspective of interfaces. We put forward an accident cause analysis model based on accidents at previous large-scale activities combined with analysis of the risk source interfaces, analyze the risks at each interface, and summarize the various types of risk that large-scale activities face. Finally, we propose improvements in risk management awareness, policies and regulations, risk control, and supervision departments.

  1. Large Scale Flame Spread Environmental Characterization Testing

    NASA Technical Reports Server (NTRS)

    Clayman, Lauren K.; Olson, Sandra L.; Gokoghi, Suleyman A.; Brooker, John E.; Ferkul, Paul V.; Kacher, Henry F.

    2013-01-01

    Under the Advanced Exploration Systems (AES) Spacecraft Fire Safety Demonstration Project (SFSDP), as a risk mitigation activity in support of the development of a large-scale fire demonstration experiment in microgravity, flame-spread tests were conducted in normal gravity on thin, cellulose-based fuels in a sealed chamber. The primary objective of the tests was to measure pressure rise in a chamber as sample material, burning direction (upward/downward), total heat release, heat release rate, and heat loss mechanisms were varied between tests. A Design of Experiments (DOE) method was used to produce an array of tests from a fixed set of constraints and a coupled response model was developed. Supplementary tests were run without experimental design to additionally vary select parameters such as initial chamber pressure. The starting chamber pressure for each test was set below atmospheric to prevent chamber overpressure. Bottom ignition, or upward propagating burns, produced rapid acceleratory turbulent flame spread. Pressure rise in the chamber increases with the amount of fuel burned, mainly because of the larger amount of heat generation and, to a much smaller extent, due to the increase in the number of moles of gas. Top ignition, or downward propagating burns, produced a steady flame spread with a very small flat flame across the burning edge. Steady-state pressure is achieved during downward flame spread as the pressure rises and plateaus. This indicates that the heat generation by the flame matches the heat loss to surroundings during the longer, slower downward burns. One heat loss mechanism included mounting a heat exchanger directly above the burning sample in the path of the plume to act as a heat sink and more efficiently dissipate the heat due to the combustion event. This proved an effective means for chamber overpressure mitigation for those tests producing the most total heat release and thus was determined to be a feasible mitigation
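
    The constant-volume pressure rise described above can be estimated with the ideal-gas relation P2/P1 = (n2 T2)/(n1 T1), in which heating dominates and the change in mole number is a small correction. The numbers in the sketch below are illustrative assumptions, not the reported test conditions.

```python
# Back-of-the-envelope sketch of a constant-volume pressure rise in a sealed chamber:
# heating of the gas dominates, and the small change in mole number adds a minor
# correction. All values are illustrative assumptions.
P1 = 70.0e3            # initial chamber pressure, Pa (set below atmospheric; assumed value)
T1 = 295.0             # initial gas temperature, K (assumed)
V = 0.4                # chamber volume, m^3 (assumed)
Q = 30.0e3             # net heat released to the gas, J (assumed, after losses)
cv = 718.0             # specific heat of air at constant volume, J/(kg K)
R_air = 287.0          # specific gas constant of air, J/(kg K)
dn_frac = 0.01         # fractional increase in moles from combustion products (assumed)

m_gas = P1 * V / (R_air * T1)          # gas mass from the ideal-gas law
T2 = T1 + Q / (m_gas * cv)             # bulk temperature after heating
P2 = P1 * (1.0 + dn_frac) * T2 / T1    # pressure after heating and mole increase

print(f"pressure rise: {(P2 - P1) / 1e3:.1f} kPa (from {P1 / 1e3:.0f} to {P2 / 1e3:.1f} kPa)")
```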

  2. Synchronization of coupled large-scale Boolean networks

    SciTech Connect

    Li, Fangfei

    2014-03-15

    This paper investigates the complete synchronization and partial synchronization of two large-scale Boolean networks. First, the aggregation algorithm towards large-scale Boolean network is reviewed. Second, the aggregation algorithm is applied to study the complete synchronization and partial synchronization of large-scale Boolean networks. Finally, an illustrative example is presented to show the efficiency of the proposed results.

  3. Why large-scale seasonal streamflow forecasts are feasible

    NASA Astrophysics Data System (ADS)

    Bierkens, M. F.; Candogan Yossef, N.; Van Beek, L. P.

    2011-12-01

    -based skill scores. In conclusion, the inertia of large-scale hydrological systems and a focus on anomalous flows make seasonal streamflow forecasts feasible and useful. To further improve on the current skill, investment is required in better state estimation, which can be achieved with better hydrological models (including better past forcing, i.e. analysis, products) and improved hydrological data-assimilation.

  4. Ecohydrological modeling for large-scale environmental impact assessment.

    PubMed

    Woznicki, Sean A; Nejadhashemi, A Pouyan; Abouali, Mohammad; Herman, Matthew R; Esfahanian, Elaheh; Hamaamin, Yaseen A; Zhang, Zhen

    2016-02-01

    Ecohydrological models are frequently used to assess the biological integrity of unsampled streams. These models vary in complexity and scale, and their utility depends on their final application. Tradeoffs are usually made in model scale, where large-scale models are useful for determining broad impacts of human activities on biological conditions, and regional-scale (e.g. watershed or ecoregion) models provide stakeholders greater detail at the individual stream reach level. Given these tradeoffs, the objective of this study was to develop large-scale stream health models with reach level accuracy similar to regional-scale models thereby allowing for impacts assessments and improved decision-making capabilities. To accomplish this, four measures of biological integrity (Ephemeroptera, Plecoptera, and Trichoptera taxa (EPT), Family Index of Biotic Integrity (FIBI), Hilsenhoff Biotic Index (HBI), and fish Index of Biotic Integrity (IBI)) were modeled based on four thermal classes (cold, cold-transitional, cool, and warm) of streams that broadly dictate the distribution of aquatic biota in Michigan. The Soil and Water Assessment Tool (SWAT) was used to simulate streamflow and water quality in seven watersheds and the Hydrologic Index Tool was used to calculate 171 ecologically relevant flow regime variables. Unique variables were selected for each thermal class using a Bayesian variable selection method. The variables were then used in development of adaptive neuro-fuzzy inference systems (ANFIS) models of EPT, FIBI, HBI, and IBI. ANFIS model accuracy improved when accounting for stream thermal class rather than developing a global model. PMID:26595397

  5. Scalable WIM: effective exploration in large-scale astrophysical environments.

    PubMed

    Li, Yinggang; Fu, Chi-Wing; Hanson, Andrew J

    2006-01-01

    Navigating through large-scale virtual environments such as simulations of the astrophysical Universe is difficult. The huge spatial range of astronomical models and the dominance of empty space make it hard for users to travel across cosmological scales effectively, and the problem of wayfinding further impedes the user's ability to acquire reliable spatial knowledge of astronomical contexts. We introduce a new technique called the scalable world-in-miniature (WIM) map as a unifying interface to facilitate travel and wayfinding in a virtual environment spanning gigantic spatial scales: Power-law spatial scaling enables rapid and accurate transitions among widely separated regions; logarithmically mapped miniature spaces offer a global overview mode when the full context is too large; 3D landmarks represented in the WIM are enhanced by scale, positional, and directional cues to augment spatial context awareness; a series of navigation models are incorporated into the scalable WIM to improve the performance of travel tasks posed by the unique characteristics of virtual cosmic exploration. The scalable WIM user interface supports an improved physical navigation experience and assists pragmatic cognitive understanding of a visualization context that incorporates the features of large-scale astronomy.
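
    A minimal sketch of the logarithmic miniature mapping idea follows: radial distances spanning many orders of magnitude are compressed into a bounded WIM volume so that nearby and cosmologically distant objects fit in one overview. The mapping function and constants below are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

# Illustrative sketch of a logarithmically compressed world-in-miniature mapping.
def to_wim(points, viewer, scale=1.0, wim_radius=1.0):
    """Map world-space points to a log-compressed miniature space around the viewer."""
    offsets = points - viewer
    r = np.linalg.norm(offsets, axis=1, keepdims=True)
    r_safe = np.where(r > 0, r, 1.0)
    r_wim = np.log1p(r / scale)                    # log compression of radial distance
    r_wim = wim_radius * r_wim / r_wim.max()       # normalize into the miniature radius
    return viewer + offsets / r_safe * r_wim

viewer = np.zeros(3)
points = np.array([[1.5e11, 0, 0],      # roughly 1 AU
                   [4.0e16, 0, 0],      # roughly parsec scale
                   [3.1e22, 0, 0]])     # roughly Mpc scale
print(np.round(to_wim(points, viewer, scale=1.0e11), 3))
```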

  6. Exact-Differential Large-Scale Traffic Simulation

    SciTech Connect

    Hanai, Masatoshi; Suzumura, Toyotaro; Theodoropoulos, Georgios; Perumalla, Kalyan S

    2015-01-01

    Analyzing large-scale traffic by simulation requires repeated execution with various patterns of scenarios or parameters. Such repeated execution introduces substantial redundancy because the change from one scenario to the next is usually minor, for example, blocking only one road or changing the speed limit of several roads. In this paper, we propose a new redundancy reduction technique, called exact-differential simulation, which simulates only the changed scenarios in later executions while producing exactly the same results as a full simulation. The paper consists of two main efforts: (i) the key idea and algorithm of exact-differential simulation, and (ii) a method to build large-scale traffic simulation on top of exact-differential simulation. In experiments with a Tokyo traffic simulation, exact-differential simulation improves elapsed time by a factor of 7.26 on average, and by a factor of 2.26 even in the worst case, compared with full simulation.

  7. Ecohydrological modeling for large-scale environmental impact assessment.

    PubMed

    Woznicki, Sean A; Nejadhashemi, A Pouyan; Abouali, Mohammad; Herman, Matthew R; Esfahanian, Elaheh; Hamaamin, Yaseen A; Zhang, Zhen

    2016-02-01

    Ecohydrological models are frequently used to assess the biological integrity of unsampled streams. These models vary in complexity and scale, and their utility depends on their final application. Tradeoffs are usually made in model scale, where large-scale models are useful for determining broad impacts of human activities on biological conditions, and regional-scale (e.g. watershed or ecoregion) models provide stakeholders greater detail at the individual stream reach level. Given these tradeoffs, the objective of this study was to develop large-scale stream health models with reach level accuracy similar to regional-scale models thereby allowing for impacts assessments and improved decision-making capabilities. To accomplish this, four measures of biological integrity (Ephemeroptera, Plecoptera, and Trichoptera taxa (EPT), Family Index of Biotic Integrity (FIBI), Hilsenhoff Biotic Index (HBI), and fish Index of Biotic Integrity (IBI)) were modeled based on four thermal classes (cold, cold-transitional, cool, and warm) of streams that broadly dictate the distribution of aquatic biota in Michigan. The Soil and Water Assessment Tool (SWAT) was used to simulate streamflow and water quality in seven watersheds and the Hydrologic Index Tool was used to calculate 171 ecologically relevant flow regime variables. Unique variables were selected for each thermal class using a Bayesian variable selection method. The variables were then used in development of adaptive neuro-fuzzy inference systems (ANFIS) models of EPT, FIBI, HBI, and IBI. ANFIS model accuracy improved when accounting for stream thermal class rather than developing a global model.

  8. Improved Proteomic Analysis Following Trichloroacetic Acid Extraction of Bacillus anthracis Spore Proteins

    SciTech Connect

    Kaiser, Brooke LD; Wunschel, David S.; Sydor, Michael A.; Warner, Marvin G.; Wahl, Karen L.; Hutchison, Janine R.

    2015-08-07

    Proteomic analysis of bacterial samples provides valuable information about cellular responses and functions under different environmental pressures. Proteomic analysis is dependent upon efficient extraction of proteins from bacterial samples without introducing bias toward extraction of particular protein classes. While no single method can recover 100% of the bacterial proteins, selected protocols can improve overall protein isolation, peptide recovery, or enrich for certain classes of proteins. The method presented here is technically simple and does not require specialized equipment such as a mechanical disrupter. Our data reveal that for particularly challenging samples, such as B. anthracis Sterne spores, trichloroacetic acid extraction improved the number of proteins identified within a sample compared to bead beating (714 vs 660, respectively). Further, TCA extraction enriched for 103 known spore-specific proteins whereas bead beating resulted in 49 unique proteins. Analysis of C. botulinum samples grown for 5 days, composed of vegetative biomass and spores, showed a similar trend with improved protein yields and identification using our method compared to bead beating. Interestingly, easily lysed samples, such as B. anthracis vegetative cells, were processed equally effectively by TCA extraction and bead beating, but TCA extraction remains the easiest and most cost effective option. As with all assays, supplemental methods such as implementation of an alternative preparation method may provide additional insight into the protein biology of the bacteria being studied.

  9. A comparison of the effectiveness of three parenting programmes in improving parenting skills, parent mental-well being and children's behaviour when implemented on a large scale in community settings in 18 English local authorities: the parenting early intervention pathfinder (PEIP)

    PubMed Central

    2011-01-01

    Background There is growing evidence that parenting programmes can improve parenting skills and thereby the behaviour of children exhibiting or at risk of developing antisocial behaviour. Given the high prevalence of childhood behaviour problems the task is to develop large scale application of effective programmes. The aim of this study was to evaluate the UK government funded implementation of the Parenting Early Intervention Pathfinder (PEIP). This involved the large scale rolling out of three programmes to parents of children 8-13 years in 18 local authorities (LAs) over a 2 year period. Methods The UK government's Department for Education allocated each programme (Incredible Years, Triple P and Strengthening Families Strengthening Communities) to six LAs which then developed systems to intervene using parenting groups. Implementation fidelity was supported by the training of group facilitators by staff of the appropriate parenting programme supplemented by supervision. Parents completed measures of parenting style, efficacy, satisfaction, and mental well-being, and also child behaviour. Results A total of 1121 parents completed pre- and post-course measures. There were significant improvements on all measures for each programme; effect sizes (Cohen's d) ranged across the programmes from 0.57 to 0.93 for parenting style; 0.33 to 0.77 for parenting satisfaction and self-efficacy; and from 0.49 to 0.88 for parental mental well-being. Effectiveness varied between programmes: Strengthening Families Strengthening Communities was significantly less effective than both the other two programmes in improving parental efficacy, satisfaction and mental well-being. Improvements in child behaviour were found for all programmes: effect sizes for reduction in conduct problems ranged from -0.44 to -0.71 across programmes, with Strengthening Families Strengthening Communities again having significantly lower reductions than Incredible Years. Conclusions Evidence-based parenting

  10. Large-Scale Spacecraft Fire Safety Tests

    NASA Technical Reports Server (NTRS)

    Urban, David; Ruff, Gary A.; Ferkul, Paul V.; Olson, Sandra; Fernandez-Pello, A. Carlos; T'ien, James S.; Torero, Jose L.; Cowlard, Adam J.; Rouvreau, Sebastien; Minster, Olivier; Toth, Balazs; Legros, Guillaume; Eigenbrod, Christian; Smirnov, Nickolay; Fujita, Osamu; Jomaas, Grunde

    2014-01-01

    An international collaborative program is underway to address open issues in spacecraft fire safety. Because of limited access to long-term low-gravity conditions and the small volume generally allotted for these experiments, there have been relatively few experiments that directly study spacecraft fire safety under low-gravity conditions. Furthermore, none of these experiments have studied sample sizes and environment conditions typical of those expected in a spacecraft fire. The major constraint has been the size of the sample, with prior experiments limited to samples of the order of 10 cm in length and width or smaller. This lack of experimental data forces spacecraft designers to base their designs and safety precautions on 1-g understanding of flame spread, fire detection, and suppression. However, low-gravity combustion research has demonstrated substantial differences in flame behavior in low-gravity. This, combined with the differences caused by the confined spacecraft environment, necessitates practical-scale spacecraft fire safety research to mitigate risks for future space missions. To address this issue, a large-scale spacecraft fire experiment is under development by NASA and an international team of investigators. This poster presents the objectives, status, and concept of this collaborative international project (Saffire). The project plan is to conduct fire safety experiments on three sequential flights of an unmanned ISS re-supply spacecraft (the Orbital Cygnus vehicle) after they have completed their delivery of cargo to the ISS and have begun their return journeys to earth. On two flights (Saffire-1 and Saffire-3), the experiment will consist of a flame spread test involving a meter-scale sample ignited in the pressurized volume of the spacecraft and allowed to burn to completion while measurements are made. On one of the flights (Saffire-2), 9 smaller (5 x 30 cm) samples will be tested to evaluate NASA's material flammability screening tests

  11. Urine proteomics for discovery of improved diagnostic markers of Kawasaki disease

    PubMed Central

    Kentsis, Alex; Shulman, Andrew; Ahmed, Saima; Brennan, Eileen; Monuteaux, Michael C; Lee, Young-Ho; Lipsett, Susan; Paulo, Joao A; Dedeoglu, Fatma; Fuhlbrigge, Robert; Bachur, Richard; Bradwin, Gary; Arditi, Moshe; Sundel, Robert P; Newburger, Jane W; Steen, Hanno; Kim, Susan

    2013-01-01

    Kawasaki disease (KD) is a systemic vasculitis of unknown etiology. Absence of definitive diagnostic markers limits the accuracy of clinical evaluations of suspected KD with significant increases in morbidity. In turn, incomplete understanding of its molecular pathogenesis hinders the identification of rational targets needed to improve therapy. We used high-accuracy mass spectrometry proteomics to analyse over 2000 unique proteins in clinical urine specimens of patients with KD. We discovered that urine proteomes of patients with KD, but not those with mimicking conditions, were enriched for markers of cellular injury such as filamin and talin, immune regulators such as complement regulator CSMD3, immune pattern recognition receptor muclin, and immune cytokine protease meprin A. Significant elevations of filamin C and meprin A were detected in both the serum and urine in two independent cohorts of patients with KD, comprised of a total of 236 patients. Meprin A and filamin C exhibited superior diagnostic performance as compared to currently used markers of disease in a blinded case-control study of 107 patients with suspected KD, with receiver operating characteristic areas under the curve of 0.98 (95% confidence intervals [CI] of 0.97–1 and 0.95–1, respectively). Notably, meprin A was enriched in the coronary artery lesions of a mouse model of KD. In all, urine proteome profiles revealed novel candidate molecular markers of KD, including filamin C and meprin A that exhibit excellent diagnostic performance. These disease markers may improve the diagnostic accuracy of clinical evaluations of children with suspected KD, lead to the identification of novel therapeutic targets, and allow the development of a biological classification of Kawasaki disease. PMID:23281308
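
    Diagnostic performance of the kind reported above is typically summarized by the ROC area under the curve with a confidence interval; the sketch below computes an AUC and a bootstrap CI on simulated marker values, not the study's data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Sketch of evaluating a candidate marker's diagnostic performance: ROC AUC plus a
# nonparametric bootstrap confidence interval. The marker values and labels below
# are simulated for illustration only.
rng = np.random.default_rng(3)
n_case, n_ctrl = 53, 54
labels = np.r_[np.ones(n_case), np.zeros(n_ctrl)]
marker = np.r_[rng.normal(3.0, 1.0, n_case), rng.normal(1.0, 1.0, n_ctrl)]  # toy biomarker

auc = roc_auc_score(labels, marker)
boot = []
for _ in range(2000):                                  # bootstrap resampling
    idx = rng.integers(0, len(labels), len(labels))
    if len(np.unique(labels[idx])) < 2:                # need both classes present
        continue
    boot.append(roc_auc_score(labels[idx], marker[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"AUC = {auc:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```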

  12. Strong CP Violation in Large Scale Magnetic Fields

    SciTech Connect

    Faccioli, P.; Millo, R.

    2007-11-19

    We explore the possibility of improving on the present experimental bounds on Strong CP violation, by studying processes in which the smallness of θ is compensated by the presence of some other very large scale. In particular, we study the response of the θ vacuum to large-scale magnetic fields, whose correlation lengths can be as large as the size of galaxy clusters. We find that, if strong interactions break CP, an external magnetic field would induce an electric vacuum polarization along the same direction. As a consequence, u,d-bar and d,u-bar quarks would accumulate in opposite regions of space, giving rise to an electric dipole moment. We estimate the magnitude of this effect both at T = 0 and for 0

  13. Extending large-scale forest inventories to assess urban forests.

    PubMed

    Corona, Piermaria; Agrimi, Mariagrazia; Baffetta, Federica; Barbati, Anna; Chiriacò, Maria Vincenza; Fattorini, Lorenzo; Pompei, Enrico; Valentini, Riccardo; Mattioli, Walter

    2012-03-01

    Urban areas are continuously expanding today, extending their influence on an increasingly large proportion of woods and trees located in or near urban and urbanizing areas, the so-called urban forests. Although these forests have the potential for significantly improving the quality of the urban environment and the well-being of the urban population, data to quantify the extent and characteristics of urban forests are still lacking or fragmentary on a large scale. In this regard, an expansion of the domain of multipurpose forest inventories like National Forest Inventories (NFIs) towards urban forests would be required. To this end, it would be convenient to exploit the same sampling scheme applied in NFIs to assess the basic features of urban forests. This paper considers approximately unbiased estimators of abundance and coverage of urban forests, together with estimators of the corresponding variances, which can be achieved from the first phase of most large-scale forest inventories. A simulation study is carried out in order to check the performance of the considered estimators under various situations involving the spatial distribution of the urban forests over the study area. An application is worked out on the data from the Italian NFI.
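
    A generic first-phase (photo-plot) estimator of the kind referred to above is sketched below: coverage is estimated by the share of sample points classified as urban forest, and abundance by scaling that share to the total area, with a simple-random-sampling variance used as an approximation. The numbers and the exact estimator are illustrative assumptions, not the paper's formulas.

```python
import numpy as np

# Generic first-phase sketch: estimate urban-forest coverage as the proportion of
# photo-interpreted sample points classified as urban forest, and abundance by
# scaling to the total inventoried area. All inputs are synthetic placeholders.
rng = np.random.default_rng(4)
total_area_ha = 3.0e6                 # inventoried area, hectares (assumed)
n_points = 10_000                     # first-phase sample points (assumed)
is_urban_forest = rng.random(n_points) < 0.012   # synthetic photo-interpretation labels

p_hat = is_urban_forest.mean()                        # estimated coverage (proportion)
var_p = p_hat * (1 - p_hat) / (n_points - 1)          # approximate variance of p_hat
area_hat = p_hat * total_area_ha                      # estimated urban-forest area
se_area = np.sqrt(var_p) * total_area_ha

print(f"coverage: {p_hat:.4f}  area: {area_hat:.0f} +/- {1.96 * se_area:.0f} ha (95% CI)")
```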

  14. IP over optical multicasting for large-scale video delivery

    NASA Astrophysics Data System (ADS)

    Jin, Yaohui; Hu, Weisheng; Sun, Weiqiang; Guo, Wei

    2007-11-01

    In the IPTV systems, multicasting will play a crucial role in the delivery of high-quality video services, which can significantly improve bandwidth efficiency. However, the scalability and the signal quality of current IPTV can barely compete with the existing broadcast digital TV systems since it is difficult to implement large-scale multicasting with end-to-end guaranteed quality of service (QoS) in packet-switched IP network. China 3TNet project aimed to build a high performance broadband trial network to support large-scale concurrent streaming media and interactive multimedia services. The innovative idea of 3TNet is that an automatic switched optical networks (ASON) with the capability of dynamic point-to-multipoint (P2MP) connections replaces the conventional IP multicasting network in the transport core, while the edge remains an IP multicasting network. In this paper, we will introduce the network architecture and discuss challenges in such IP over Optical multicasting for video delivery.

  15. Large-scale assembly of colloidal particles

    NASA Astrophysics Data System (ADS)

    Yang, Hongta

    This study reports a simple, roll-to-roll compatible coating technology for producing three-dimensional highly ordered colloidal crystal-polymer composites, colloidal crystals, and macroporous polymer membranes. A vertically beveled doctor blade is utilized to shear align silica microsphere-monomer suspensions to form large-area composites in a single step. The polymer matrix and the silica microspheres can be selectively removed to create colloidal crystals and self-standing macroporous polymer membranes. The thickness of the shear-aligned crystal is correlated with the viscosity of the colloidal suspension and the coating speed, and the correlations can be qualitatively explained by adapting the mechanisms developed for conventional doctor blade coating. Five important research topics related to the application of large-scale three-dimensional highly ordered macroporous films by doctor blade coating are covered in this study. The first topic describes the invention of large-area, low-cost color reflective displays. This invention is inspired by heat pipe technology. The self-standing macroporous polymer films exhibit brilliant colors which originate from the Bragg diffraction of visible light from the three-dimensional highly ordered air cavities. The colors can be easily changed by tuning the size of the air cavities to cover the whole visible spectrum. When the air cavities are filled with a solvent which has the same refractive index as that of the polymer, the macroporous polymer films become completely transparent due to the index matching. When the solvent trapped in the cavities is evaporated by in-situ heating, the sample color changes back to brilliant color. This process is highly reversible and reproducible for thousands of cycles. The second topic reports the achievement of rapid and reversible vapor detection by using 3-D macroporous photonic crystals. Capillary condensation of a condensable vapor in the interconnected macropores leads to the

  16. Population generation for large-scale simulation

    NASA Astrophysics Data System (ADS)

    Hannon, Andrew C.; King, Gary; Morrison, Clayton; Galstyan, Aram; Cohen, Paul

    2005-05-01

    Computer simulation is used to research phenomena ranging from the structure of the space-time continuum to population genetics and future combat.1-3 Multi-agent simulations in particular are now commonplace in many fields.4, 5 By modeling populations whose complex behavior emerges from individual interactions, these simulations help to answer questions about effects where closed form solutions are difficult to solve or impossible to derive.6 To be useful, simulations must accurately model the relevant aspects of the underlying domain. In multi-agent simulation, this means that the modeling must include both the agents and their relationships. Typically, each agent can be modeled as a set of attributes drawn from various distributions (e.g., height, morale, intelligence and so forth). Though these can interact - for example, agent height is related to agent weight - they are usually independent. Modeling relations between agents, on the other hand, adds a new layer of complexity, and tools from graph theory and social network analysis are finding increasing application.7, 8 Recognizing the role and proper use of these techniques, however, remains the subject of ongoing research. We recently encountered these complexities while building large scale social simulations.9-11 One of these, the Hats Simulator, is designed to be a lightweight proxy for intelligence analysis problems. Hats models a "society in a box" consisting of many simple agents, called hats. Hats gets its name from the classic spaghetti western, in which the heroes and villains are known by the color of the hats they wear. The Hats society also has its heroes and villains, but the challenge is to identify which color hat they should be wearing based on how they behave. There are three types of hats: benign hats, known terrorists, and covert terrorists. Covert terrorists look just like benign hats but act like terrorists. Population structure can make covert hat identification significantly more
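
    A minimal sketch of the two modeling layers described above, attributes drawn from distributions plus a relationship graph, is given below; the attribute choices and the random-graph relation model are illustrative assumptions, not the Hats Simulator's actual generator.

```python
import numpy as np

# Minimal population-generation sketch: agent attributes sampled from simple
# distributions, plus a random undirected relationship graph among the agents.
rng = np.random.default_rng(5)
n_agents = 1000

attributes = {
    "height_cm": rng.normal(170, 10, n_agents),
    "morale": rng.uniform(0, 1, n_agents),
    "type": rng.choice(["benign", "known", "covert"], n_agents, p=[0.95, 0.01, 0.04]),
}

p_link = 0.01                                             # probability of a relation
adjacency = rng.random((n_agents, n_agents)) < p_link
adjacency = np.triu(adjacency, k=1)                       # no self-links, one direction
adjacency = adjacency | adjacency.T                       # make the graph undirected

degrees = adjacency.sum(axis=1)
print("mean degree:", degrees.mean(),
      "covert agents:", int((attributes["type"] == "covert").sum()))
```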

  17. Possible implications of large scale radiation processing of food

    NASA Astrophysics Data System (ADS)

    Zagórski, Z. P.

    Large scale irradiation has been discussed in terms of the share of processing cost in the final value of the improved product. Another factor taken into account is the saturation of the market with the new product. In successful projects the share of irradiation cost is low, and the demand for the better product is covered. The limited availability of sources makes it difficult to achieve even modest market saturation with food given correct radiation treatment. Implementing food preservation requires a careful selection of those kinds of food which comply with all conditions, i.e. acceptance by regulatory bodies, real improvement of quality, and economy. The last condition favours the use of low-energy electron beams. The conditions for successful processing are best fulfilled for dry foods, expensive spices in particular.

  18. A Proteomic Workflow Using High-Throughput De Novo Sequencing Towards Complementation of Genome Information for Improved Comparative Crop Science.

    PubMed

    Turetschek, Reinhard; Lyon, David; Desalegn, Getinet; Kaul, Hans-Peter; Wienkoop, Stefanie

    2016-01-01

    The proteomic study of non-model organisms, such as many crop plants, is challenging due to the lack of comprehensive genome information. Changing environmental conditions require the study and selection of adapted cultivars. Mutations, inherent to cultivars, hamper protein identification and thus considerably complicate the qualitative and quantitative comparison in large-scale systems biology approaches. With this workflow, cultivar-specific mutations are detected from high-throughput comparative MS analyses by extracting sequence polymorphisms with de novo sequencing. Stringent criteria are suggested to filter for confident mutations. Subsequently, these polymorphisms complement the initially used database, which is ready to use with any preferred database search algorithm. In our example, we thereby identified 26 specific mutations in two cultivars of Pisum sativum and achieved an increased number (17 %) of peptide spectrum matches.

  19. A First Look at the Head Start CARES Demonstration: Large-Scale Implementation of Programs to Improve Children's Social-Emotional Competence. OPRE Report 2013-47

    ERIC Educational Resources Information Center

    Mattera, Shira Kolnik; Lloyd, Chrishana M.; Fishman, Mike; Bangser, Michael

    2013-01-01

    Low-income preschool children face many risks to their social-emotional development that can affect their school experience and social outcomes for years to come. Although there are some promising approaches to improving young children's social-emotional competence, the evidence base is limited, particularly on the effectiveness of these…

  20. Multitree Algorithms for Large-Scale Astrostatistics

    NASA Astrophysics Data System (ADS)

    March, William B.; Ozakin, Arkadas; Lee, Dongryeol; Riegel, Ryan; Gray, Alexander G.

    2012-03-01

    this number every week, resulting in billions of objects. At such scales, even linear-time analysis operations present challenges, particularly since statistical analyses are inherently interactive processes, requiring that computations complete within some reasonable human attention span. The quadratic (or worse) runtimes of straightforward implementations become quickly unbearable. Examples of applications. These analysis subroutines occur ubiquitously in astrostatistical work. We list just a few examples. The need to cross-match objects across different catalogs has led to various algorithms, which at some point perform an AllNN computation. 2-point and higher-order spatial correlations form the basis of spatial statistics and are utilized in astronomy to compare the spatial structures of two datasets, such as an observed sample and a theoretical sample, for example, forming the basis for two-sample hypothesis testing. Friends-of-friends clustering is often used to identify halos in data from astrophysical simulations. Minimum spanning tree properties have also been proposed as statistics of large-scale structure. Comparison of the distributions of different kinds of objects requires accurate density estimation, for which KDE is the overall statistical method of choice. The prediction of redshifts from optical data requires accurate regression, for which kernel regression is a powerful method. The identification of objects of various types in astronomy, such as stars versus galaxies, requires accurate classification, for which KDA is a powerful method. Overview. In this chapter, we will briefly sketch the main ideas behind recent fast algorithms which achieve, for example, linear runtimes for pairwise-distance problems, or similarly dramatic reductions in computational growth. In some cases, the runtime orders for these algorithms are mathematically provable statements, while in others we have only conjectures backed by experimental observations for the time being
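
    As a concrete example of the tree-based approach sketched in this chapter, the code below performs an all-nearest-neighbors (AllNN) cross-match between two synthetic catalogs using a kd-tree, which scales roughly as N log N rather than quadratically; the catalogs and matching radius are illustrative, not survey data.

```python
import numpy as np
from scipy.spatial import cKDTree

# Tree-based AllNN sketch of the kind used for catalog cross-matching: find the
# nearest object in catalog_b for every object in catalog_a. Synthetic points only.
rng = np.random.default_rng(6)
catalog_a = rng.random((200_000, 2))
catalog_b = rng.random((200_000, 2))

tree = cKDTree(catalog_b)
dist, nearest_idx = tree.query(catalog_a, k=1)   # nearest neighbor in catalog_b for every a
matched = dist < 1e-3                            # toy matching radius

print("objects matched:", int(matched.sum()),
      "median separation:", float(np.median(dist)))
```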

  1. High-Throughput Proteomics

    NASA Astrophysics Data System (ADS)

    Zhang, Zhaorui; Wu, Si; Stenoien, David L.; Paša-Tolić, Ljiljana

    2014-06-01

    Mass spectrometry (MS)-based high-throughput proteomics is the core technique for large-scale protein characterization. Due to the extreme complexity of proteomes, sophisticated separation techniques and advanced MS instrumentation have been developed to extend coverage and enhance dynamic range and sensitivity. In this review, we discuss the separation and prefractionation techniques applied for large-scale analysis in both bottom-up (i.e., peptide-level) and top-down (i.e., protein-level) proteomics. Different approaches for quantifying peptides or intact proteins, including label-free and stable-isotope-labeling strategies, are also discussed. In addition, we present a brief overview of different types of mass analyzers and fragmentation techniques as well as selected emerging techniques.

  2. Superconducting materials for large scale applications

    SciTech Connect

    Scanlan, Ronald M.; Malozemoff, Alexis P.; Larbalestier, David C.

    2004-05-06

    Significant improvements in the properties of superconducting materials have occurred recently. These improvements are being incorporated into the latest generation of wires, cables, and tapes that are being used in a broad range of prototype devices. These devices include new, high field accelerator and NMR magnets, magnets for fusion power experiments, motors, generators, and power transmission lines. These prototype magnets are joining a wide array of existing applications that utilize the unique capabilities of superconducting magnets: accelerators such as the Large Hadron Collider, fusion experiments such as ITER, 930 MHz NMR, and 4 Tesla MRI. In addition, promising new materials such as MgB2 have been discovered and are being studied in order to assess their potential for new applications. In this paper, we will review the key developments that are leading to these new applications for superconducting materials. In some cases, the key factor is improved understanding or development of materials with significantly improved properties. An example of the former is the development of Nb3Sn for use in high field magnets for accelerators. In other cases, the development is being driven by the application. The aggressive effort to develop HTS tapes is being driven primarily by the need for materials that can operate at temperatures of 50 K and higher. The implications of these two drivers for further developments will be discussed. Finally, we will discuss the areas where further improvements are needed in order for new applications to be realized.

  3. Probes of large-scale structure in the universe

    NASA Technical Reports Server (NTRS)

    Suto, Yasushi; Gorski, Krzysztof; Juszkiewicz, Roman; Silk, Joseph

    1988-01-01

    A general formalism is developed which shows that the gravitational instability theory for the origin of the large-scale structure of the universe is now capable of critically confronting observational results on cosmic background radiation angular anisotropies, large-scale bulk motions, and large-scale clumpiness in the galaxy counts. The results indicate that presently advocated cosmological models will have considerable difficulty in simultaneously explaining the observational results.

  4. Large scale rigidity-based flexibility analysis of biomolecules

    PubMed Central

    Streinu, Ileana

    2016-01-01

    KINematics And RIgidity (KINARI) is an on-going project for in silico flexibility analysis of proteins. The new version of the software, Kinari-2, extends the functionality of our free web server KinariWeb, incorporates advanced web technologies, emphasizes the reproducibility of its experiments, and makes substantially improved tools available to the user. It is designed specifically for large scale experiments, in particular, for (a) very large molecules, including bioassemblies with a high degree of symmetry such as viruses and crystals, (b) large collections of related biomolecules, such as those obtained through simulated dilutions, mutations, or conformational changes from various types of dynamics simulations, and (c) is intended to work as seamlessly as possible on the large, idiosyncratic, publicly available repository of biomolecules, the Protein Data Bank. We describe the system design, along with the main data processing, computational, mathematical, and validation challenges underlying this phase of the KINARI project. PMID:26958583

  5. Large Scale Bacterial Colony Screening of Diversified FRET Biosensors

    PubMed Central

    Litzlbauer, Julia; Schifferer, Martina; Ng, David; Fabritius, Arne; Thestrup, Thomas; Griesbeck, Oliver

    2015-01-01

    Biosensors based on Förster Resonance Energy Transfer (FRET) between fluorescent protein mutants have started to revolutionize physiology and biochemistry. However, many types of FRET biosensors show relatively small FRET changes, making measurements with these probes challenging when used under sub-optimal experimental conditions. Thus, a major effort in the field currently lies in designing new optimization strategies for these types of sensors. Here we describe procedures for optimizing FRET changes by large scale screening of mutant biosensor libraries in bacterial colonies. We describe optimization of biosensor expression, permeabilization of bacteria, software tools for analysis, and screening conditions. The procedures reported here may help in improving FRET changes in multiple suitable classes of biosensors. PMID:26061878

  6. Large scale, urban decontamination; developments, historical examples and lessons learned

    SciTech Connect

    Demmer, R.L.

    2007-07-01

    Recent terrorist threats and actions have led to a renewed interest in the technical field of large scale, urban environment decontamination. One of the driving forces for this interest is the prospect for the cleanup and removal of radioactive dispersal device (RDD or 'dirty bomb') residues. In response, the United States Government has spent many millions of dollars investigating RDD contamination and novel decontamination methodologies. The efficiency of RDD cleanup response will be improved with these new developments and a better understanding of the 'old reliable' methodologies. While an RDD is primarily an economic and psychological weapon, the need to clean up and return valuable or culturally significant resources to the public is nonetheless valid. Several private companies, universities and National Laboratories are currently developing novel RDD cleanup technologies. Because of their longstanding association with radioactive facilities, the U.S. Department of Energy National Laboratories are at the forefront in developing and testing new RDD decontamination methods. However, such cleanup technologies are likely to be fairly task specific, since many different contamination mechanisms, substrates, and environmental conditions will make actual application more complicated. Some major efforts have also been made to model potential contamination, to evaluate both old and new decontamination techniques and to assess their readiness for use. There are a number of significant lessons that can be gained from a look at previous large scale cleanup projects. Too often we are quick to apply a costly 'package and dispose' method when sound technological cleaning approaches are available. Understanding historical perspectives, advanced planning and constant technology improvement are essential to successful decontamination. (authors)

  7. GPS for large-scale aerotriangulation

    NASA Astrophysics Data System (ADS)

    Rogowksi, Jerzy B.

    The application of GPS (Global Positioning System) measurements to photogrammetry is presented. The technology of establishment of a GPS network for aerotriangulation as a base for mapping at scales from 1:1000 has been worked out at the Institute of Geodesy and Geodetical Astronomy of the Warsaw University of Technology. This method consists of the design, measurement, and adjustment of this special network. The results of several pilot projects confirm the possibility of improving the aerotriangulation accuracy. A few-centimeter accuracy has been achieved.

  8. Evaluation of drought propagation in an ensemble mean of large-scale hydrological models

    NASA Astrophysics Data System (ADS)

    Van Loon, A. F.; Van Huijgevoort, M. H. J.; Van Lanen, H. A. J.

    2012-07-01

    snow-related droughts. Furthermore, almost no composite droughts were simulated for slowly responding areas, while many multi-year drought events were expected in these systems. We conclude that drought propagation processes are reasonably well reproduced by the ensemble mean of large-scale models in contrasting catchments in Europe and that some challenges remain in catchments with cold and semi-arid climates and catchments with large storage in aquifers or lakes. Improvement of drought simulation in large-scale models should focus on a better representation of hydrological processes that are important for drought development, such as evapotranspiration, snow accumulation and melt, and especially storage. Besides the more explicit inclusion of storage (e.g. aquifers) in large-scale models, also parametrisation of storage processes requires attention, for example through a global scale dataset on aquifer characteristics.

  9. Application of an Improved Proteomics Method for Abundant Protein Cleanup: Molecular and Genomic Mechanisms Study in Plant Defense*

    PubMed Central

    Zhang, Yixiang; Gao, Peng; Xing, Zhuo; Jin, Shumei; Chen, Zhide; Liu, Lantao; Constantino, Nasie; Wang, Xinwang; Shi, Weibing; Yuan, Joshua S.; Dai, Susie Y.

    2013-01-01

    High abundance proteins like ribulose-1,5-bisphosphate carboxylase oxygenase (Rubisco) impose a consistent challenge for whole proteome characterization using shot-gun proteomics. To address this challenge, we developed and evaluated Polyethyleneimine Assisted Rubisco Cleanup (PARC) as a new method by combining both abundant protein removal and fractionation. The new approach was applied to a plant insect interaction study to validate the platform and investigate mechanisms for plant defense against herbivorous insects. Our results indicated that PARC can effectively remove Rubisco, improve the protein identification, and discover almost three times more differentially regulated proteins. The significantly enhanced shot-gun proteomics performance was translated into in-depth proteomic and molecular mechanisms for plant insect interaction, where carbon re-distribution was found to play an essential role. Moreover, the transcriptomic validation also confirmed the reliability of PARC analysis. Finally, functional studies were carried out for two differentially regulated genes as revealed by PARC analysis. Insect resistance was induced by over-expressing either jacalin-like or cupin-like genes in rice. The results further highlighted that PARC can serve as an effective strategy for proteomics analysis and gene discovery. PMID:23943779

  10. Application of an improved proteomics method for abundant protein cleanup: molecular and genomic mechanisms study in plant defense.

    PubMed

    Zhang, Yixiang; Gao, Peng; Xing, Zhuo; Jin, Shumei; Chen, Zhide; Liu, Lantao; Constantino, Nasie; Wang, Xinwang; Shi, Weibing; Yuan, Joshua S; Dai, Susie Y

    2013-11-01

    High abundance proteins like ribulose-1,5-bisphosphate carboxylase oxygenase (Rubisco) impose a consistent challenge for whole proteome characterization using shot-gun proteomics. To address this challenge, we developed and evaluated Polyethyleneimine Assisted Rubisco Cleanup (PARC) as a new method by combining both abundant protein removal and fractionation. The new approach was applied to a plant insect interaction study to validate the platform and investigate mechanisms for plant defense against herbivorous insects. Our results indicated that PARC can effectively remove Rubisco, improve the protein identification, and discover almost three times more differentially regulated proteins. The significantly enhanced shot-gun proteomics performance was translated into in-depth proteomic and molecular mechanisms for plant insect interaction, where carbon re-distribution was found to play an essential role. Moreover, the transcriptomic validation also confirmed the reliability of PARC analysis. Finally, functional studies were carried out for two differentially regulated genes as revealed by PARC analysis. Insect resistance was induced by over-expressing either jacalin-like or cupin-like genes in rice. The results further highlighted that PARC can serve as an effective strategy for proteomics analysis and gene discovery.

  11. Sheltering in buildings from large-scale outdoor releases

    SciTech Connect

    Chan, W.R.; Price, P.N.; Gadgil, A.J.

    2004-06-01

    Intentional or accidental large-scale airborne toxic release (e.g. terrorist attacks or industrial accidents) can cause severe harm to nearby communities. Under these circumstances, taking shelter in buildings can be an effective emergency response strategy. Some examples where shelter-in-place was successful at preventing injuries and casualties have been documented [1, 2]. As public education and preparedness are vital to ensure the success of an emergency response, many agencies have prepared documents advising the public on what to do during and after sheltering [3, 4, 5]. In this document, we will focus on the role buildings play in providing protection to occupants. The conclusions to this article are: (1) Under most circumstances, shelter-in-place is an effective response against large-scale outdoor releases. This is particularly true for release of short duration (a few hours or less) and chemicals that exhibit non-linear dose-response characteristics. (2) The building envelope not only restricts the outdoor-indoor air exchange, but can also filter some biological or even chemical agents. Once indoors, the toxic materials can deposit or sorb onto indoor surfaces. All these processes contribute to the effectiveness of shelter-in-place. (3) Tightening of building envelope and improved filtration can enhance the protection offered by buildings. Common mechanical ventilation system present in most commercial buildings, however, should be turned off and dampers closed when sheltering from an outdoor release. (4) After the passing of the outdoor plume, some residuals will remain indoors. It is therefore important to terminate shelter-in-place to minimize exposure to the toxic materials.
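
    The protection offered by sheltering can be illustrated with a well-mixed box model, dC_in/dt = ach*(C_out - C_in) - k_dep*C_in, where ach is the building air-exchange rate and k_dep a deposition/sorption loss rate. The parameter values in the sketch below are illustrative assumptions, not recommendations from this document.

```python
import numpy as np

# Well-mixed box-model sketch of shelter-in-place: indoor concentration follows
# dC_in/dt = ach*(C_out - C_in) - k_dep*C_in. Parameter values are illustrative.
ach = 0.5          # air changes per hour (tight envelope, mechanical ventilation off; assumed)
k_dep = 0.3        # indoor deposition/sorption loss rate, 1/h (assumed)
plume_hours = 2.0  # duration of the outdoor plume (assumed)
dt = 0.01          # time step, h

t = np.arange(0.0, 8.0, dt)
c_out = np.where(t < plume_hours, 1.0, 0.0)     # normalized outdoor concentration
c_in = np.zeros_like(t)
for i in range(1, len(t)):                      # explicit Euler integration
    dc = ach * (c_out[i - 1] - c_in[i - 1]) - k_dep * c_in[i - 1]
    c_in[i] = c_in[i - 1] + dc * dt

dose_out = c_out.sum() * dt                     # integrated outdoor exposure
dose_in = c_in.sum() * dt                       # integrated indoor exposure
print(f"indoor/outdoor integrated exposure over 8 h: {dose_in / dose_out:.2f}")
```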

  12. Safeguards instruments for Large-Scale Reprocessing Plants

    SciTech Connect

    Hakkila, E.A.; Case, R.S.; Sonnier, C.

    1993-06-01

    Between 1987 and 1992 a multi-national forum known as LASCAR (Large Scale Reprocessing Plant Safeguards) met to assist the IAEA in development of effective and efficient safeguards for large-scale reprocessing plants. The US provided considerable input for safeguards approaches and instrumentation. This paper reviews and updates instrumentation of importance in measuring plutonium and uranium in these facilities.

  13. Optimal Wind Energy Integration in Large-Scale Electric Grids

    NASA Astrophysics Data System (ADS)

    Albaijat, Mohammad H.

    The major concern in electric grid operation is operating in the most economical and reliable fashion to ensure affordability and continuity of electricity supply. This dissertation investigates challenges that affect electric grid reliability and economic operation: 1. Congestion of transmission lines, 2. Transmission line expansion, 3. Large-scale wind energy integration, and 4. Optimal placement of Phasor Measurement Units (PMUs) for highest electric grid observability. Performing congestion analysis aids in evaluating the required increase of transmission line capacity in electric grids. However, any expansion of transmission line capacity must be evaluated with methods that ensure optimal electric grid operation; the expansion must enable grid operators to provide low-cost electricity while maintaining reliable operation of the electric grid. Because congestion affects the reliability of delivering power and increases its cost, congestion analysis in electric grid networks is an important subject. Consequently, next-generation electric grids require novel methodologies for studying and managing congestion. We suggest a novel method of long-term congestion management in large-scale electric grids. Owing to the complexity and size of transmission systems and the competitive nature of current grid operation, it is important for electric grid operators to determine how much transmission line capacity to add. The traditional questions requiring answers are "where" to add capacity, "how much" capacity to add, and "at which voltage level". Because of electric grid deregulation, transmission expansion is more complicated, as building new transmission lines is now open to investors whose main interest is to generate revenue. Adding new transmission capacity will help the system relieve transmission congestion, create

  14. Large-scale weakly supervised object localization via latent category learning.

    PubMed

    Chong Wang; Kaiqi Huang; Weiqiang Ren; Junge Zhang; Maybank, Steve

    2015-04-01

    Localizing objects in cluttered backgrounds is challenging under large-scale weakly supervised conditions. In cluttered images, objects usually have large ambiguity with backgrounds. In addition, there is a lack of effective algorithms for large-scale weakly supervised localization in cluttered backgrounds. However, backgrounds contain useful latent information, e.g., the sky in the aeroplane class. If this latent information can be learned, object-background ambiguity can be largely reduced and background can be suppressed effectively. In this paper, we propose latent category learning (LCL) for large-scale cluttered conditions. LCL is an unsupervised learning method which requires only image-level class labels. First, we use latent semantic analysis with a semantic object representation to learn the latent categories, which represent objects, object parts or backgrounds. Second, to determine which category contains the target object, we propose a category selection strategy that evaluates each category's discrimination. Finally, we propose an online LCL for use in large-scale conditions. Evaluation on the challenging PASCAL Visual Object Classes (VOC) 2007 and the large-scale ImageNet Large Scale Visual Recognition Challenge 2013 detection data sets shows that the method can improve annotation precision by 10% over previous methods. More importantly, we achieve detection precision that outperforms previous results by a large margin and is competitive with the supervised deformable part model 5.0 baseline on both data sets. PMID:25643405

  15. The role of large-scale, extratropical dynamics in climate change

    SciTech Connect

    Shepherd, T.G.

    1994-02-01

    The climate modeling community has focused recently on improving our understanding of certain processes, such as cloud feedbacks and ocean circulation, that are deemed critical to climate-change prediction. Although attention to such processes is warranted, emphasis on these areas has diminished a general appreciation of the role played by the large-scale dynamics of the extratropical atmosphere. Lack of interest in extratropical dynamics may reflect the assumption that these dynamical processes are a non-problem as far as climate modeling is concerned, since general circulation models (GCMs) calculate motions on this scale from first principles. Nevertheless, serious shortcomings in our ability to understand and simulate large-scale dynamics exist. Partly due to a paucity of standard GCM diagnostic calculations of large-scale motions and their transports of heat, momentum, potential vorticity, and moisture, a comprehensive understanding of the role of large-scale dynamics in GCM climate simulations has not been developed. Uncertainties remain in our understanding and simulation of large-scale extratropical dynamics and their interaction with other climatic processes, such as cloud feedbacks, large-scale ocean circulation, moist convection, air-sea interaction and land-surface processes. To address some of these issues, the 17th Stanstead Seminar was convened at Bishop's University in Lennoxville, Quebec. The purpose of the Seminar was to promote discussion of the role of large-scale extratropical dynamics in global climate change. Abstracts of the talks are included in this volume. On the basis of these talks, several key issues emerged concerning large-scale extratropical dynamics and their climatic role. Individual records are indexed separately for the database.

  16. Distribution probability of large-scale landslides in central Nepal

    NASA Astrophysics Data System (ADS)

    Timilsina, Manita; Bhandary, Netra P.; Dahal, Ranjan Kumar; Yatabe, Ryuichi

    2014-12-01

    Large-scale landslides in the Himalaya are defined as huge, deep-seated landslide masses that occurred in the geological past. They are widely distributed in the Nepal Himalaya. The steep topography and high local relief provide high potential for such failures, whereas the dynamic geology and adverse climatic conditions play a key role in the occurrence and reactivation of such landslides. The major geoscientific problems related to such large-scale landslides are 1) difficulties in their identification and delineation, 2) their role as sources of small-scale failures, and 3) their reactivation. Only a few scientific studies concerning large-scale landslides in Nepal have been published. In this context, the identification and quantification of large-scale landslides and their potential distribution are crucial. Therefore, this study explores the distribution of large-scale landslides in the Lesser Himalaya. It provides simple guidelines to identify large-scale landslides based on their typical characteristics and using a 3D schematic diagram. Based on the spatial distribution of landslides, geomorphological/geological parameters and logistic regression, an equation of large-scale landslide distribution is also derived. The equation is validated by applying it to another area: the area under the receiver operating characteristic curve of the landslide distribution probability in the new area is 0.699, and the distribution probability values could explain > 65% of existing landslides. Therefore, the regression equation can be applied to areas of the Lesser Himalaya of central Nepal with similar geological and geomorphological conditions.
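
    The probability model described above can be illustrated with a brief sketch: a logistic regression fitted on geomorphological/geological predictors, transferred to a new area, and validated with the area under the receiver operating characteristic curve. Everything in the sketch (predictor names, synthetic data, thresholds) is a hypothetical stand-in, not the regression actually derived in the study.

```python
# Minimal sketch: logistic-regression landslide distribution probability with ROC-AUC validation.
# Predictor names (slope, relief, distance to fault) are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic training data standing in for the mapped catchment:
# X holds geomorphological/geological predictors, y marks large-scale landslide presence.
X_train = rng.normal(size=(500, 3))          # e.g., slope, local relief, distance to fault
y_train = (X_train[:, 0] + 0.5 * X_train[:, 1] + rng.normal(scale=1.0, size=500) > 0).astype(int)

model = LogisticRegression().fit(X_train, y_train)

# "New area" used only for validation, mirroring the transfer test in the abstract.
X_new = rng.normal(size=(200, 3))
y_new = (X_new[:, 0] + 0.5 * X_new[:, 1] + rng.normal(scale=1.0, size=200) > 0).astype(int)

p_new = model.predict_proba(X_new)[:, 1]     # landslide distribution probability per cell
print("AUC in new area:", roc_auc_score(y_new, p_new))
print("fraction of landslides above p=0.5:", (p_new[y_new == 1] > 0.5).mean())
```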

  17. Toward improving the proteomic analysis of formalin-fixed, paraffin-embedded tissue.

    PubMed

    Fowler, Carol B; O'Leary, Timothy J; Mason, Jeffrey T

    2013-08-01

    Archival formalin-fixed, paraffin-embedded (FFPE) tissue and their associated diagnostic records represent an invaluable source of retrospective proteomic information on diseases for which the clinical outcome and response to treatment are known. However, analysis of archival FFPE tissues by high-throughput proteomic methods has been hindered by the adverse effects of formaldehyde fixation and subsequent tissue histology. This review examines recent methodological advances for extracting proteins from FFPE tissue suitable for proteomic analysis. These methods, based largely upon heat-induced antigen retrieval techniques borrowed from immunohistochemistry, allow at least a qualitative analysis of the proteome of FFPE archival tissues. The authors also discuss recent advances in the proteomic analysis of FFPE tissue; including liquid-chromatography tandem mass spectrometry, reverse phase protein microarrays and imaging mass spectrometry.

  18. The Status of Large-Scale Assessment in the Pacific Region. REL Technical Brief. REL 2008-No. 003

    ERIC Educational Resources Information Center

    Ryan, Jennifer; Keir, Scott

    2008-01-01

    This technical brief describes the large-scale assessment measures and practices used in the jurisdictions served by the Pacific Regional Educational Laboratory. The need for effective large-scale assessment was identified as a major priority for improving student achievement in the Pacific Region jurisdictions: American Samoa, Guam, Hawaii, the…

  19. Applications of Proteomic Technologies to Toxicology

    EPA Science Inventory

    Proteomics is the large-scale study of gene expression at the protein level. This cutting edge technology has been extensively applied to toxicology research recently. The up-to-date development of proteomics has presented the toxicology community with an unprecedented opportunit...

  20. Improving peptide identification sensitivity in shotgun proteomics by stratification of search space.

    PubMed

    Alves, Gelio; Yu, Yi-Kuo

    2013-06-01

    Because of its high specificity, trypsin is the enzyme of choice in shotgun proteomics. Nonetheless, several publications do report the identification of semitryptic and nontryptic peptides. Many of these peptides are thought to be signaling peptides or to have formed during sample preparation. It is known that only a small fraction of tandem mass spectra from a trypsin-digested protein mixture can be confidently matched to tryptic peptides. If other possibilities such as post-translational modifications and single-amino acid polymorphisms are ignored, this suggests that many unidentified spectra originate from semitryptic and nontryptic peptides. Including them in database searches, however, may not improve overall peptide identification because of the possible sensitivity reduction caused by search-space expansion. To circumvent this issue for E-value-based search methods, we have designed a scheme that categorizes qualified peptides (i.e., peptides whose differences in molecular weight from the parent ion are within a specified error tolerance) into three tiers: tryptic, semitryptic, and nontryptic. This classification allows peptides that belong to different tiers to have different Bonferroni correction factors. Our results show that this scheme can significantly improve retrieval performance compared to search strategies that assign equal Bonferroni correction factors to all qualified peptides. PMID:23668635
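
    To make the tiering idea concrete, the sketch below applies a different Bonferroni correction factor to tryptic, semitryptic, and nontryptic candidates before thresholding their E-values. The tier sizes, example peptides, E-values, and significance level are illustrative placeholders, not values from the paper.

```python
# Sketch of tier-specific Bonferroni correction for E-value-based peptide-spectrum matches.
# All numbers are illustrative; the paper defines its own tiers and statistics.
from dataclasses import dataclass

@dataclass
class Candidate:
    peptide: str
    tier: str      # "tryptic", "semitryptic", or "nontryptic"
    evalue: float  # E-value reported by the search engine

# Hypothetical counts of qualified peptides per tier (candidates within the mass tolerance).
tier_sizes = {"tryptic": 120, "semitryptic": 2400, "nontryptic": 18000}
alpha = 0.01  # illustrative significance threshold

candidates = [
    Candidate("LVNELTEFAK", "tryptic", 2e-5),
    Candidate("VNELTEFAK", "semitryptic", 3e-4),
    Candidate("NELTEFA", "nontryptic", 3e-4),
]

for c in candidates:
    # Each tier gets its own Bonferroni factor instead of one factor for all qualified peptides.
    corrected = min(1.0, c.evalue * tier_sizes[c.tier])
    accepted = corrected <= alpha
    print(f"{c.peptide:12s} tier={c.tier:12s} corrected E-value={corrected:.3g} accepted={accepted}")
```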

  1. Polymer Physics of the Large-Scale Structure of Chromatin.

    PubMed

    Bianco, Simona; Chiariello, Andrea Maria; Annunziatella, Carlo; Esposito, Andrea; Nicodemi, Mario

    2016-01-01

    We summarize the picture emerging from recently proposed models of polymer physics describing the general features of chromatin large scale spatial architecture, as revealed by microscopy and Hi-C experiments. PMID:27659986

  2. Large-scale anisotropy of the cosmic microwave background radiation

    NASA Technical Reports Server (NTRS)

    Silk, J.; Wilson, M. L.

    1981-01-01

    Inhomogeneities in the large-scale distribution of matter inevitably lead to the generation of large-scale anisotropy in the cosmic background radiation. The dipole, quadrupole, and higher order fluctuations expected in an Einstein-de Sitter cosmological model have been computed. The dipole and quadrupole anisotropies are comparable to the measured values, and impose important constraints on the allowable spectrum of large-scale matter density fluctuations. A significant dipole anisotropy is generated by the matter distribution on scales greater than approximately 100 Mpc. The large-scale anisotropy is insensitive to the ionization history of the universe since decoupling, and cannot easily be reconciled with a galaxy formation theory that is based on primordial adiabatic density fluctuations.

  3. Polymer Physics of the Large-Scale Structure of Chromatin.

    PubMed

    Bianco, Simona; Chiariello, Andrea Maria; Annunziatella, Carlo; Esposito, Andrea; Nicodemi, Mario

    2016-01-01

    We summarize the picture emerging from recently proposed models of polymer physics describing the general features of chromatin large scale spatial architecture, as revealed by microscopy and Hi-C experiments.

  4. Needs, opportunities, and options for large scale systems research

    SciTech Connect

    Thompson, G.L.

    1984-10-01

    The Office of Energy Research was recently asked to perform a study of Large Scale Systems in order to facilitate the development of a true large systems theory. It was decided to ask experts in the fields of electrical engineering, chemical engineering and manufacturing/operations research for their ideas concerning large scale systems research. The author was asked to distribute a questionnaire among these experts to find out their opinions concerning recent accomplishments and future research directions in large scale systems research. He was also requested to convene a conference which included three experts in each area as panel members to discuss the general area of large scale systems research. The conference was held on March 26-27, 1984 in Pittsburgh with nine panel members, and 15 other attendees. The present report is a summary of the ideas presented and the recommendations proposed by the attendees.

  5. Large scale anomalies in the microwave background: causation and correlation.

    PubMed

    Aslanyan, Grigor; Easther, Richard

    2013-12-27

    Most treatments of large scale anomalies in the microwave sky are a posteriori, with unquantified look-elsewhere effects. We contrast these with physical models of specific inhomogeneities in the early Universe which can generate these apparent anomalies. Physical models predict correlations between candidate anomalies and the corresponding signals in polarization and large scale structure, reducing the impact of cosmic variance. We compute the apparent spatial curvature associated with large-scale inhomogeneities and show that it is typically small, allowing for a self-consistent analysis. As an illustrative example we show that a single large plane wave inhomogeneity can contribute to low-l mode alignment and odd-even asymmetry in the power spectra and the best-fit model accounts for a significant part of the claimed odd-even asymmetry. We argue that this approach can be generalized to provide a more quantitative assessment of potential large scale anomalies in the Universe.

  6. False Discovery Rate Estimation in Proteomics.

    PubMed

    Aggarwal, Suruchi; Yadav, Amit Kumar

    2016-01-01

    With the advancement in proteomics separation techniques and improvements in mass analyzers, the data generated in a mass-spectrometry based proteomics experiment is rising exponentially. Such voluminous datasets necessitate automated computational tools for high-throughput data analysis and appropriate statistical control. The data is searched using one or more of the several popular database search algorithms. The matches assigned by these tools can have false positives and statistical validation of these false matches is necessary before making any biological interpretations. Without such procedures, the biological inferences do not hold true and may be outright misleading. There is a considerable overlap between true and false positives. To control the false positives amongst a set of accepted matches, there is a need for some statistical estimate that can reflect the amount of false positives present in the data processed. False discovery rate (FDR) is the metric for global confidence assessment of a large-scale proteomics dataset. This chapter covers the basics of FDR, its application in proteomics, and methods to estimate FDR.
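
    As background to the estimation methods surveyed in the chapter, the sketch below shows the widely used target-decoy strategy, in which matches against a decoy (e.g., reversed) database estimate the number of false positives above a score threshold. This is a generic illustration with synthetic scores, not the specific estimator recommended by the authors.

```python
# Minimal target-decoy FDR sketch: FDR at a score threshold ~ (# decoy hits) / (# target hits).
# Scores are synthetic; real pipelines use search-engine scores (e.g., XCorr, hyperscore).
import numpy as np

rng = np.random.default_rng(1)
target_scores = np.concatenate([rng.normal(3.0, 1.0, 800),   # true matches
                                rng.normal(0.0, 1.0, 1200)])  # false target matches
decoy_scores = rng.normal(0.0, 1.0, 2000)                     # decoy matches model the false ones

def fdr_at(threshold):
    n_target = (target_scores >= threshold).sum()
    n_decoy = (decoy_scores >= threshold).sum()
    return n_decoy / max(n_target, 1)

# Find the lowest score cutoff giving an estimated FDR <= 1%.
threshold = target_scores.max()
for t in np.linspace(target_scores.max(), target_scores.min(), 200):
    if fdr_at(t) > 0.01:
        break
    threshold = t

print(f"score cutoff ~ {threshold:.2f}, accepted PSMs = {(target_scores >= threshold).sum()}")
```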

  7. Large-scale studies of marked birds in North America

    USGS Publications Warehouse

    Tautin, J.; Metras, L.; Smith, G.

    1999-01-01

    The first large-scale, co-operative, studies of marked birds in North America were attempted in the 1950s. Operation Recovery, which linked numerous ringing stations along the east coast in a study of autumn migration of passerines, and the Preseason Duck Ringing Programme in prairie states and provinces, conclusively demonstrated the feasibility of large-scale projects. The subsequent development of powerful analytical models and computing capabilities expanded the quantitative potential for further large-scale projects. Monitoring Avian Productivity and Survivorship, and Adaptive Harvest Management are current examples of truly large-scale programmes. Their exemplary success and the availability of versatile analytical tools are driving changes in the North American bird ringing programme. Both the US and Canadian ringing offices are modifying operations to collect more and better data to facilitate large-scale studies and promote a more project-oriented ringing programme. New large-scale programmes such as the Cornell Nest Box Network are on the horizon.

  8. A study of MLFMA for large-scale scattering problems

    NASA Astrophysics Data System (ADS)

    Hastriter, Michael Larkin

    This research is centered on computational electromagnetics, with a focus on solving large-scale problems accurately and in a timely fashion using first-principles physics. Error control of the translation operator in 3-D is shown. A parallel implementation of the multilevel fast multipole algorithm (MLFMA) was studied with respect to parallel efficiency and scaling. The large-scale scattering program (LSSP), based on the ScaleME library, was used to solve ultra-large-scale problems including a 200λ sphere with 20 million unknowns. As these large-scale problems were solved, techniques were developed to accurately estimate the memory requirements. Careful memory management is needed in order to solve these massive problems. The study of MLFMA in large-scale problems revealed significant errors that stemmed from inconsistencies in constants used by different parts of the algorithm. These were fixed to produce the most accurate data possible for large-scale surface scattering problems. Data was calculated on a missile-like target using both high frequency methods and MLFMA. This data was compared and analyzed to determine possible strategies to increase data acquisition speed and accuracy through multiple computation method hybridization.

  9. Large-scale motions in a plane wall jet

    NASA Astrophysics Data System (ADS)

    Gnanamanickam, Ebenezer; Jonathan, Latim; Shibani, Bhatt

    2015-11-01

    The dynamic significance of large-scale motions in turbulent boundary layers has been the focus of several recent studies, primarily on canonical flows - zero pressure gradient boundary layers and flows within pipes and channels. This work presents an investigation into the large-scale motions in a boundary layer that serves as the prototypical flow field for flows with large-scale mixing and reactions, the plane wall jet. An experimental investigation is carried out in a plane wall jet facility designed to operate at friction Reynolds numbers Reτ > 1000, which allows for the development of a significant logarithmic region. The streamwise turbulent intensity across the boundary layer is decomposed into small-scale (less than one integral length-scale δ) and large-scale components. The small-scale energy has a peak in the near-wall region associated with the near-wall turbulent cycle, as in canonical boundary layers. The large-scale eddies, however, dominate, carrying significantly more energy than the small scales across almost the entire boundary layer, even at the low to moderate Reynolds numbers under consideration. The large scales also appear to amplitude- and frequency-modulate the smaller scales across the entire boundary layer.
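
    The small-scale/large-scale decomposition referred to above is commonly implemented as a spectral cutoff on the fluctuating velocity signal; the sketch below applies such a cutoff at a wavelength of one boundary-layer length scale δ, converted to a frequency via Taylor's hypothesis. The signal, convection velocity, and filter choice are assumptions for illustration, not the facility's actual data or processing.

```python
# Sketch: split a streamwise velocity fluctuation signal into large- and small-scale parts
# with a sharp Fourier cutoff at wavelength delta (converted to frequency via Taylor's hypothesis).
import numpy as np

fs = 10_000.0                 # sampling frequency [Hz], illustrative
t = np.arange(0, 2.0, 1 / fs)
u = 0.8 * np.sin(2 * np.pi * 5 * t) + 0.3 * np.random.default_rng(2).normal(size=t.size)

U_c = 10.0                    # convection velocity [m/s], assumed
delta = 0.1                   # integral length scale / layer thickness [m], assumed
f_cut = U_c / delta           # cutoff frequency corresponding to wavelength delta

U = np.fft.rfft(u - u.mean())
freqs = np.fft.rfftfreq(u.size, d=1 / fs)
large = np.fft.irfft(np.where(freqs <= f_cut, U, 0.0), n=u.size)  # scales longer than delta
small = (u - u.mean()) - large                                    # residual small scales

print("large-scale variance:", np.var(large))
print("small-scale variance:", np.var(small))
```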

  10. A large-scale electrophoresis- and chromatography-based determination of gene expression profiles in bovine brain capillary endothelial cells after the re-induction of blood-brain barrier properties

    PubMed Central

    2010-01-01

    Background Brain capillary endothelial cells (BCECs) form the physiological basis of the blood-brain barrier (BBB). The barrier function is (at least in part) due to well-known proteins such as transporters, tight junctions and metabolic barrier proteins (e.g. monoamine oxidase, gamma glutamyltranspeptidase and P-glycoprotein). Our previous 2-dimensional gel proteome analysis had identified a large number of proteins and revealed the major role of dynamic cytoskeletal remodelling in the differentiation of bovine BCECs. The aim of the present study was to elaborate a reference proteome of Triton X-100-soluble species from bovine BCECs cultured in the well-established in vitro BBB model developed in our laboratory. Results A total of 215 protein spots (corresponding to 130 distinct proteins) were identified by 2-dimensional gel electrophoresis, whereas over 350 proteins were identified by a shotgun approach. We classified around 430 distinct proteins expressed by bovine BCECs. Our large-scale gene expression analysis enabled the correction of mistakes referenced in protein databases (e.g. bovine vinculin) and constitutes valuable evidence for predictions based on genome annotation. Conclusions Elaboration of a reference proteome constitutes the first step in creating a gene expression database dedicated to capillary endothelial cells displaying BBB characteristics. It improves our knowledge of the BBB and the key proteins in cell structures, cytoskeleton organization, metabolism, detoxification and drug resistance. Moreover, our results emphasize the need for both appropriate experimental design and correct interpretation of proteome datasets. PMID:21078152

  11. Large scale stochastic spatio-temporal modelling with PCRaster

    NASA Astrophysics Data System (ADS)

    Karssenberg, Derek; Drost, Niels; Schmitz, Oliver; de Jong, Kor; Bierkens, Marc F. P.

    2013-04-01

    software from the eScience Technology Platform (eSTeP), developed at the Netherlands eScience Center. This will allow us to scale up to hundreds of machines, with thousands of compute cores. A key requirement is not to change the user experience of the software. PCRaster operations and the use of the Python framework classes should work in a similar manner on machines ranging from a laptop to a supercomputer. This enables a seamless transfer of models from small machines, where model development is done, to large machines used for large-scale model runs. Domain specialists from a large range of disciplines, including hydrology, ecology, sedimentology, and land use change studies, currently use the PCRaster Python software within research projects. Applications include global scale hydrological modelling and error propagation in large-scale land use change models. The software runs on MS Windows, Linux operating systems, and OS X.

  12. Large-Scale Spray Releases: Additional Aerosol Test Results

    SciTech Connect

    Daniel, Richard C.; Gauglitz, Phillip A.; Burns, Carolyn A.; Fountain, Matthew S.; Shimskey, Rick W.; Billing, Justin M.; Bontha, Jagannadha R.; Kurath, Dean E.; Jenks, Jeromy WJ; MacFarlan, Paul J.; Mahoney, Lenna A.

    2013-08-01

    One of the events postulated in the hazard analysis for the Waste Treatment and Immobilization Plant (WTP) and other U.S. Department of Energy (DOE) nuclear facilities is a breach in process piping that produces aerosols with droplet sizes in the respirable range. The current approach for predicting the size and concentration of aerosols produced in a spray leak event involves extrapolating from correlations reported in the literature. These correlations are based on results obtained from small engineered spray nozzles using pure liquids that behave as a Newtonian fluid. The narrow ranges of physical properties on which the correlations are based do not cover the wide range of slurries and viscous materials that will be processed in the WTP and in processing facilities across the DOE complex. To expand the data set upon which the WTP accident and safety analyses were based, an aerosol spray leak testing program was conducted by Pacific Northwest National Laboratory (PNNL). PNNL’s test program addressed two key technical areas to improve the WTP methodology (Larson and Allen 2010). The first technical area was to quantify the role of slurry particles in small breaches where slurry particles may plug the hole and prevent high-pressure sprays. The results from an effort to address this first technical area can be found in Mahoney et al. (2012a). The second technical area was to determine aerosol droplet size distribution and total droplet volume from prototypic breaches and fluids, including sprays from larger breaches and sprays of slurries for which literature data are mostly absent. To address the second technical area, the testing program collected aerosol generation data at two scales, commonly referred to as small-scale and large-scale testing. The small-scale testing and resultant data are described in Mahoney et al. (2012b), and the large-scale testing and resultant data are presented in Schonewill et al. (2012). In tests at both scales, simulants were used

  13. Evaluation of drought propagation in an ensemble mean of large-scale hydrological models

    NASA Astrophysics Data System (ADS)

    Van Loon, A. F.; Van Huijgevoort, M. H. J.; Van Lanen, H. A. J.

    2012-11-01

    snow-related droughts. Furthermore, almost no composite droughts were simulated for slowly responding areas, while many multi-year drought events were expected in these systems. We conclude that most drought propagation processes are reasonably well reproduced by the ensemble mean of large-scale models in contrasting catchments in Europe. Challenges, however, remain in catchments with cold and semi-arid climates and catchments with large storage in aquifers or lakes. This leads to a high uncertainty in hydrological drought simulation at large scales. Improvement of drought simulation in large-scale models should focus on a better representation of hydrological processes that are important for drought development, such as evapotranspiration, snow accumulation and melt, and especially storage. Besides the more explicit inclusion of storage in large-scale models, also parametrisation of storage processes requires attention, for example through a global-scale dataset on aquifer characteristics, improved large-scale datasets on other land characteristics (e.g. soils, land cover), and calibration/evaluation of the models against observations of storage (e.g. in snow, groundwater).

  14. Kinematics and Dynamics in Large-Scale Structure

    NASA Astrophysics Data System (ADS)

    dell'Antonio, Ian Pietro

    1995-01-01

    We study a sample of x-ray observed groups of galaxies to examine the relation between group velocity dispersions and x-ray luminosities. For the rich groups, L_x ~ σ^(4.0±0.6), but poorer systems follow a flatter relation. This L_x-σ relation probably arises from a combination of extended gas and individual galaxy emission. We then concentrate on six poor clusters of galaxies with higher-quality x-ray data, and we measure the virial mass, gas mass, and x-ray temperature. From the x-ray surface brightness distribution, we construct models of the mass distribution. We use a modified V/V_max test to test whether the galaxies trace the potential marked by the gas. The galaxy distribution is consistent with the density distribution inferred from the x-rays. The mass in galaxies is ~3h^(-1)% of the total mass of the systems. Galaxies contribute significantly to the baryonic mass total: M_gas/M_gal ~ 1.4h^(-1/2), similar to the value for rich clusters. The baryon fraction in rich groups is ~0.08 (for H_0 = 100), about half that in rich clusters. This result has significant implications for the origin of large-scale structure. In a study of structure on a larger scale, we use the Tully-Fisher (TF) relation to examine the kinematics of the Great Wall of Galaxies. First, we examine the relation between rotation profiles of galaxies and HI linewidths, and investigate the effects on the TF relation. The rotation curve profile shapes and magnitudes of galaxies are correlated, implying that a galaxy yields different distance estimates with a linewidth measured at a different fraction of peak emission. Indiscriminately combining data based on different measures of the "rotation velocity" into a single TF relation leads to systematic errors and biases in the velocity field. We evaluate these effects using optical rotation curves and HI linewidth data. The TF relation can be improved by adding shape parameters to characterize the HI profiles. We construct the I

  15. Large-scale simulations of layered double hydroxide nanocomposite materials

    NASA Astrophysics Data System (ADS)

    Thyveetil, Mary-Ann

    Layered double hydroxides (LDHs) have the ability to intercalate a multitude of anionic species. Atomistic simulation techniques such as molecular dynamics have provided considerable insight into the behaviour of these materials. We review these techniques and recent algorithmic advances which considerably improve the performance of MD applications. In particular, we discuss how the advent of high performance computing and computational grids has allowed us to explore large scale models with considerable ease. Our simulations have been heavily reliant on computational resources on the UK's NGS (National Grid Service), the US TeraGrid and the Distributed European Infrastructure for Supercomputing Applications (DEISA). In order to utilise computational grids we rely on grid middleware to launch, computationally steer and visualise our simulations. We have integrated the RealityGrid steering library into the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) [1], which has enabled us to perform remote computational steering and visualisation of molecular dynamics simulations on grid infrastructures. We also use the Application Hosting Environment (AHE) [2] in order to launch simulations on remote supercomputing resources, and we show that data transfer rates between local clusters and supercomputing resources can be considerably enhanced by using optically switched networks. We perform large scale molecular dynamics simulations of MgAl-LDHs intercalated with either chloride ions or a mixture of DNA and chloride ions. The systems exhibit undulatory modes, which are suppressed in smaller scale simulations, caused by the collective thermal motion of atoms in the LDH layers. Thermal undulations provide elastic properties of the system including the bending modulus, Young's moduli and Poisson's ratios. To explore the interaction between LDHs and DNA, we use molecular dynamics techniques to perform simulations of double stranded, linear and plasmid DNA up

  16. Characterizing unknown systematics in large scale structure surveys

    SciTech Connect

    Agarwal, Nishant; Ho, Shirley; Myers, Adam D.; Seo, Hee-Jong; Ross, Ashley J.; Bahcall, Neta; Brinkmann, Jonathan; Eisenstein, Daniel J.; Muna, Demitri; Palanque-Delabrouille, Nathalie; Yèche, Christophe; Petitjean, Patrick; Schneider, Donald P.; Streblyanska, Alina; Weaver, Benjamin A.

    2014-04-01

    Photometric large scale structure (LSS) surveys probe the largest volumes in the Universe, but are inevitably limited by systematic uncertainties. Imperfect photometric calibration leads to biases in our measurements of the density fields of LSS tracers such as galaxies and quasars, and as a result in cosmological parameter estimation. Earlier studies have proposed using cross-correlations between different redshift slices or cross-correlations between different surveys to reduce the effects of such systematics. In this paper we develop a method to characterize unknown systematics. We demonstrate that while we do not have sufficient information to correct for unknown systematics in the data, we can obtain an estimate of their magnitude. We define a parameter to estimate contamination from unknown systematics using cross-correlations between different redshift slices and propose discarding bins in the angular power spectrum that lie outside a certain contamination tolerance level. We show that this method improves estimates of the bias using simulated data and further apply it to photometric luminous red galaxies in the Sloan Digital Sky Survey as a case study.
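
    A schematic version of the idea, flagging angular power spectrum bins whose cross-correlation between non-overlapping redshift slices is anomalously large relative to the auto-power, is sketched below. The normalized statistic and tolerance level used here are generic stand-ins, not the contamination parameter defined in the paper.

```python
# Schematic sketch of flagging angular power spectrum bins by a cross-correlation
# contamination estimate between two non-overlapping redshift slices.
# The statistic shown is a generic normalized cross-power, not the paper's exact parameter.
import numpy as np

rng = np.random.default_rng(3)
n_bins = 20
cl_auto_1 = 1e-4 / (1 + np.arange(n_bins))         # toy auto-spectra of two redshift slices
cl_auto_2 = 8e-5 / (1 + np.arange(n_bins))

# Toy cross-spectrum: mostly noise, with a shared systematic contaminating the low-ell bins.
cl_cross = rng.normal(0.0, 2e-6, n_bins)
cl_cross[:3] += 3e-5

contamination = np.abs(cl_cross) / np.sqrt(cl_auto_1 * cl_auto_2)
tolerance = 0.1                                    # illustrative contamination tolerance level
keep = contamination < tolerance
print("discarded bins:", np.where(~keep)[0])
```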

  17. Scalable pattern recognition for large-scale scientific data mining

    SciTech Connect

    Kamath, C.; Musick, R.

    1998-03-23

    Our ability to generate data far outstrips our ability to explore and understand it. The true value of this data lies not in its final size or complexity, but rather in our ability to exploit the data to achieve scientific goals. The data generated by programs such as ASCI have such a large scale that it is impractical to manually analyze, explore, and understand it. As a result, useful information is overlooked, and the potential benefits of increased computational and data gathering capabilities are only partially realized. The difficulties that will be faced by ASCI applications in the near future are foreshadowed by the challenges currently facing astrophysicists in making full use of the data they have collected over the years. For example, among other difficulties, astrophysicists have expressed concern that the sheer size of their data restricts them to looking at very small, narrow portions at any one time. This narrow focus has resulted in the loss of "serendipitous" discoveries which have been so vital to progress in the area in the past. To solve this problem, a new generation of computational tools and techniques is needed to help automate the exploration and management of large scientific data. This whitepaper proposes applying and extending ideas from the area of data mining, in particular pattern recognition, to improve the way in which scientists interact with large, multi-dimensional, time-varying data.

  18. Assessing large-scale wildlife responses to human infrastructure development.

    PubMed

    Torres, Aurora; Jaeger, Jochen A G; Alonso, Juan Carlos

    2016-07-26

    Habitat loss and deterioration represent the main threats to wildlife species, and are closely linked to the expansion of roads and human settlements. Unfortunately, large-scale effects of these structures remain generally overlooked. Here, we analyzed the European transportation infrastructure network and found that 50% of the continent is within 1.5 km of transportation infrastructure. We present a method for assessing the impacts from infrastructure on wildlife, based on functional response curves describing density reductions in birds and mammals (e.g., road-effect zones), and apply it to Spain as a case study. The imprint of infrastructure extends over most of the country (55.5% in the case of birds and 97.9% for mammals), with moderate declines predicted for birds (22.6% of individuals) and severe declines predicted for mammals (46.6%). Despite certain limitations, we suggest the approach proposed is widely applicable to the evaluation of effects of planned infrastructure developments under multiple scenarios, and propose an internationally coordinated strategy to update and improve it in the future.

  19. Assessing large-scale wildlife responses to human infrastructure development.

    PubMed

    Torres, Aurora; Jaeger, Jochen A G; Alonso, Juan Carlos

    2016-07-26

    Habitat loss and deterioration represent the main threats to wildlife species, and are closely linked to the expansion of roads and human settlements. Unfortunately, large-scale effects of these structures remain generally overlooked. Here, we analyzed the European transportation infrastructure network and found that 50% of the continent is within 1.5 km of transportation infrastructure. We present a method for assessing the impacts from infrastructure on wildlife, based on functional response curves describing density reductions in birds and mammals (e.g., road-effect zones), and apply it to Spain as a case study. The imprint of infrastructure extends over most of the country (55.5% in the case of birds and 97.9% for mammals), with moderate declines predicted for birds (22.6% of individuals) and severe declines predicted for mammals (46.6%). Despite certain limitations, we suggest the approach proposed is widely applicable to the evaluation of effects of planned infrastructure developments under multiple scenarios, and propose an internationally coordinated strategy to update and improve it in the future. PMID:27402749

  20. Large-scale sequencing and analytical processing of ESTs.

    PubMed

    Mitreva, Makedonka; Mardis, Elaine R

    2009-01-01

    Expressed sequence tags (ESTs) have proven to be one of the most rapid and cost-effective routes to gene discovery for eukaryotic genomes. Furthermore, their multipurpose uses, such as in probe design for microarrays, determining alternative splicing, verifying open reading frames, and confirming exon/intron and gene boundaries, to name a few, further justify their inclusion in many genomic characterization projects. Hence, there has been a constant increase in the number of ESTs deposited into the dbEST division of GenBank. This trend also correlates to ever-improving molecular techniques for obtaining biological material, performing RNA extraction, and constructing cDNA libraries, and predominantly to ever-evolving sequencing chemistry and instrumentation, as well as to decreased sequencing costs. This chapter describes large-scale sequencing of ESTs on two distinct platforms: the ABI 3730xl and the 454 Life Sciences GS20 sequencers, and the corresponding processes of sequence extraction, processing, and submissions to public databases. While the conventional 3730xl sequencing process is described, starting with the plating of an already-existing cDNA library, the section on 454 GS20 pyrosequencing also includes a method for generating full-length cDNA sequences. With appropriate bioinformatics tools, each of these platforms either used independently or coupled together provide a powerful combination for comprehensive exploration of an organism's transcriptome.

  1. Large-Scale Advanced Prop-Fan (LAP)

    NASA Technical Reports Server (NTRS)

    Degeorge, C. L.

    1988-01-01

    In recent years, considerable attention has been directed toward improving aircraft fuel efficiency. Analytical studies and research with wind tunnel models have demonstrated that the high inherent efficiency of low speed turboprop propulsion systems may now be extended to the Mach .8 flight regime of today's commercial airliners. This can be accomplished with a propeller employing a large number of thin, highly swept blades. The term Prop-Fan has been coined to describe such a propulsion system. In 1983 the NASA-Lewis Research Center contracted with Hamilton Standard to design, build and test a near full scale Prop-Fan, designated the Large Scale Advanced Prop-Fan (LAP). This report provides a detailed description of the LAP program. The assumptions and analytical procedures used in the design of Prop-Fan system components are discussed in detail. The manufacturing techniques used in the fabrication of the Prop-Fan are presented. Each of the tests run during the course of the program is also discussed, and the major conclusions derived from them are stated.

  2. Implicit solvers for large-scale nonlinear problems

    SciTech Connect

    Keyes, D E; Reynolds, D; Woodward, C S

    2006-07-13

    Computational scientists are grappling with increasingly complex, multi-rate applications that couple such physical phenomena as fluid dynamics, electromagnetics, radiation transport, chemical and nuclear reactions, and wave and material propagation in inhomogeneous media. Parallel computers with large storage capacities are paving the way for high-resolution simulations of coupled problems; however, hardware improvements alone will not prove enough to enable simulations based on brute-force algorithmic approaches. To accurately capture nonlinear couplings between dynamically relevant phenomena, often while stepping over rapid adjustments to quasi-equilibria, simulation scientists are increasingly turning to implicit formulations that require a discrete nonlinear system to be solved for each time step or steady state solution. Recent advances in iterative methods have made fully implicit formulations a viable option for solution of these large-scale problems. In this paper, we overview one of the most effective iterative methods, Newton-Krylov, for nonlinear systems and point to software packages with its implementation. We illustrate the method with an example from magnetically confined plasma fusion and briefly survey other areas in which implicit methods have bestowed important advantages, such as allowing high-order temporal integration and providing a pathway to sensitivity analyses and optimization. Lastly, we overview algorithm extensions under development motivated by current SciDAC applications.
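
    As a toy illustration of the Newton-Krylov approach surveyed here, the sketch below takes one implicit (backward Euler) step of a small 1-D reaction-diffusion problem using SciPy's matrix-free newton_krylov solver. The equation, grid, and parameters are arbitrary examples, not drawn from the SciDAC applications discussed in the paper.

```python
# Sketch: one implicit (backward Euler) step of a 1-D reaction-diffusion equation
# solved with a Jacobian-free Newton-Krylov method (scipy.optimize.newton_krylov).
import numpy as np
from scipy.optimize import newton_krylov

n, dt, D = 100, 0.1, 0.01
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]
u_old = np.exp(-100 * (x - 0.5) ** 2)     # initial condition

def residual(u_new):
    # F(u_new) = u_new - u_old - dt * (D * u_xx + u_new * (1 - u_new)), zero-flux ends
    u_xx = np.zeros_like(u_new)
    u_xx[1:-1] = (u_new[2:] - 2 * u_new[1:-1] + u_new[:-2]) / dx**2
    u_xx[0] = (u_new[1] - u_new[0]) / dx**2
    u_xx[-1] = (u_new[-2] - u_new[-1]) / dx**2
    return u_new - u_old - dt * (D * u_xx + u_new * (1 - u_new))

u_new = newton_krylov(residual, u_old, f_tol=1e-8)  # Krylov (GMRES-type) solve inside each Newton step
print("max change over the step:", np.abs(u_new - u_old).max())
```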

  3. Replicating Reforms in a Large-scale Lecture Environment

    NASA Astrophysics Data System (ADS)

    Finkelstein, Noah; Pollock, S.

    2006-12-01

    We present a longitudinal study of the implementation of a series of reforms in the large-scale, calculus-based introductory physics sequence at the University of Colorado. As part of the Colorado Physics Teacher Education Coalition and an NSF CCLI grant, we have implemented Tutorials in Introductory Physics, Peer Instruction, personalized computerized homework sets, and in-class personal response systems [1]. While we have demonstrated that these combined efforts result in significant improvement in student learning gains [1], we turn our attention to what it means to hand off these course transformations to faculty who have historically focussed on more traditional methods (e.g. those who are not members of AAPT). We present empirical data on the success and fidelity of implementation of the reforms, and identify two key factors in the overall program success: 1) Colorado's Learning Assistant program [2], which enables these course transformations while simultaneously increasing the pool of talented physics teachers and explicitly valuing teaching and education within physics, and 2) explicit efforts to support faculty change as they adopt new educational tools and practices. [1] N.D. Finkelstein and S.J. Pollock (2005). Replicating and Understanding Successful Innovations: Implementing Tutorials in Introductory Physics. Physical Review Special Topics: Physics Education Research, 1(1), 010101. [2] V. Otero, N.D. Finkelstein, S.J. Pollock and R. McCray (2006). Who is Responsible for Preparing Science Teachers? Science, 313, 445.

  4. Heterogeneous Graph Propagation for Large-Scale Web Image Search.

    PubMed

    Xie, Lingxi; Tian, Qi; Zhou, Wengang; Zhang, Bo

    2015-11-01

    State-of-the-art web image search frameworks are often based on the bag-of-visual-words (BoVWs) model and the inverted index structure. Despite the simplicity, efficiency, and scalability, they often suffer from low precision and/or recall, due to the limited stability of local features and the considerable information loss on the quantization stage. To refine the quality of retrieved images, various postprocessing methods have been adopted after the initial search process. In this paper, we investigate the online querying process from a graph-based perspective. We introduce a heterogeneous graph model containing both image and feature nodes explicitly, and propose an efficient reranking approach consisting of two successive modules, i.e., incremental query expansion and image-feature voting, to improve the recall and precision, respectively. Compared with the conventional reranking algorithms, our method does not require using geometric information of visual words, therefore enjoys low consumptions of both time and memory. Moreover, our method is independent of the initial search process, and could cooperate with many BoVW-based image search pipelines, or adopted after other postprocessing algorithms. We evaluate our approach on large-scale image search tasks and verify its competitive search performance. PMID:25974934

  5. Scalable NIC-based reduction on large-scale clusters

    SciTech Connect

    Moody, A.; Fernández, J. C.; Petrini, F.; Panda, Dhabaleswar K.

    2003-01-01

    Many parallel algorithms require efficient support for reduction collectives. Over the years, researchers have developed optimal reduction algorithms by taking into account system size, data size, and the complexities of reduction operations. However, all of these algorithms have assumed that the reduction processing takes place on the host CPU. Modern Network Interface Cards (NICs) sport programmable processors with substantial memory and thus introduce a fresh variable into the equation. This raises the following interesting challenge: Can we take advantage of modern NICs to implement reduction operations? In this paper, we take on this challenge in the context of large-scale clusters. Through experiments on the 960-node, 1920-processor ASCI Linux Cluster (ALC) located at the Lawrence Livermore National Laboratory, we show that NIC-based reductions indeed perform with reduced latency and improved consistency over host-based algorithms for the common case, and that these benefits scale as the system grows. In the largest configuration tested (1812 processors), our NIC-based algorithm can sum a single-element vector in 73 μs with 32-bit integers and in 118 μs with floating-point numbers. These results represent an improvement, respectively, of 121% and 39% with respect to the production-level MPI library.

  6. Fast large-scale object retrieval with binary quantization

    NASA Astrophysics Data System (ADS)

    Zhou, Shifu; Zeng, Dan; Shen, Wei; Zhang, Zhijiang; Tian, Qi

    2015-11-01

    The objective of large-scale object retrieval systems is to search for images that contain the target object in an image database. Whereas state-of-the-art approaches rely on global image representations to conduct searches, we consider many boxes per image as candidates and search locally within a picture. In this paper, a feature quantization algorithm called binary quantization is proposed. In binary quantization, a scale-invariant feature transform (SIFT) feature is quantized into a descriptive and discriminative bit-vector, which allows it to adapt to the classic inverted file structure for box indexing. The inverted file, which stores the bit-vector and the ID of the box in which the SIFT feature is located, is compact and can be loaded into main memory for efficient box indexing. We evaluate our approach on available object retrieval datasets. Experimental results demonstrate that the proposed approach is fast and achieves excellent search quality. Therefore, the proposed approach is an improvement over state-of-the-art approaches for object retrieval.
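
    The sketch below conveys the general flavour of such a scheme: a local descriptor is binarized against per-dimension thresholds (a simplification of the paper's binary quantization), the code and its box ID are stored in an inverted file keyed by a short code prefix, and query features vote for boxes by Hamming distance. All data and parameters are synthetic placeholders.

```python
# Toy sketch of binary quantization for box-level indexing: descriptors are binarized by
# per-dimension thresholds, bucketed by a short code prefix (inverted file), and candidate
# boxes are ranked by Hamming distance. This simplifies the paper's quantizer considerably.
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(4)
dim, n_desc = 128, 5000
descriptors = rng.normal(size=(n_desc, dim))          # stand-ins for SIFT features
box_ids = rng.integers(0, 400, size=n_desc)           # which candidate box each feature came from

thresholds = np.median(descriptors, axis=0)           # per-dimension binarization thresholds

def binarize(d):
    return (d > thresholds).astype(np.uint8)          # 128-bit code for the descriptor

inverted = defaultdict(list)                          # code prefix -> (full code, box ID)
for d, b in zip(descriptors, box_ids):
    code = binarize(d)
    inverted[tuple(code[:16])].append((code, b))      # short prefix plays the role of a visual word

def query(q, max_hamming=40):
    code = binarize(q)
    votes = defaultdict(int)
    for stored, b in inverted.get(tuple(code[:16]), []):
        if int(np.count_nonzero(stored != code)) <= max_hamming:
            votes[b] += 1                             # vote for the box containing a similar feature
    return sorted(votes.items(), key=lambda kv: -kv[1])[:5]

# Querying with one of the indexed features retrieves its source box at the top.
print(query(descriptors[0]))
```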

  7. Locating inefficient links in a large-scale transportation network

    NASA Astrophysics Data System (ADS)

    Sun, Li; Liu, Like; Xu, Zhongzhi; Jie, Yang; Wei, Dong; Wang, Pu

    2015-02-01

    Based on data from a geographical information system (GIS) and daily commuting origin-destination (OD) matrices, we estimated the distribution of traffic flow in the San Francisco road network and studied Braess's paradox in a large-scale transportation network with realistic travel demand. We measured the variation of total travel time ΔT when a road segment is closed, and found that |ΔT| follows a power-law distribution for both ΔT < 0 and ΔT > 0. This implies that most roads have a negligible effect on the efficiency of the road network, while the failure of a few crucial links would result in severe travel delays, and closure of a few inefficient links would counter-intuitively reduce travel costs considerably. Generating three theoretical networks, we discovered that the heterogeneously distributed travel demand may be the origin of the observed power-law distributions of |ΔT|. Finally, a genetic algorithm was used to pinpoint inefficient link clusters in the road network. We found that closing specific road clusters would further improve the transportation efficiency.
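
    A much-simplified proxy for the |ΔT| measurement is sketched below: a fixed set of OD trips is routed along shortest paths, one link is closed at a time, and the change in total travel time is recorded. Without the congestion-aware traffic assignment used in the study, ΔT can never be negative here, so Braess-type improvements will not appear; the network, travel times, and demand are all synthetic.

```python
# Simplified sketch of the delta-T measurement: route a fixed set of OD trips along
# shortest paths, close one link at a time, and record the change in total travel time.
# Real studies use congestion-aware traffic assignment; this proxy ignores congestion.
import networkx as nx
import random

random.seed(5)
G = nx.grid_2d_graph(8, 8)                                    # toy road network
for u, v in G.edges:
    G.edges[u, v]["time"] = random.uniform(1.0, 3.0)          # free-flow travel time per link

nodes = list(G.nodes)
od_pairs = [(random.choice(nodes), random.choice(nodes)) for _ in range(200)]

def total_travel_time(graph):
    total = 0.0
    for o, d in od_pairs:
        try:
            total += nx.shortest_path_length(graph, o, d, weight="time")
        except nx.NetworkXNoPath:
            total += 1e6                                      # heavy penalty for disconnection
    return total

base = total_travel_time(G)
deltas = {}
for e in list(G.edges):
    H = G.copy()
    H.remove_edge(*e)
    deltas[e] = total_travel_time(H) - base                   # delta T for closing this link

worst = max(deltas, key=deltas.get)
print("most critical link:", worst, "delta T =", round(deltas[worst], 2))
```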

  8. Open TG-GATEs: a large-scale toxicogenomics database

    PubMed Central

    Igarashi, Yoshinobu; Nakatsu, Noriyuki; Yamashita, Tomoya; Ono, Atsushi; Ohno, Yasuo; Urushidani, Tetsuro; Yamada, Hiroshi

    2015-01-01

    Toxicogenomics focuses on assessing the safety of compounds using gene expression profiles. Gene expression signatures from large toxicogenomics databases are expected to perform better than small databases in identifying biomarkers for the prediction and evaluation of drug safety based on a compound's toxicological mechanisms in animal target organs. Over the past 10 years, the Japanese Toxicogenomics Project consortium (TGP) has been developing a large-scale toxicogenomics database consisting of data from 170 compounds (mostly drugs) with the aim of improving and enhancing drug safety assessment. Most of the data generated by the project (e.g. gene expression, pathology, lot number) are freely available to the public via Open TG-GATEs (Toxicogenomics Project-Genomics Assisted Toxicity Evaluation System). Here, we provide a comprehensive overview of the database, including both gene expression data and metadata, with a description of experimental conditions and procedures used to generate the database. Open TG-GATEs is available from http://toxico.nibio.go.jp/english/index.html. PMID:25313160

  9. Advancing cell biology through proteomics in space and time (PROSPECTS).

    PubMed

    Lamond, Angus I; Uhlen, Mathias; Horning, Stevan; Makarov, Alexander; Robinson, Carol V; Serrano, Luis; Hartl, F Ulrich; Baumeister, Wolfgang; Werenskiold, Anne Katrin; Andersen, Jens S; Vorm, Ole; Linial, Michal; Aebersold, Ruedi; Mann, Matthias

    2012-03-01

    The term "proteomics" encompasses the large-scale detection and analysis of proteins and their post-translational modifications. Driven by major improvements in mass spectrometric instrumentation, methodology, and data analysis, the proteomics field has burgeoned in recent years. It now provides a range of sensitive and quantitative approaches for measuring protein structures and dynamics that promise to revolutionize our understanding of cell biology and molecular mechanisms in both human cells and model organisms. The Proteomics Specification in Time and Space (PROSPECTS) Network is a unique EU-funded project that brings together leading European research groups, spanning from instrumentation to biomedicine, in a collaborative five year initiative to develop new methods and applications for the functional analysis of cellular proteins. This special issue of Molecular and Cellular Proteomics presents 16 research papers reporting major recent progress by the PROSPECTS groups, including improvements to the resolution and sensitivity of the Orbitrap family of mass spectrometers, systematic detection of proteins using highly characterized antibody collections, and new methods for absolute as well as relative quantification of protein levels. Manuscripts in this issue exemplify approaches for performing quantitative measurements of cell proteomes and for studying their dynamic responses to perturbation, both during normal cellular responses and in disease mechanisms. Here we present a perspective on how the proteomics field is moving beyond simply identifying proteins with high sensitivity toward providing a powerful and versatile set of assay systems for characterizing proteome dynamics and thereby creating a new "third generation" proteomics strategy that offers an indispensible tool for cell biology and molecular medicine.

  10. Ensemble assimilation of global large-scale precipitation

    NASA Astrophysics Data System (ADS)

    Lien, Guo-Yuan

    Many attempts to assimilate precipitation observations in numerical models have been made, but they have resulted in little or no forecast improvement at the end of the precipitation assimilation. This is due to the nonlinearity of the model precipitation parameterization, the non-Gaussianity of precipitation variables, and the large and unknown model and observation errors. In this study, we investigate the assimilation of global large-scale satellite precipitation using the local ensemble transform Kalman filter (LETKF). The LETKF does not require linearization of the model, and it can improve all model variables by giving higher weights in the analysis to ensemble members with better precipitation, so that the model will "remember" the assimilation changes during the forecasts. Gaussian transformations of precipitation are applied to both model background precipitation and observed precipitation, which not only makes the error distributions more Gaussian, but also removes the amplitude-dependent biases between the model and the observations. In addition, several quality control criteria are designed to reject precipitation observations that are not useful for the assimilation. Our ideas are tested in both an idealized system and a realistic system. In the former, observing system simulation experiments (OSSEs) are conducted with a simplified general circulation model; in the latter, the TRMM Multisatellite Precipitation Analysis (TMPA) data are assimilated into a low-resolution version of the NCEP Global Forecasting System (GFS). Positive results are obtained in both systems, showing that both the analyses and the 5-day forecasts are improved by the effective assimilation of precipitation. We also demonstrate how to use the ensemble forecast sensitivity to observations (EFSO) to analyze the effectiveness of precipitation assimilation and provide guidance for determining appropriate quality control. These results are very promising for the direct assimilation of
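
    One common way to realize the Gaussian transformation mentioned above is quantile mapping of precipitation amounts onto a standard normal distribution, sketched below. The exact transform used in the study and its handling of zero-precipitation values may differ; the gamma-distributed inputs are synthetic.

```python
# Sketch of a Gaussian transform of precipitation via quantile (empirical CDF) mapping:
# each value is replaced by the standard-normal quantile of its empirical CDF rank.
# Applying the same recipe to model and observed precipitation also removes
# amplitude-dependent biases, as described in the abstract.
import numpy as np
from scipy.stats import norm, rankdata

rng = np.random.default_rng(6)
precip_obs = rng.gamma(shape=0.5, scale=4.0, size=1000)      # skewed, non-Gaussian amounts
precip_model = rng.gamma(shape=0.7, scale=3.0, size=1000)    # biased model climatology

def gaussian_transform(x):
    ranks = rankdata(x, method="average")
    cdf = ranks / (len(x) + 1.0)                             # keep CDF strictly inside (0, 1)
    return norm.ppf(cdf)

obs_g = gaussian_transform(precip_obs)
model_g = gaussian_transform(precip_model)
print("transformed obs mean/std:   %.2f %.2f" % (obs_g.mean(), obs_g.std()))
print("transformed model mean/std: %.2f %.2f" % (model_g.mean(), model_g.std()))
```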

  11. State of the Art in Large-Scale Soil Moisture Monitoring

    NASA Technical Reports Server (NTRS)

    Ochsner, Tyson E.; Cosh, Michael Harold; Cuenca, Richard H.; Dorigo, Wouter; Draper, Clara S.; Hagimoto, Yutaka; Kerr, Yan H.; Larson, Kristine M.; Njoku, Eni Gerald; Small, Eric E.; Zreda, Marek G.

    2013-01-01

    Soil moisture is an essential climate variable influencing land atmosphere interactions, an essential hydrologic variable impacting rainfall runoff processes, an essential ecological variable regulating net ecosystem exchange, and an essential agricultural variable constraining food security. Large-scale soil moisture monitoring has advanced in recent years creating opportunities to transform scientific understanding of soil moisture and related processes. These advances are being driven by researchers from a broad range of disciplines, but this complicates collaboration and communication. For some applications, the science required to utilize large-scale soil moisture data is poorly developed. In this review, we describe the state of the art in large-scale soil moisture monitoring and identify some critical needs for research to optimize the use of increasingly available soil moisture data. We review representative examples of 1) emerging in situ and proximal sensing techniques, 2) dedicated soil moisture remote sensing missions, 3) soil moisture monitoring networks, and 4) applications of large-scale soil moisture measurements. Significant near-term progress seems possible in the use of large-scale soil moisture data for drought monitoring. Assimilation of soil moisture data for meteorological or hydrologic forecasting also shows promise, but significant challenges related to model structures and model errors remain. Little progress has been made yet in the use of large-scale soil moisture observations within the context of ecological or agricultural modeling. Opportunities abound to advance the science and practice of large-scale soil moisture monitoring for the sake of improved Earth system monitoring, modeling, and forecasting.

  12. Cytology of DNA Replication Reveals Dynamic Plasticity of Large-Scale Chromatin Fibers.

    PubMed

    Deng, Xiang; Zhironkina, Oxana A; Cherepanynets, Varvara D; Strelkova, Olga S; Kireev, Igor I; Belmont, Andrew S

    2016-09-26

    In higher eukaryotic interphase nuclei, the 100- to >1,000-fold linear compaction of chromatin is difficult to reconcile with its function as a template for transcription, replication, and repair. It is challenging to imagine how DNA and RNA polymerases with their associated molecular machinery would move along the DNA template without transient decondensation of observed large-scale chromatin "chromonema" fibers [1]. Transcription or "replication factory" models [2], in which polymerases remain fixed while DNA is reeled through, are similarly difficult to conceptualize without transient decondensation of these chromonema fibers. Here, we show how a dynamic plasticity of chromatin folding within large-scale chromatin fibers allows DNA replication to take place without significant changes in the global large-scale chromatin compaction or shape of these large-scale chromatin fibers. Time-lapse imaging of lac-operator-tagged chromosome regions shows no major change in the overall compaction of these chromosome regions during their DNA replication. Improved pulse-chase labeling of endogenous interphase chromosomes yields a model in which the global compaction and shape of large-Mbp chromatin domains remains largely invariant during DNA replication, with DNA within these domains undergoing significant movements and redistribution as they move into and then out of adjacent replication foci. In contrast to hierarchical folding models, this dynamic plasticity of large-scale chromatin organization explains how localized changes in DNA topology allow DNA replication to take place without an accompanying global unfolding of large-scale chromatin fibers while suggesting a possible mechanism for maintaining epigenetic programming of large-scale chromatin domains throughout DNA replication. PMID:27568589

  13. A Topology Visualization Early Warning Distribution Algorithm for Large-Scale Network Security Incidents

    PubMed Central

    He, Hui; Fan, Guotao; Ye, Jianwei; Zhang, Weizhe

    2013-01-01

    Research into early warning systems for large-scale network security incidents is of great significance: such a system can improve the network's emergency response capabilities, mitigate the damage of cyber attacks, and strengthen the system's ability to counterattack. This paper presents a comprehensive early warning system that combines active measurement and anomaly detection, focusing on its key visualization algorithms and technology. The planar visualization of the large-scale network is realized with a divide-and-conquer approach. First, the topology of the large-scale network is divided into small-scale networks by the MLkP/CR algorithm. Second, a subgraph planar visualization algorithm is applied to each small-scale network. Finally, the small-scale topologies are combined into a single topology by an automatic distribution algorithm based on force analysis. Because the algorithm transforms the large-scale topology visualization problem into a series of small-scale visualization and distribution problems, it offers high parallelism and can handle the display of ultra-large-scale network topologies. PMID:24191145
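
    As a rough illustration of the divide-and-conquer layout idea described above, the sketch below partitions a graph, lays out each sub-network independently, and places the sub-layouts with a force-directed pass on the coarse graph of partitions. networkx's modularity communities and spring layout stand in for the MLkP/CR partitioner and the paper's own distribution algorithm, so this is an analogy rather than a reproduction.

        import networkx as nx
        from networkx.algorithms import community

        def divide_and_conquer_layout(G, sub_scale=1.0, coarse_scale=10.0):
            """Partition G, lay out each sub-network on its own, then place the
            sub-layouts using a force-directed layout of the coarse partition graph."""
            parts = list(community.greedy_modularity_communities(G))
            index = {v: i for i, part in enumerate(parts) for v in part}
            coarse = nx.Graph()
            coarse.add_nodes_from(range(len(parts)))
            for u, v in G.edges():
                if index[u] != index[v]:
                    coarse.add_edge(index[u], index[v])
            centers = nx.spring_layout(coarse, scale=coarse_scale, seed=1)
            pos = {}
            for i, part in enumerate(parts):
                sub = nx.spring_layout(G.subgraph(part), scale=sub_scale, seed=1)
                for v, (x, y) in sub.items():
                    pos[v] = (centers[i][0] + x, centers[i][1] + y)
            return pos

        pos = divide_and_conquer_layout(nx.random_geometric_graph(300, 0.1, seed=2))
        print(len(pos), "node positions")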

  15. A topology visualization early warning distribution algorithm for large-scale network security incidents.

    PubMed

    He, Hui; Fan, Guotao; Ye, Jianwei; Zhang, Weizhe

    2013-01-01

    Research into early warning systems for large-scale network security incidents is of great significance: such a system can improve the network's emergency response capabilities, mitigate the damage of cyber attacks, and strengthen the system's ability to counterattack. This paper presents a comprehensive early warning system that combines active measurement and anomaly detection, focusing on its key visualization algorithms and technology. The planar visualization of the large-scale network is realized with a divide-and-conquer approach. First, the topology of the large-scale network is divided into small-scale networks by the MLkP/CR algorithm. Second, a subgraph planar visualization algorithm is applied to each small-scale network. Finally, the small-scale topologies are combined into a single topology by an automatic distribution algorithm based on force analysis. Because the algorithm transforms the large-scale topology visualization problem into a series of small-scale visualization and distribution problems, it offers high parallelism and can handle the display of ultra-large-scale network topologies. PMID:24191145

  16. Combining p-values in large-scale genomics experiments.

    PubMed

    Zaykin, Dmitri V; Zhivotovsky, Lev A; Czika, Wendy; Shao, Susan; Wolfinger, Russell D

    2007-01-01

    In large-scale genomics experiments involving thousands of statistical tests, such as association scans and microarray expression experiments, a key question is: Which of the L tests represent true associations (TAs)? The traditional way to control false findings is via individual adjustments. In the presence of multiple TAs, p-value combination methods offer certain advantages. Both Fisher's and Lancaster's combination methods use an inverse gamma transformation. We identify the relation of the shape parameter of that distribution to the implicit threshold value; p-values below that threshold are favored by the inverse gamma method (GM). We explore this feature to improve power over Fisher's method when L is large and the number of TAs is moderate. However, the improvement in power provided by combination methods is at the expense of a weaker claim made upon rejection of the null hypothesis - that there are some TAs among the L tests. Thus, GM remains a global test. To allow a stronger claim about a subset of p-values that is smaller than L, we investigate two methods with an explicit truncation: the rank truncated product method (RTP) that combines the first K-ordered p-values, and the truncated product method (TPM) that combines p-values that are smaller than a specified threshold. We conclude that TPM allows claims to be made about subsets of p-values, while the claim of the RTP is, like GM, more appropriately about all L tests. GM gives somewhat higher power than TPM, RTP, Fisher, and Simes methods across a range of simulations. PMID:17879330
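
    The combination statistics discussed above are easy to state concretely. The sketch below evaluates Fisher's statistic, the truncated product method (TPM, p-values below a threshold tau) and the rank truncated product (RTP, the K smallest p-values); for brevity the TPM/RTP null distributions are approximated by Monte Carlo rather than the closed-form expressions in the paper, so the numbers are illustrative only.

        import numpy as np
        from scipy import stats

        def fisher_p(pvals):
            """Fisher's method: -2*sum(log p) ~ chi-square with 2L d.o.f. under the null."""
            stat = -2.0 * np.sum(np.log(pvals))
            return stats.chi2.sf(stat, df=2 * len(pvals))

        def tpm_stat(pvals, tau=0.05):
            """Log of the truncated product: combine only p-values <= tau."""
            trunc = pvals[pvals <= tau]
            return np.sum(np.log(trunc)) if trunc.size else 0.0

        def rtp_stat(pvals, k=10):
            """Log of the rank truncated product: combine the k smallest p-values."""
            return np.sum(np.log(np.sort(pvals)[:k]))

        def mc_pvalue(stat_fn, observed, L, n_sim=5000, seed=0):
            """Monte Carlo p-value under the global null (smaller statistic = more extreme)."""
            rng = np.random.default_rng(seed)
            null = np.array([stat_fn(rng.uniform(size=L)) for _ in range(n_sim)])
            return (np.sum(null <= observed) + 1) / (n_sim + 1)

        L = 1000
        rng = np.random.default_rng(1)
        p = rng.uniform(size=L)
        p[:20] = rng.uniform(0.0, 1e-3, size=20)      # a handful of true associations
        print("Fisher:", fisher_p(p))
        print("TPM   :", mc_pvalue(tpm_stat, tpm_stat(p), L))
        print("RTP   :", mc_pvalue(rtp_stat, rtp_stat(p), L))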

  17. Combining p-values in large scale genomics experiments

    PubMed Central

    Zaykin, Dmitri V.; Zhivotovsky, Lev A.; Czika, Wendy; Shao, Susan; Wolfinger, Russell D.

    2008-01-01

    Summary In large-scale genomics experiments involving thousands of statistical tests, such as association scans and microarray expression experiments, a key question is: Which of the L tests represent true associations (TAs)? The traditional way to control false findings is via individual adjustments. In the presence of multiple TAs, p-value combination methods offer certain advantages. Both Fisher’s and Lancaster’s combination methods use an inverse gamma transformation. We identify the relation of the shape parameter of that distribution to the implicit threshold value; p-values below that threshold are favored by the inverse gamma method (GM). We explore this feature to improve power over Fisher’s method when L is large and the number of TAs is moderate. However, the improvement in power provided by combination methods is at the expense of a weaker claim made upon rejection of the null hypothesis – that there are some TAs among the L tests. Thus, GM remains a global test. To allow a stronger claim about a subset of p-values that is smaller than L, we investigate two methods with an explicit truncation: the rank truncated product method (RTP) that combines the first K ordered p-values, and the truncated product method (TPM) that combines p-values that are smaller than a specified threshold. We conclude that TPM allows claims to be made about subsets of p-values, while the claim of the RTP is, like GM, more appropriately about all L tests. GM gives somewhat higher power than TPM, RTP, Fisher, and Simes methods across a range of simulations. PMID:17879330

  18. Using the High-Level Based Program Interface to Facilitate the Large Scale Scientific Computing

    PubMed Central

    Shang, Yizi; Shang, Ling; Gao, Chuanchang; Lu, Guiming; Ye, Yuntao; Jia, Dongdong

    2014-01-01

    This paper presents further research on facilitating large-scale scientific computing on grid and desktop-grid platforms. The issues examined include the programming method, the overhead of middleware based on a high-level program interface, and anticipatory data migration. The block-based Gauss–Jordan algorithm, as a real example of large-scale scientific computing, is used to evaluate these issues. The results show that the high-level program interface makes complex scientific applications on large-scale platforms easier to build, although a small overhead is unavoidable. In addition, the anticipatory data migration mechanism can improve the efficiency of platforms that must process data-intensive scientific applications. PMID:24574931
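
    The block-based Gauss–Jordan inversion used as the test application has a compact serial form, shown below. It only illustrates the block structure that the grid middleware would distribute; there is no pivoting, so well-conditioned diagonal blocks are assumed.

        import numpy as np

        def block_gauss_jordan_inverse(A, bs):
            """Invert A by block Gauss-Jordan elimination on the augmented
            matrix [A | I], one block column at a time (no pivoting)."""
            n = A.shape[0]
            assert n % bs == 0, "matrix size must be a multiple of the block size"
            M = np.hstack([A.astype(float), np.eye(n)])
            for k in range(0, n, bs):
                r = slice(k, k + bs)
                # scale the pivot block row: multiply by the inverse of the diagonal block
                M[r, :] = np.linalg.solve(M[r, k:k + bs], M[r, :])
                # eliminate the block column from all other block rows
                for i in range(0, n, bs):
                    if i != k:
                        s = slice(i, i + bs)
                        M[s, :] -= M[s, k:k + bs] @ M[r, :]
            return M[:, n:]

        rng = np.random.default_rng(0)
        A = rng.standard_normal((8, 8)) + 8.0 * np.eye(8)   # keep diagonal blocks well conditioned
        Ainv = block_gauss_jordan_inverse(A, bs=2)
        print(np.allclose(A @ Ainv, np.eye(8)))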

  19. EINSTEIN'S SIGNATURE IN COSMOLOGICAL LARGE-SCALE STRUCTURE

    SciTech Connect

    Bruni, Marco; Hidalgo, Juan Carlos; Wands, David

    2014-10-10

    We show how the nonlinearity of general relativity generates a characteristic non-Gaussian signal in cosmological large-scale structure that we calculate at all perturbative orders in a large-scale limit. Newtonian gravity and general relativity provide complementary theoretical frameworks for modeling large-scale structure in ΛCDM cosmology; a relativistic approach is essential to determine initial conditions, which can then be used in Newtonian simulations studying the nonlinear evolution of the matter density. Most inflationary models in the very early universe predict an almost Gaussian distribution for the primordial metric perturbation, ζ. However, we argue that it is the Ricci curvature of comoving-orthogonal spatial hypersurfaces, R, that drives structure formation at large scales. We show how the nonlinear relation between the spatial curvature, R, and the metric perturbation, ζ, translates into a specific non-Gaussian contribution to the initial comoving matter density that we calculate for the simple case of an initially Gaussian ζ. Our analysis shows the nonlinear signature of Einstein's gravity in large-scale structure.

  20. Reflections on the Increasing Relevance of Large-Scale Professional Development

    ERIC Educational Resources Information Center

    Krainer, Konrad

    2015-01-01

    This paper focuses on commonalities and differences of three approaches to large-scale professional development (PD) in mathematics education, based on two studies from Germany and one from the United States of America. All three initiatives break new ground in improving PD targeted at educating "multipliers", and in all three cases…

  1. Inquiry-Based Educational Design for Large-Scale High School Astronomy Projects Using Real Telescopes

    ERIC Educational Resources Information Center

    Fitzgerald, Michael; McKinnon, David H.; Danaia, Lena

    2015-01-01

    In this paper, we outline the theory behind the educational design used to implement a large-scale high school astronomy education project. This design was created in response to the realization of ineffective educational design in the initial early stages of the project. The new design follows an iterative improvement model where the materials…

  2. Learning from decoys to improve the sensitivity and specificity of proteomics database search results.

    PubMed

    Yadav, Amit Kumar; Kumar, Dhirendra; Dash, Debasis

    2012-01-01

    The statistical validation of database search results is a complex issue in bottom-up proteomics. The correct and incorrect peptide spectrum match (PSM) scores overlap significantly, making an accurate assessment of true peptide matches challenging. Since complete separation between true and false hits is practically never achieved, there is a need for better methods and rescoring algorithms to improve upon the primary database search results. Here we describe the calibration and False Discovery Rate (FDR) estimation of database search scores through a dynamic FDR calculation method, FlexiFDR, which increases both the sensitivity and specificity of search results. By modelling a simple linear regression on the decoy hits for different charge states, the method maximized the number of true positives and reduced the number of false negatives in several standard datasets of varying complexity (18-mix, 49-mix, 200-mix) and a few complex datasets (E. coli and Yeast) obtained from a wide variety of MS platforms. The net positive gain for correct spectral and peptide identifications was up to 14.81% and 6.2%, respectively. The approach is applicable to different search methodologies: separate as well as concatenated database searches, high mass accuracy, and semi-tryptic and modification searches. FlexiFDR was also applied to Mascot results and again improved performance. We have shown that an appropriate threshold learnt from decoys can be very effective in improving database search results. FlexiFDR adapts itself to different instruments, data types and MS platforms. It learns from the decoy hits and sets a flexible threshold that automatically aligns itself to the underlying variables of data quality and size.
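
    FlexiFDR itself is not reproduced here, but the underlying target-decoy idea can be sketched generically: estimate the FDR at each candidate score threshold as the ratio of decoy to target hits above it, and learn a separate threshold per precursor charge state. All names and the toy data below are illustrative.

        import numpy as np

        def score_threshold(scores, is_decoy, fdr=0.01):
            """Lowest score cutoff whose estimated FDR (#decoys / #targets above
            the cutoff) does not exceed the requested level."""
            order = np.argsort(scores)[::-1]            # best score first
            decoys = np.cumsum(is_decoy[order])
            targets = np.cumsum(~is_decoy[order])
            est_fdr = decoys / np.maximum(targets, 1)
            ok = np.where(est_fdr <= fdr)[0]
            return scores[order[ok[-1]]] if ok.size else np.inf

        def per_charge_thresholds(scores, is_decoy, charges, fdr=0.01):
            """Learn a separate threshold for each precursor charge state."""
            return {int(z): score_threshold(scores[charges == z], is_decoy[charges == z], fdr)
                    for z in np.unique(charges)}

        # toy PSM scores: decoys score lower than targets on average
        rng = np.random.default_rng(0)
        scores = np.concatenate([rng.normal(30, 8, 5000), rng.normal(15, 8, 5000)])
        is_decoy = np.concatenate([np.zeros(5000, bool), np.ones(5000, bool)])
        charges = rng.choice([2, 3, 4], size=10000)
        print(per_charge_thresholds(scores, is_decoy, charges, fdr=0.01))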

  3. Report of the Workshop on Petascale Systems Integration for LargeScale Facilities

    SciTech Connect

    Kramer, William T.C.; Walter, Howard; New, Gary; Engle, Tom; Pennington, Rob; Comes, Brad; Bland, Buddy; Tomlison, Bob; Kasdorf, Jim; Skinner, David; Regimbal, Kevin

    2007-10-01

    There are significant issues regarding large-scale system integration that are not being addressed in other forums such as current research portfolios or vendor user groups. Unfortunately, the issues in the area of large-scale system integration often fall into a netherworld: not research, not facilities, not procurement, not operations, not user services. Taken together, these issues, along with the impact of sub-optimal integration technology, mean that the time required to deploy, integrate and stabilize a large-scale system may consume up to 20 percent of the useful life of such systems. Improving the state of the art for large-scale system integration has the potential to increase the scientific productivity of these systems. Sites have significant expertise, but there are no easy ways to leverage this expertise among them. Many issues inhibit the sharing of information, including available time and effort, as well as issues with sharing proprietary information. Vendors also benefit in the long run from the solutions to issues detected during site testing and integration. There is a great deal of enthusiasm for making large-scale system integration a full-fledged partner along with the other major thrusts supported by funding agencies in the definition, design, and use of petascale systems. Integration technology and issues should have a full 'seat at the table' as petascale and exascale initiatives and programs are planned. The workshop attendees identified a wide range of issues and suggested paths forward. Pursuing these with funding opportunities and innovation offers the opportunity to dramatically improve the state of large-scale system integration.

  4. Do Large-Scale Topological Features Correlate with Flare Properties?

    NASA Astrophysics Data System (ADS)

    DeRosa, Marc L.; Barnes, Graham

    2016-05-01

    In this study, we aim to identify whether the presence or absence of particular topological features in the large-scale coronal magnetic field is correlated with whether a flare is confined or eruptive. To this end, we first determine the locations of null points, spine lines, and separatrix surfaces within the potential fields associated with the locations of several strong flares from the current and previous sunspot cycles. We then validate the topological skeletons against large-scale features in observations, such as the locations of streamers and pseudostreamers in coronagraph images. Finally, we characterize the topological environment in the vicinity of the flaring active regions and identify the trends involving their large-scale topologies and the properties of the associated flares.

  5. Acoustic Studies of the Large Scale Ocean Circulation

    NASA Technical Reports Server (NTRS)

    Menemenlis, Dimitris

    1999-01-01

    Detailed knowledge of ocean circulation and its transport properties is prerequisite to an understanding of the earth's climate and of important biological and chemical cycles. Results from two recent experiments, THETIS-2 in the Western Mediterranean and ATOC in the North Pacific, illustrate the use of ocean acoustic tomography for studies of the large scale circulation. The attraction of acoustic tomography is its ability to sample and average the large-scale oceanic thermal structure, synoptically, along several sections, and at regular intervals. In both studies, the acoustic data are compared to, and then combined with, general circulation models, meteorological analyses, satellite altimetry, and direct measurements from ships. Both studies provide complete regional descriptions of the time-evolving, three-dimensional, large scale circulation, albeit with large uncertainties. The studies raise serious issues about existing ocean observing capability and provide guidelines for future efforts.

  6. A relativistic signature in large-scale structure

    NASA Astrophysics Data System (ADS)

    Bartolo, Nicola; Bertacca, Daniele; Bruni, Marco; Koyama, Kazuya; Maartens, Roy; Matarrese, Sabino; Sasaki, Misao; Verde, Licia; Wands, David

    2016-09-01

    In General Relativity, the constraint equation relating metric and density perturbations is inherently nonlinear, leading to an effective non-Gaussianity in the dark matter density field on large scales, even if the primordial metric perturbation is Gaussian. Intrinsic non-Gaussianity in the large-scale dark matter overdensity in GR is real and physical. However, the variance smoothed on a local physical scale is not correlated with the large-scale curvature perturbation, so that there is no relativistic signature in the galaxy bias when using the simplest model of bias. It is an open question whether the observable mass proxies such as luminosity or weak lensing correspond directly to the physical mass in the simple halo bias model. If not, there may be observables that encode this relativistic signature.

  7. Coupling between convection and large-scale circulation

    NASA Astrophysics Data System (ADS)

    Becker, T.; Stevens, B. B.; Hohenegger, C.

    2014-12-01

    The ultimate drivers of convection - radiation, tropospheric humidity and surface fluxes - are altered both by the large-scale circulation and by convection itself. A quantity to which all drivers of convection contribute is the moist static energy or, in budget form, the gross moist stability. Therefore, a variance analysis of the moist static energy budget in radiative-convective equilibrium helps in understanding the interaction of precipitating convection and the large-scale environment. In addition, this method provides insights concerning the impact of convective aggregation on this coupling. As a starting point, the interaction is analyzed with a general circulation model, but a model intercomparison study using a hierarchy of models is planned. Effective coupling parameters will be derived from cloud-resolving models and these will in turn be related to assumptions used to parameterize convection in large-scale models.
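
    For reference, the moist static energy mentioned above is h = c_p*T + g*z + L_v*q. The snippet below evaluates h and the across-column variance of its mass-weighted vertical integral, which is the kind of quantity such a variance analysis starts from; it is not the study's full budget decomposition.

        import numpy as np

        CP = 1004.0    # specific heat of dry air, J kg-1 K-1
        G = 9.81       # gravity, m s-2
        LV = 2.5e6     # latent heat of vaporisation, J kg-1

        def moist_static_energy(T, z, q):
            """h = cp*T + g*z + Lv*q; T in K, z in m, q in kg/kg; returns J/kg."""
            return CP * T + G * z + LV * q

        def column_mse_variance(T, z, q, dp):
            """Variance across columns of the mass-weighted vertical integral of h.
            Arrays are shaped (levels, columns); dp is layer thickness in Pa."""
            h = moist_static_energy(T, z, q)
            column_integral = np.sum(h * dp, axis=0) / G     # J m-2
            return np.var(column_integral)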

  8. Human pescadillo induces large-scale chromatin unfolding.

    PubMed

    Zhang, Hao; Fang, Yan; Huang, Cuifen; Yang, Xiao; Ye, Qinong

    2005-06-01

    The human pescadillo gene encodes a protein with a BRCT domain. Pescadillo plays an important role in DNA synthesis, cell proliferation and transformation. Since BRCT domains have been shown to induce large-scale chromatin unfolding, we tested the role of Pescadillo in the regulation of large-scale chromatin unfolding. To this end, we isolated the coding region of Pescadillo from human mammary MCF10A cells. Compared with the reported sequence, the isolated Pescadillo contains an in-frame deletion from amino acid 580 to 582. Targeting Pescadillo to an amplified, lac operator-containing chromosome region in the mammalian genome results in large-scale chromatin decondensation. This unfolding activity maps to the BRCT domain of Pescadillo. These data provide a new clue to understanding the vital role of Pescadillo.

  9. Magnetic Helicity and Large Scale Magnetic Fields: A Primer

    NASA Astrophysics Data System (ADS)

    Blackman, Eric G.

    2015-05-01

    Magnetic fields of laboratory, planetary, stellar, and galactic plasmas commonly exhibit significant order on large temporal or spatial scales compared to the otherwise random motions within the hosting system. Such ordered fields can be measured in the case of planets, stars, and galaxies, or inferred indirectly by the action of their dynamical influence, such as jets. Whether large scale fields are amplified in situ or a remnant from previous stages of an object's history is often debated for objects without a definitive magnetic activity cycle. Magnetic helicity, a measure of twist and linkage of magnetic field lines, is a unifying tool for understanding large scale field evolution for both mechanisms of origin. Its importance stems from its two basic properties: (1) magnetic helicity is typically better conserved than magnetic energy; and (2) the magnetic energy associated with a fixed amount of magnetic helicity is minimized when the system relaxes this helical structure to the largest scale available. Here I discuss how magnetic helicity has come to help us understand the saturation of and sustenance of large scale dynamos, the need for either local or global helicity fluxes to avoid dynamo quenching, and the associated observational consequences. I also discuss how magnetic helicity acts as a hindrance to turbulent diffusion of large scale fields, and thus a helper for fossil remnant large scale field origin models in some contexts. I briefly discuss the connection between large scale fields and accretion disk theory as well. The goal here is to provide a conceptual primer to help the reader efficiently penetrate the literature.

  10. Clearing and Labeling Techniques for Large-Scale Biological Tissues

    PubMed Central

    Seo, Jinyoung; Choe, Minjin; Kim, Sung-Yon

    2016-01-01

    Clearing and labeling techniques for large-scale biological tissues enable simultaneous extraction of molecular and structural information with minimal disassembly of the sample, facilitating the integration of molecular, cellular and systems biology across different scales. Recent years have witnessed an explosive increase in the number of such methods and their applications, reflecting heightened interest in organ-wide clearing and labeling across many fields of biology and medicine. In this review, we provide an overview and comparison of existing clearing and labeling techniques and discuss challenges and opportunities in the investigations of large-scale biological systems. PMID:27239813

  11. Survey of decentralized control methods. [for large scale dynamic systems

    NASA Technical Reports Server (NTRS)

    Athans, M.

    1975-01-01

    An overview is presented of the types of problems that are being considered by control theorists in the area of dynamic large scale systems with emphasis on decentralized control strategies. Approaches that deal directly with decentralized decision making for large scale systems are discussed. It is shown that future advances in decentralized system theory are intimately connected with advances in the stochastic control problem with nonclassical information pattern. The basic assumptions and mathematical tools associated with the latter are summarized, and recommendations concerning future research are presented.

  12. Corridors Increase Plant Species Richness at Large Scales

    SciTech Connect

    Damschen, Ellen I.; Haddad, Nick M.; Orrock,John L.; Tewksbury, Joshua J.; Levey, Douglas J.

    2006-09-01

    Habitat fragmentation is one of the largest threats to biodiversity. Landscape corridors, which are hypothesized to reduce the negative consequences of fragmentation, have become common features of ecological management plans worldwide. Despite their popularity, there is little evidence documenting the effectiveness of corridors in preserving biodiversity at large scales. Using a large-scale replicated experiment, we showed that habitat patches connected by corridors retain more native plant species than do isolated patches, that this difference increases over time, and that corridors do not promote invasion by exotic species. Our results support the use of corridors in biodiversity conservation.

  13. Large-scale superfluid vortex rings at nonzero temperatures

    NASA Astrophysics Data System (ADS)

    Wacks, D. H.; Baggaley, A. W.; Barenghi, C. F.

    2014-12-01

    We numerically model experiments in which large-scale vortex rings—bundles of quantized vortex loops—are created in superfluid helium by a piston-cylinder arrangement. We show that the presence of a normal-fluid vortex ring together with the quantized vortices is essential to explain the coherence of these large-scale vortex structures at nonzero temperatures, as observed experimentally. Finally we argue that the interaction of superfluid and normal-fluid vortex bundles is relevant to recent investigations of superfluid turbulence.

  14. Developments in large-scale coastal flood hazard mapping

    NASA Astrophysics Data System (ADS)

    Vousdoukas, Michalis I.; Voukouvalas, Evangelos; Mentaschi, Lorenzo; Dottori, Francesco; Giardino, Alessio; Bouziotas, Dimitrios; Bianchi, Alessandra; Salamon, Peter; Feyen, Luc

    2016-08-01

    Coastal flooding related to marine extreme events has severe socioeconomic impacts, and even though the latter are projected to increase under the changing climate, there is a clear deficit of information and predictive capacity related to coastal flood mapping. The present contribution reports on efforts towards a new methodology for mapping coastal flood hazard at European scale, combining (i) the contribution of waves to the total water level; (ii) improved inundation modeling; and (iii) an open, physics-based framework which can be constantly upgraded, whenever new and more accurate data become available. Four inundation approaches of gradually increasing complexity and computational costs were evaluated in terms of their applicability to large-scale coastal flooding mapping: static inundation (SM); a semi-dynamic method, considering the water volume discharge over the dykes (VD); the flood intensity index approach (Iw); and the model LISFLOOD-FP (LFP). A validation test performed against observed flood extents during the Xynthia storm event showed that SM and VD can lead to an overestimation of flood extents by 232 and 209 %, while Iw and LFP showed satisfactory predictive skill. Application at pan-European scale for the present-day 100-year event confirmed that static approaches can overestimate flood extents by 56 % compared to LFP; however, Iw can deliver results of reasonable accuracy in cases when reduced computational costs are a priority. Moreover, omitting the wave contribution in the extreme total water level (TWL) can result in a ˜ 60 % underestimation of the flooded area. The present findings have implications for impact assessment studies, since combination of the estimated inundation maps with population exposure maps revealed differences in the estimated number of people affected within the 20-70 % range.
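
    The simplest of the four approaches, static inundation (SM), is essentially a "bathtub" calculation: flag cells lying below the extreme total water level and hydraulically connected to the sea. A generic sketch of that idea on a toy grid is given below; the paper's SM implementation may differ in detail.

        import numpy as np
        from scipy.ndimage import label

        def static_inundation(dem, twl, sea_mask):
            """Cells below the total water level (twl) that are hydraulically
            connected (4-connectivity) to a sea cell are marked as flooded."""
            below = dem <= twl
            labels, _ = label(below)
            wet = np.unique(labels[sea_mask & below])
            wet = wet[wet > 0]
            return np.isin(labels, wet) & ~sea_mask

        rng = np.random.default_rng(0)
        dem = np.cumsum(rng.uniform(0.0, 0.2, size=(100, 100)), axis=1)  # land rises inland
        sea = np.zeros((100, 100), dtype=bool)
        sea[:, 0] = True                                                 # sea along the left edge
        flooded = static_inundation(dem, twl=2.0, sea_mask=sea)
        print("flooded cells:", int(flooded.sum()))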

  15. Large-Scale Data Challenges in Future Power Grids

    SciTech Connect

    Yin, Jian; Sharma, Poorva; Gorton, Ian; Akyol, Bora A.

    2013-03-25

    This paper describes technical challenges in supporting large-scale real-time data analysis for future power grid systems and discusses various design options to address these challenges. Even though the existing U.S. power grid has served the nation remarkably well over the last 120 years, big changes are on the horizon. The widespread deployment of renewable generation, smart grid controls, energy storage, plug-in hybrids, and new conducting materials will require fundamental changes in the operational concepts and principal components. The whole system becomes highly dynamic and needs constant adjustments based on real-time data. Even though millions of sensors such as phasor measurement units (PMUs) and smart meters are being widely deployed, a data layer that can support this amount of data in real time is needed. Unlike the data fabric in cloud services, the data layer for smart grids must address some unique challenges. This layer must be scalable to support millions of sensors and a large number of diverse applications and still provide real-time guarantees. Moreover, the system needs to be highly reliable and highly secure because the power grid is a critical piece of infrastructure. No existing systems can satisfy all the requirements at the same time. We examine various design options. In particular, we explore the special characteristics of power grid data to meet both scalability and quality-of-service requirements. Our initial prototype can improve performance by orders of magnitude over existing general-purpose systems. The prototype was demonstrated with several use cases from PNNL’s FPGI and was shown to be able to integrate huge amounts of data from a large number of sensors and a diverse set of applications.

  16. Survey Design for Large-Scale, Unstructured Resistivity Surveys

    NASA Astrophysics Data System (ADS)

    Labrecque, D. J.; Casale, D.

    2009-12-01

    In this paper, we discuss the issues in designing data collection strategies for large-scale, poorly structured resistivity surveys. Existing or proposed applications for these types of surveys include carbon sequestration, enhanced oil recovery monitoring, monitoring of leachate from working or abandoned mines, and mineral surveys. Electrode locations are generally dictated by land access, utilities, roads, existing wells, etc. Classical arrays such as the Wenner array or dipole-dipole arrays are not applicable if the electrodes cannot be placed in quasi-regular lines or grids. A new, far more generalized strategy is needed for building data collection schemes. Following the approach of earlier two-dimensional (2-D) survey designs, the proposed method begins by defining a base array. In 2-D design, this base array is often a standard dipole-dipole array. For unstructured three-dimensional (3-D) design, determining this base array is a multi-step process. The first step is to determine a set of base dipoles with similar characteristics. For example, the base dipoles may consist of electrode pairs trending within 30 degrees of north and with a length between 100 and 250 m. These dipoles are then combined into a trial set of arrays. This trial set of arrays is reduced by applying a series of filters based on criteria such as the separation between the dipoles. Using the base array set, additional arrays are added and tested to determine the overall improvement in resolution and to determine an optimal set of arrays. Examples of the design process are shown for a proposed carbon sequestration monitoring system.
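
    The first steps of this strategy can be sketched directly from the description above: keep electrode pairs whose azimuth and length fall in the stated ranges as base dipoles, then pair them into trial arrays filtered by separation. The parameter values below follow the example in the abstract, while the electrode coordinates and the omission of the resolution-based optimisation are illustrative simplifications.

        import numpy as np
        from itertools import combinations

        def base_dipoles(xy, min_len=100.0, max_len=250.0, max_azimuth_dev=30.0):
            """Electrode pairs whose length and azimuth (relative to north)
            fall within the requested ranges."""
            keep = []
            for i, j in combinations(range(len(xy)), 2):
                d = xy[j] - xy[i]
                length = np.hypot(*d)
                azimuth = np.degrees(np.arctan2(d[0], d[1])) % 180.0   # 0 deg = north
                dev = min(azimuth, 180.0 - azimuth)
                if min_len <= length <= max_len and dev <= max_azimuth_dev:
                    keep.append((i, j))
            return keep

        def trial_arrays(dipoles, xy, min_sep=100.0, max_sep=1000.0):
            """Pair base dipoles into four-electrode arrays, filtered by the
            separation between dipole midpoints."""
            arrays = []
            for (a, b), (m, n) in combinations(dipoles, 2):
                if len({a, b, m, n}) < 4:
                    continue
                sep = np.hypot(*(xy[[a, b]].mean(axis=0) - xy[[m, n]].mean(axis=0)))
                if min_sep <= sep <= max_sep:
                    arrays.append((a, b, m, n))
            return arrays

        rng = np.random.default_rng(0)
        electrodes = rng.uniform(0.0, 2000.0, size=(40, 2))   # arbitrary (x, y) positions in metres
        dipoles = base_dipoles(electrodes)
        print(len(dipoles), "base dipoles,", len(trial_arrays(dipoles, electrodes)), "trial arrays")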

  17. Large-Scale Candidate Gene Analysis of HDL Particle Features

    PubMed Central

    Kaess, Bernhard M.; Tomaszewski, Maciej; Braund, Peter S.; Stark, Klaus; Rafelt, Suzanne; Fischer, Marcus; Hardwick, Robert; Nelson, Christopher P.; Debiec, Radoslaw; Huber, Fritz; Kremer, Werner; Kalbitzer, Hans Robert; Rose, Lynda M.; Chasman, Daniel I.; Hopewell, Jemma; Clarke, Robert; Burton, Paul R.; Tobin, Martin D.

    2011-01-01

    Background HDL cholesterol (HDL-C) is an established marker of cardiovascular risk with significant genetic determination. However, HDL particles are not homogenous, and refined HDL phenotyping may improve insight into regulation of HDL metabolism. We therefore assessed HDL particles by NMR spectroscopy and conducted a large-scale candidate gene association analysis. Methodology/Principal Findings We measured plasma HDL-C and determined mean HDL particle size and particle number by NMR spectroscopy in 2024 individuals from 512 British Caucasian families. Genotypes were 49,094 SNPs in >2,100 cardiometabolic candidate genes/loci as represented on the HumanCVD BeadChip version 2. False discovery rates (FDR) were calculated to account for multiple testing. Analyses on classical HDL-C revealed significant associations (FDR<0.05) only for CETP (cholesteryl ester transfer protein; lead SNP rs3764261: p = 5.6*10−15) and SGCD (sarcoglycan delta; rs6877118: p = 8.6*10−6). In contrast, analysis with HDL mean particle size yielded additional associations in LIPC (hepatic lipase; rs261332: p = 6.1*10−9), PLTP (phospholipid transfer protein, rs4810479: p = 1.7*10−8) and FBLN5 (fibulin-5; rs2246416: p = 6.2*10−6). The associations of SGCD and Fibulin-5 with HDL particle size could not be replicated in PROCARDIS (n = 3,078) and/or the Women's Genome Health Study (n = 23,170). Conclusions We show that refined HDL phenotyping by NMR spectroscopy can detect known genes of HDL metabolism better than analyses on HDL-C. PMID:21283740
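
    The FDR filtering step can be illustrated with the standard Benjamini-Hochberg procedure, sketched below on a toy set of roughly 49,000 p-values; the study's exact FDR estimator may differ.

        import numpy as np

        def benjamini_hochberg(pvals, alpha=0.05):
            """Boolean mask of tests declared significant at FDR <= alpha."""
            p = np.asarray(pvals, dtype=float)
            order = np.argsort(p)
            m = p.size
            thresholds = alpha * np.arange(1, m + 1) / m
            passed = p[order] <= thresholds
            k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
            significant = np.zeros(m, dtype=bool)
            significant[order[:k]] = True
            return significant

        rng = np.random.default_rng(0)
        pvals = np.concatenate([rng.uniform(size=49000), rng.uniform(0.0, 1e-6, size=94)])
        print("tests passing FDR<0.05:", int(benjamini_hochberg(pvals).sum()))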

  18. Numerical Technology for Large-Scale Computational Electromagnetics

    SciTech Connect

    Sharpe, R; Champagne, N; White, D; Stowell, M; Adams, R

    2003-01-30

    The key bottleneck of implicit computational electromagnetics tools for large complex geometries is the solution of the resulting linear system of equations. The goal of this effort was to research and develop critical numerical technology that alleviates this bottleneck for large-scale computational electromagnetics (CEM). The mathematical operators and numerical formulations used in this arena of CEM yield linear equations that are complex valued, unstructured, and indefinite. Also, simultaneously applying multiple mathematical modeling formulations to different portions of a complex problem (hybrid formulations) results in a mixed structure linear system, further increasing the computational difficulty. Typically, these hybrid linear systems are solved using a direct solution method, which was acceptable for Cray-class machines but does not scale adequately for ASCI-class machines. Additionally, LLNL's previously existing linear solvers were not well suited for the linear systems that are created by hybrid implicit CEM codes. Hence, a new approach was required to make effective use of ASCI-class computing platforms and to enable the next generation design capabilities. Multiple approaches were investigated, including the latest sparse-direct methods developed by our ASCI collaborators. In addition, approaches that combine domain decomposition (or matrix partitioning) with general-purpose iterative methods and special purpose pre-conditioners were investigated. Special-purpose pre-conditioners that take advantage of the structure of the matrix were adapted and developed based on intimate knowledge of the matrix properties. Finally, new operator formulations were developed that radically improve the conditioning of the resulting linear systems thus greatly reducing solution time. The goal was to enable the solution of CEM problems that are 10 to 100 times larger than our previous capability.
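
    As a generic illustration of one ingredient named above, namely a general-purpose iterative method wrapped around a preconditioner, the sketch below applies GMRES with an incomplete-LU preconditioner (via SciPy) to a small random complex sparse system. This is standard practice, not the special-purpose preconditioners or hybrid formulations developed in the report.

        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import gmres, spilu, LinearOperator

        rng = np.random.default_rng(0)
        n = 2000
        # small random complex sparse system, made strongly diagonally dominant
        # so that the illustration converges easily
        A = (sp.random(n, n, density=1e-3, random_state=0)
             + 1j * sp.random(n, n, density=1e-3, random_state=1))
        A = (A + (2.0 + 0.5j) * sp.eye(n)).tocsc()
        b = rng.standard_normal(n) + 1j * rng.standard_normal(n)

        ilu = spilu(A, drop_tol=1e-4, fill_factor=10)                # incomplete LU factorisation
        M = LinearOperator(A.shape, matvec=ilu.solve, dtype=complex) # preconditioner as an operator
        x, info = gmres(A, b, M=M)                                   # preconditioned GMRES
        print("converged:", info == 0, "residual:", np.linalg.norm(A @ x - b))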

  19. Large Scale Electronic Structure Calculations using Quantum Chemistry Methods

    NASA Astrophysics Data System (ADS)

    Scuseria, Gustavo E.

    1998-03-01

    This talk will address our recent efforts in developing fast, linear-scaling electronic structure methods for large-scale applications. Of special importance is our fast multipole method (FMM) (M. C. Strain, G. E. Scuseria, and M. J. Frisch, Science 271, 51 (1996)) for achieving linear scaling for the quantum Coulomb problem (GvFMM), the traditional bottleneck in quantum chemistry calculations based on Gaussian orbitals. Fast quadratures (R. E. Stratmann, G. E. Scuseria, and M. J. Frisch, Chem. Phys. Lett. 257, 213 (1996)), combined with methods that avoid the Hamiltonian diagonalization (J. M. Millam and G. E. Scuseria, J. Chem. Phys. 106, 5569 (1997)), have resulted in density functional theory (DFT) programs that can be applied to systems containing many hundreds of atoms and, depending on computational resources or level of theory, to many thousands of atoms (A. D. Daniels, J. M. Millam and G. E. Scuseria, J. Chem. Phys. 107, 425 (1997)). Three solutions for the diagonalization bottleneck will be analyzed and compared: a conjugate gradient density matrix search (CGDMS), a Hamiltonian polynomial expansion of the density matrix, and a pseudo-diagonalization method. Besides DFT, our near-field exchange method (J. C. Burant, G. E. Scuseria, and M. J. Frisch, J. Chem. Phys. 105, 8969 (1996)) for linear-scaling Hartree-Fock calculations will be discussed. Based on these improved capabilities, we have also developed programs to obtain vibrational frequencies (via analytic energy second derivatives) and excitation energies (through time-dependent DFT) of large molecules like porphyn or C_70. Our GvFMM has been extended to periodic systems (K. N. Kudin and G. E. Scuseria, Chem. Phys. Lett., in press) and progress towards a Gaussian-based DFT and HF program for polymers and solids will be reported. Last, we will discuss our progress on a Laplace-transformed O(N^2) second-order perturbation theory (MP2) method.
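
    One diagonalization-free route mentioned above is a polynomial expansion of the density matrix. The sketch below uses the textbook McWeeny purification P <- 3P^2 - 2P^3 after mapping the Hamiltonian spectrum into [0, 1]; it shows the idea only and is not the specific expansion used in the talk.

        import numpy as np

        def density_matrix(H, mu, tol=1e-10, max_iter=100):
            """Density matrix from H by McWeeny purification P <- 3P^2 - 2P^3,
            starting from a linear map of H whose spectrum lies in [0, 1]
            (Gershgorin bounds give the spectral range without diagonalisation)."""
            n = H.shape[0]
            radii = np.sum(np.abs(H), axis=1) - np.abs(np.diag(H))
            emin = np.min(np.diag(H) - radii)
            emax = np.max(np.diag(H) + radii)
            alpha = 0.5 / max(emax - mu, mu - emin)
            P = 0.5 * np.eye(n) + alpha * (mu * np.eye(n) - H)
            for _ in range(max_iter):
                P2 = P @ P
                new = 3.0 * P2 - 2.0 * (P2 @ P)
                if np.linalg.norm(new - P) < tol:
                    P = new
                    break
                P = new
            return P

        # toy tight-binding chain at half filling (chemical potential mu = 0)
        H = -np.eye(20, k=1) - np.eye(20, k=-1)
        P = density_matrix(H, mu=0.0)
        print("electrons:", round(P.trace(), 6), "idempotency error:", np.linalg.norm(P @ P - P))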

  1. Coupled binary embedding for large-scale image retrieval.

    PubMed

    Zheng, Liang; Wang, Shengjin; Tian, Qi

    2014-08-01

    Visual matching is a crucial step in image retrieval based on the bag-of-words (BoW) model. In the baseline method, two keypoints are considered as a matching pair if their SIFT descriptors are quantized to the same visual word. However, the SIFT visual word has two limitations. First, it loses most of its discriminative power during quantization. Second, SIFT only describes the local texture feature. Both drawbacks impair the discriminative power of the BoW model and lead to false positive matches. To tackle this problem, this paper proposes to embed multiple binary features at the indexing level. To model correlation between features, a multi-IDF scheme is introduced, through which different binary features are coupled into the inverted file. We show that matching verification methods based on binary features, such as Hamming embedding, can be effectively incorporated in our framework. As an extension, we explore the fusion of a binary color feature into image retrieval. The joint integration of the SIFT visual word and binary features greatly enhances the precision of visual matching, reducing the impact of false positive matches. Our method is evaluated through extensive experiments on four benchmark datasets (Ukbench, Holidays, DupImage, and MIR Flickr 1M). We show that our method significantly improves the baseline approach. In addition, large-scale experiments indicate that the proposed method requires acceptable memory usage and query time compared with other approaches. Further, when the global color feature is integrated, our method yields competitive performance with the state of the art. PMID:24951697
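
    The verification idea behind Hamming embedding can be sketched in a few lines: features quantized to the same visual word are accepted as matches only if the Hamming distance between their binary signatures is small. The code below shows that test on toy data; the coupled multi-IDF inverted file of the paper is not reproduced.

        import numpy as np

        def hamming(a, b):
            """Hamming distance between two equal-length binary code arrays."""
            return int(np.count_nonzero(a != b))

        def verified_matches(words_q, sigs_q, words_db, sigs_db, max_dist=16):
            """(query index, database index) pairs that share a visual word AND
            pass the binary-signature Hamming test."""
            inverted = {}
            for j, w in enumerate(words_db):            # build a toy inverted file
                inverted.setdefault(int(w), []).append(j)
            matches = []
            for i, w in enumerate(words_q):
                for j in inverted.get(int(w), []):
                    if hamming(sigs_q[i], sigs_db[j]) <= max_dist:
                        matches.append((i, j))
            return matches

        rng = np.random.default_rng(0)
        words_db = rng.integers(0, 1000, size=5000)                     # quantised visual words
        sigs_db = rng.integers(0, 2, size=(5000, 64), dtype=np.uint8)   # 64-bit binary signatures
        words_q = words_db[:100]                                        # queries reuse the first 100 features
        sigs_q = sigs_db[:100] ^ (rng.random((100, 64)) < 0.1)          # with ~10% of bits flipped
        print("verified matches:", len(verified_matches(words_q, sigs_q, words_db, sigs_db)))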

  2. Understanding the Physical Properties that Control Protein Crystallization by Analysis of Large-Scale Experimental Data

    SciTech Connect

    Price, W.; Chen, Y.; Handelman, S.; Neely, H.; Manor, P.; Karlin, R.; Nair, R.; Montelione, G.; Hunt, J.; et al.

    2008-01-01

    Crystallization is the most serious bottleneck in high-throughput protein-structure determination by diffraction methods. We have used data mining of the large-scale experimental results of the Northeast Structural Genomics Consortium and experimental folding studies to characterize the biophysical properties that control protein crystallization. This analysis leads to the conclusion that crystallization propensity depends primarily on the prevalence of well-ordered surface epitopes capable of mediating interprotein interactions and is not strongly influenced by overall thermodynamic stability. We identify specific sequence features that correlate with crystallization propensity and that can be used to estimate the crystallization probability of a given construct. Analyses of entire predicted proteomes demonstrate substantial differences in the amino acid-sequence properties of human versus eubacterial proteins, which likely reflect differences in biophysical properties, including crystallization propensity. Our thermodynamic measurements do not generally support previous claims regarding correlations between sequence properties and protein stability.

  3. Large-Scale Spray Releases: Initial Aerosol Test Results

    SciTech Connect

    Schonewill, Philip P.; Gauglitz, Phillip A.; Bontha, Jagannadha R.; Daniel, Richard C.; Kurath, Dean E.; Adkins, Harold E.; Billing, Justin M.; Burns, Carolyn A.; Davis, James M.; Enderlin, Carl W.; Fischer, Christopher M.; Jenks, Jeromy WJ; Lukins, Craig D.; MacFarlan, Paul J.; Shutthanandan, Janani I.; Smith, Dennese M.

    2012-12-01

    One of the events postulated in the hazard analysis at the Waste Treatment and Immobilization Plant (WTP) and other U.S. Department of Energy (DOE) nuclear facilities is a breach in process piping that produces aerosols with droplet sizes in the respirable range. The current approach for predicting the size and concentration of aerosols produced in a spray leak involves extrapolating from correlations reported in the literature. These correlations are based on results obtained from small engineered spray nozzles using pure liquids with Newtonian fluid behavior. The narrow ranges of physical properties on which the correlations are based do not cover the wide range of slurries and viscous materials that will be processed in the WTP and across processing facilities in the DOE complex. Two key technical areas were identified where testing results were needed to improve the technical basis by reducing the uncertainty due to extrapolating existing literature results. The first technical need was to quantify the role of slurry particles in small breaches where the slurry particles may plug and result in substantially reduced, or even negligible, respirable fraction formed by high-pressure sprays. The second technical need was to determine the aerosol droplet size distribution and volume from prototypic breaches and fluids, specifically including sprays from larger breaches with slurries where data from the literature are scarce. To address these technical areas, small- and large-scale test stands were constructed and operated with simulants to determine aerosol release fractions and generation rates from a range of breach sizes and geometries. The properties of the simulants represented the range of properties expected in the WTP process streams and included water, sodium salt solutions, slurries containing boehmite or gibbsite, and a hazardous chemical simulant. The effect of anti-foam agents was assessed with most of the simulants. Orifices included round holes and

  4. Large-Scale Machine Learning for Classification and Search

    ERIC Educational Resources Information Center

    Liu, Wei

    2012-01-01

    With the rapid development of the Internet, nowadays tremendous amounts of data including images and videos, up to millions or billions, can be collected for training machine learning models. Inspired by this trend, this thesis is dedicated to developing large-scale machine learning techniques for the purpose of making classification and nearest…

  5. Newton Methods for Large Scale Problems in Machine Learning

    ERIC Educational Resources Information Center

    Hansen, Samantha Leigh

    2014-01-01

    The focus of this thesis is on practical ways of designing optimization algorithms for minimizing large-scale nonlinear functions with applications in machine learning. Chapter 1 introduces the overarching ideas in the thesis. Chapters 2 and 3 are geared towards supervised machine learning applications that involve minimizing a sum of loss…

  6. The Large-Scale Structure of Scientific Method

    ERIC Educational Resources Information Center

    Kosso, Peter

    2009-01-01

    The standard textbook description of the nature of science describes the proposal, testing, and acceptance of a theoretical idea almost entirely in isolation from other theories. The resulting model of science is a kind of piecemeal empiricism that misses the important network structure of scientific knowledge. Only the large-scale description of…

  7. Potential and issues in large scale flood inundation modelling

    NASA Astrophysics Data System (ADS)

    Di Baldassarre, Giuliano; Brandimarte, Luigia; Dottori, Francesco; Mazzoleni, Maurizio; Yan, Kun

    2015-04-01

    Recent years have seen growing research interest in large-scale flood inundation modelling. Nowadays, modelling tools and datasets allow for analyzing flooding processes at regional, continental and even global scales with an increasing level of detail. As a result, several research works have already addressed this topic using different methodologies of varying complexity. The potential of these studies is certainly enormous. Large-scale flood inundation modelling can provide valuable information in areas where little information and few studies were previously available. It can provide a consistent framework for a comprehensive assessment of flooding processes in the basins of the world's large rivers, as well as of the impacts of future climate scenarios. To make the most of this potential, we believe it is necessary, on the one hand, to understand the strengths and limitations of the existing methodologies and, on the other hand, to discuss the possibilities and implications of using large-scale flood models for operational flood risk assessment and management. Where should researchers put their effort in order to develop useful and reliable methodologies and outcomes? How can the information coming from large-scale flood inundation studies be used by stakeholders? How should we use this information where previous higher-resolution studies exist, or where official studies are available?

  8. Global smoothing and continuation for large-scale molecular optimization

    SciTech Connect

    More, J.J.; Wu, Zhijun

    1995-10-01

    We discuss the formulation of optimization problems that arise in the study of distance geometry, ionic systems, and molecular clusters. We show that continuation techniques based on global smoothing are applicable to these molecular optimization problems, and we outline the issues that must be resolved in the solution of large-scale molecular optimization problems.
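
    The continuation idea can be made concrete on a toy problem: minimize a Gaussian-smoothed version of the objective, then shrink the smoothing parameter while warm-starting from the previous minimizer. In the sketch below the smoothing is approximated by averaging over fixed Gaussian perturbations of a 1-D test function, whereas the paper works with analytic transforms of molecular potentials.

        import numpy as np
        from scipy.optimize import minimize

        def f(x):
            """Toy 1-D objective with many local minima."""
            x = np.asarray(x)
            return float(np.sum(0.05 * x**2 + np.sin(3.0 * x)))

        def smoothed(x, sigma, eps):
            """Gaussian-smoothed objective, approximated by averaging f over a
            fixed set of perturbations (common random numbers keep it smooth)."""
            return np.mean([f(x + sigma * e) for e in eps])

        rng = np.random.default_rng(0)
        eps = rng.standard_normal((2000, 1))
        x = np.array([8.0])                       # deliberately poor starting point
        for sigma in [4.0, 2.0, 1.0, 0.5, 0.0]:   # continuation: shrink the smoothing
            def obj(x, s=sigma):
                return f(x) if s == 0.0 else smoothed(x, s, eps)
            x = minimize(obj, x, method="Nelder-Mead").x
            print(f"sigma={sigma:.1f}  x={x[0]: .3f}  f(x)={f(x): .3f}")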

  9. DESIGN OF LARGE-SCALE AIR MONITORING NETWORKS

    EPA Science Inventory

    The potential effects of air pollution on human health have received much attention in recent years. In the U.S. and other countries, there are extensive large-scale monitoring networks designed to collect data to inform the public of exposure risks to air pollution. A major crit...

  10. International Large-Scale Assessments: What Uses, What Consequences?

    ERIC Educational Resources Information Center

    Johansson, Stefan

    2016-01-01

    Background: International large-scale assessments (ILSAs) are a much-debated phenomenon in education. Increasingly, their outcomes attract considerable media attention and influence educational policies in many jurisdictions worldwide. The relevance, uses and consequences of these assessments are often the focus of research scrutiny. Whilst some…

  11. Large Scale Survey Data in Career Development Research

    ERIC Educational Resources Information Center

    Diemer, Matthew A.

    2008-01-01

    Large scale survey datasets have been underutilized but offer numerous advantages for career development scholars, as they contain numerous career development constructs with large and diverse samples that are followed longitudinally. Constructs such as work salience, vocational expectations, educational expectations, work satisfaction, and…

  12. Current Scientific Issues in Large Scale Atmospheric Dynamics

    NASA Technical Reports Server (NTRS)

    Miller, T. L. (Compiler)

    1986-01-01

    Topics in large scale atmospheric dynamics are discussed. Aspects of atmospheric blocking, the influence of transient baroclinic eddies on planetary-scale waves, cyclogenesis, the effects of orography on planetary scale flow, small scale frontal structure, and simulations of gravity waves in frontal zones are discussed.

  13. Large-scale drift and Rossby wave turbulence

    NASA Astrophysics Data System (ADS)

    Harper, K. L.; Nazarenko, S. V.

    2016-08-01

    We study drift/Rossby wave turbulence described by the large-scale limit of the Charney–Hasegawa–Mima equation. We define the zonal and meridional regions as Z := {k : |k_y| > √3 k_x} and M := {k : |k_y| < √3 k_x}, respectively, where k = (k_x, k_y) is in a plane perpendicular to the magnetic field such that k_x is along the isopycnals and k_y is along the plasma density gradient. We prove that the only types of resonant triads allowed are M ↔ M + Z and Z ↔ Z + Z. Therefore, if the spectrum of weak large-scale drift/Rossby turbulence is initially in Z it will remain in Z indefinitely. We present a generalised Fjørtoft’s argument to find the transfer directions for the quadratic invariants in the two-dimensional k-space. Using direct numerical simulations, we test and confirm our theoretical predictions for weak large-scale drift/Rossby turbulence, and establish qualitative differences with cases when turbulence is strong. We demonstrate that the qualitative features of the large-scale limit survive when the typical turbulent scale is only moderately greater than the Larmor/Rossby radius.
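
    A small helper makes the sector bookkeeping above concrete: classify a wavevector as zonal or meridional from |k_y| versus √3 k_x (the k_x ≥ 0 half-plane is assumed here), and check whether a triad's sectors match one of the two allowed patterns. This checks only the sector pattern, not the resonance condition itself.

        import math

        def sector(k):
            """Classify a wavevector (kx, ky) as zonal 'Z' or meridional 'M'."""
            kx, ky = k
            return "Z" if abs(ky) > math.sqrt(3.0) * kx else "M"

        def allowed_triad(k, k1, k2):
            """For a triad with k = k1 + k2, return True if the sector pattern
            is one of the allowed types M <-> M + Z or Z <-> Z + Z."""
            members = tuple(sorted((sector(k1), sector(k2))))
            return (sector(k), members) in {("M", ("M", "Z")), ("Z", ("Z", "Z"))}

        print(sector((1.0, 0.5)), sector((0.1, 2.0)))                # M Z
        print(allowed_triad((2.0, 1.0), (1.9, -1.0), (0.1, 2.0)))    # True: an M <-> M + Z triad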

  15. A bibliographical survey of large-scale systems

    NASA Technical Reports Server (NTRS)

    Corliss, W. R.

    1970-01-01

    A limited, partly annotated bibliography was prepared on the subject of large-scale system control. Approximately 400 references are divided into thirteen application areas, such as large societal systems and large communication systems. A first-author index is provided.

  16. Resilience of Florida Keys coral communities following large scale disturbances

    EPA Science Inventory

    The decline of coral reefs in the Caribbean over the last 40 years has been attributed to multiple chronic stressors and episodic large-scale disturbances. This study assessed the resilience of coral communities in two different regions of the Florida Keys reef system between 199...

  17. Lessons from Large-Scale Renewable Energy Integration Studies: Preprint

    SciTech Connect

    Bird, L.; Milligan, M.

    2012-06-01

    In general, large-scale integration studies in Europe and the United States find that high penetrations of renewable generation are technically feasible with operational changes and increased access to transmission. This paper describes other key findings such as the need for fast markets, large balancing areas, system flexibility, and the use of advanced forecasting.

  18. Large-Scale Networked Virtual Environments: Architecture and Applications

    ERIC Educational Resources Information Center

    Lamotte, Wim; Quax, Peter; Flerackers, Eddy

    2008-01-01

    Purpose: Scalability is an important research topic in the context of networked virtual environments (NVEs). This paper aims to describe the ALVIC (Architecture for Large-scale Virtual Interactive Communities) approach to NVE scalability. Design/methodology/approach: The setup and results from two case studies are shown: a 3-D learning environment…

  19. Large-scale data analysis using the Wigner function

    NASA Astrophysics Data System (ADS)

    Earnshaw, R. A.; Lei, C.; Li, J.; Mugassabi, S.; Vourdas, A.

    2012-04-01

    Large-scale data are analysed using the Wigner function. It is shown that the 'frequency variable' provides important information, which is lost with other techniques. The method is applied to 'sentiment analysis' in data from social networks and also to financial data.

  20. Ecosystem resilience despite large-scale altered hydro climatic conditions

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Climate change is predicted to increase both drought frequency and duration, and when coupled with substantial warming, will establish a new hydroclimatological paradigm for many regions. Large-scale, warm droughts have recently impacted North America, Africa, Europe, Amazonia, and Australia result...

  1. Large-scale societal changes and intentionality - an uneasy marriage.

    PubMed

    Bodor, Péter; Fokas, Nikos

    2014-08-01

    Our commentary focuses on juxtaposing the proposed science of intentional change with facts and concepts pertaining to the level of large populations or changes on a worldwide scale. Although we find a unified evolutionary theory promising, we think that long-term and large-scale, scientifically guided - that is, intentional - social change is not only impossible, but also undesirable. PMID:25162863

  2. Implicit solution of large-scale radiation diffusion problems

    SciTech Connect

    Brown, P N; Graziani, F; Otero, I; Woodward, C S

    2001-01-04

    In this paper, we present an efficient solution approach for fully implicit, large-scale, nonlinear radiation diffusion problems. The fully implicit approach is compared to a semi-implicit solution method. Accuracy and efficiency are shown to be better for the fully implicit method on both one- and three-dimensional problems with tabular opacities taken from the LEOS opacity library.

  3. Mixing Metaphors: Building Infrastructure for Large Scale School Turnaround

    ERIC Educational Resources Information Center

    Peurach, Donald J.; Neumerski, Christine M.

    2015-01-01

    The purpose of this analysis is to increase understanding of the possibilities and challenges of building educational infrastructure--the basic, foundational structures, systems, and resources--to support large-scale school turnaround. Building educational infrastructure often exceeds the capacity of schools, districts, and state education…

  4. Simulation and Analysis of Large-Scale Compton Imaging Detectors

    SciTech Connect

    Manini, H A; Lange, D J; Wright, D M

    2006-12-27

    We perform simulations of two types of large-scale Compton imaging detectors. The first type uses silicon and germanium detector crystals, and the second type uses silicon and CdZnTe (CZT) detector crystals. The simulations use realistic detector geometry and parameters. We analyze the performance of each type of detector, and we present results using receiver operating characteristics (ROC) curves.

  5. US National Large-scale City Orthoimage Standard Initiative

    USGS Publications Warehouse

    Zhou, G.; Song, C.; Benjamin, S.; Schickler, W.

    2003-01-01

    The early procedures and algorithms for National digital orthophoto generation in National Digital Orthophoto Program (NDOP) were based on earlier USGS mapping operations, such as field control, aerotriangulation (derived in the early 1920's), the quarter-quadrangle-centered (3.75 minutes of longitude and latitude in geographic extent), 1:40,000 aerial photographs, and 2.5 D digital elevation models. However, large-scale city orthophotos using early procedures have disclosed many shortcomings, e.g., ghost image, occlusion, shadow. Thus, to provide the technical base (algorithms, procedure) and experience needed for city large-scale digital orthophoto creation is essential for the near future national large-scale digital orthophoto deployment and the revision of the Standards for National Large-scale City Digital Orthophoto in National Digital Orthophoto Program (NDOP). This paper will report our initial research results as follows: (1) High-precision 3D city DSM generation through LIDAR data processing, (2) Spatial objects/features extraction through surface material information and high-accuracy 3D DSM data, (3) 3D city model development, (4) Algorithm development for generation of DTM-based orthophoto, and DBM-based orthophoto, (5) True orthophoto generation by merging DBM-based orthophoto and DTM-based orthophoto, and (6) Automatic mosaic by optimizing and combining imagery from many perspectives.

  6. Considerations for Managing Large-Scale Clinical Trials.

    ERIC Educational Resources Information Center

    Tuttle, Waneta C.; And Others

    1989-01-01

    Research management strategies used effectively in a large-scale clinical trial to determine the health effects of exposure to Agent Orange in Vietnam are discussed, including pre-project planning, organization according to strategy, attention to scheduling, a team approach, emphasis on guest relations, cross-training of personnel, and preparing…

  7. CACHE Guidelines for Large-Scale Computer Programs.

    ERIC Educational Resources Information Center

    National Academy of Engineering, Washington, DC. Commission on Education.

    The Computer Aids for Chemical Engineering Education (CACHE) guidelines identify desirable features of large-scale computer programs including running cost and running-time limit. Also discussed are programming standards, documentation, program installation, system requirements, program testing, and program distribution. Lists of types of…

  8. The Role of Plausible Values in Large-Scale Surveys

    ERIC Educational Resources Information Center

    Wu, Margaret

    2005-01-01

    In large-scale assessment programs such as NAEP, TIMSS and PISA, students' achievement data sets provided for secondary analysts contain so-called "plausible values." Plausible values are multiple imputations of the unobservable latent achievement for each student. In this article it has been shown how plausible values are used to: (1) address…

  9. Large-Scale Environmental Influences on Aquatic Animal Health

    EPA Science Inventory

    In the latter portion of the 20th century, North America experienced numerous large-scale mortality events affecting a broad diversity of aquatic animals. Short-term forensic investigations of these events have sometimes characterized a causative agent or condition, but have rare...

  10. Large-Scale Innovation and Change in UK Higher Education

    ERIC Educational Resources Information Center

    Brown, Stephen

    2013-01-01

    This paper reflects on challenges universities face as they respond to change. It reviews current theories and models of change management, discusses why universities are particularly difficult environments in which to achieve large scale, lasting change and reports on a recent attempt by the UK JISC to enable a range of UK universities to employ…

  11. Efficient On-Demand Operations in Large-Scale Infrastructures

    ERIC Educational Resources Information Center

    Ko, Steven Y.

    2009-01-01

    In large-scale distributed infrastructures such as clouds, Grids, peer-to-peer systems, and wide-area testbeds, users and administrators typically desire to perform "on-demand operations" that deal with the most up-to-date state of the infrastructure. However, the scale and dynamism present in the operating environment make it challenging to…

  12. Assuring Quality in Large-Scale Online Course Development

    ERIC Educational Resources Information Center

    Parscal, Tina; Riemer, Deborah

    2010-01-01

    Student demand for online education requires colleges and universities to rapidly expand the number of courses and programs offered online while maintaining high quality. This paper outlines two universities respective processes to assure quality in large-scale online programs that integrate instructional design, eBook custom publishing, Quality…

  13. Cosmic strings and the large-scale structure

    NASA Technical Reports Server (NTRS)

    Stebbins, Albert

    1988-01-01

    A possible problem for cosmic string models of galaxy formation is presented. If very large voids are common and if loop fragmentation is not much more efficient than presently believed, then it may be impossible for string scenarios to produce the observed large-scale structure with Omega sub 0 = 1 and without strong environmental biasing.

  14. Extracting Useful Semantic Information from Large Scale Corpora of Text

    ERIC Educational Resources Information Center

    Mendoza, Ray Padilla, Jr.

    2012-01-01

    Extracting and representing semantic information from large scale corpora is at the crux of computer-assisted knowledge generation. Semantic information depends on collocation extraction methods, mathematical models used to represent distributional information, and weighting functions which transform the space. This dissertation provides a…

  15. Ultra-Fast Sample Preparation for High-Throughput Proteomics

    SciTech Connect

    Lopez-Ferrer, Daniel; Hixson, Kim K.; Belov, Mikhail E.; Smith, Richard D.

    2011-06-21

    Sample preparation oftentimes can be the Achilles Heel of any analytical process and in the field of proteomics, preparing samples for mass spectrometric analysis is no exception. Current goals, concerning proteomic sample preparation on a large scale, include efforts toward improving reproducibility, reducing the time of processing and ultimately the automation of the entire workflow. This chapter reviews an array of recent approaches applied to bottom-up proteomics sample preparation to reduce the processing time down from hours to minutes. The current state-of-the-art in the field uses different energy inputs like microwave, ultrasound or pressure to perform the four basic steps in sample preparation: protein extraction, denaturation, reduction and alkylation, and digestion. No single energy input for enhancement of proteome sample preparation has become the universal gold standard. Instead, a combination of different energy inputs tend to produce the best results. This chapter further describes the future trends in the field such as the hyphenation of sample preparation with downstream detection and analysis systems. Finally, a detailed protocol describing the combined use of both pressure cycling technology and ultrasonic energy inputs to hasten proteomic sample preparation is presented.

  16. Method optimization for proteomic analysis of soybean leaf: Improvements in identification of new and low-abundance proteins

    PubMed Central

    Mesquita, Rosilene Oliveira; de Almeida Soares, Eduardo; de Barros, Everaldo Gonçalves; Loureiro, Marcelo Ehlers

    2012-01-01

    The most critical step in any proteomic study is protein extraction and sample preparation. Better solubilization increases the separation and resolution of gels, allowing identification of a higher number of proteins and more accurate quantitation of differences in gene expression. Despite the existence of published results for the optimization of proteomic analyses of soybean seeds, no comparable data are available for proteomic studies of soybean leaf tissue. In this work we have tested the effects of modification of a TCA-acetone method on the resolution of 2-DE gels of leaves and roots of soybean. Better focusing was obtained when both mercaptoethanol and dithiothreitol were used in the extraction buffer simultaneously. Increasing the number of washes of TCA-precipitated protein with acetone, using a final wash with 80% ethanol, and using sonication to resuspend the pellet increased the number of detected proteins as well as the resolution of the 2-DE gels. Using this approach we have constructed a soybean protein map. The major group of identified proteins corresponded to genes of unknown function. The second and third most abundant groups of proteins were composed of photosynthesis and metabolism related genes. The resulting protocol improved protein solubility and gel resolution, allowing the identification of 122 soybean leaf proteins, 72 of which were not detected in other published soybean leaf 2-DE gel datasets, including a transcription factor and several signaling proteins. PMID:22802721

  17. Platelet proteomics.

    PubMed

    Zufferey, Anne; Fontana, Pierre; Reny, Jean-Luc; Nolli, Severine; Sanchez, Jean-Charles

    2012-01-01

    Platelets are small cell fragments produced by megakaryocytes in the bone marrow. They play an important role in hemostasis and diverse thrombotic disorders, and they are therefore primary targets of antithrombotic therapies. They are implicated in several pathophysiological pathways, such as inflammation or wound repair. In blood circulation, platelets are activated by several pathways, including the subendothelial matrix and thrombin, triggering the formation of the platelet plug. Studying their proteome is a powerful approach to understanding their biology and function. However, particular attention must be paid to different experimental parameters, such as platelet quality and purity. Several technologies are involved in platelet proteome processing, yielding information on protein identification, characterization, localization, and quantification. Recent technical improvements in proteomics combined with interdisciplinary strategies, such as metabolomics, transcriptomics, and bioinformatics, will help to understand platelet biological mechanisms. Therefore, a comprehensive analysis of the platelet proteome under different environmental conditions may contribute to elucidating complex processes relevant to platelet function regarding bleeding disorders or platelet hyperreactivity and identify new targets for antiplatelet therapy.

  18. Use of Time-Resolved Fluorescence To Improve Sensitivity and Dynamic Range of Gel-Based Proteomics.

    PubMed

    Sandberg, AnnSofi; Buschmann, Volker; Kapusta, Peter; Erdmann, Rainer; Wheelock, Åsa M

    2016-03-15

    Limitations in the sensitivity and dynamic range of two-dimensional gel electrophoresis (2-DE) are currently hampering its utility in global proteomics and biomarker discovery applications. In the current study, we present proof-of-concept analyses showing that introducing time-resolved fluorescence in the image acquisition step of in-gel protein quantification provides a sensitive and accurate method for subtracting confounding background fluorescence at the photon level. In-gel protein detection using the minimal difference gel electrophoresis workflow showed improvements in lowest limit of quantification in terms of CyDye molecules per pixel of 330-fold in the blue-green region (Cy2) and 8000-fold in the red region (Cy5) over conventional state-of-the-art image acquisition instrumentation, here represented by the Typhoon 9400 instrument. These improvements make possible the detection of low-abundance proteins present at sub-attomolar levels, thereby representing a quantum leap for the use of gel-based proteomics in biomarker discovery. These improvements were achieved using significantly lower laser powers and overall excitation times, thereby drastically decreasing photobleaching during repeated scanning. The single-fluorochrome detection limits achieved by the cumulative time-resolved emission two-dimensional electrophoresis (CuTEDGE) technology facilitates in-depth proteomics characterization of very scarce samples, for example, primary human tissue materials collected in clinical studies. The unique information provided by high-sensitivity 2-DE, including positional shifts due to post-translational modifications, may increase the chance to detect biomarker signatures of relevance for identification of disease subphenotypes. PMID:26854653

  19. Science and engineering of large scale socio-technical simulations.

    SciTech Connect

    Barrett, C. L.; Eubank, S. G.; Marathe, M. V.; Mortveit, H. S.; Reidys, C. M.

    2001-01-01

    Computer simulation is a computational approach whereby global system properties are produced as dynamics by direct computation of interactions among representations of local system elements. A mathematical theory of simulation consists of an account of the formal properties of sequential evaluation and composition of interdependent local mappings. When certain local mappings and their interdependencies can be related to particular real world objects and interdependencies, it is common to compute the interactions to derive a symbolic model of the global system made up of the corresponding interdependent objects. The formal mathematical and computational account of the simulation provides a particular kind of theoretical explanation of the global system properties and, therefore, insight into how to engineer a complex system to exhibit those properties. This paper considers the mathematical foundations and engineering principles necessary for building large scale simulations of socio-technical systems. Examples of such systems are urban regional transportation systems, the national electrical power markets and grids, the world-wide Internet, vaccine design and deployment, theater war, etc. These systems are composed of large numbers of interacting human, physical and technological components. Some components adapt and learn, exhibit perception, interpretation, reasoning, deception, cooperation and noncooperation, and have economic motives as well as the usual physical properties of interaction. The systems themselves are large and the behavior of socio-technical systems is tremendously complex. The state of affairs for these kinds of systems is characterized by very little satisfactory formal theory, a good deal of very specialized knowledge of subsystems, and a dependence on experience-based practitioners' art. However, these systems are vital and require policy, control, design, implementation and investment. Thus there is motivation to improve the ability to

  20. Stable Isotope Tracers in Large Scale Hydrological Models

    NASA Astrophysics Data System (ADS)

    Fekete, B. M.; Aggarwal, P.

    2004-05-01

    Stable isotopes of oxygen and hydrogen (deuterium and oxygen-18) have been shown to be effective tracers for characterizing hydrological processes in small river basins. Their application in large river basins has lagged behind due to the lack of sufficient isotope data. Recent availability of isotope data from most US rivers and subsequent efforts by the International Atomic Energy Agency (IAEA) to collect comprehensive global information on isotope compositions of river runoff is changing this situation. These data sets offer new opportunities to utilize stable isotopes in studies of large river basins. Recent work carried out jointly by the Water Systems Analysis Group of the University of New Hampshire and the Isotope Hydrology Section of the IAEA applied isotope-enabled global water balance and transport models to assess the feasibility of using isotope data for improving water balance estimations at large scales. The model implemented simple mixing in the various storage pools (e.g. snow pack, soil moisture, groundwater, and river channel) and fractionation during evapotranspiration. Sensitivity tests show that spatial and temporal distributions of isotopes in precipitation and their mixing in the various storage pools are the most important factors affecting the isotopic composition of river discharge. The groundwater storage pool plays a key role in the seasonal dynamics of stable isotope composition of river discharge. Fractionation during phase changes appears to have a less pronounced impact. These findings are consistent with those in small scale catchments where ``old water'' and ``new water'' (i.e. pre-event water and storm runoff) can be easily separated by using isotopes. Model validation using available data from the US rivers showed remarkable performance considering the inconsistencies in the temporal sampling of precipitation and runoff isotope composition records. The good model performance suggests that seasonal variations of the isotopic
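
    A minimal sketch (Python; the pool, rates, and numbers are illustrative and not taken from the study) of the kind of well-mixed storage-pool isotope balance described above: each pool mixes its inflow with stored water and discharges at its current composition, while fractionation during phase changes is neglected.

      def step_pool(S, delta, q_in, delta_in, q_out, dt):
          """Advance a well-mixed storage pool by one time step (no fractionation).

          S        current storage volume
          delta    isotopic composition of the pool (e.g. delta-18O in permil),
                   treated here as conservative and linearly mixing
          q_in     inflow rate, carrying composition delta_in
          q_out    outflow rate, carrying the pool's current composition
          """
          isotope_mass = S * delta + (q_in * delta_in - q_out * delta) * dt
          S_new = S + (q_in - q_out) * dt
          return S_new, isotope_mass / S_new

      # toy groundwater pool relaxing toward the composition of its recharge
      S, d = 100.0, -8.0                 # arbitrary volume units, permil
      for _ in range(30):
          S, d = step_pool(S, d, q_in=2.0, delta_in=-12.0, q_out=2.0, dt=1.0)
      print(round(d, 2))                 # drifts from -8 permil toward -12 permil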

  1. Optimization of large-scale pseudotargeted metabolomics method based on liquid chromatography-mass spectrometry.

    PubMed

    Luo, Ping; Yin, Peiyuan; Zhang, Weijian; Zhou, Lina; Lu, Xin; Lin, Xiaohui; Xu, Guowang

    2016-03-11

    Liquid chromatography-mass spectrometry (LC-MS) is now a mainstream technique for large-scale metabolic phenotyping to obtain a better understanding of genomic functions. However, repeatability is still an essential issue for LC-MS based methods, and convincing strategies for long-term analyses are urgently required. Our previously reported pseudotargeted method, which combines nontargeted and targeted analyses, has proved to be a practical approach yielding high-quality and information-rich data. In this study, we developed a comprehensive strategy based on the pseudotargeted analysis by integrating blank washes, pooled quality control (QC) samples, and post-calibration for large-scale metabolomics studies. The performance of the strategy was optimized in both the pre- and post-acquisition stages, including the selection of QC samples, the insertion frequency of QC samples, and the post-calibration method. The results imply that the pseudotargeted method is rather stable and suitable for large-scale studies of metabolic profiling. As a proof of concept, the proposed strategy was applied to the combination of 3 independent batches within a time span of 5 weeks, and generated about 54% of the features with coefficients of variation (CV) below 15%. Moreover, the stability and maximal capacity of a single analytical batch could be extended to at least 282 injections (about 110 h) while still providing excellent stability; the CV of 63% of the metabolic features was less than 15%.
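
    A minimal sketch (Python with numpy; the toy data and names are illustrative, only the 15% cutoff follows the abstract) of the repeatability metric quoted above: the per-feature coefficient of variation across repeated QC injections and the fraction of features passing the cutoff.

      import numpy as np

      def qc_cv_summary(intensities, cv_cutoff=0.15):
          """intensities: 2-D array, rows are repeated QC injections, columns are features.
          Returns per-feature CV and the fraction of features with CV below the cutoff."""
          mean = intensities.mean(axis=0)
          std = intensities.std(axis=0, ddof=1)
          valid = mean > 0
          cv = np.full_like(mean, np.nan)
          cv[valid] = std[valid] / mean[valid]
          frac_pass = float(np.mean(cv[valid] < cv_cutoff))
          return cv, frac_pass

      # toy data: 10 QC injections x 500 features with ~10% multiplicative technical noise
      rng = np.random.default_rng(0)
      truth = rng.lognormal(mean=10, sigma=1, size=500)
      qc = truth * rng.normal(1.0, 0.10, size=(10, 500))
      _, frac = qc_cv_summary(qc)
      print(f"{frac:.0%} of features have CV < 15%")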

  2. Analysis of Large Scale Spatial Variability of Soil Moisture Using a Geostatistical Method

    PubMed Central

    Lakhankar, Tarendra; Jones, Andrew S.; Combs, Cynthia L.; Sengupta, Manajit; Vonder Haar, Thomas H.; Khanbilvardi, Reza

    2010-01-01

    Spatial and temporal soil moisture dynamics are critically needed to improve the parameterization of hydrological and meteorological modeling processes. This study evaluates the statistical spatial structure of large-scale observed and simulated estimates of soil moisture under pre- and post-precipitation event conditions. This large-scale variability is crucial in the calibration and validation of large-scale satellite-based data assimilation systems. Spatial analysis using geostatistical approaches was used to validate soil moisture modeled by the Agriculture Meteorological (AGRMET) model against in situ measurements of soil moisture from a state-wide environmental monitoring network (Oklahoma Mesonet). The results show that AGRMET data produce larger spatial decorrelation compared to in situ based soil moisture data. Precipitation storms drive the soil moisture spatial structure at large scales, with smaller decorrelation lengths found after precipitation. This study also evaluates the geostatistical approach for mitigating quality control issues within the in situ soil moisture network by estimating soil moisture at unsampled stations. PMID:22315576
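
    A minimal sketch (Python with numpy; station layout, values, and names are illustrative) of the kind of geostatistical diagnostic used above: an empirical semivariogram of point soil moisture observations, whose range gives the decorrelation length discussed in the abstract.

      import numpy as np

      def empirical_semivariogram(coords, values, bin_edges):
          """coords: (n, 2) station coordinates; values: (n,) soil moisture.
          Returns bin centres and the average semivariance gamma(h) per distance bin."""
          n = len(values)
          d, g = [], []
          for i in range(n):
              for j in range(i + 1, n):
                  d.append(np.linalg.norm(coords[i] - coords[j]))
                  g.append(0.5 * (values[i] - values[j]) ** 2)
          d, g = np.asarray(d), np.asarray(g)
          centres, gamma = [], []
          for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
              mask = (d >= lo) & (d < hi)
              if mask.any():
                  centres.append(0.5 * (lo + hi))
                  gamma.append(g[mask].mean())
          return np.asarray(centres), np.asarray(gamma)

      # toy network: 50 stations in a 100 km x 100 km domain
      rng = np.random.default_rng(1)
      xy = rng.uniform(0, 100, size=(50, 2))
      sm = 0.25 + 0.05 * np.sin(xy[:, 0] / 30.0) + rng.normal(0, 0.01, 50)
      h, gam = empirical_semivariogram(xy, sm, np.linspace(0, 100, 11))
      print(np.round(gam, 5))   # semivariance rises toward a sill; its range ~ decorrelation length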

  3. Prototype Vector Machine for Large Scale Semi-Supervised Learning

    SciTech Connect

    Zhang, Kai; Kwok, James T.; Parvin, Bahram

    2009-04-29

    Practical data mining rarely falls exactly into the supervised learning scenario. Rather, the growing amount of unlabeled data poses a big challenge to large-scale semi-supervised learning (SSL). We note that the computational intensiveness of graph-based SSL arises largely from the manifold or graph regularization, which in turn leads to large models that are difficult to handle. To alleviate this, we propose the prototype vector machine (PVM), a highly scalable, graph-based algorithm for large-scale SSL. Our key innovation is the use of "prototype vectors" for efficient approximation of both the graph-based regularizer and the model representation. The choice of prototypes is grounded upon two important criteria: they not only perform effective low-rank approximation of the kernel matrix, but also span a model suffering the minimum information loss compared with the complete model. We demonstrate encouraging performance and appealing scaling properties of the PVM on a number of machine learning benchmark data sets.
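
    A minimal sketch (Python with numpy) of the generic idea of prototype-based low-rank kernel approximation that the abstract alludes to; this is a standard Nyström-style construction with illustrative names, not the authors' exact PVM formulation, and in practice the prototypes would be chosen more carefully (e.g. k-means centres).

      import numpy as np

      def rbf_kernel(A, B, gamma=0.5):
          """Gaussian RBF kernel matrix between row-vector sets A and B."""
          sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
          return np.exp(-gamma * sq)

      def prototype_kernel_approx(X, prototypes, gamma=0.5, ridge=1e-8):
          """Approximate the full n x n kernel K(X, X) using only m prototypes:
          K ~= K_np K_pp^{-1} K_pn, costing O(n m^2) instead of O(n^2)."""
          K_np = rbf_kernel(X, prototypes, gamma)                  # n x m
          K_pp = rbf_kernel(prototypes, prototypes, gamma)         # m x m
          K_pp += ridge * np.eye(len(prototypes))                  # numerical stabiliser
          return K_np @ np.linalg.solve(K_pp, K_np.T)

      rng = np.random.default_rng(2)
      X = rng.normal(size=(500, 3))
      protos = X[rng.choice(len(X), size=25, replace=False)]
      K_approx = prototype_kernel_approx(X, protos)
      K_exact = rbf_kernel(X, X)
      print(np.abs(K_exact - K_approx).mean())                     # small average error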

  4. The Large Scale Synthesis of Aligned Plate Nanostructures

    PubMed Central

    Zhou, Yang; Nash, Philip; Liu, Tian; Zhao, Naiqin; Zhu, Shengli

    2016-01-01

    We propose a novel technique for the large-scale synthesis of aligned-plate nanostructures that are self-assembled and self-supporting. The synthesis technique involves developing nanoscale two-phase microstructures through discontinuous precipitation followed by selective etching to remove one of the phases. The method may be applied to any alloy system in which the discontinuous precipitation transformation goes to completion. The resulting structure may have many applications in catalysis, filtering and thermal management depending on the phase selection and added functionality through chemical reaction with the retained phase. The synthesis technique is demonstrated using the discontinuous precipitation of a γ′ phase, (Ni, Co)3Al, followed by selective dissolution of the γ matrix phase. The production of the nanostructure requires heat treatments on the order of minutes and can be performed on a large scale making this synthesis technique of great economic potential. PMID:27439672

  5. Electron drift in a large scale solid xenon

    DOE PAGESBeta

    Yoo, J.; Jaskierny, W. F.

    2015-08-21

    A study of charge drift in a large scale optically transparent solid xenon is reported. A pulsed high power xenon light source is used to liberate electrons from a photocathode. The drift speeds of the electrons are measured using an 8.7 cm long electrode in both the liquid and solid phases of xenon. In the liquid phase (163 K), the drift speed is 0.193 ± 0.003 cm/μs, while the drift speed in the solid phase (157 K) is 0.397 ± 0.006 cm/μs at 900 V/cm over 8.0 cm of uniform electric field. Furthermore, it is demonstrated that the electron drift speed in large scale solid xenon is a factor of two faster than that in the liquid.

  6. Electron drift in a large scale solid xenon

    SciTech Connect

    Yoo, J.; Jaskierny, W. F.

    2015-08-21

    A study of charge drift in a large scale optically transparent solid xenon is reported. A pulsed high power xenon light source is used to liberate electrons from a photocathode. The drift speeds of the electrons are measured using an 8.7 cm long electrode in both the liquid and solid phases of xenon. In the liquid phase (163 K), the drift speed is 0.193 ± 0.003 cm/μs, while the drift speed in the solid phase (157 K) is 0.397 ± 0.006 cm/μs at 900 V/cm over 8.0 cm of uniform electric field. Furthermore, it is demonstrated that the electron drift speed in large scale solid xenon is a factor of two faster than that in the liquid.
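
    A minimal worked check (Python; the speeds and the 8.0 cm drift length are taken directly from the abstract) of what the reported drift speeds imply for the electron transit time across the uniform-field region.

      # drift speeds reported in the abstract, in cm/us
      v_liquid = 0.193          # liquid xenon at 163 K, 900 V/cm
      v_solid = 0.397           # solid xenon at 157 K, 900 V/cm
      drift_length_cm = 8.0

      t_liquid = drift_length_cm / v_liquid   # ~41.5 us transit time
      t_solid = drift_length_cm / v_solid     # ~20.2 us transit time
      print(f"liquid: {t_liquid:.1f} us, solid: {t_solid:.1f} us, ratio {t_liquid / t_solid:.2f}")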

  7. Large scale meteorological influence during the Geysers 1979 field experiment

    SciTech Connect

    Barr, S.

    1980-01-01

    A series of meteorological field measurements conducted during July 1979 near Cobb Mountain in Northern California reveals evidence of several scales of atmospheric circulation consistent with the climatic pattern of the area. The scales of influence are reflected in the structure of wind and temperature in vertically stratified layers at a given observation site. Large scale synoptic gradient flow dominates the wind field above about twice the height of the topographic ridge. Below that there is a mixture of effects with evidence of a diurnal sea breeze influence and a sublayer of katabatic winds. The July observations demonstrate that weak migratory circulations in the large scale synoptic meteorological pattern have a significant influence on the day-to-day gradient winds and must be accounted for in planning meteorological programs including tracer experiments.

  8. GAIA: A WINDOW TO LARGE-SCALE MOTIONS

    SciTech Connect

    Nusser, Adi; Branchini, Enzo; Davis, Marc E-mail: branchin@fis.uniroma3.it

    2012-08-10

    Using redshifts as a proxy for galaxy distances, estimates of the two-dimensional (2D) transverse peculiar velocities of distant galaxies could be obtained from future measurements of proper motions. We provide the mathematical framework for analyzing 2D transverse motions and show that they offer several advantages over traditional probes of large-scale motions. They are completely independent of any intrinsic relations between galaxy properties; hence, they are essentially free of selection biases. They are free from homogeneous and inhomogeneous Malmquist biases that typically plague distance indicator catalogs. They provide additional information to traditional probes that yield line-of-sight peculiar velocities only. Further, because of their 2D nature, fundamental questions regarding vorticity of large-scale flows can be addressed. Gaia, for example, is expected to provide proper motions of at least bright galaxies with high central surface brightness, making proper motions a likely contender for traditional probes based on current and future distance indicator measurements.
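
    A minimal sketch (Python; the example numbers are illustrative) of the standard conversion underlying this idea: an angular proper motion μ at distance d corresponds to a transverse velocity v_t ≈ 4.74 μ[arcsec/yr] d[pc] km/s, which shows how tiny the proper motions of galaxies at cosmological distances are.

      KMS_PER_ARCSEC_YR_PC = 4.74   # 1 arcsec/yr at 1 pc corresponds to about 4.74 km/s

      def transverse_velocity_kms(mu_microarcsec_per_yr, distance_mpc):
          """Transverse (2D) peculiar velocity implied by a proper motion and a distance."""
          mu_arcsec_per_yr = mu_microarcsec_per_yr * 1e-6
          distance_pc = distance_mpc * 1e6
          return KMS_PER_ARCSEC_YR_PC * mu_arcsec_per_yr * distance_pc

      # a galaxy at 20 Mpc with a 3 microarcsec/yr proper motion moves transversely at ~280 km/s
      print(round(transverse_velocity_kms(3.0, 20.0)))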

  9. Lagrangian space consistency relation for large scale structure

    SciTech Connect

    Horn, Bart; Hui, Lam; Xiao, Xiao E-mail: lh399@columbia.edu

    2015-09-01

    Consistency relations, which relate the squeezed limit of an (N+1)-point correlation function to an N-point function, are non-perturbative symmetry statements that hold even if the associated high momentum modes are deep in the nonlinear regime and astrophysically complex. Recently, Kehagias and Riotto and Peloso and Pietroni discovered a consistency relation applicable to large scale structure. We show that this can be recast into a simple physical statement in Lagrangian space: that the squeezed correlation function (suitably normalized) vanishes. This holds regardless of whether the correlation observables are at the same time or not, and regardless of whether multiple-streaming is present. The simplicity of this statement suggests that an analytic understanding of large scale structure in the nonlinear regime may be particularly promising in Lagrangian space.

  10. The workshop on iterative methods for large scale nonlinear problems

    SciTech Connect

    Walker, H.F.; Pernice, M.

    1995-12-01

    The aim of the workshop was to bring together researchers working on large scale applications with numerical specialists of various kinds. Applications that were addressed included reactive flows (combustion and other chemically reacting flows, tokamak modeling), porous media flows, cardiac modeling, chemical vapor deposition, image restoration, macromolecular modeling, and population dynamics. Numerical areas included Newton iterative (truncated Newton) methods, Krylov subspace methods, domain decomposition and other preconditioning methods, large scale optimization and optimal control, and parallel implementations and software. This report offers a brief summary of workshop activities and information about the participants. Interested readers are encouraged to look into an online proceedings available at http://www.usi.utah.edu/logan.proceedings. There, the material offered here is augmented with hypertext abstracts that include links to locations such as speakers' home pages, PostScript copies of talks and papers, cross-references to related talks, and other information about topics addressed at the workshop.

  11. The Large Scale Synthesis of Aligned Plate Nanostructures

    NASA Astrophysics Data System (ADS)

    Zhou, Yang; Nash, Philip; Liu, Tian; Zhao, Naiqin; Zhu, Shengli

    2016-07-01

    We propose a novel technique for the large-scale synthesis of aligned-plate nanostructures that are self-assembled and self-supporting. The synthesis technique involves developing nanoscale two-phase microstructures through discontinuous precipitation followed by selective etching to remove one of the phases. The method may be applied to any alloy system in which the discontinuous precipitation transformation goes to completion. The resulting structure may have many applications in catalysis, filtering and thermal management depending on the phase selection and added functionality through chemical reaction with the retained phase. The synthesis technique is demonstrated using the discontinuous precipitation of a γ′ phase, (Ni, Co)3Al, followed by selective dissolution of the γ matrix phase. The production of the nanostructure requires heat treatments on the order of minutes and can be performed on a large scale making this synthesis technique of great economic potential.

  12. Large Scale Deformation of the Western US Cordillera

    NASA Technical Reports Server (NTRS)

    Bennett, Richard A.

    2001-01-01

    Destructive earthquakes occur throughout the western US Cordillera (WUSC), not just within the San Andreas fault zone. But because we do not understand the present-day large-scale deformations of the crust throughout the WUSC, our ability to assess the potential for seismic hazards in this region remains severely limited. To address this problem, we are using a large collection of Global Positioning System (GPS) networks which spans the WUSC to precisely quantify present-day large-scale crustal deformations in a single uniform reference frame. Our work can roughly be divided into an analysis of the GPS observations to infer the deformation field across and within the entire plate boundary zone and an investigation of the implications of this deformation field regarding plate boundary dynamics.

  13. Startup of large-scale projects casts spotlight on IGCC

    SciTech Connect

    Swanekamp, R.

    1996-06-01

    With several large-scale plants cranking up this year, integrated coal gasification/combined cycle (IGCC) appears poised for growth. The technology may eventually help coal reclaim its former prominence in new plant construction, but developers worldwide are eyeing other feedstocks--such as petroleum coke or residual oil. Of the so-called advanced clean-coal technologies, IGCC appears to be having a defining year. Of three large-scale demonstration plants in the US, one is well into startup, a second is expected to begin operating in the fall, and a third should start up by the end of the year; worldwide, over a dozen more projects are in the works. In Italy, for example, several large projects using petroleum coke or refinery residues as feedstocks are proceeding, apparently on a project-finance basis.

  14. Considerations of large scale impact and the early Earth

    NASA Technical Reports Server (NTRS)

    Grieve, R. A. F.; Parmentier, E. M.

    1985-01-01

    Bodies which have preserved portions of their earliest crust indicate that large scale impact cratering was an important process in early surface and upper crustal evolution. Large impact basins form the basic topographic, tectonic, and stratigraphic framework of the Moon, and impact was responsible for the characteristics of the second order gravity field and upper crustal seismic properties. The Earth's crustal evolution during the first 800 my of its history is conjectural. The lack of a very early crust may indicate that thermal and mechanical instabilities resulting from intense mantle convection and/or bombardment inhibited crustal preservation. Whatever the case, the potential effects of large scale impact have to be considered in models of early Earth evolution. Preliminary models of the evolution of a large terrestrial impact basin were derived and discussed in detail.

  15. A Calibration Routine for Efficient ETD in Large-Scale Proteomics

    NASA Astrophysics Data System (ADS)

    Rose, Christopher M.; Rush, Matthew J. P.; Riley, Nicholas M.; Merrill, Anna E.; Kwiecien, Nicholas W.; Holden, Dustin D.; Mullen, Christopher; Westphall, Michael S.; Coon, Joshua J.

    2015-11-01

    Electron transfer dissociation (ETD) has been broadly adopted and is now available on a variety of commercial mass spectrometers. Unlike collisional activation techniques, optimal performance of ETD requires considerable user knowledge and input. ETD reaction duration is one key parameter that can greatly influence spectral quality and overall experiment outcome. We describe a calibration routine that determines the correct number of reagent anions necessary to reach a defined ETD reaction rate. Implementation of this automated calibration routine on two hybrid Orbitrap platforms illustrates considerable advantages, namely, increased product ion yield with concomitant reduction in scan rates, netting up to 75% more unique peptide identifications in a shotgun experiment.
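
    A minimal sketch (Python; the pseudo-first-order framing is a standard description of ion-ion reaction kinetics rather than the authors' published procedure, and every number and name here is an illustrative assumption) of how a single test reaction can be used to set the reagent anion number for a desired ETD reaction rate: if the precursor survival fraction decays as exp(-k N t), one measurement fixes k, after which N follows from the target rate.

      import math

      def rate_constant_per_ion_ms(survival_fraction, reagent_ions, reaction_time_ms):
          """Infer k from one test reaction, assuming survival = exp(-k * N * t)."""
          return -math.log(survival_fraction) / (reagent_ions * reaction_time_ms)

      def reagent_ions_for_rate(k, target_rate_per_ms):
          """Reagent anion number N such that the effective rate k * N hits the target."""
          return target_rate_per_ms / k

      # illustrative test reaction: 60% of the precursor survives 10 ms with 2e5 reagent anions
      k = rate_constant_per_ion_ms(0.60, 2e5, 10.0)
      print(round(reagent_ions_for_rate(k, target_rate_per_ms=0.2)))   # reagent anions needed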

  16. The large-scale anisotropy with the PAMELA calorimeter

    NASA Astrophysics Data System (ADS)

    Karelin, A.; Adriani, O.; Barbarino, G.; Bazilevskaya, G.; Bellotti, R.; Boezio, M.; Bogomolov, E.; Bongi, M.; Bonvicini, V.; Bottai, S.; Bruno, A.; Cafagna, F.; Campana, D.; Carbone, R.; Carlson, P.; Casolino, M.; Castellini, G.; De Donato, C.; De Santis, C.; De Simone, N.; Di Felice, V.; Formato, V.; Galper, A.; Koldashov, S.; Koldobskiy, S.; Krut'kov, S.; Kvashnin, A.; Leonov, A.; Malakhov, V.; Marcelli, L.; Martucci, M.; Mayorov, A.; Menn, W.; Mergé, M.; Mikhailov, V.; Mocchiutti, E.; Monaco, A.; Mori, N.; Munini, R.; Osteria, G.; Palma, F.; Panico, B.; Papini, P.; Pearce, M.; Picozza, P.; Ricci, M.; Ricciarini, S.; Sarkar, R.; Simon, M.; Scotti, V.; Sparvoli, R.; Spillantini, P.; Stozhkov, Y.; Vacchi, A.; Vannuccini, E.; Vasilyev, G.; Voronov, S.; Yurkin, Y.; Zampa, G.; Zampa, N.

    2015-10-01

    The large-scale anisotropy (or the so-called star-diurnal wave) has been studied using the calorimeter of the space-borne experiment PAMELA. The cosmic ray anisotropy has been obtained for the Southern and Northern hemispheres simultaneously in the equatorial coordinate system for the time period 2006-2014. The dipole amplitude and phase have been measured for energies of 1-20 TeV/n.

  17. Report on large scale molten core/magnesia interaction test

    SciTech Connect

    Chu, T.Y.; Bentz, J.H.; Arellano, F.E.; Brockmann, J.E.; Field, M.E.; Fish, J.D.

    1984-08-01

    A molten core/material interaction experiment was performed at the Large-Scale Melt Facility at Sandia National Laboratories. The experiment involved the release of 230 kg of core melt, heated to 2923 K, into a magnesia brick crucible. Descriptions of the facility, the melting technology, as well as results of the experiment, are presented. Preliminary evaluations of the results indicate that magnesia brick can be a suitable material for core ladle construction.

  18. Analysis plan for 1985 large-scale tests. Technical report

    SciTech Connect

    McMullan, F.W.

    1983-01-01

    The purpose of this effort is to assist DNA in planning for large-scale (upwards of 5000 tons) detonations of conventional explosives in the 1985 and beyond time frame. Primary research objectives were to investigate potential means to increase blast duration and peak pressures. This report identifies and analyzes several candidate explosives. It examines several charge designs and identifies advantages and disadvantages of each. Other factors including terrain and multiburst techniques are addressed as are test site considerations.

  19. Simulating Weak Lensing by Large-Scale Structure

    NASA Astrophysics Data System (ADS)

    Vale, Chris; White, Martin

    2003-08-01

    We model weak gravitational lensing of light by large-scale structure using ray tracing through N-body simulations. The method is described with particular attention paid to numerical convergence. We investigate some of the key approximations in the multiplane ray-tracing algorithm. Our simulated shear and convergence maps are used to explore how well standard assumptions about weak lensing hold, especially near large peaks in the lensing signal.

  20. Large-Scale Optimization for Bayesian Inference in Complex Systems

    SciTech Connect

    Willcox, Karen; Marzouk, Youssef

    2013-11-12

    The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focused on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. The project was a collaborative effort among MIT, the University of Texas at Austin, Georgia Institute of Technology, and Sandia National Laboratories. The research was directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. The MIT--Sandia component of the SAGUARO Project addressed the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas--Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as ``reduce then sample'' and ``sample then reduce.'' In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their

  1. The Large-scale Structure of Scientific Method

    NASA Astrophysics Data System (ADS)

    Kosso, Peter

    2009-01-01

    The standard textbook description of the nature of science describes the proposal, testing, and acceptance of a theoretical idea almost entirely in isolation from other theories. The resulting model of science is a kind of piecemeal empiricism that misses the important network structure of scientific knowledge. Only the large-scale description of scientific method can reveal the global interconnectedness of scientific knowledge that is an essential part of what makes science scientific.

  2. Space transportation booster engine thrust chamber technology, large scale injector

    NASA Technical Reports Server (NTRS)

    Schneider, J. A.

    1993-01-01

    The objective of the Large Scale Injector (LSI) program was to deliver a 21 inch diameter, 600,000 lbf thrust class injector to NASA/MSFC for hot fire testing. The hot fire test program would demonstrate the feasibility and integrity of the full scale injector, including combustion stability, chamber wall compatibility (thermal management), and injector performance. The 21 inch diameter injector was delivered in September of 1991.

  3. Large-Scale Weather Disturbances in Mars’ Southern Extratropics

    NASA Astrophysics Data System (ADS)

    Hollingsworth, Jeffery L.; Kahre, Melinda A.

    2015-11-01

    Between late autumn and early spring, Mars’ middle and high latitudes within its atmosphere support strong mean thermal gradients between the tropics and poles. Observations from both the Mars Global Surveyor (MGS) and Mars Reconnaissance Orbiter (MRO) indicate that this strong baroclinicity supports intense, large-scale eastward traveling weather systems (i.e., transient synoptic-period waves). These extratropical weather disturbances are key components of the global circulation. Such wave-like disturbances act as agents in the transport of heat and momentum, and generalized scalar/tracer quantities (e.g., atmospheric dust, water-vapor and ice clouds). The character of large-scale, traveling extratropical synoptic-period disturbances in Mars' southern hemisphere during late winter through early spring is investigated using a moderately high-resolution Mars global climate model (Mars GCM). This Mars GCM imposes interactively lifted and radiatively active dust based on a threshold value of the surface stress. The model exhibits a reasonable "dust cycle" (i.e., globally averaged, a dustier atmosphere during southern spring and summer occurs). Compared to their northern-hemisphere counterparts, southern synoptic-period weather disturbances and accompanying frontal waves have smaller meridional and zonal scales, and are far less intense. Influences of the zonally asymmetric (i.e., east-west varying) topography on southern large-scale weather are examined. Simulations that adapt Mars’ full topography compared to simulations that utilize synthetic topographies emulating key large-scale features of the southern middle latitudes indicate that Mars’ transient barotropic/baroclinic eddies are highly influenced by the great impact basins of this hemisphere (e.g., Argyre and Hellas). The occurrence of a southern storm zone in late winter and early spring appears to be anchored to the western hemisphere via orographic influences from the Tharsis highlands, and the Argyre

  4. Multivariate Clustering of Large-Scale Scientific Simulation Data

    SciTech Connect

    Eliassi-Rad, T; Critchlow, T

    2003-06-13

    Simulations of complex scientific phenomena involve the execution of massively parallel computer programs. These simulation programs generate large-scale data sets over the spatio-temporal space. Modeling such massive data sets is an essential step in helping scientists discover new information from their computer simulations. In this paper, we present a simple but effective multivariate clustering algorithm for large-scale scientific simulation data sets. Our algorithm utilizes the cosine similarity measure to cluster the field variables in a data set. Field variables include all variables except the spatial (x, y, z) and temporal (time) variables. The exclusion of the spatial dimensions is important since "similar" characteristics could be located (spatially) far from each other. To scale our multivariate clustering algorithm for large-scale data sets, we take advantage of the geometrical properties of the cosine similarity measure. This allows us to reduce the modeling time from O(n^2) to O(n x g(f(u))), where n is the number of data points, f(u) is a function of the user-defined clustering threshold, and g(f(u)) is the number of data points satisfying f(u). We show that on average g(f(u)) is much less than n. Finally, even though spatial variables do not play a role in building clusters, it is desirable to associate each cluster with its correct spatial region. To achieve this, we present a linking algorithm for connecting each cluster to the appropriate nodes of the data set's topology tree (where the spatial information of the data set is stored). Our experimental evaluations on two large-scale simulation data sets illustrate the value of our multivariate clustering and linking algorithms.

  5. Multivariate Clustering of Large-Scale Simulation Data

    SciTech Connect

    Eliassi-Rad, T; Critchlow, T

    2003-03-04

    Simulations of complex scientific phenomena involve the execution of massively parallel computer programs. These simulation programs generate large-scale data sets over the spatiotemporal space. Modeling such massive data sets is an essential step in helping scientists discover new information from their computer simulations. In this paper, we present a simple but effective multivariate clustering algorithm for large-scale scientific simulation data sets. Our algorithm utilizes the cosine similarity measure to cluster the field variables in a data set. Field variables include all variables except the spatial (x, y, z) and temporal (time) variables. The exclusion of the spatial space is important since 'similar' characteristics could be located (spatially) far from each other. To scale our multivariate clustering algorithm for large-scale data sets, we take advantage of the geometrical properties of the cosine similarity measure. This allows us to reduce the modeling time from O(n^2) to O(n x g(f(u))), where n is the number of data points, f(u) is a function of the user-defined clustering threshold, and g(f(u)) is the number of data points satisfying the threshold f(u). We show that on average g(f(u)) is much less than n. Finally, even though spatial variables do not play a role in building a cluster, it is desirable to associate each cluster with its correct spatial space. To achieve this, we present a linking algorithm for connecting each cluster to the appropriate nodes of the data set's topology tree (where the spatial information of the data set is stored). Our experimental evaluations on two large-scale simulation data sets illustrate the value of our multivariate clustering and linking algorithms.
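
    A minimal sketch (Python with numpy; the greedy assignment and all names are illustrative, not the authors' exact procedure) of threshold-based clustering of field variables by cosine similarity, with spatial coordinates excluded as described above.

      import numpy as np

      def cosine(u, v):
          return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

      def cluster_field_variables(data, threshold=0.95):
          """data: dict mapping field-variable name -> 1-D array of its values over all
          grid points (spatial coordinates excluded). Greedily assigns each variable to
          the first cluster whose representative it matches at or above the threshold."""
          reps, clusters = [], []
          for name, values in data.items():
              for rep_vec, members in zip(reps, clusters):
                  if cosine(values, rep_vec) >= threshold:
                      members.append(name)
                      break
              else:
                  reps.append(values)          # this variable seeds a new cluster
                  clusters.append([name])
          return clusters

      rng = np.random.default_rng(3)
      base = rng.normal(size=1000)
      fields = {"pressure": base + rng.normal(0, 0.05, 1000),
                "density": 2.0 * base + rng.normal(0, 0.05, 1000),
                "temperature": rng.normal(size=1000)}
      print(cluster_field_variables(fields, threshold=0.9))
      # e.g. [['pressure', 'density'], ['temperature']]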

  6. Large-scale Alfvén vortices

    SciTech Connect

    Onishchenko, O. G.; Horton, W.; Scullion, E.; Fedun, V.

    2015-12-15

    The new type of large-scale vortex structures of dispersionless Alfvén waves in collisionless plasma is investigated. It is shown that Alfvén waves can propagate in the form of Alfvén vortices of finite characteristic radius and characterised by magnetic flux ropes carrying orbital angular momentum. The structure of the toroidal and radial velocity, fluid and magnetic field vorticity, the longitudinal electric current in the plane orthogonal to the external magnetic field are discussed.

  7. Relic vector field and CMB large scale anomalies

    SciTech Connect

    Chen, Xingang; Wang, Yi E-mail: yw366@cam.ac.uk

    2014-10-01

    We study the most general effects of relic vector fields on the inflationary background and density perturbations. Such effects are observable if the number of inflationary e-folds is close to the minimum requirement to solve the horizon problem. We show that this can potentially explain two CMB large scale anomalies: the quadrupole-octopole alignment and the quadrupole power suppression. We discuss its effect on the parity anomaly. We also provide an analytical template for more detailed data comparison.

  8. Large-scale Alfvén vortices

    NASA Astrophysics Data System (ADS)

    Onishchenko, O. G.; Pokhotelov, O. A.; Horton, W.; Scullion, E.; Fedun, V.

    2015-12-01

    The new type of large-scale vortex structures of dispersionless Alfvén waves in collisionless plasma is investigated. It is shown that Alfvén waves can propagate in the form of Alfvén vortices of finite characteristic radius and characterised by magnetic flux ropes carrying orbital angular momentum. The structure of the toroidal and radial velocity, fluid and magnetic field vorticity, the longitudinal electric current in the plane orthogonal to the external magnetic field are discussed.

  9. Turbulent large-scale structure effects on wake meandering

    NASA Astrophysics Data System (ADS)

    Muller, Y.-A.; Masson, C.; Aubrun, S.

    2015-06-01

    This work studies the effects of large-scale turbulent structures on wake meandering using Large Eddy Simulations (LES) over an actuator disk. Other potential sources of wake meandering, such as the instability mechanisms associated with tip vortices, are not treated in this study. A crucial element of efficient, pragmatic and successful simulations of large-scale turbulent structures in the Atmospheric Boundary Layer (ABL) is the generation of the stochastic turbulent atmospheric flow. This is an essential capability since one source of wake meandering is these large (larger than the turbine diameter) turbulent structures. The unsteady wind turbine wake in the ABL is simulated using a combination of LES and actuator disk approaches. In order to dedicate the large majority of the available computing power to the wake, the ABL ground region of the flow is not part of the computational domain. Instead, mixed Dirichlet/Neumann boundary conditions are applied at all the computational surfaces except at the outlet. Prescribed values for the Dirichlet contribution of these boundary conditions are provided by a stochastic turbulent wind generator. This makes it possible to simulate large-scale turbulent structures (larger than the computational domain), leading to an efficient technique for simulating wake meandering. Since the stochastic wind generator includes shear, turbulence production is included in the analysis without the necessity of resolving the flow near the ground. The classical Smagorinsky sub-grid model is used. The resulting numerical methodology has been implemented in OpenFOAM. Comparisons with experimental measurements in porous-disk wakes have been undertaken, and the agreement is good. While the temporal resolution of experimental measurements is high, the spatial resolution is often too low. LES numerical results provide a more complete spatial description of the flow. They tend to demonstrate that inflow low-frequency content (or large-scale turbulent structures) is

  10. A Cloud Computing Platform for Large-Scale Forensic Computing

    NASA Astrophysics Data System (ADS)

    Roussev, Vassil; Wang, Liqiang; Richard, Golden; Marziale, Lodovico

    The timely processing of massive digital forensic collections demands the use of large-scale distributed computing resources and the flexibility to customize the processing performed on the collections. This paper describes MPI MapReduce (MMR), an open implementation of the MapReduce processing model that outperforms traditional forensic computing techniques. MMR provides linear scaling for CPU-intensive processing and super-linear scaling for indexing-related workloads.

  11. Supporting large scale applications on networks of workstations

    NASA Technical Reports Server (NTRS)

    Cooper, Robert; Birman, Kenneth P.

    1989-01-01

    Distributed applications on networks of workstations are an increasingly common way to satisfy computing needs. However, existing mechanisms for distributed programming exhibit poor performance and reliability as application size increases. Extension of the ISIS distributed programming system to support large scale distributed applications by providing hierarchical process groups is discussed. Incorporation of hierarchy in the program structure and exploitation of this to limit the communication and storage required in any one component of the distributed system is examined.

  12. Homogenization of Large-Scale Movement Models in Ecology

    USGS Publications Warehouse

    Garlick, M.J.; Powell, J.A.; Hooten, M.B.; McFarlane, L.R.

    2011-01-01

    A difficulty in using diffusion models to predict large scale animal population dispersal is that individuals move differently based on local information (as opposed to gradients) in differing habitat types. This can be accommodated by using ecological diffusion. However, real environments are often spatially complex, limiting application of a direct approach. Homogenization for partial differential equations has long been applied to Fickian diffusion (in which average individual movement is organized along gradients of habitat and population density). We derive a homogenization procedure for ecological diffusion and apply it to a simple model for chronic wasting disease in mule deer. Homogenization allows us to determine the impact of small scale (10-100 m) habitat variability on large scale (10-100 km) movement. The procedure generates asymptotic equations for solutions on the large scale with parameters defined by small-scale variation. The simplicity of this homogenization procedure is striking when compared to the multi-dimensional homogenization procedure for Fickian diffusion, and the method will be equally straightforward for more complex models. © 2010 Society for Mathematical Biology.
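
    A brief LaTeX sketch of the background distinction the abstract relies on (standard forms stated here as context, not equations quoted from the paper): Fickian diffusion places the motility coefficient inside the gradient, whereas ecological diffusion places it inside the Laplacian, so spatially varying motility alone can concentrate or dilute a population; homogenization then replaces the rapidly varying coefficient with an effective large-scale one.

      % Fickian diffusion: flux is organized along gradients of density
      \frac{\partial u}{\partial t} = \nabla \cdot \big( \mu(x)\, \nabla u \big)

      % Ecological diffusion: movement is set by local, habitat-dependent motility
      \frac{\partial u}{\partial t} = \nabla^2 \big( \mu(x)\, u \big)

      % Homogenization replaces the rapidly varying \mu(x) by an effective constant
      % \bar{\mu} on the large (10-100 km) scale, with the small (10-100 m) scale
      % variability entering only through the average that defines \bar{\mu}.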

  13. Dispersal Mutualism Incorporated into Large-Scale, Infrequent Disturbances.

    PubMed

    Parker, V Thomas

    2015-01-01

    Because of their influence on succession and other community interactions, large-scale, infrequent natural disturbances also should play a major role in mutualistic interactions. Using field data and experiments, I test whether mutualisms have been incorporated into large-scale wildfire by whether the outcomes of a mutualism depend on disturbance. In this study a seed dispersal mutualism is shown to depend on infrequent, large-scale disturbances. A dominant shrubland plant (Arctostaphylos species) produces seeds that make up a persistent soil seed bank and requires fire to germinate. In post-fire stands, I show that seedlings emerging from rodent caches dominate sites experiencing higher fire intensity. Field experiments show that rodents (Peromyscus californicus, P. boylii) do cache Arctostaphylos fruit and bury most seed caches to a sufficient depth to survive a killing heat pulse that a fire might drive into the soil. While the rodent dispersal and caching behavior itself has not changed compared to other habitats, the environmental transformation caused by wildfire converts the caching burial of seed from a dispersal process to a plant fire adaptive trait, and provides the context for stimulating subsequent life history evolution in the plant host.

  14. Large scale anisotropy of UHECRs for the Telescope Array

    SciTech Connect

    Kido, E.

    2011-09-22

    The origin of Ultra High Energy Cosmic Rays (UHECRs) is one of the most interesting questions in astroparticle physics. Despite the efforts of previous measurements, there is no consensus yet on either the origin or the mechanism of UHECR generation and propagation. In this context, the Telescope Array (TA) experiment is expected to play an important role as the largest detector in the northern hemisphere, consisting of an array of surface particle detectors (SDs), fluorescence detectors (FDs), and other important calibration devices. We searched for large scale anisotropy using the SD data of TA. UHECRs are expected to be restricted to the GZK horizon when the composition of UHECRs is proton, so the observed arrival directions are expected to exhibit local large scale anisotropy if UHECR sources are astrophysical objects. We used the SD data set from 11 May 2008 to 7 September 2010 to search for large-scale anisotropy. The discrimination power between LSS and isotropy is not yet sufficient, but the statistics of TA are expected to allow discrimination between them at about the 95% confidence level on average in the near future.

  15. How Large Scales Flows May Influence Solar Activity

    NASA Technical Reports Server (NTRS)

    Hathaway, D. H.

    2004-01-01

    Large scale flows within the solar convection zone are the primary drivers of the Sun's magnetic activity cycle and play important roles in shaping the Sun's magnetic field. Differential rotation amplifies the magnetic field through its shearing action and converts poloidal field into toroidal field. Poleward meridional flow near the surface carries magnetic flux that reverses the magnetic poles at about the time of solar maximum. The deeper, equatorward meridional flow can carry magnetic flux back toward the lower latitudes where it erupts through the surface to form tilted active regions that convert toroidal fields into oppositely directed poloidal fields. These axisymmetric flows are themselves driven by large scale convective motions. The effects of the Sun's rotation on convection produce velocity correlations that can maintain both the differential rotation and the meridional circulation. These convective motions can also influence solar activity directly by shaping the magnetic field pattern. While considerable theoretical advances have been made toward understanding these large scale flows, outstanding problems in matching theory to observations still remain.

  16. Large-scale flow experiments for managing river systems

    USGS Publications Warehouse

    Konrad, C.P.; Olden, J.D.; Lytle, D.A.; Melis, T.S.; Schmidt, J.C.; Bray, E.N.; Freeman, Mary C.; Gido, K.B.; Hemphill, N.P.; Kennard, M.J.; McMullen, L.E.; Mims, M.C.; Pyron, M.; Robinson, C.T.; Williams, J.G.

    2011-01-01

    Experimental manipulations of streamflow have been used globally in recent decades to mitigate the impacts of dam operations on river systems. Rivers are challenging subjects for experimentation, because they are open systems that cannot be isolated from their social context. We identify principles to address the challenges of conducting effective large-scale flow experiments. Flow experiments have both scientific and social value when they help to resolve specific questions about the ecological action of flow with a clear nexus to water policies and decisions. Water managers must integrate new information into operating policies for large-scale experiments to be effective. Modeling and monitoring can be integrated with experiments to analyze long-term ecological responses. Experimental design should include spatially extensive observations and well-defined, repeated treatments. Large-scale flow manipulations are only a part of dam operations that affect river systems. Scientists can ensure that experimental manipulations continue to be a valuable approach for the scientifically based management of river systems. © 2011 by American Institute of Biological Sciences. All rights reserved.

  17. A visualization framework for large-scale virtual astronomy

    NASA Astrophysics Data System (ADS)

    Fu, Chi-Wing

    Motivated by advances in modern positional astronomy, this research attempts to digitally model the entire Universe through computer graphics technology. Our first challenge is space itself. The gigantic size of the Universe makes it impossible to put everything into a typical graphics system at its own scale. The graphics rendering process can easily fail because of limited computational precision. The second challenge is that the enormous amount of data could slow down the graphics; we need clever techniques to speed up the rendering. Third, since the Universe is dominated by empty space, objects are widely separated; this makes navigation difficult. We attempt to tackle these problems through various techniques designed to extend and optimize the conventional graphics framework, including the following: power homogeneous coordinates for large-scale spatial representations, generalized large-scale spatial transformations, and rendering acceleration via environment caching and object disappearance criteria. Moreover, we implemented an assortment of techniques for modeling and rendering a variety of astronomical bodies, ranging from the Earth up to faraway galaxies, and attempted to visualize cosmological time; a method we call the Lightcone representation was introduced to visualize the whole space-time of the Universe at a single glance. In addition, several navigation models were developed to handle the large-scale navigation problem. Our final results include a collection of visualization tools, two educational animations appropriate for planetarium audiences, and state-of-the-art-advancing rendering techniques that can be transferred to practice in digital planetarium systems.

  18. Line segment extraction for large scale unorganized point clouds

    NASA Astrophysics Data System (ADS)

    Lin, Yangbin; Wang, Cheng; Cheng, Jun; Chen, Bili; Jia, Fukai; Chen, Zhonggui; Li, Jonathan

    2015-04-01

    Line segment detection in images is already a well-investigated topic, although it has received considerably less attention in 3D point clouds. Benefiting from current LiDAR devices, large-scale point clouds are becoming increasingly common. Most human-made objects have flat surfaces. Line segments that occur where pairs of planes intersect give important information regarding the geometric content of point clouds, which is especially useful for automatic building reconstruction and segmentation. This paper proposes a novel method that is capable of accurately extracting plane intersection line segments from large-scale raw scan points. The 3D line-support region, namely, a point set near a straight linear structure, is extracted simultaneously. The 3D line-support region is fitted by our Line-Segment-Half-Planes (LSHP) structure, which provides a geometric constraint for a line segment, making the line segment more reliable and accurate. We demonstrate our method on the point clouds of large-scale, complex, real-world scenes acquired by LiDAR devices. We also demonstrate the application of 3D line-support regions and their LSHP structures on urban scene abstraction.
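
    The geometric core of extracting a plane-intersection line can be sketched in a few lines of Python; this is generic linear algebra under our own naming, not the authors' LSHP fitting code.

        import numpy as np

        def plane_intersection(n1, d1, n2, d2):
            """Line where planes n1.x + d1 = 0 and n2.x + d2 = 0 meet.
            Returns (point_on_line, unit_direction), or None if the planes are (near-)parallel."""
            n1, n2 = np.asarray(n1, float), np.asarray(n2, float)
            direction = np.cross(n1, n2)
            if np.linalg.norm(direction) < 1e-9:
                return None
            # Two plane equations plus a gauge constraint pin down one point on the line.
            A = np.vstack([n1, n2, direction])
            b = np.array([-d1, -d2, 0.0])
            point = np.linalg.solve(A, b)
            return point, direction / np.linalg.norm(direction)

        # Example: the planes z = 0 and y = 2 intersect in a line along x at y = 2, z = 0.
        p, d = plane_intersection([0, 0, 1], 0.0, [0, 1, 0], -2.0)
        print(p, d)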

  19. Impact of Large-scale Geological Architectures On Recharge

    NASA Astrophysics Data System (ADS)

    Troldborg, L.; Refsgaard, J. C.; Engesgaard, P.; Jensen, K. H.

    Geological and hydrogeological data constitute the basis for assessment of groundwater flow patterns and recharge zones. The accessibility and applicability of hard geological data is often a major obstacle in deriving plausible conceptual models. Nevertheless, focus is often on parameter uncertainty caused by the effect of geological heterogeneity due to lack of hard geological data, thus neglecting the possibility of alternative conceptualizations of the large-scale geological architecture. For a catchment in the eastern part of Denmark we have constructed different geological models based on different conceptualizations of the major geological trends and facies architecture. The geological models are equally plausible in a conceptual sense and they are all calibrated to well head and river flow measurements. Comparison of differences in recharge zones and, subsequently, well protection zones emphasizes the importance of assessing large-scale geological architecture in hydrological modeling on a regional scale in a non-deterministic way. Geostatistical modeling carried out in a transition probability framework shows the possibility of assessing multiple realizations of large-scale geological architecture from a combination of soft and hard geological information.

  20. Dispersal Mutualism Incorporated into Large-Scale, Infrequent Disturbances

    PubMed Central

    Parker, V. Thomas

    2015-01-01

    Because of their influence on succession and other community interactions, large-scale, infrequent natural disturbances also should play a major role in mutualistic interactions. Using field data and experiments, I test whether mutualisms have been incorporated into large-scale wildfire by whether the outcomes of a mutualism depend on disturbance. In this study a seed dispersal mutualism is shown to depend on infrequent, large-scale disturbances. A dominant shrubland plant (Arctostaphylos species) produces seeds that make up a persistent soil seed bank and requires fire to germinate. In post-fire stands, I show that seedlings emerging from rodent caches dominate sites experiencing higher fire intensity. Field experiments show that rodents (Peromyscus californicus, P. boylii) do cache Arctostaphylos fruit and bury most seed caches to a sufficient depth to survive a killing heat pulse that a fire might drive into the soil. While the rodent dispersal and caching behavior itself has not changed compared to other habitats, the environmental transformation caused by wildfire converts the caching burial of seed from a dispersal process to a plant fire adaptive trait, and provides the context for stimulating subsequent life history evolution in the plant host. PMID:26151560

  1. Dispersal Mutualism Incorporated into Large-Scale, Infrequent Disturbances.

    PubMed

    Parker, V Thomas

    2015-01-01

    Because of their influence on succession and other community interactions, large-scale, infrequent natural disturbances also should play a major role in mutualistic interactions. Using field data and experiments, I test whether mutualisms have been incorporated into large-scale wildfire by whether the outcomes of a mutualism depend on disturbance. In this study a seed dispersal mutualism is shown to depend on infrequent, large-scale disturbances. A dominant shrubland plant (Arctostaphylos species) produces seeds that make up a persistent soil seed bank and requires fire to germinate. In post-fire stands, I show that seedlings emerging from rodent caches dominate sites experiencing higher fire intensity. Field experiments show that rodents (Peromyscus californicus, P. boylii) do cache Arctostaphylos fruit and bury most seed caches to a sufficient depth to survive a killing heat pulse that a fire might drive into the soil. While the rodent dispersal and caching behavior itself has not changed compared to other habitats, the environmental transformation caused by wildfire converts the caching burial of seed from a dispersal process to a plant fire adaptive trait, and provides the context for stimulating subsequent life history evolution in the plant host. PMID:26151560

  2. Large-scale flow experiments for managing river systems

    USGS Publications Warehouse

    Konrad, Christopher P.; Olden, Julian D.; Lytle, David A.; Melis, Theodore S.; Schmidt, John C.; Bray, Erin N.; Freeman, Mary C.; Gido, Keith B.; Hemphill, Nina P.; Kennard, Mark J.; McMullen, Laura E.; Mims, Meryl C.; Pyron, Mark; Robinson, Christopher T.; Williams, John G.

    2011-01-01

    Experimental manipulations of streamflow have been used globally in recent decades to mitigate the impacts of dam operations on river systems. Rivers are challenging subjects for experimentation, because they are open systems that cannot be isolated from their social context. We identify principles to address the challenges of conducting effective large-scale flow experiments. Flow experiments have both scientific and social value when they help to resolve specific questions about the ecological action of flow with a clear nexus to water policies and decisions. Water managers must integrate new information into operating policies for large-scale experiments to be effective. Modeling and monitoring can be integrated with experiments to analyze long-term ecological responses. Experimental design should include spatially extensive observations and well-defined, repeated treatments. Large-scale flow manipulations are only a part of dam operations that affect river systems. Scientists can ensure that experimental manipulations continue to be a valuable approach for the scientifically based management of river systems.

  3. Large-scale data mining pilot project in human genome

    SciTech Connect

    Musick, R.; Fidelis, R.; Slezak, T.

    1997-05-01

    This whitepaper briefly describes a new, aggressive effort in large-scale data mining at Livermore National Labs. The implications of `large-scale` will be clarified in a later section. In the short term, this effort will focus on several mission-critical questions of the Genome project. We will adapt current data mining techniques to the Genome domain, quantify the accuracy of inference results, and lay the groundwork for a more extensive effort in large-scale data mining. A major aspect of the approach is that we will build on a fully-staffed data warehousing effort in the human Genome area. The long-term goal is a strong applications-oriented research program in large-scale data mining. The tools and skill set gained will be directly applicable to a wide spectrum of tasks involving large spatial and multidimensional data. This includes applications in ensuring non-proliferation, stockpile stewardship, enabling Global Ecology (Materials Database, Industrial Ecology), advancing the Biosciences (Human Genome Project), and supporting data for others (Battlefield Management, Health Care).

  4. Exogenous melatonin improves corn (Zea mays L.) embryo proteome in seeds subjected to chilling stress.

    PubMed

    Kołodziejczyk, Izabela; Dzitko, Katarzyna; Szewczyk, Rafał; Posmyk, Małgorzata M

    2016-04-01

    Melatonin (MEL; N-acetyl-5-methoxytryptamine) plays an important role in plant stress defense. Various plant species rich in this indoleamine have shown a higher capacity for stress tolerance. Moreover, it has great potential for plant biostimulation, is biodegradable and non-toxic for the environment. All this indicates that our concept of seed enrichment with exogenous MEL is justified. This work concerns the effects of corn (Zea mays L.) seed pre-sowing treatments supplemented with MEL. Non-treated seeds (nt), and those hydroprimed with water (H) or with MEL solutions 50 and 500 μM (HMel50, HMel500) were compared. Positive effects of seed priming are particularly apparent during germination under suboptimal conditions. The impact of MEL applied by priming on seed protein profiles during imbibition/germination at low temperature has not been investigated to date. In order to identify changes in the corn seed proteome after applying hydropriming techniques, purified protein extracts of chilling stressed seed embryos (14 days, 5°C) were separated by two-dimensional electrophoresis. Then proteome maps were graphically and statistically compared and selected protein spots were qualitatively analyzed using mass spectrometry techniques and identified. This study aimed to analyze the priming-induced changes in the maize embryo proteome and to identify priming-associated and MEL-associated proteins in maize seeds subjected to chilling. We attempt to explain how MEL expands plant capacity for stress tolerance.

  5. Total Soluble Protein Extraction for Improved Proteomic Analysis of Transgenic Rice Plant Roots.

    PubMed

    Raorane, Manish L; Narciso, Joan O; Kohli, Ajay

    2016-01-01

    With the advent of high-throughput platforms, proteomics has become a powerful tool to search for plant gene products of agronomic relevance. Protein extractions using multistep protocols have been shown to be effective to achieve better proteome profiles than simple, single-step extractions. These protocols are generally efficient for above ground tissues such as leaves. However, each step leads to loss of some amount of proteins. Additionally, compounds such as proteases in the plant tissues lead to protein degradation. While protease inhibitor cocktails are available, these alone do not seem to suffice when roots are included in the plant sample. This is obvious given the lack of high molecular weight (HMW) proteins obtained from samples that include root tissue. For protein/proteome analysis of transgenic plant roots or of seedlings, which include root tissue, such pronounced protein degradation is especially undesirable. A facile protein extraction protocol is presented, which ensures that despite the inclusion of root tissues there is minimal loss in total protein components.

  6. Exogenous melatonin improves corn (Zea mays L.) embryo proteome in seeds subjected to chilling stress.

    PubMed

    Kołodziejczyk, Izabela; Dzitko, Katarzyna; Szewczyk, Rafał; Posmyk, Małgorzata M

    2016-04-01

    Melatonin (MEL; N-acetyl-5-methoxytryptamine) plays an important role in plant stress defense. Various plant species rich in this indoleamine have shown a higher capacity for stress tolerance. Moreover, it has great potential for plant biostimulation, is biodegradable and non-toxic for the environment. All this indicates that our concept of seed enrichment with exogenous MEL is justified. This work concerns the effects of corn (Zea mays L.) seed pre-sowing treatments supplemented with MEL. Non-treated seeds (nt), and those hydroprimed with water (H) or with MEL solutions 50 and 500 μM (HMel50, HMel500) were compared. Positive effects of seed priming are particularly apparent during germination under suboptimal conditions. The impact of MEL applied by priming on seed protein profiles during imbibition/germination at low temperature has not been investigated to date. In order to identify changes in the corn seed proteome after applying hydropriming techniques, purified protein extracts of chilling stressed seed embryos (14 days, 5°C) were separated by two-dimensional electrophoresis. Then proteome maps were graphically and statistically compared and selected protein spots were qualitatively analyzed using mass spectrometry techniques and identified. This study aimed to analyze the priming-induced changes in the maize embryo proteome and to identify priming-associated and MEL-associated proteins in maize seeds subjected to chilling. We attempt to explain how MEL expands plant capacity for stress tolerance. PMID:26945210

  7. Foundational perspectives on causality in large-scale brain networks

    NASA Astrophysics Data System (ADS)

    Mannino, Michael; Bressler, Steven L.

    2015-12-01

    A profusion of recent work in cognitive neuroscience has been concerned with the endeavor to uncover causal influences in large-scale brain networks. However, despite the fact that many papers give a nod to the important theoretical challenges posed by the concept of causality, this explosion of research has generally not been accompanied by a rigorous conceptual analysis of the nature of causality in the brain. This review provides both a descriptive and prescriptive account of the nature of causality as found within and between large-scale brain networks. In short, it seeks to clarify the concept of causality in large-scale brain networks both philosophically and scientifically. This is accomplished by briefly reviewing the rich philosophical history of work on causality, especially focusing on contributions by David Hume, Immanuel Kant, Bertrand Russell, and Christopher Hitchcock. We go on to discuss the impact that various interpretations of modern physics have had on our understanding of causality. Throughout all this, a central focus is the distinction between theories of deterministic causality (DC), whereby causes uniquely determine their effects, and probabilistic causality (PC), whereby causes change the probability of occurrence of their effects. We argue that, given the topological complexity of its large-scale connectivity, the brain should be considered as a complex system and its causal influences treated as probabilistic in nature. We conclude that PC is well suited for explaining causality in the brain for three reasons: (1) brain causality is often mutual; (2) connectional convergence dictates that only rarely is the activity of one neuronal population uniquely determined by another one; and (3) the causal influences exerted between neuronal populations may not have observable effects. A number of different techniques are currently available to characterize causal influence in the brain. Typically, these techniques quantify the statistical

  8. Foundational perspectives on causality in large-scale brain networks.

    PubMed

    Mannino, Michael; Bressler, Steven L

    2015-12-01

    A profusion of recent work in cognitive neuroscience has been concerned with the endeavor to uncover causal influences in large-scale brain networks. However, despite the fact that many papers give a nod to the important theoretical challenges posed by the concept of causality, this explosion of research has generally not been accompanied by a rigorous conceptual analysis of the nature of causality in the brain. This review provides both a descriptive and prescriptive account of the nature of causality as found within and between large-scale brain networks. In short, it seeks to clarify the concept of causality in large-scale brain networks both philosophically and scientifically. This is accomplished by briefly reviewing the rich philosophical history of work on causality, especially focusing on contributions by David Hume, Immanuel Kant, Bertrand Russell, and Christopher Hitchcock. We go on to discuss the impact that various interpretations of modern physics have had on our understanding of causality. Throughout all this, a central focus is the distinction between theories of deterministic causality (DC), whereby causes uniquely determine their effects, and probabilistic causality (PC), whereby causes change the probability of occurrence of their effects. We argue that, given the topological complexity of its large-scale connectivity, the brain should be considered as a complex system and its causal influences treated as probabilistic in nature. We conclude that PC is well suited for explaining causality in the brain for three reasons: (1) brain causality is often mutual; (2) connectional convergence dictates that only rarely is the activity of one neuronal population uniquely determined by another one; and (3) the causal influences exerted between neuronal populations may not have observable effects. A number of different techniques are currently available to characterize causal influence in the brain. Typically, these techniques quantify the statistical

  9. Robust large-scale parallel nonlinear solvers for simulations.

    SciTech Connect

    Bader, Brett William; Pawlowski, Roger Patrick; Kolda, Tamara Gibson

    2005-11-01

    This report documents research to develop robust and efficient solution techniques for solving large-scale systems of nonlinear equations. The most widely used method for solving systems of nonlinear equations is Newton's method. While much research has been devoted to augmenting Newton-based solvers (usually with globalization techniques), little has been devoted to exploring the application of different models. Our research has been directed at evaluating techniques using different models than Newton's method: a lower order model, Broyden's method, and a higher order model, the tensor method. We have developed large-scale versions of each of these models and have demonstrated their use in important applications at Sandia. Broyden's method replaces the Jacobian with an approximation, allowing codes that cannot evaluate a Jacobian or have an inaccurate Jacobian to converge to a solution. Limited-memory methods, which have been successful in optimization, allow us to extend this approach to large-scale problems. We compare the robustness and efficiency of Newton's method, modified Newton's method, Jacobian-free Newton-Krylov method, and our limited-memory Broyden method. Comparisons are carried out for large-scale applications of fluid flow simulations and electronic circuit simulations. Results show that, in cases where the Jacobian was inaccurate or could not be computed, Broyden's method converged in some cases where Newton's method failed to converge. We identify conditions where Broyden's method can be more efficient than Newton's method. We also present modifications to a large-scale tensor method, originally proposed by Bouaricha, for greater efficiency, better robustness, and wider applicability. Tensor methods are an alternative to Newton-based methods and are based on computing a step based on a local quadratic model rather than a linear model. The advantage of Bouaricha's method is that it can use any existing linear solver, which makes it simple to write
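
    A textbook sketch of the Jacobian-free update behind Broyden's method, in Python; this is a plain dense version for illustration, not the limited-memory implementation developed in the report.

        import numpy as np

        def broyden(F, x0, tol=1e-10, max_iter=50):
            """'Good' Broyden iteration: keep an approximate Jacobian B and refresh it
            with a rank-one update from each (step, residual-change) pair."""
            x = np.asarray(x0, float)
            B = np.eye(len(x))                    # initial Jacobian approximation
            Fx = F(x)
            for _ in range(max_iter):
                s = np.linalg.solve(B, -Fx)       # quasi-Newton step
                x = x + s
                F_new = F(x)
                y = F_new - Fx
                B += np.outer(y - B @ s, s) / (s @ s)   # rank-one Broyden update
                Fx = F_new
                if np.linalg.norm(Fx) < tol:
                    break
            return x

        # Example: x0^2 + x1^2 = 1 and x0 = x1, starting near the root (~0.7071, 0.7071).
        print(broyden(lambda x: np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]]), [0.8, 0.6]))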

  10. A new asynchronous parallel algorithm for inferring large-scale gene regulatory networks.

    PubMed

    Xiao, Xiangyun; Zhang, Wei; Zou, Xiufen

    2015-01-01

    The reconstruction of gene regulatory networks (GRNs) from high-throughput experimental data has been considered one of the most important issues in systems biology research. With the development of high-throughput technology and the complexity of biological problems, we need to reconstruct GRNs that contain thousands of genes. However, when many existing algorithms are used to handle these large-scale problems, they will encounter two important issues: low accuracy and high computational cost. To overcome these difficulties, the main goal of this study is to design an effective parallel algorithm to infer large-scale GRNs based on high-performance parallel computing environments. In this study, we proposed a novel asynchronous parallel framework to improve the accuracy and lower the time complexity of large-scale GRN inference by combining splitting technology and ordinary differential equation (ODE)-based optimization. The presented algorithm uses the sparsity and modularity of GRNs to split whole large-scale GRNs into many small-scale modular subnetworks. Through the ODE-based optimization of all subnetworks in parallel and their asynchronous communications, we can easily obtain the parameters of the whole network. To test the performance of the proposed approach, we used well-known benchmark datasets from Dialogue for Reverse Engineering Assessments and Methods challenge (DREAM), experimentally determined GRN of Escherichia coli and one published dataset that contains more than 10 thousand genes to compare the proposed approach with several popular algorithms on the same high-performance computing environments in terms of both accuracy and time complexity. The numerical results demonstrate that our parallel algorithm exhibits obvious superiority in inferring large-scale GRNs.
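
    The split-and-fit idea can be caricatured as follows: each gene (or subnetwork) is fitted independently against an ODE-style surrogate, d x_i/dt ≈ W_i · x, so the fits can run in parallel. The regression form, function names, and pool-based parallelism below are our simplifications, not the authors' asynchronous algorithm.

        import numpy as np
        from concurrent.futures import ProcessPoolExecutor

        def fit_gene(args):
            """Least-squares fit of d x_i/dt ~ W_i . x for one gene (toy stand-in
            for the per-subnetwork ODE-based optimization)."""
            dxdt_i, X = args
            W_i, *_ = np.linalg.lstsq(X.T, dxdt_i, rcond=None)
            return W_i

        def infer_grn(X, dt=1.0, workers=4):
            """X: genes x timepoints expression matrix; returns a genes x genes weight matrix."""
            dxdt = np.gradient(X, dt, axis=1)
            jobs = [(dxdt[i], X) for i in range(X.shape[0])]
            with ProcessPoolExecutor(max_workers=workers) as pool:
                rows = list(pool.map(fit_gene, jobs))
            return np.vstack(rows)

        if __name__ == "__main__":
            expr = np.random.default_rng(0).random((30, 40))   # 30 genes, 40 timepoints
            print(infer_grn(expr).shape)                       # (30, 30)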

  11. Large-Scale Covariability Between Aerosol and Precipitation Over the 7-SEAS Region: Observations and Simulations

    NASA Technical Reports Server (NTRS)

    Huang, Jingfeng; Hsu, N. Christina; Tsay, Si-Chee; Zhang, Chidong; Jeong, Myeong Jae; Gautam, Ritesh; Bettenhausen, Corey; Sayer, Andrew M.; Hansell, Richard A.; Liu, Xiaohong; Jiang, Jonathan H.

    2012-01-01

    One of the seven scientific areas of interest of the 7-SEAS field campaign is to evaluate the impact of aerosol on cloud and precipitation (http://7-seas.gsfc.nasa.gov). However, large-scale covariability between aerosol, cloud and precipitation is complicated not only by the ambient environment and a variety of aerosol effects, but also by effects from rain washout and climate factors. This study characterizes large-scale aerosol-cloud-precipitation covariability through synergy of long-term multi-sensor satellite observations with model simulations over the 7-SEAS region [10S-30N, 95E-130E]. Results show that climate factors such as ENSO significantly modulate aerosol and precipitation over the region simultaneously. After removal of climate factor effects, aerosol and precipitation are significantly anti-correlated over the southern part of the region, where high aerosol loading is associated with overall reduced total precipitation with intensified rain rates and decreased rain frequency, decreased tropospheric latent heating, suppressed cloud top height and increased outgoing longwave radiation, enhanced clear-sky shortwave TOA flux but reduced all-sky shortwave TOA flux in deep convective regimes; but such covariability becomes less notable over the northern counterpart of the region where low-level stratus are found. Using CO as a proxy of biomass burning aerosols to minimize the washout effect, large-scale covariability between CO and precipitation was also investigated and similar large-scale covariability was observed. Model simulations with NCAR CAM5 were found to show effects similar to the observations in the spatio-temporal patterns. Results from both observations and simulations are valuable for improving our understanding of this region's meteorological system and the roles of aerosol within it. Key words: aerosol; precipitation; large-scale covariability; aerosol effects; washout; climate factors; 7-SEAS; CO; CAM5

  12. Ultra-large-scale Cosmology in Next-generation Experiments with Single Tracers

    NASA Astrophysics Data System (ADS)

    Alonso, David; Bull, Philip; Ferreira, Pedro G.; Maartens, Roy; Santos, Mário G.

    2015-12-01

    Future surveys of large-scale structure will be able to measure perturbations on the scale of the cosmological horizon, and so could potentially probe a number of novel relativistic effects that are negligibly small on sub-horizon scales. These effects leave distinctive signatures in the power spectra of clustering observables and, if measurable, would open a new window on relativistic cosmology. We quantify the size and detectability of the effects for the most relevant future large-scale structure experiments: spectroscopic and photometric galaxy redshift surveys, intensity mapping surveys of neutral hydrogen, and radio continuum surveys. Our forecasts show that next-generation experiments, reaching out to redshifts z ≃ 4, will not be able to detect previously undetected general-relativistic effects by using individual tracers of the density field, although the contribution of weak lensing magnification on large scales should be clearly detectable. We also perform a rigorous joint forecast for the detection of primordial non-Gaussianity through the excess power it produces in the clustering of biased tracers on large scales, finding that uncertainties of σ(f_NL) ∼ 1-2 should be achievable. We study the level of degeneracy of these large-scale effects with several tracer-dependent nuisance parameters, quantifying the minimal priors on the latter that are needed for an optimal measurement of the former. Finally, we discuss the systematic effects that must be mitigated to achieve this level of sensitivity, and some alternative approaches that should help to improve the constraints. The computational tools developed to carry out this study, which requires the full-sky computation of the theoretical angular power spectra for O(100) redshift bins, as well as realistic models of the luminosity function, are publicly available at http://intensitymapping.physics.ox.ac.uk/codes.html.

  13. Leveraging Proteomics to Understand Plant–Microbe Interactions

    PubMed Central

    Jayaraman, Dhileepkumar; Forshey, Kari L.; Grimsrud, Paul A.; Ané, Jean-Michel

    2012-01-01

    Understanding the interactions of plants with beneficial and pathogenic microbes is a promising avenue to improve crop productivity and agriculture sustainability. Proteomic techniques provide a unique angle to describe these intricate interactions and test hypotheses. The various approaches for proteomic analysis generally include protein/peptide separation and identification, but can also provide quantification and the characterization of post-translational modifications. In this review, we discuss how these techniques have been applied to the study of plant–microbe interactions. We also present some areas where this field of study would benefit from the utilization of newly developed methods that overcome previous limitations. Finally, we reinforce the need for expanding, integrating, and curating protein databases, as well as the benefits of combining protein-level datasets with those from genetic analyses and other high-throughput large-scale approaches for a systems-level view of plant–microbe interactions. PMID:22645586

  14. Large-Scale Sequencing: The Future of Genomic Sciences Colloquium

    SciTech Connect

    Margaret Riley; Merry Buckley

    2009-01-01

    Genetic sequencing and the various molecular techniques it has enabled have revolutionized the field of microbiology. Examining and comparing the genetic sequences borne by microbes - including bacteria, archaea, viruses, and microbial eukaryotes - provides researchers insights into the processes microbes carry out, their pathogenic traits, and new ways to use microorganisms in medicine and manufacturing. Until recently, sequencing entire microbial genomes has been laborious and expensive, and the decision to sequence the genome of an organism was made on a case-by-case basis by individual researchers and funding agencies. Now, thanks to new technologies, the cost and effort of sequencing is within reach for even the smallest facilities, and the ability to sequence the genomes of a significant fraction of microbial life may be possible. The availability of numerous microbial genomes will enable unprecedented insights into microbial evolution, function, and physiology. However, the current ad hoc approach to gathering sequence data has resulted in an unbalanced and highly biased sampling of microbial diversity. A well-coordinated, large-scale effort to target the breadth and depth of microbial diversity would result in the greatest impact. The American Academy of Microbiology convened a colloquium to discuss the scientific benefits of engaging in a large-scale, taxonomically-based sequencing project. A group of individuals with expertise in microbiology, genomics, informatics, ecology, and evolution deliberated on the issues inherent in such an effort and generated a set of specific recommendations for how best to proceed. The vast majority of microbes are presently uncultured and, thus, pose significant challenges to such a taxonomically-based approach to sampling genome diversity. However, we have yet to even scratch the surface of the genomic diversity among cultured microbes. A coordinated sequencing effort of cultured organisms is an appropriate place to begin

  15. Advanced proteomic liquid chromatography

    SciTech Connect

    Xie, Fang; Smith, Richard D.; Shen, Yufeng

    2012-10-26

    Liquid chromatography coupled with mass spectrometry is the predominant platform used to analyze proteomics samples consisting of large numbers of proteins and their proteolytic products (e.g., truncated polypeptides) and spanning a wide range of relative concentrations. This review provides an overview of advanced capillary liquid chromatography techniques and methodologies that greatly improve separation resolving power and proteomics analysis coverage, sensitivity, and throughput.

  16. Proteomics in Rheumatoid Arthritis Research

    PubMed Central

    Park, Yune-Jung; Chung, Min Kyung; Hwang, Daehee

    2015-01-01

    Although rheumatoid arthritis (RA) is the most common chronic inflammatory autoimmune disease, diagnosis of RA is currently based on clinical manifestations, and there is no simple, practical assessment tool in the clinical field to assess disease activity and severity. Recently, there has been increasing interest in the discovery of new diagnostic RA biomarkers that can assist in evaluating disease activity, severity, and treatment response. Proteomics, the large-scale study of the proteome, has emerged as a powerful technique for protein identification and characterization. For the past 10 years, proteomic techniques have been applied to different biological samples (synovial tissue/fluid, blood, and urine) from RA patients and experimental animal models. In this review, we summarize the current state of the application of proteomics in RA and its importance in identifying biomarkers and treatment targets. PMID:26330803

  17. Solving large scale structure in ten easy steps with COLA

    SciTech Connect

    Tassev, Svetlin; Zaldarriaga, Matias; Eisenstein, Daniel J. E-mail: matiasz@ias.edu

    2013-06-01

    We present the COmoving Lagrangian Acceleration (COLA) method: an N-body method for solving for Large Scale Structure (LSS) in a frame that is comoving with observers following trajectories calculated in Lagrangian Perturbation Theory (LPT). Unlike standard N-body methods, the COLA method can straightforwardly trade accuracy at small-scales in order to gain computational speed without sacrificing accuracy at large scales. This is especially useful for cheaply generating large ensembles of accurate mock halo catalogs required to study galaxy clustering and weak lensing, as those catalogs are essential for performing detailed error analysis for ongoing and future surveys of LSS. As an illustration, we ran a COLA-based N-body code on a box of size 100 Mpc/h with particles of mass ≈ 5 × 10^9 M_sun/h. Running the code with only 10 timesteps was sufficient to obtain an accurate description of halo statistics down to halo masses of at least 10^11 M_sun/h. This is only at a modest speed penalty when compared to mocks obtained with LPT. A standard detailed N-body run is orders of magnitude slower than our COLA-based code. The speed-up we obtain with COLA is due to the fact that we calculate the large-scale dynamics exactly using LPT, while letting the N-body code solve for the small scales, without requiring it to capture exactly the internal dynamics of halos. Achieving a similar level of accuracy in halo statistics without the COLA method requires at least 3 times more timesteps than when COLA is employed.
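
    In schematic form (our notation, suppressing the paper's choice of time variable and discretization), the COLA split writes each particle trajectory as x(t) = x_LPT(t) + \delta x(t) and time-steps only the residual in the LPT-comoving frame:

        \frac{d^2}{dt^2}\, \delta x \;=\; -\nabla \Phi \;-\; \frac{d^2}{dt^2}\, x_{\rm LPT},

    so the large-scale displacement is carried analytically by (2)LPT while the N-body solver resolves only the small-scale correction.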

  18. LARGE-SCALE CO2 TRANSPORTATION AND DEEP OCEAN SEQUESTRATION

    SciTech Connect

    Hamid Sarv

    1999-03-01

    Technical and economic feasibility of large-scale CO2 transportation and ocean sequestration at depths of 3000 meters or greater was investigated. Two options were examined for transporting and disposing of the captured CO2. In one case, CO2 was pumped from a land-based collection center through long pipelines laid on the ocean floor. Another case considered oceanic tanker transport of liquid carbon dioxide to an offshore floating structure for vertical injection to the ocean floor. In the latter case, a novel concept based on subsurface towing of a 3000-meter pipe, and attaching it to the offshore structure, was considered. Budgetary cost estimates indicate that for distances greater than 400 km, tanker transportation and offshore injection through a 3000-meter vertical pipe provides the best method for delivering liquid CO2 to deep ocean floor depressions. For shorter distances, CO2 delivery by parallel-laid, subsea pipelines is more cost-effective. Estimated costs for 500-km transport and storage at a depth of 3000 meters by subsea pipelines and tankers were 1.5 and 1.4 dollars per ton of stored CO2, respectively. At these prices, economics of ocean disposal are highly favorable. Future work should focus on addressing technical issues that are critical to the deployment of a large-scale CO2 transportation and disposal system. Pipe corrosion, structural design of the transport pipe, and dispersion characteristics of sinking CO2 effluent plumes have been identified as areas that require further attention. Our planned activities in the next Phase include laboratory-scale corrosion testing, structural analysis of the pipeline, analytical and experimental simulations of CO2 discharge and dispersion, and the conceptual economic and engineering evaluation of large-scale implementation.

  19. Solving large scale structure in ten easy steps with COLA

    NASA Astrophysics Data System (ADS)

    Tassev, Svetlin; Zaldarriaga, Matias; Eisenstein, Daniel J.

    2013-06-01

    We present the COmoving Lagrangian Acceleration (COLA) method: an N-body method for solving for Large Scale Structure (LSS) in a frame that is comoving with observers following trajectories calculated in Lagrangian Perturbation Theory (LPT). Unlike standard N-body methods, the COLA method can straightforwardly trade accuracy at small-scales in order to gain computational speed without sacrificing accuracy at large scales. This is especially useful for cheaply generating large ensembles of accurate mock halo catalogs required to study galaxy clustering and weak lensing, as those catalogs are essential for performing detailed error analysis for ongoing and future surveys of LSS. As an illustration, we ran a COLA-based N-body code on a box of size 100 Mpc/h with particles of mass ≈ 5 × 10^9 M_solar/h. Running the code with only 10 timesteps was sufficient to obtain an accurate description of halo statistics down to halo masses of at least 10^11 M_solar/h. This is only at a modest speed penalty when compared to mocks obtained with LPT. A standard detailed N-body run is orders of magnitude slower than our COLA-based code. The speed-up we obtain with COLA is due to the fact that we calculate the large-scale dynamics exactly using LPT, while letting the N-body code solve for the small scales, without requiring it to capture exactly the internal dynamics of halos. Achieving a similar level of accuracy in halo statistics without the COLA method requires at least 3 times more timesteps than when COLA is employed.

  20. Large-Scale Hybrid Motor Testing. Chapter 10

    NASA Technical Reports Server (NTRS)

    Story, George

    2006-01-01

    Hybrid rocket motors can be successfully demonstrated at a small scale virtually anywhere. There have been many suitcase-sized portable test stands assembled for demonstration of hybrids. They show the safety of hybrid rockets to the audiences. These small show motors and small laboratory-scale motors can give comparative burn rate data for development of different fuel/oxidizer combinations; however, the questions that are always asked when hybrids are mentioned for large scale applications are - how do they scale, and has it been shown in a large motor? To answer those questions, large scale motor testing is required to verify the hybrid motor at its true size. The necessity to conduct large-scale hybrid rocket motor tests to validate the burn rate from the small motors to application size has been documented in several places. Comparison of small scale hybrid data to that of larger scale data indicates that the fuel burn rate goes down with increasing port size, even with the same oxidizer flux. This trend holds for conventional hybrid motors with forward oxidizer injection and HTPB-based fuels. While the reason this is occurring would make a great paper or study or thesis, it is not thoroughly understood at this time. Potential causes include the fact that, since hybrid combustion is boundary layer driven, the larger port sizes reduce the interaction (radiation, mixing and heat transfer) from the core region of the port. This chapter focuses on some of the large, prototype-sized testing of hybrid motors. The largest motors tested have been AMROC's 250K-lbf thrust motor at Edwards Air Force Base and the Hybrid Propulsion Demonstration Program's 250K-lbf thrust motor at Stennis Space Center. Numerous smaller tests were performed to support the burn rate, stability and scaling concepts that went into the development of those large motors.
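
    For context, the hybrid literature conventionally correlates fuel regression rate with oxidizer mass flux through a power law (this is the standard empirical form, not a result of this chapter):

        \dot{r} = a \, G_{ox}^{\,n},

    with a and n fitted per fuel/oxidizer combination (sometimes with an additional length or port-diameter factor); the scaling question raised above is precisely whether a and n measured in small motors carry over to large port diameters.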

  1. Statistical analysis of large-scale neuronal recording data

    PubMed Central

    Reed, Jamie L.; Kaas, Jon H.

    2010-01-01

    Relating stimulus properties to the response properties of individual neurons and neuronal networks is a major goal of sensory research. Many investigators implant electrode arrays in multiple brain areas and record from chronically implanted electrodes over time to answer a variety of questions. Technical challenges related to analyzing large-scale neuronal recording data are not trivial. Several analysis methods traditionally used by neurophysiologists do not account for dependencies in the data that are inherent in multi-electrode recordings. In addition, when neurophysiological data are not best modeled by the normal distribution and when the variables of interest may not be linearly related, extensions of the linear modeling techniques are recommended. A variety of methods exist to analyze correlated data, even when data are not normally distributed and the relationships are nonlinear. Here we review expansions of the Generalized Linear Model designed to address these data properties. Such methods are used in other research fields, and the application to large-scale neuronal recording data will enable investigators to determine the variable properties that convincingly contribute to the variances in the observed neuronal measures. Standard measures of neuron properties such as response magnitudes can be analyzed using these methods, and measures of neuronal network activity such as spike timing correlations can be analyzed as well. We have done just that in recordings from 100-electrode arrays implanted in the primary somatosensory cortex of owl monkeys. Here we illustrate how one example method, Generalized Estimating Equations analysis, is a useful method to apply to large-scale neuronal recordings. PMID:20472395
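
    A minimal, hypothetical example of the Generalized Estimating Equations approach discussed here, using the statsmodels GEE interface on synthetic spike-count data grouped by electrode (the data, covariates, and working correlation choice are illustrative only):

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n_electrodes, n_trials = 20, 50
        electrode = np.repeat(np.arange(n_electrodes), n_trials)      # grouping variable
        stimulus = rng.integers(0, 2, n_electrodes * n_trials)        # 0/1 stimulus condition
        baseline = np.repeat(rng.normal(2.0, 0.3, n_electrodes), n_trials)
        counts = rng.poisson(baseline * np.exp(0.4 * stimulus))       # counts correlated within electrode

        X = sm.add_constant(stimulus.astype(float))
        gee = sm.GEE(counts, X, groups=electrode,
                     family=sm.families.Poisson(),
                     cov_struct=sm.cov_struct.Exchangeable())
        print(gee.fit().summary())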

  2. Statistical Modeling of Large-Scale Scientific Simulation Data

    SciTech Connect

    Eliassi-Rad, T; Baldwin, C; Abdulla, G; Critchlow, T

    2003-11-15

    With the advent of massively parallel computer systems, scientists are now able to simulate complex phenomena (e.g., explosions of stars). Such scientific simulations typically generate large-scale data sets over the spatio-temporal space. Unfortunately, the sheer sizes of the generated data sets make efficient exploration of them impossible. Constructing queriable statistical models is an essential step in helping scientists glean new insight from their computer simulations. We define queriable statistical models to be descriptive statistics that (1) summarize and describe the data within a user-defined modeling error, and (2) are able to answer complex range-based queries over the spatiotemporal dimensions. In this chapter, we describe systems that build queriable statistical models for large-scale scientific simulation data sets. In particular, we present our Ad-hoc Queries for Simulation (AQSim) infrastructure, which reduces the data storage requirements and query access times by (1) creating and storing queriable statistical models of the data at multiple resolutions, and (2) evaluating queries on these models of the data instead of the entire data set. Within AQSim, we focus on three simple but effective statistical modeling techniques. AQSim's first modeling technique (called univariate mean modeler) computes the "true" (unbiased) mean of systematic partitions of the data. AQSim's second statistical modeling technique (called univariate goodness-of-fit modeler) uses the Anderson-Darling goodness-of-fit method on systematic partitions of the data. Finally, AQSim's third statistical modeling technique (called multivariate clusterer) utilizes the cosine similarity measure to cluster the data into similar groups. Our experimental evaluations on several scientific simulation data sets illustrate the value of using these statistical models on large-scale simulation data sets.
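
    A toy Python sketch of the per-partition modeling idea (partition means plus an Anderson-Darling goodness-of-fit statistic, queried instead of the raw data); the partitioning scheme and query form are our simplifications of the AQSim modelers described above.

        import numpy as np
        from scipy.stats import anderson

        def build_models(data, n_parts=8):
            """Summarize each systematic partition by its size, unbiased mean,
            and an Anderson-Darling statistic for a normal fit."""
            models = []
            for part in np.array_split(np.asarray(data, float), n_parts):
                fit = anderson(part, dist='norm')
                models.append({'n': part.size, 'mean': part.mean(), 'ad': fit.statistic})
            return models

        def range_query_mean(models, lo, hi):
            """Answer a range query from the stored models rather than the raw data:
            a count-weighted mean over partitions lo..hi-1."""
            sel = models[lo:hi]
            return sum(m['n'] * m['mean'] for m in sel) / sum(m['n'] for m in sel)

        data = np.random.default_rng(1).normal(size=10_000)
        print(range_query_mean(build_models(data), 2, 6))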

  3. Infectious diseases in large-scale cat hoarding investigations.

    PubMed

    Polak, K C; Levy, J K; Crawford, P C; Leutenegger, C M; Moriello, K A

    2014-08-01

    Animal hoarders accumulate animals in over-crowded conditions without adequate nutrition, sanitation, and veterinary care. As a result, animals rescued from hoarding frequently have a variety of medical conditions including respiratory infections, gastrointestinal disease, parasitism, malnutrition, and other evidence of neglect. The purpose of this study was to characterize the infectious diseases carried by clinically affected cats and to determine the prevalence of retroviral infections among cats in large-scale cat hoarding investigations. Records were reviewed retrospectively from four large-scale seizures of cats from failed sanctuaries from November 2009 through March 2012. The number of cats seized in each case ranged from 387 to 697. Cats were screened for feline leukemia virus (FeLV) and feline immunodeficiency virus (FIV) in all four cases and for dermatophytosis in one case. A subset of cats exhibiting signs of upper respiratory disease or diarrhea had been tested for infections by PCR and fecal flotation for treatment planning. Mycoplasma felis (78%), calicivirus (78%), and Streptococcus equi subspecies zooepidemicus (55%) were the most common respiratory infections. Feline enteric coronavirus (88%), Giardia (56%), Clostridium perfringens (49%), and Tritrichomonas foetus (39%) were most common in cats with diarrhea. The seroprevalence of FeLV and FIV were 8% and 8%, respectively. In the one case in which cats with lesions suspicious for dermatophytosis were cultured for Microsporum canis, 69/76 lesional cats were culture-positive; of these, half were believed to be truly infected and half were believed to be fomite carriers. Cats from large-scale hoarding cases had high risk for enteric and respiratory infections, retroviruses, and dermatophytosis. Case responders should be prepared for mass treatment of infectious diseases and should implement protocols to prevent transmission of feline or zoonotic infections during the emergency response and when

  4. Numerical Modeling of Large-Scale Rocky Coastline Evolution

    NASA Astrophysics Data System (ADS)

    Limber, P.; Murray, A. B.; Littlewood, R.; Valvo, L.

    2008-12-01

    Seventy-five percent of the world's ocean coastline is rocky. On large scales (i.e. greater than a kilometer), many intertwined processes drive rocky coastline evolution, including coastal erosion and sediment transport, tectonics, antecedent topography, and variations in sea cliff lithology. In areas such as California, an additional aspect of rocky coastline evolution involves submarine canyons that cut across the continental shelf and extend into the nearshore zone. These types of canyons intercept alongshore sediment transport and flush sand to abyssal depths during periodic turbidity currents, thereby delineating coastal sediment transport pathways and affecting shoreline evolution over large spatial and time scales. How tectonic, sediment transport, and canyon processes interact with inherited topographic and lithologic settings to shape rocky coastlines remains an unanswered, and largely unexplored, question. We will present numerical model results of rocky coastline evolution that starts with an immature fractal coastline. The initial shape is modified by headland erosion, wave-driven alongshore sediment transport, and submarine canyon placement. Our previous model results have shown that, as expected, an initial sediment-free irregularly shaped rocky coastline with homogeneous lithology will undergo smoothing in response to wave attack; headlands erode and mobile sediment is swept into bays, forming isolated pocket beaches. As this diffusive process continues, pocket beaches coalesce, and a continuous sediment transport pathway results. However, when a randomly placed submarine canyon is introduced to the system as a sediment sink, the end results are wholly different: sediment cover is reduced, which in turn increases weathering and erosion rates and causes the entire shoreline to move landward more rapidly. The canyon's alongshore position also affects coastline morphology. When placed offshore of a headland, the submarine canyon captures local sediment

  5. Large-scale molten core/material interaction experiments

    SciTech Connect

    Chu, T.Y.

    1984-01-01

    The paper describes the facility and melting technology for large-scale molten core/material interaction experiments being carried out at Sandia National Laboratories. The facility is the largest of its kind anywhere. It is capable of producing core melts up to 500 kg at a temperature of 3000 K. Results of a recent experiment involving the release of 230 kg of core melt into a magnesia brick crucible are discussed in detail. Data on thermal and mechanical responses of magnesia brick, heat flux partitioning, melt penetration, and gas and aerosol generation are presented.

  6. Laser Welding of Large Scale Stainless Steel Aircraft Structures

    NASA Astrophysics Data System (ADS)

    Reitemeyer, D.; Schultz, V.; Syassen, F.; Seefeld, T.; Vollertsen, F.

    In this paper a welding process for large-scale stainless steel structures is presented. The process was developed according to the requirements of an aircraft application. Therefore, stringers are welded onto a skin sheet in a T-joint configuration. The 0.6 mm thick parts are welded with a thin disc laser; seam lengths up to 1920 mm are demonstrated. The welding process causes angular distortions of the skin sheet which are compensated by a subsequent laser straightening process. Based on a model, straightening process parameters matching the induced welding distortion are predicted. The process combination is successfully applied to stringer-stiffened specimens.

  7. Large-scale genotoxicity assessments in the marine environment.

    PubMed

    Hose, J E

    1994-12-01

    There are a number of techniques for detecting genotoxicity in the marine environment, and many are applicable to large-scale field assessments. Certain tests can be used to evaluate responses in target organisms in situ while others utilize surrogate organisms exposed to field samples in short-term laboratory bioassays. Genotoxicity endpoints appear distinct from traditional toxicity endpoints, but some have chemical or ecotoxicologic correlates. One versatile endpoint, the frequency of anaphase aberrations, has been used in several large marine assessments to evaluate genotoxicity in the New York Bight, in sediment from San Francisco Bay, and following the Exxon Valdez oil spill.

  8. Locally Biased Galaxy Formation and Large-Scale Structure

    NASA Astrophysics Data System (ADS)

    Narayanan, Vijay K.; Berlind, Andreas A.; Weinberg, David H.

    2000-01-01

    We examine the influence of the morphology-density relation and a wide range of simple models for biased galaxy formation on statistical measures of large-scale structure. We contrast the behavior of local biasing models, in which the efficiency of galaxy formation is determined by the density, geometry, or velocity dispersion of the local mass distribution, with that of nonlocal biasing models, in which galaxy formation is modulated coherently over scales larger than the galaxy correlation length. If morphological segregation of galaxies is governed by a local morphology-density relation, then the correlation function of E/S0 galaxies should be steeper and stronger than that of spiral galaxies on small scales, as observed, while on large scales the E/S0 and spiral galaxies should have correlation functions with the same shape but different amplitudes. Similarly, all of our local bias models produce scale-independent amplification of the correlation function and power spectrum in the linear and mildly nonlinear regimes; only a nonlocal biasing mechanism can alter the shape of the power spectrum on large scales. Moments of the biased galaxy distribution retain the hierarchical pattern of the mass moments, but biasing alters the values and scale dependence of the hierarchical amplitudes S3 and S4. Pair-weighted moments of the galaxy velocity distribution are sensitive to the details of the bias prescription even if galaxies have the same local velocity distribution as the underlying dark matter. The nonlinearity of the relation between galaxy density and mass density depends on the biasing prescription and the smoothing scale, and the scatter in this relation is a useful diagnostic of the physical parameters that determine the bias. While the assumption that galaxy formation is governed by local physics leads to some important simplifications on large scales, even local biasing is a multifaceted phenomenon whose impact cannot be described by a single parameter or

  9. Towards large scale production and separation of carbon nanotubes

    NASA Astrophysics Data System (ADS)

    Alvarez, Noe T.

    Since their discovery, carbon nanotubes (CNTs) have boosted the research and applications of nanotechnology; however, many applications of CNTs are inaccessible because they depend upon large-scale CNT production and separations. Type, chirality and diameter control of CNTs determine many of their physical properties, and such control is still not accessible. This thesis studies the fundamentals for scalable selective reactions of HiPCo CNTs as well as the early phase of routes to an inexpensive approach for large-scale CNT production. In the growth part, this thesis covers a complete wet-chemistry process of catalyst and catalyst-support deposition for growth of vertically aligned (VA) CNTs. A wet-chemistry preparation process has significant importance for CNT synthesis through chemical vapor deposition (CVD). CVD is, by far, the most suitable and inexpensive process for large-scale CNT production compared to other common processes such as laser ablation and arc discharge. However, its potential has been limited by low-yielding and difficult preparation processes of the catalyst and its support, which has reduced its competitiveness. The wet-chemistry process takes advantage of current nanoparticle technology to deposit the catalyst and the catalyst support as a thin film of nanoparticles, making the protocol simple compared to electron beam evaporation and sputtering processes. In the CNT selective reactions part, this thesis studies UV irradiation of individually dispersed HiPCo CNTs, which generates auto-selective reactions in the liquid phase with good control over their diameter and chirality. This technique is ideal for large-scale and continuous-process separations of CNTs by diameter and type. Additionally, an innovative, simple catalyst deposition through abrasion is demonstrated. Simple friction between the catalyst and the substrate deposits a high enough density of metal catalyst particles for successful CNT growth. This simple approach has

  10. Novel algorithm of large-scale simultaneous linear equations.

    PubMed

    Fujiwara, T; Hoshi, T; Yamamoto, S; Sogabe, T; Zhang, S-L

    2010-02-24

    We review our recently developed methods for solving large-scale simultaneous linear equations and their applications to electronic structure calculations in both one-electron and many-electron theory. This is the shifted COCG (conjugate orthogonal conjugate gradient) method based on the Krylov subspace; the most important issues for applications are the shift equation and the seed-switching method, which greatly reduce the computational cost. Applications to nano-scale Si crystals and the double-orbital extended Hubbard model are presented.
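
    A minimal sketch (Python/NumPy) of the kind of problem treated here, a family of shifted linear systems (A + sigma*I) x = b, solved naively with one conjugate-gradient run per shift. This is not the authors' shifted COCG algorithm: the shifted Krylov methods reviewed above avoid the per-shift repetition by reusing a single Krylov subspace, and the test matrix, shifts and tolerance below are invented for the example.

      import numpy as np

      def cg(A, b, tol=1e-10, maxiter=1000):
          """Plain conjugate gradients for a symmetric positive definite A."""
          x = np.zeros_like(b)
          r = b - A @ x
          p = r.copy()
          rs = r @ r
          for _ in range(maxiter):
              Ap = A @ p
              alpha = rs / (p @ Ap)
              x += alpha * p
              r -= alpha * Ap
              rs_new = r @ r
              if np.sqrt(rs_new) < tol:
                  break
              p = r + (rs_new / rs) * p
              rs = rs_new
          return x

      rng = np.random.default_rng(0)
      M = rng.standard_normal((200, 200))
      A = M @ M.T + 200 * np.eye(200)      # symmetric positive definite test matrix
      b = rng.standard_normal(200)
      shifts = [0.0, 0.5, 1.0, 5.0]        # hypothetical energy shifts
      solutions = {s: cg(A + s * np.eye(200), b) for s in shifts}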

  11. Quantum computation for large-scale image classification

    NASA Astrophysics Data System (ADS)

    Ruan, Yue; Chen, Hanwu; Tan, Jianing; Li, Xi

    2016-10-01

    Due to the lack of an effective quantum feature extraction method, there is currently no effective way to perform quantum image classification or recognition. In this paper, for the first time, a global quantum feature extraction method based on Schmidt decomposition is proposed. A revised quantum learning algorithm is also proposed that will classify images by computing the Hamming distance of these features. From the experimental results derived from the benchmark database Caltech 101, and an analysis of the algorithm, an effective approach to large-scale image classification is derived and proposed against the background of big data.
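
    A classical sketch (Python/NumPy) of the feature idea only, not the quantum algorithm: the Schmidt coefficients of a bipartite split of an image state are the singular values of the image reshaped as a matrix, so here the leading singular values are binarized and a label is assigned by Hamming distance to labelled reference features. The images, the threshold rule and the labels are invented for the illustration.

      import numpy as np

      def schmidt_features(img, k=8):
          """Top-k singular values of the normalized image, thresholded to bits."""
          s = np.linalg.svd(img / np.linalg.norm(img), compute_uv=False)[:k]
          return (s > s.mean()).astype(np.uint8)   # crude binarization (assumption)

      def classify(img, references):
          """Nearest reference label by Hamming distance between bit features."""
          f = schmidt_features(img)
          return min(references, key=lambda lbl: np.sum(f != references[lbl]))

      rng = np.random.default_rng(1)
      references = {"cat": schmidt_features(rng.random((32, 32))),
                    "plane": schmidt_features(rng.random((32, 32)))}
      print(classify(rng.random((32, 32)), references))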

  12. Generation of Large-Scale Winds in Horizontally Anisotropic Convection.

    PubMed

    von Hardenberg, J; Goluskin, D; Provenzale, A; Spiegel, E A

    2015-09-25

    We simulate three-dimensional, horizontally periodic Rayleigh-Bénard convection, confined between free-slip horizontal plates and rotating about a distant horizontal axis. When both the temperature difference between the plates and the rotation rate are sufficiently large, a strong horizontal wind is generated that is perpendicular to both the rotation vector and the gravity vector. The wind is turbulent, large-scale, and vertically sheared. Horizontal anisotropy, engendered here by rotation, appears necessary for such wind generation. Most of the kinetic energy of the flow resides in the wind, and the vertical turbulent heat flux is much lower on average than when there is no wind. PMID:26451558

  13. Large Scale Composite Manufacturing for Heavy Lift Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Stavana, Jacob; Cohen, Leslie J.; Houseal, Keth; Pelham, Larry; Lort, Richard; Zimmerman, Thomas; Sutter, James; Western, Mike; Harper, Robert; Stuart, Michael

    2012-01-01

    Risk reduction for large-scale composite manufacturing is an important goal in producing lightweight components for heavy lift launch vehicles. NASA and an industry team successfully employed a building-block approach using low-cost Automated Tape Layup (ATL) of autoclave and Out-of-Autoclave (OoA) prepregs. Several large, curved sandwich panels were fabricated at HITCO Carbon Composites. The aluminum honeycomb core sandwich panels are segments of a 1/16th arc from a 10 meter cylindrical barrel. Lessons learned highlight the manufacturing challenges involved in producing lightweight composite structures such as fairings for heavy lift launch vehicles.

  14. Search for Large Scale Anisotropies with the Pierre Auger Observatory

    NASA Astrophysics Data System (ADS)

    Bonino, R.; Pierre Auger Collaboration

    The Pierre Auger Observatory studies the nature and the origin of Ultra High Energy Cosmic Rays (>3×10^18 eV). Completed at the end of 2008, it has been continuously operating for more than six years. Using data collected from 1 January 2004 until 31 March 2009, we search for large-scale anisotropies with two complementary analyses in different energy windows. No significant anisotropies are observed, resulting in bounds on the first harmonic amplitude at the 1% level at EeV energies.
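
    A minimal sketch (Python/NumPy) of a standard first-harmonic (Rayleigh) analysis in right ascension, the kind of amplitude that is bounded above; it uses synthetic isotropic arrival directions and is not the Auger Collaboration's actual analysis chain, which accounts for exposure and energy-dependent effects.

      import numpy as np

      def first_harmonic(ra):
          """Amplitude, phase and chance probability of the first harmonic in RA."""
          n = len(ra)
          a = 2.0 / n * np.sum(np.cos(ra))
          b = 2.0 / n * np.sum(np.sin(ra))
          r = np.hypot(a, b)                   # first-harmonic amplitude
          phase = np.arctan2(b, a)
          p_chance = np.exp(-n * r**2 / 4.0)   # probability of a larger r arising from isotropy
          return r, phase, p_chance

      rng = np.random.default_rng(2)
      ra = rng.uniform(0.0, 2.0 * np.pi, size=50_000)   # toy isotropic sky
      print(first_harmonic(ra))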

  15. Large-Scale periodic solar velocities: An observational study

    NASA Technical Reports Server (NTRS)

    Dittmer, P. H.

    1977-01-01

    Observations of large-scale solar velocities were made using the mean field telescope and Babcock magnetograph of the Stanford Solar Observatory. Observations were made in the magnetically insensitive ion line at 5124 A, with light from the center (limb) of the disk right (left) circularly polarized, so that the magnetograph measures the difference in wavelength between center and limb. Computer calculations are made of the wavelength difference produced by global pulsations for spherical harmonics up to second order and of the signal produced by displacing the solar image relative to polarizing optics or diffraction grating.

  16. Evaluation of uncertainty in large-scale fusion metrology

    NASA Astrophysics Data System (ADS)

    Zhang, Fumin; Qu, Xinghua; Wu, Hongyan; Ye, Shenghua

    2008-12-01

    The framework for expressing uncertainty in conventional-scale measurement is well established; however, owing to the variety of error sources, it is still hard to obtain the uncertainty of large-scale instruments by common methods. In this paper, the uncertainty is evaluated by Monte Carlo simulation. The point clouds created by this method are visualized on a computer and analyzed point by point. Thus, in fusion measurement, apart from the uncertainty of every instrument being expressed directly, the contribution of every error source to the overall uncertainty becomes easy to calculate. Finally, the application of this method to measuring a tunnel component is given.
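
    A generic Monte Carlo uncertainty propagation sketch (Python/NumPy) in the spirit described above, but with a hypothetical measurement model: a 2-D point located from a range r and an angle theta, each with an assumed instrument standard uncertainty. The simulated point cloud directly yields the combined uncertainty and can be inspected point by point.

      import numpy as np

      rng = np.random.default_rng(3)
      N = 100_000
      r = rng.normal(10.0, 0.002, N)                               # range: 10 m, u(r) = 2 mm (assumed)
      theta = rng.normal(np.deg2rad(30.0), np.deg2rad(0.01), N)    # angle and its uncertainty (assumed)

      x, y = r * np.cos(theta), r * np.sin(theta)                  # measurement model
      points = np.column_stack([x, y])                             # simulated point cloud

      mean = points.mean(axis=0)
      cov = np.cov(points, rowvar=False)                           # combined uncertainty (covariance)
      print(mean, np.sqrt(np.diag(cov)))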

  17. Large-scale sodium spray fire code validation (SOFICOV) test

    SciTech Connect

    Jeppson, D.W.; Muhlestein, L.D.

    1985-01-01

    A large-scale sodium spray fire code validation test was performed in the HEDL 850-m³ Containment System Test Facility (CSTF) as part of the Sodium Spray Fire Code Validation (SOFICOV) program. Six hundred fifty-eight kilograms of sodium was sprayed into an air atmosphere over a period of 2400 s. The sodium spray droplet sizes and spray pattern distribution were estimated. The containment atmosphere temperature and pressure response, containment wall temperature response, and sodium reaction rate with oxygen were measured. These results are compared to post-test predictions using the SPRAY and NACOM computer codes.

  18. Large scale obscuration and related climate effects open literature bibliography

    SciTech Connect

    Russell, N.A.; Geitgey, J.; Behl, Y.K.; Zak, B.D.

    1994-05-01

    Large scale obscuration and related climate effects of nuclear detonations first became a matter of concern in connection with the so-called "Nuclear Winter Controversy" in the early 1980s. Since then, the world has changed. Nevertheless, concern remains about the atmospheric effects of nuclear detonations, but the source of concern has shifted. Now it focuses less on global, and more on regional, effects and their resulting impacts on the performance of electro-optical and other defense-related systems. This bibliography reflects the modified interest.

  19. Large-Scale Compton Imaging for Wide-Area Surveillance

    SciTech Connect

    Lange, D J; Manini, H A; Wright, D M

    2006-03-01

    We study the performance of a large-scale Compton imaging detector placed in a low-flying aircraft, used to search wide areas for rad/nuc threat sources. In this paper we investigate the performance potential of equipping aerial platforms with gamma-ray detectors that have photon sensitivity up to a few MeV. We simulate the detector performance and present receiver operating characteristic (ROC) curves for a benchmark scenario using a 137Cs source. The analysis uses a realistic environmental background energy spectrum and includes air attenuation.
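
    A toy illustration (Python, using scikit-learn's roc_curve) of how a ROC curve for source detection can be built from counting statistics; the count rates below are invented and are unrelated to the paper's detector simulation.

      import numpy as np
      from sklearn.metrics import roc_curve, auc

      rng = np.random.default_rng(4)
      n = 20_000
      background = rng.poisson(lam=100.0, size=n)         # background-only trials
      signal = rng.poisson(lam=115.0, size=n)             # source-plus-background trials

      scores = np.concatenate([background, signal])       # detection statistic = total counts
      labels = np.concatenate([np.zeros(n), np.ones(n)])  # 0 = no source, 1 = source present

      fpr, tpr, _ = roc_curve(labels, scores)
      print("area under ROC curve:", auc(fpr, tpr))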

  20. Frequency domain multiplexing for large-scale bolometer arrays

    SciTech Connect

    Spieler, Helmuth

    2002-05-31

    The development of planar fabrication techniques for superconducting transition-edge sensors has brought large-scale arrays of 1000 pixels or more to the realm of practicality. This raises the problem of reading out a large number of sensors with a tractable number of connections. A possible solution is frequency-domain multiplexing. I summarize basic principles, present various circuit topologies, and discuss design trade-offs, noise performance, cross-talk and dynamic range. The design of a practical device and its readout system is described with a discussion of fabrication issues, practical limits and future prospects.
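
    A toy digital illustration (Python/NumPy) of the frequency-domain multiplexing principle: two slowly varying sensor signals are amplitude-modulated onto distinct carriers, summed on one readout line, and recovered by lock-in demodulation. The carrier frequencies, signals and filter are invented and stand in for the actual cryogenic readout hardware.

      import numpy as np

      fs = 100_000.0                              # sample rate [Hz]
      t = np.arange(0, 0.1, 1.0 / fs)
      carriers = {"pixel_A": 5_000.0, "pixel_B": 7_000.0}
      signals = {"pixel_A": 1.0 + 0.1 * np.sin(2 * np.pi * 20 * t),
                 "pixel_B": 0.5 + 0.2 * np.sin(2 * np.pi * 35 * t)}

      # One shared line carrying both amplitude-modulated carriers.
      line = sum(signals[p] * np.sin(2 * np.pi * f * t) for p, f in carriers.items())

      def demodulate(line, f, t, fs, f_cut=200.0):
          """Lock-in: mix with the carrier, then a crude moving-average low-pass."""
          mixed = 2.0 * line * np.sin(2 * np.pi * f * t)
          win = int(fs / f_cut)
          return np.convolve(mixed, np.ones(win) / win, mode="same")

      recovered_A = demodulate(line, carriers["pixel_A"], t, fs)
      print(recovered_A[len(t) // 2], signals["pixel_A"][len(t) // 2])   # roughly equal mid-record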

  1. Quantum computation for large-scale image classification

    NASA Astrophysics Data System (ADS)

    Ruan, Yue; Chen, Hanwu; Tan, Jianing; Li, Xi

    2016-07-01

    Due to the lack of an effective quantum feature extraction method, there is currently no effective way to perform quantum image classification or recognition. In this paper, for the first time, a global quantum feature extraction method based on Schmidt decomposition is proposed. A revised quantum learning algorithm is also proposed that will classify images by computing the Hamming distance of these features. From the experimental results derived from the benchmark database Caltech 101, and an analysis of the algorithm, an effective approach to large-scale image classification is derived and proposed against the background of big data.

  2. Radiative shocks on large scale lasers. Preliminary results

    NASA Astrophysics Data System (ADS)

    Leygnac, S.; Bouquet, S.; Stehle, C.; Barroso, P.; Batani, D.; Benuzzi, A.; Cathala, B.; Chièze, J.-P.; Fleury, X.; Grandjouan, N.; Grenier, J.; Hall, T.; Henry, E.; Koenig, M.; Lafon, J. P. J.; Malka, V.; Marchet, B.; Merdji, H.; Michaut, C.; Poles, L.; Thais, F.

    2001-05-01

    Radiative shocks, whose structure is strongly influenced by the radiation field, are present in various astrophysical objects (circumstellar envelopes of variable stars, supernovae, ...). Their modeling is very difficult and will therefore benefit from experimental information. This approach is now possible using large-scale lasers. Preliminary experiments were performed with the nanosecond LULI laser at Ecole Polytechnique (France) in 2000. A radiative shock was obtained in a low-pressure xenon cell. The preparation of such experiments and their interpretation are performed using analytical calculations and numerical simulations.

  3. Solving Large-scale Eigenvalue Problems in SciDAC Applications

    SciTech Connect

    Yang, Chao

    2005-06-29

    Large-scale eigenvalue problems arise in a number of DOE applications. This paper provides an overview of recent developments in eigenvalue computation in the context of two SciDAC applications. We emphasize the importance of Krylov subspace methods and point out their limitations. We discuss the value of alternative approaches that are more amenable to the use of preconditioners, and report progress in using multi-level algebraic sub-structuring techniques to speed up eigenvalue calculations. In addition to methods for linear eigenvalue problems, we also examine new approaches to solving two types of non-linear eigenvalue problems arising from SciDAC applications.

  4. Large-scale genotoxicity assessments in the marine environment.

    PubMed

    Hose, J E

    1994-12-01

    There are a number of techniques for detecting genotoxicity in the marine environment, and many are applicable to large-scale field assessments. Certain tests can be used to evaluate responses in target organisms in situ while others utilize surrogate organisms exposed to field samples in short-term laboratory bioassays. Genotoxicity endpoints appear distinct from traditional toxicity endpoints, but some have chemical or ecotoxicologic correlates. One versatile end point, the frequency of anaphase aberrations, has been used in several large marine assessments to evaluate genotoxicity in the New York Bight, in sediment from San Francisco Bay, and following the Exxon Valdez oil spill. PMID:7713029

  5. [National Strategic Promotion for Large-Scale Clinical Cancer Research].

    PubMed

    Toyama, Senya

    2016-04-01

    The number of clinical studies run by clinical cancer study groups has been decreasing this year in Japan. The stated reason is the abolition of donations to the groups from pharmaceutical companies after the Diovan scandal, but I suppose the fundamental problem is that a government-supported large-scale clinical cancer study system for evidence-based medicine (EBM) has not been fully established. An urgent establishment of such a system, based on a national strategy, is needed for cancer patients and for the promotion of public health.

  6. Enabling Large-Scale Biomedical Analysis in the Cloud

    PubMed Central

    Lin, Ying-Chih; Yu, Chin-Sheng; Lin, Yen-Jen

    2013-01-01

    Recent progress in high-throughput instrumentation has led to an astonishing growth in both the volume and the complexity of biomedical data collected from various sources. These planet-scale data bring serious challenges to storage and computing technologies. Cloud computing is an attractive alternative because it jointly addresses storage and high-performance computing for large-scale data. This work briefly introduces data-intensive computing systems and summarizes existing cloud-based resources in bioinformatics. These developments and applications should facilitate biomedical research by making the vast amount of diverse data meaningful and usable. PMID:24288665

  7. Large-scale deformation associated with ridge subduction

    USGS Publications Warehouse

    Geist, E.L.; Fisher, M.A.; Scholl, D. W.

    1993-01-01

    Continuum models are used to investigate the large-scale deformation associated with the subduction of aseismic ridges. Formulated in the horizontal plane using thin viscous sheet theory, these models measure the horizontal transmission of stress through the arc lithosphere accompanying ridge subduction. Modelling was used to compare the Tonga arc and Louisville ridge collision with the New Hebrides arc and d'Entrecasteaux ridge collision, which have disparate arc-ridge intersection speeds but otherwise similar characteristics. Models of both systems indicate that diffuse deformation (low values of the effective stress-strain exponent n) are required to explain the observed deformation. -from Authors

  8. Large-Scale Purification of Peroxisomes for Preparative Applications.

    PubMed

    Cramer, Jana; Effelsberg, Daniel; Girzalsky, Wolfgang; Erdmann, Ralf

    2015-09-01

    This protocol is designed for large-scale isolation of highly purified peroxisomes from Saccharomyces cerevisiae using two consecutive density gradient centrifugations. Instructions are provided for harvesting up to 60 g of oleic acid-induced yeast cells for the preparation of spheroplasts and generation of organellar pellets (OPs) enriched in peroxisomes and mitochondria. The OPs are loaded onto eight continuous 36%-68% (w/v) sucrose gradients. After centrifugation, the peak peroxisomal fractions are determined by measurement of catalase activity. These fractions are subsequently pooled and subjected to a second density gradient centrifugation using 20%-40% (w/v) Nycodenz. PMID:26330621

  9. Use of proteomic methods in the analysis of human body fluids in Alzheimer research.

    PubMed

    Zürbig, Petra; Jahn, Holger

    2012-12-01

    Proteomics is the study of the entire population of proteins and peptides in an organism or a part of it, such as a cell, a tissue, or fluids like cerebrospinal fluid, plasma, serum, urine, or saliva. It is widely assumed that changes in the composition of the proteome may reflect disease states and provide clues to their origin, eventually leading to targets for new treatments. The ability to perform large-scale proteomic studies is now based jointly on recent advances in analytical methods. Separation techniques like CE and 2DE have developed and matured. Detection methods like MS have also improved greatly in the last 5 years. These developments have also driven the fields of bioinformatics and systems biology, which are needed to deal with the increased data production. All these developing methods offer specific advantages but also come with certain limitations. This review describes the different proteomic methods used in the field, their limitations, and their possible pitfalls. Based on a literature search in PubMed, we identified 112 studies that applied proteomic techniques to identify biomarkers for Alzheimer disease. This review describes the results of these studies on proteome changes in the human body fluids of Alzheimer patients, reviewing the most important studies. We extracted a list of 366 proteins and peptides that were identified by these studies as potential targets in Alzheimer research.

  10. Large-scale solar magnetic fields and H-alpha patterns

    NASA Technical Reports Server (NTRS)

    Mcintosh, P. S.

    1972-01-01

    Coronal and interplanetary magnetic fields computed from measurements of large-scale photospheric magnetic fields suffer from interruptions in day-to-day observations and the limitation of using only measurements made near the solar central meridian. Procedures were devised for inferring the lines of polarity reversal from H-alpha solar patrol photographs that map the same large-scale features found on Mt. Wilson magnetograms. These features may be monitored without interruption by combining observations from the global network of observatories associated with NOAA's Space Environment Services Center. The patterns of inferred magnetic fields may be followed accurately as far as 60 deg from central meridian. Such patterns will be used to improve predictions of coronal features during the next solar eclipse.

  11. Oscillatory barrier-assisted Langmuir-Blodgett deposition of large-scale quantum dot monolayers

    NASA Astrophysics Data System (ADS)

    Xu, Shicheng; Dadlani, Anup L.; Acharya, Shinjita; Schindler, Peter; Prinz, Fritz B.

    2016-03-01

    Depositing continuous, large-scale quantum dot films with low pinhole density is an inevitable but nontrivial step for studying their properties for applications in catalysis, electronic devices, and optoelectronics. This rising interest in high-quality quantum dot films has provided research impetus to improve the deposition technique. We show that by incorporating oscillatory barriers in the commonly used Langmuir-Blodgett method, large-scale monolayers of quantum dots with full coverage up to several millimeters have been achieved. With assistance of perturbation provided by the oscillatory barriers, the film has been shown to relax towards thermal equilibrium, and this physical process has been supported by molecular dynamics simulation. In addition, time evolution of dilatational moduli has been shown to give a clear indication of the film morphology and its stability.

  12. NASA's Information Power Grid: Large Scale Distributed Computing and Data Management

    NASA Technical Reports Server (NTRS)

    Johnston, William E.; Vaziri, Arsi; Hinke, Tom; Tanner, Leigh Ann; Feiereisen, William J.; Thigpen, William; Tang, Harry (Technical Monitor)

    2001-01-01

    Large-scale science and engineering are done through the interaction of people, heterogeneous computing resources, information systems, and instruments, all of which are geographically and organizationally dispersed. The overall motivation for Grids is to facilitate the routine interactions of these resources in order to support large-scale science and engineering. Multi-disciplinary simulations provide a good example of a class of applications that are very likely to require aggregation of widely distributed computing, data, and intellectual resources. Such simulations - e.g. whole-system aircraft simulation and whole-system living cell simulation - require integrating applications and data that are developed by different teams of researchers, frequently in different locations. The research teams are the only ones that have the expertise to maintain and improve the simulation code and/or the body of experimental data that drives the simulations. This results in an inherently distributed computing and data management environment.

  13. Iterative methods for large scale nonlinear and linear systems. Final report, 1994--1996

    SciTech Connect

    Walker, H.F.

    1997-09-01

    The major goal of this research has been to develop improved numerical methods for the solution of large-scale systems of linear and nonlinear equations, such as occur almost ubiquitously in the computational modeling of physical phenomena. The numerical methods of central interest have been Krylov subspace methods for linear systems, which have enjoyed great success in many large-scale applications, and Newton-Krylov methods for nonlinear problems, which use Krylov subspace methods to solve approximately the linear systems that characterize Newton steps. Krylov subspace methods have undergone a remarkable development over the last decade or so and are now very widely used for the iterative solution of large-scale linear systems, particularly those that arise in the discretization of partial differential equations (PDEs) that occur in computational modeling. Newton-Krylov methods have enjoyed parallel success and are currently used in many nonlinear applications of great scientific and industrial importance. In addition to their effectiveness on important problems, Newton-Krylov methods also offer a nonlinear framework within which to transfer to the nonlinear setting any advances in Krylov subspace methods or preconditioning techniques, or new algorithms that exploit advanced machine architectures. This research has resulted in a number of improved Krylov and Newton-Krylov algorithms together with applications of these to important linear and nonlinear problems.
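
    A small illustration (Python/SciPy) of the Newton-Krylov idea using the library routine scipy.optimize.newton_krylov, in which each Newton step's linear system is solved approximately with a Krylov method. The residual below is a toy nonlinear system, not one of the applications from the report.

      import numpy as np
      from scipy.optimize import newton_krylov

      def residual(x):
          """F(x) = 0 for a small nonlinear test problem."""
          return np.array([x[0] + 0.5 * x[1] ** 2 - 1.0,
                           np.exp(-x[0]) + x[1] - 1.5])

      x0 = np.zeros(2)
      solution = newton_krylov(residual, x0, f_tol=1e-10)
      print(solution, residual(solution))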

  14. Insights into large-scale cell-culture reactors: I. Liquid mixing and oxygen supply.

    PubMed

    Sieblist, Christian; Jenzsch, Marco; Pohlscheidt, Michael; Lübbert, Andreas

    2011-12-01

    In the pharmaceutical industry, it is state of the art to produce recombinant proteins and antibodies with animal-cell cultures using bioreactors with volumes of up to 20 m³. Recent guidelines and position papers for the industry by the US FDA and the European Medicines Agency stress the necessity of mechanistic insights into large-scale bioreactors. A detailed mechanistic view of their practically relevant subsystems is required as well as their mutual interactions, i.e., mixing or homogenization of the culture broth and sufficient mass and heat transfer. In large-scale bioreactors for animal-cell cultures, different agitation systems are employed. Here, we discuss details of the flows induced in stirred tank reactors relevant for animal-cell cultures. In addition, solutions of the governing fluid dynamic equations obtained with the so-called computational fluid dynamics are presented. Experimental data obtained with improved measurement techniques are shown. The results are compared to previous studies and it is found that they support current hypotheses or models. Progress in improving insights requires continuous interactions between more accurate measurements and physical models. The paper aims at promoting the basic mechanistic understanding of transport phenomena that are crucial for large-scale animal-cell culture reactors.

  15. Insights into large-scale cell-culture reactors: I. Liquid mixing and oxygen supply.

    PubMed

    Sieblist, Christian; Jenzsch, Marco; Pohlscheidt, Michael; Lübbert, Andreas

    2011-12-01

    In the pharmaceutical industry, it is state of the art to produce recombinant proteins and antibodies with animal-cell cultures using bioreactors with volumes of up to 20 m³. Recent guidelines and position papers for the industry by the US FDA and the European Medicines Agency stress the necessity of mechanistic insights into large-scale bioreactors. A detailed mechanistic view of their practically relevant subsystems is required as well as their mutual interactions, i.e., mixing or homogenization of the culture broth and sufficient mass and heat transfer. In large-scale bioreactors for animal-cell cultures, different agitation systems are employed. Here, we discuss details of the flows induced in stirred tank reactors relevant for animal-cell cultures. In addition, solutions of the governing fluid dynamic equations obtained with the so-called computational fluid dynamics are presented. Experimental data obtained with improved measurement techniques are shown. The results are compared to previous studies and it is found that they support current hypotheses or models. Progress in improving insights requires continuous interactions between more accurate measurements and physical models. The paper aims at promoting the basic mechanistic understanding of transport phenomena that are crucial for large-scale animal-cell culture reactors. PMID:21818860

  16. A Bayesian Integration Model of High-Throughput Proteomics and Metabolomics Data for Improved Early Detection of Microbial Infections

    SciTech Connect

    Webb-Robertson, Bobbie-Jo M.; McCue, Lee Ann; Beagley, Nathaniel; McDermott, Jason E.; Wunschel, David S.; Varnum, Susan M.; Hu, Jian Z.; Isern, Nancy G.; Buchko, Garry W.; Mcateer, Kathleen; Pounds, Joel G.; Skerret, Shawn J.; Liggitt, Denny; Frevert, Charles W.

    2009-03-01

    High-throughput (HTP) technologies offer the capability to evaluate the genome, proteome, and metabolome of organisms at a global scale. This opens up new opportunities to define complex signatures of disease that involve signals from multiple types of biomolecules. Integrating these data types however is difficult due to the heterogeneity of the data. We present a Bayesian approach to integration that uses posterior probabilities to assign class memberships to samples using individual and multiple data sources; these probabilities are based on lower level likelihood functions derived from standard statistical learning algorithms. We demonstrate this approach on microbial infections of mice, where the bronchial alveolar lavage fluid was analyzed by two HTP proteomic and one HTP metabolomic technologies. We demonstrate that integration of the three datasets improves classification accuracy to ~89% from the best individual dataset at ~83%. In addition, we present a new visualization tool called Visual Integration for Bayesian Evaluation (VIBE) that allows the user to observe classification accuracies at the class level and evaluate classification accuracies on any subset of available data types based on the posterior probability models defined for the individual and integrated data.
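
    A sketch (Python/NumPy) of the general integration idea, not the authors' VIBE software: each data source yields class likelihoods for a sample, and, assuming conditional independence of the sources, posteriors are obtained by multiplying the likelihoods with the class priors and renormalizing. The per-source likelihoods and class labels are hypothetical.

      import numpy as np

      def integrate_posteriors(likelihoods, priors):
          """likelihoods: list of (n_classes,) arrays, one per data source."""
          log_post = np.log(priors).astype(float)
          for lik in likelihoods:
              log_post += np.log(lik)
          log_post -= log_post.max()          # numerical stability
          post = np.exp(log_post)
          return post / post.sum()

      # Hypothetical per-source class likelihoods for one sample
      # (classes: uninfected, pathogen A, pathogen B).
      proteomics_1 = np.array([0.10, 0.70, 0.20])
      proteomics_2 = np.array([0.25, 0.55, 0.20])
      metabolomics = np.array([0.30, 0.50, 0.20])
      priors = np.array([1 / 3, 1 / 3, 1 / 3])

      print(integrate_posteriors([proteomics_1, proteomics_2, metabolomics], priors))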

  17. Dealing with missing values in large-scale studies: microarray data imputation and beyond.

    PubMed

    Aittokallio, Tero

    2010-03-01

    High-throughput biotechnologies, such as gene expression microarrays or mass-spectrometry-based proteomic assays, suffer from frequent missing values due to various experimental reasons. Since the missing data points can hinder downstream analyses, there exists a wide variety of ways in which to deal with missing values in large-scale data sets. Nowadays, it has become routine to estimate (or impute) the missing values prior to the actual data analysis. After nearly a decade since the publication of the first missing value imputation methods for gene expression microarray data, new imputation approaches are still being developed at an increasing rate. However, what is lagging behind is a systematic and objective evaluation of the strengths and weaknesses of the different approaches when faced with different types of data sets and experimental questions. In this review, the present strategies for missing value imputation and the measures for evaluating their performance are described. The imputation methods are first reviewed in the context of gene expression microarray data, since most of the methods have been developed for estimating gene expression levels; then, we turn to other large-scale data sets that also suffer from the problems posed by missing values, together with pointers to possible imputation approaches in these settings. Along with a description of the basic principles behind the different imputation approaches, the review tries to provide practical guidance for the users of high-throughput technologies on how to choose the imputation tool for their data and questions, and some additional research directions for the developers of imputation methodologies. PMID:19965979
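
    One common approach from the imputation families reviewed above, K-nearest-neighbour imputation, shown here (Python) with scikit-learn's KNNImputer on a toy expression-like matrix with values missing at random; the matrix, missingness rate and neighbour count are arbitrary.

      import numpy as np
      from sklearn.impute import KNNImputer

      rng = np.random.default_rng(5)
      X = rng.normal(size=(100, 20))              # 100 genes x 20 arrays (toy)
      mask = rng.random(X.shape) < 0.1            # ~10% of entries go missing
      X_missing = np.where(mask, np.nan, X)

      imputer = KNNImputer(n_neighbors=10)
      X_imputed = imputer.fit_transform(X_missing)

      rmse = np.sqrt(np.mean((X_imputed[mask] - X[mask]) ** 2))
      print("imputation RMSE on the held-out entries:", rmse)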

  18. Large-scale Direct Targeting for Drug Repositioning and Discovery.

    PubMed

    Zheng, Chunli; Guo, Zihu; Huang, Chao; Wu, Ziyin; Li, Yan; Chen, Xuetong; Fu, Yingxue; Ru, Jinlong; Ali Shar, Piar; Wang, Yuan; Wang, Yonghua

    2015-01-01

    A system-level identification of direct drug-target interactions is vital to drug repositioning and discovery. However, establishing these interactions experimentally on a large scale remains challenging and expensive even nowadays. The available computational models mainly focus on predicting indirect interactions, or direct interactions on a small scale. To address these problems, in this work a novel algorithm termed weighted ensemble similarity (WES) has been developed to identify direct drug targets based on a large-scale set of 98,327 drug-target relationships. WES includes: (1) identifying the key ligand structural features that are highly related to the pharmacological properties, in an ensemble framework; (2) determining a drug's affiliation with a target by evaluating the overall (ensemble) similarity rather than a single-ligand judgment; and (3) integrating the standardized ensemble similarities (Z-scores) by a Bayesian network and a multi-variate kernel approach to make predictions. All of this leads WES to predict direct drug targets with external and experimental test accuracies of 70% and 71%, respectively. This shows that the WES method provides a potential in silico model for drug repositioning and discovery.
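
    A sketch (Python/NumPy) of the ensemble-similarity flavour of steps (2)-(3): a candidate drug is scored against a target by its mean Tanimoto similarity to the target's known ligands, and that score is standardized (Z) against random ligand ensembles. The fingerprints here are random bit vectors rather than real chemistry, and the Bayesian-network and kernel integration of the actual WES method is not reproduced.

      import numpy as np

      def tanimoto(a, b):
          inter = np.sum(a & b)
          union = np.sum(a | b)
          return inter / union if union else 0.0

      def ensemble_score(candidate, ligand_set):
          return np.mean([tanimoto(candidate, lig) for lig in ligand_set])

      rng = np.random.default_rng(6)
      fingerprint = lambda: rng.random(1024) < 0.1           # sparse 1024-bit fingerprint (toy)
      target_ligands = [fingerprint() for _ in range(25)]    # known ligands of one target
      candidate = fingerprint()

      score = ensemble_score(candidate, target_ligands)
      background = [ensemble_score(fingerprint(), target_ligands) for _ in range(500)]
      z = (score - np.mean(background)) / np.std(background)
      print("ensemble similarity Z-score:", z)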

  19. Alignment of quasar polarizations with large-scale structures

    NASA Astrophysics Data System (ADS)

    Hutsemékers, D.; Braibant, L.; Pelgrims, V.; Sluse, D.

    2014-12-01

    We have measured the optical linear polarization of quasars belonging to Gpc-scale quasar groups at redshift z ~ 1.3. Out of 93 quasars observed, 19 are significantly polarized. We found that quasar polarization vectors are either parallel or perpendicular to the directions of the large-scale structures to which they belong. Statistical tests indicate that the probability that this effect can be attributed to randomly oriented polarization vectors is on the order of 1%. We also found that quasars with polarization perpendicular to the host structure preferentially have large emission line widths while objects with polarization parallel to the host structure preferentially have small emission line widths. Considering that quasar polarization is usually either parallel or perpendicular to the accretion disk axis depending on the inclination with respect to the line of sight, and that broader emission lines originate from quasars seen at higher inclinations, we conclude that quasar spin axes are likely parallel to their host large-scale structures. Based on observations made with ESO Telescopes at the La Silla Paranal Observatory under program ID 092.A-0221. Table 1 is available in electronic form at http://www.aanda.org

  20. Maestro: An Orchestration Framework for Large-Scale WSN Simulations

    PubMed Central

    Riliskis, Laurynas; Osipov, Evgeny

    2014-01-01

    Contemporary wireless sensor networks (WSNs) have evolved into large and complex systems and are one of the main technologies used in cyber-physical systems and the Internet of Things. Extensive research on WSNs has led to the development of diverse solutions at all levels of software architecture, including protocol stacks for communications. This multitude of solutions is due to the limited computational power and restrictions on energy consumption that must be accounted for when designing typical WSN systems. It is therefore challenging to develop, test and validate even small WSN applications, and this process can easily consume significant resources. Simulations are inexpensive tools for testing, verifying and generally experimenting with new technologies in a repeatable fashion. Consequently, as the size of the systems to be tested increases, so does the need for large-scale simulations. This article describes a tool called Maestro for the automation of large-scale simulation and investigates the feasibility of using cloud computing facilities for such a task. Using tools that are built into Maestro, we demonstrate a feasible approach for benchmarking cloud infrastructure in order to identify cloud Virtual Machine (VM) instances that provide an optimal balance of performance and cost for a given simulation. PMID:24647123

  1. Simulating the large-scale structure of HI intensity maps

    NASA Astrophysics Data System (ADS)

    Seehars, Sebastian; Paranjape, Aseem; Witzemann, Amadeus; Refregier, Alexandre; Amara, Adam; Akeret, Joel

    2016-03-01

    Intensity mapping of neutral hydrogen (HI) is a promising observational probe of cosmology and large-scale structure. We present wide-field simulations of HI intensity maps based on N-body simulations of a 2.6 Gpc/h box with 2048^3 particles (particle mass 1.6 × 10^11 M_sun/h). Using a conditional mass function to populate the simulated dark matter density field with halos below the mass resolution of the simulation (10^8 M_sun/h < M_halo < 10^13 M_sun/h), we assign HI to those halos according to a phenomenological halo-to-HI mass relation. The simulations span a redshift range of 0.35 ≲ z ≲ 0.9 in redshift bins of width Δz ≈ 0.05 and cover a quarter of the sky at an angular resolution of about 7'. We use the simulated intensity maps to study the impact of non-linear effects and redshift space distortions on the angular clustering of HI. Focusing on the autocorrelations of the maps, we apply and compare several estimators for the angular power spectrum and its covariance. We verify that these estimators agree with analytic predictions on large scales and study the validity of approximations based on Gaussian random fields, particularly in the context of the covariance. We discuss how our results and the simulated maps can be useful for planning and interpreting future HI intensity mapping surveys.
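
    The simplest estimate of the angular power spectrum mentioned above, a spherical harmonic transform of a full-sky map, sketched here (Python) with the healpy routines synfast and anafast on a toy Gaussian map; this is an assumption-laden stand-in, not the paper's estimator comparison or its partial-sky covariance treatment.

      import numpy as np
      import healpy as hp

      nside = 128
      ell = np.arange(3 * nside)
      cl_in = 1.0 / (ell + 10.0) ** 2              # toy input angular power spectrum

      hi_map = hp.synfast(cl_in, nside)            # Gaussian realization of that spectrum
      cl_est = hp.anafast(hi_map, lmax=2 * nside)  # estimated angular power spectrum
      print(cl_est[:5])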

  2. Large scale floodplain mapping using a hydrogeomorphic method

    NASA Astrophysics Data System (ADS)

    Nardi, F.; Yan, K.; Di Baldassarre, G.; Grimaldi, S.

    2013-12-01

    Floodplain landforms are clearly distinguishable from adjacent hillslopes, being the trace of the severe floods that shaped the terrain. Digital topography therefore intrinsically contains the floodplain information, and this work presents the results of applying a DEM-based, large-scale hydrogeomorphic floodplain delineation method. The proposed approach, based on the integration of terrain analysis algorithms in a GIS framework, automatically identifies the potentially frequently saturated zones of riparian areas by analysing the maximum flood flow heights associated with stream network nodes with respect to the surrounding uplands. Flow heights are estimated by imposing a Leopold scaling law in which flow height scales with contributing area. The presented case studies include floodplain maps of large river basins for the entire Italian territory, which are also used for calibrating the Leopold scaling parameters, as well as additional large international river basins with different climatic and geomorphic characteristics, laying the basis for using this approach for global floodplain mapping. The proposed tool could be useful for detecting hydrological change, since it can easily provide maps to assess the impact of floods on human activities and, conversely, how human activities have changed in floodplain areas at large scale.
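
    A toy raster version (Python/NumPy) of the core idea: a cell is flagged as floodplain when its elevation above the nearest channel is below a Leopold-type flow height h = a * A**b computed from the contributing area A of that channel. The height-above-drainage grid, the contributing areas and the (a, b) parameters are all assumed inputs, not the paper's calibrated values.

      import numpy as np

      def floodplain_mask(hand, channel_area, a=0.05, b=0.3):
          """hand: height above nearest drainage [m]; channel_area: contributing
          area of the nearest channel cell [km^2]; returns a boolean mask."""
          flow_height = a * channel_area ** b
          return hand <= flow_height

      rng = np.random.default_rng(7)
      hand = rng.gamma(shape=2.0, scale=3.0, size=(100, 100))   # synthetic HAND grid
      channel_area = np.full((100, 100), 2500.0)                # 2500 km^2 basin (toy)
      mask = floodplain_mask(hand, channel_area)
      print("floodplain fraction:", mask.mean())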

  3. Large-scale network-level processes during entrainment

    PubMed Central

    Lithari, Chrysa; Sánchez-García, Carolina; Ruhnau, Philipp; Weisz, Nathan

    2016-01-01

    Visual rhythmic stimulation evokes a robust power increase exactly at the stimulation frequency, the so-called steady-state response (SSR). Localization of visual SSRs normally shows a very focal modulation of power in visual cortex and led to the treatment and interpretation of SSRs as a local phenomenon. Given the brain network dynamics, we hypothesized that SSRs have additional large-scale effects on the brain functional network that can be revealed by means of graph theory. We used rhythmic visual stimulation at a range of frequencies (4–30 Hz), recorded MEG and investigated source level connectivity across the whole brain. Using graph theoretical measures we observed a frequency-unspecific reduction of global density in the alpha band “disconnecting” visual cortex from the rest of the network. Also, a frequency-specific increase of connectivity between occipital cortex and precuneus was found at the stimulation frequency that exhibited the highest resonance (30 Hz). In conclusion, we showed that SSRs dynamically re-organized the brain functional network. These large-scale effects should be taken into account not only when attempting to explain the nature of SSRs, but also when used in various experimental designs. PMID:26835557
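
    A toy stand-in (Python/NumPy) for the graph measures used above: a binary graph is built by thresholding a symmetric connectivity matrix and its global density, the fraction of possible edges that are present, is computed. The connectivity matrix here is random, not source-level MEG data, and the threshold is arbitrary.

      import numpy as np

      def global_density(conn, threshold):
          n = conn.shape[0]
          adj = np.triu(conn > threshold, k=1)    # count each undirected edge once
          return adj.sum() / (n * (n - 1) / 2)

      rng = np.random.default_rng(8)
      conn = rng.random((90, 90))
      conn = (conn + conn.T) / 2                  # symmetrize: 90 toy brain regions
      print("global density:", global_density(conn, threshold=0.8))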

  4. Large-scale network-level processes during entrainment.

    PubMed

    Lithari, Chrysa; Sánchez-García, Carolina; Ruhnau, Philipp; Weisz, Nathan

    2016-03-15

    Visual rhythmic stimulation evokes a robust power increase exactly at the stimulation frequency, the so-called steady-state response (SSR). Localization of visual SSRs normally shows a very focal modulation of power in visual cortex and led to the treatment and interpretation of SSRs as a local phenomenon. Given the brain network dynamics, we hypothesized that SSRs have additional large-scale effects on the brain functional network that can be revealed by means of graph theory. We used rhythmic visual stimulation at a range of frequencies (4-30 Hz), recorded MEG and investigated source level connectivity across the whole brain. Using graph theoretical measures we observed a frequency-unspecific reduction of global density in the alpha band "disconnecting" visual cortex from the rest of the network. Also, a frequency-specific increase of connectivity between occipital cortex and precuneus was found at the stimulation frequency that exhibited the highest resonance (30 Hz). In conclusion, we showed that SSRs dynamically re-organized the brain functional network. These large-scale effects should be taken into account not only when attempting to explain the nature of SSRs, but also when used in various experimental designs. PMID:26835557

  5. Exploring Cloud Computing for Large-scale Scientific Applications

    SciTech Connect

    Lin, Guang; Han, Binh; Yin, Jian; Gorton, Ian

    2013-06-27

    This paper explores cloud computing for large-scale data-intensive scientific applications. Cloud computing is attractive because it provides hardware and software resources on demand, which relieves the burden of acquiring and maintaining a huge amount of resources that may be used only once by a scientific application. However, unlike typical commercial applications that often just require a moderate amount of ordinary resources, large-scale scientific applications often need to process enormous amounts of data in the terabyte or even petabyte range and require special high-performance hardware with low-latency connections to complete computation in a reasonable amount of time. To address these challenges, we build an infrastructure that can dynamically select high-performance computing hardware across institutions and dynamically adapt the computation to the selected resources to achieve high performance. We have also demonstrated the effectiveness of our infrastructure by building a system biology application and an uncertainty quantification application for carbon sequestration, which can efficiently utilize data and computation resources across several institutions.

  6. Very sparse LSSVM reductions for large-scale data.

    PubMed

    Mall, Raghvendra; Suykens, Johan A K

    2015-05-01

    Least squares support vector machines (LSSVMs) have been widely applied for classification and regression, with performance comparable to SVMs. The LSSVM model lacks sparsity and is unable to handle large-scale data due to computational and memory constraints. A primal fixed-size LSSVM (PFS-LSSVM) introduces sparsity using a Nyström approximation with a set of prototype vectors (PVs). The PFS-LSSVM model solves an overdetermined system of linear equations in the primal. However, this solution is not the sparsest. We investigate the sparsity-error tradeoff by introducing a second level of sparsity. This is done by means of L0-norm-based reductions, by iteratively sparsifying the LSSVM and PFS-LSSVM models. The exact choice of the cardinality of the initial PV set is then not important, as the final model is highly sparse. The proposed method overcomes the problem of memory constraints and high computational costs, resulting in highly sparse reductions of LSSVM models. The approximations of the two models allow the models to scale to large-scale datasets. Experiments on real-world classification and regression data sets from the UCI repository illustrate that these approaches achieve sparse models without a significant tradeoff in errors.
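
    A sketch (Python/NumPy) of the fixed-size idea only, not the authors' PFS-LSSVM or its L0-norm reductions: the data are mapped through an RBF feature map evaluated at a small set of prototype vectors and a ridge-regularized least-squares classifier is solved in the primal. The data, prototypes and hyperparameters are toy choices.

      import numpy as np

      rng = np.random.default_rng(9)
      X = rng.normal(size=(2000, 5))
      y = np.sign(X[:, 0] * X[:, 1] + 0.1 * rng.normal(size=2000))   # toy labels in {-1, +1}

      prototypes = X[rng.choice(len(X), size=50, replace=False)]     # fixed-size prototype set
      sigma, gamma = 1.0, 1e-2

      def phi(Z):
          """RBF features of Z with respect to the prototype vectors."""
          d2 = ((Z[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
          return np.exp(-d2 / (2 * sigma ** 2))

      F = phi(X)
      w = np.linalg.solve(F.T @ F + gamma * np.eye(F.shape[1]), F.T @ y)   # ridge solution
      print("training accuracy:", np.mean(np.sign(phi(X) @ w) == y))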

  7. Large-scale anisotropy in stably stratified rotating flows.

    PubMed

    Marino, R; Mininni, P D; Rosenberg, D L; Pouquet, A

    2014-08-01

    We present results from direct numerical simulations of the Boussinesq equations in the presence of rotation and/or stratification, both in the vertical direction. The runs are forced isotropically and randomly at small scales and have spatial resolutions of up to 1024^3 grid points and Reynolds numbers of ≈1000. We first show that solutions with negative energy flux and inverse cascades develop in rotating turbulence, whether or not stratification is present. However, the purely stratified case is characterized instead by an early-time, highly anisotropic transfer to large scales with almost zero net isotropic energy flux. This is consistent with previous studies that observed the development of vertically sheared horizontal winds, although only at substantially later times. However, and unlike previous works, when sufficient scale separation is allowed between the forcing scale and the domain size, the kinetic energy displays a perpendicular (horizontal) spectrum with power-law behavior compatible with ∼k_⊥^(-5/3), including in the absence of rotation. In this latter purely stratified case, such a spectrum is the result of a direct cascade of the energy contained in the large-scale horizontal wind, as is evidenced by a strong positive flux of energy in the parallel direction at all scales including the largest resolved scales.

  8. Large-scale anisotropy in stably stratified rotating flows

    SciTech Connect

    Marino, R.; Mininni, P. D.; Rosenberg, D. L.; Pouquet, A.

    2014-08-28

    We present results from direct numerical simulations of the Boussinesq equations in the presence of rotation and/or stratification, both in the vertical direction. The runs are forced isotropically and randomly at small scales and have spatial resolutions of up to 1024^3 grid points and Reynolds numbers of ≈1000. We first show that solutions with negative energy flux and inverse cascades develop in rotating turbulence, whether or not stratification is present. However, the purely stratified case is characterized instead by an early-time, highly anisotropic transfer to large scales with almost zero net isotropic energy flux. This is consistent with previous studies that observed the development of vertically sheared horizontal winds, although only at substantially later times. However, and unlike previous works, when sufficient scale separation is allowed between the forcing scale and the domain size, the total energy displays a perpendicular (horizontal) spectrum with power-law behavior compatible with ∼k_⊥^(-5/3), including in the absence of rotation. In this latter purely stratified case, such a spectrum is the result of a direct cascade of the energy contained in the large-scale horizontal wind, as is evidenced by a strong positive flux of energy in the parallel direction at all scales including the largest resolved scales.

  9. Large-scale anisotropy in stably stratified rotating flows

    DOE PAGESBeta

    Marino, R.; Mininni, P. D.; Rosenberg, D. L.; Pouquet, A.

    2014-08-28

    We present results from direct numerical simulations of the Boussinesq equations in the presence of rotation and/or stratification, both in the vertical direction. The runs are forced isotropically and randomly at small scales and have spatial resolutions of up to 1024^3 grid points and Reynolds numbers of ≈1000. We first show that solutions with negative energy flux and inverse cascades develop in rotating turbulence, whether or not stratification is present. However, the purely stratified case is characterized instead by an early-time, highly anisotropic transfer to large scales with almost zero net isotropic energy flux. This is consistent with previous studies that observed the development of vertically sheared horizontal winds, although only at substantially later times. However, and unlike previous works, when sufficient scale separation is allowed between the forcing scale and the domain size, the total energy displays a perpendicular (horizontal) spectrum with power-law behavior compatible with ∼k_⊥^(-5/3), including in the absence of rotation. In this latter purely stratified case, such a spectrum is the result of a direct cascade of the energy contained in the large-scale horizontal wind, as is evidenced by a strong positive flux of energy in the parallel direction at all scales including the largest resolved scales.

  10. Maestro: an orchestration framework for large-scale WSN simulations.

    PubMed

    Riliskis, Laurynas; Osipov, Evgeny

    2014-01-01

    Contemporary wireless sensor networks (WSNs) have evolved into large and complex systems and are one of the main technologies used in cyber-physical systems and the Internet of Things. Extensive research on WSNs has led to the development of diverse solutions at all levels of software architecture, including protocol stacks for communications. This multitude of solutions is due to the limited computational power and restrictions on energy consumption that must be accounted for when designing typical WSN systems. It is therefore challenging to develop, test and validate even small WSN applications, and this process can easily consume significant resources. Simulations are inexpensive tools for testing, verifying and generally experimenting with new technologies in a repeatable fashion. Consequently, as the size of the systems to be tested increases, so does the need for large-scale simulations. This article describes a tool called Maestro for the automation of large-scale simulation and investigates the feasibility of using cloud computing facilities for such a task. Using tools that are built into Maestro, we demonstrate a feasible approach for benchmarking cloud infrastructure in order to identify cloud Virtual Machine (VM) instances that provide an optimal balance of performance and cost for a given simulation. PMID:24647123

  11. Large-scale magnetic fields in magnetohydrodynamic turbulence.

    PubMed

    Alexakis, Alexandros

    2013-02-22

    High Reynolds number magnetohydrodynamic turbulence in the presence of zero-flux large-scale magnetic fields is investigated as a function of the magnetic field strength. For a variety of flow configurations, the energy dissipation rate ε follows the scaling ε ∝ U_rms^3/ℓ even when the large-scale magnetic field energy is twenty times larger than the kinetic energy. A further increase of the magnetic energy showed a transition to the ε ∝ U_rms^2 B_rms/ℓ scaling, implying that magnetic shear becomes more efficient at this point at cascading the energy than the velocity fluctuations. Strongly helical configurations form nonturbulent helicity condensates that deviate from these scalings. Weak turbulence scaling was absent from the investigation. Finally, the magnetic energy spectra support the Kolmogorov spectrum k^(-5/3) while kinetic energy spectra are closer to the Iroshnikov-Kraichnan spectrum k^(-3/2), as observed in the solar wind.

  12. The combustion behavior of large scale lithium titanate battery.

    PubMed

    Huang, Peifeng; Wang, Qingsong; Li, Ke; Ping, Ping; Sun, Jinhua

    2015-01-01

    Safety is always a big obstacle to the large-scale application of lithium batteries. However, knowledge of battery combustion behavior is limited. To investigate the combustion behavior of large-scale lithium batteries, three 50 Ah Li(NixCoyMnz)O2/Li4Ti5O12 batteries at different states of charge (SOC) were heated to fire. The flame size variation is depicted to analyze the combustion behavior directly. The mass loss rate, temperature and heat release rate are used to analyze the combustion behavior in terms of the underlying reactions. Based on the observed phenomena, the combustion process is divided into three basic stages, becoming more complicated at higher SOC with a sudden ejected smoke flow. The reason is that a phase change occurs in the Li(NixCoyMnz)O2 material from a layered structure to a spinel structure. The critical ignition temperatures are 112-121 °C on the anode tab and 139 to 147 °C on the upper surface for all cells. However, the heating time and combustion time become shorter with ascending SOC. The results indicate that the battery fire hazard increases with SOC. The internal short circuit and the Li+ distribution are analyzed to be the main causes of the difference. PMID:25586064

  13. Measuring Cosmic Expansion and Large Scale Structure with Destiny

    NASA Technical Reports Server (NTRS)

    Benford, Dominic J.; Lauer, Tod R.

    2007-01-01

    Destiny is a simple, direct, low-cost mission to determine the properties of dark energy by obtaining a cosmologically deep supernova (SN) type Ia Hubble diagram and by measuring the large-scale mass power spectrum over time. Its science instrument is a 1.65 m space telescope, featuring a near-infrared survey camera/spectrometer with a large field of view. During its first two years, Destiny will detect, observe, and characterize ~3000 SN Ia events over the redshift interval 0.4 < z < 1.7; it will subsequently survey ~1000 square degrees to measure the large-scale mass power spectrum. The combination of surveys is much more powerful than either technique on its own, and will have over an order of magnitude greater sensitivity than will be provided by ongoing ground-based projects.

  14. Systematic renormalization of the effective theory of Large Scale Structure

    NASA Astrophysics Data System (ADS)

    Akbar Abolhasani, Ali; Mirbabayi, Mehrdad; Pajer, Enrico

    2016-05-01

    A perturbative description of Large Scale Structure is a cornerstone of our understanding of the observed distribution of matter in the universe. Renormalization is an essential and defining step to make this description physical and predictive. Here we introduce a systematic renormalization procedure, which neatly associates counterterms to the UV-sensitive diagrams order by order, as it is commonly done in quantum field theory. As a concrete example, we renormalize the one-loop power spectrum and bispectrum of both density and velocity. In addition, we present a series of results that are valid to all orders in perturbation theory. First, we show that while systematic renormalization requires temporally non-local counterterms, in practice one can use an equivalent basis made of local operators. We give an explicit prescription to generate all counterterms allowed by the symmetries. Second, we present a formal proof of the well-known general argument that the contribution of short distance perturbations to large scale density contrast δ and momentum density π(k) scale as k² and k, respectively. Third, we demonstrate that the common practice of introducing counterterms only in the Euler equation when one is interested in correlators of δ is indeed valid to all orders.

  15. The combustion behavior of large scale lithium titanate battery

    PubMed Central

    Huang, Peifeng; Wang, Qingsong; Li, Ke; Ping, Ping; Sun, Jinhua

    2015-01-01

    Safety remains a major obstacle to the large-scale application of lithium batteries, yet knowledge of battery combustion behavior is limited. To investigate the combustion behavior of large-scale lithium batteries, three 50 Ah Li(NixCoyMnz)O2/Li4Ti5O12 batteries at different states of charge (SOC) were heated until they ignited. The variation in flame size is used to characterize the combustion behavior directly, while the mass loss rate, temperature, and heat release rate are used to analyze the underlying reactions in more depth. Based on these observations, the combustion process is divided into three basic stages; at higher SOC it becomes more complicated, with sudden ejections of smoke caused by a phase change in the Li(NixCoyMnz)O2 material from a layered to a spinel structure. For all cells the critical ignition temperatures are 112–121 °C on the anode tab and 139–147 °C on the upper surface, but the heating time and combustion time shorten as the SOC increases. The results indicate that the battery fire hazard increases with SOC; internal short circuits and the Li+ distribution are identified as the main causes of the difference. PMID:25586064
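
    A minimal sketch, not taken from the paper, of how a measured mass loss rate can be converted into an approximate heat release rate via HRR(t) ≈ ṁ(t)·Δh_eff, the relationship underlying the combustion analysis described above. The mass-loss trace and the effective heat of combustion below are illustrative placeholders.

      # HRR estimate from mass loss rate: HRR(t) ~ m_dot(t) * delta_h_eff.
      # All numbers are illustrative assumptions, not data from the paper.
      mass_loss_rate_g_per_s = [0.0, 0.5, 2.0, 4.5, 3.0, 1.0, 0.2]  # hypothetical trace
      DELTA_H_EFF_KJ_PER_G = 10.0   # assumed effective heat of combustion

      hrr_kw = [m * DELTA_H_EFF_KJ_PER_G for m in mass_loss_rate_g_per_s]  # kJ/s = kW
      print(f"peak HRR (illustrative): {max(hrr_kw):.1f} kW")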

  16. Large scale reconstruction of the solar coronal magnetic field

    NASA Astrophysics Data System (ADS)

    Amari, T.; Aly, J.-J.; Chopin, P.; Canou, A.; Mikic, Z.

    2014-10-01

    It is now becoming necessary to access the global magnetic structure of the solar low corona on a large scale in order to understand its physics, and in particular the conditions under which the magnetic fields are energized and the multiple connections between distant active regions (ARs) that may trigger eruptive events in an almost coordinated way. Various vector magnetographs, either on board spacecraft or ground-based, currently make it possible to obtain vector synoptic maps, composite magnetograms made of multiple interacting ARs, and full-disk magnetograms. We present a method recently developed for reconstructing the global solar coronal magnetic field as a nonlinear force-free magnetic field in spherical geometry, generalizing our previous results in Cartesian geometry. This method is implemented in the new code XTRAPOLS, which thus appears as an extension of our active-region-scale code XTRAPOL. We apply our method by performing a reconstruction at a specific time for which we have a set of composite data consisting of a vector magnetogram provided by SDO/HMI, embedded in a larger full-disk vector magnetogram provided by the same instrument, itself embedded in a synoptic map provided by SOLIS. It proves possible to access the large-scale structure of the corona and its energy content, as well as the AR scale, at which we recover the presence of a twisted flux rope in equilibrium.
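
    For reference, the nonlinear force-free field model underlying such reconstructions is the standard one written below; the boundary treatment and the spherical-geometry formulation specific to XTRAPOLS are not reproduced here.

      % Force-free (zero Lorentz force) and solenoidal conditions:
      (\nabla \times \mathbf{B}) \times \mathbf{B} = 0 , \qquad \nabla \cdot \mathbf{B} = 0 ,
      % equivalently, with a force-free function \alpha constant along field lines:
      \nabla \times \mathbf{B} = \alpha \mathbf{B} , \qquad \mathbf{B} \cdot \nabla \alpha = 0 .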

  17. THE LARGE-SCALE MAGNETIC FIELDS OF THIN ACCRETION DISKS

    SciTech Connect

    Cao Xinwu; Spruit, Hendrik C. E-mail: henk@mpa-garching.mpg.de

    2013-03-10

    A large-scale magnetic field threading an accretion disk is a key ingredient in the jet formation model. The most attractive scenario for the origin of such a large-scale field is advection of the field by the gas in the accretion disk from the interstellar medium or a companion star. However, it is realized that outward diffusion of the accreted field is fast compared with the inward accretion velocity in a geometrically thin accretion disk if the magnetic Prandtl number P_m is around unity. In this work, we revisit this problem, considering the angular momentum of the disk to be removed predominantly by magnetically driven outflows. The radial velocity of the disk is significantly increased due to the presence of the outflows. Using a simplified model for the vertical disk structure, we find that even moderately weak fields can cause sufficient angular momentum loss via a magnetic wind to balance outward diffusion. There are two equilibrium points: one at low field strengths, corresponding to a plasma β at the midplane of order several hundred, and one for strong accreted fields, β ≈ 1. We surmise that the first is relevant for the accretion of weak, possibly external, fields through the outer parts of the disk, while the latter could explain the tendency, observed in full three-dimensional numerical simulations, of strong flux bundles at the centers of disks to stay confined in spite of strong magnetorotational instability turbulence surrounding them.
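
    The advection-versus-diffusion argument referred to above is often summarized by the order-of-magnitude estimate below (a standard textbook form, not the paper's detailed vertical-structure model), together with the definition of the midplane plasma β.

      % With turbulent viscosity \nu, magnetic diffusivity \eta, Prandtl number
      % P_m = \nu/\eta, accretion speed v_r \sim \nu/R and field bending over the
      % scale height H, the accreted field diffuses out much faster than it is
      % dragged inward in a thin disk with P_m \sim 1:
      \frac{v_{\rm adv}}{v_{\rm diff}} \sim \frac{\nu / R}{\eta / H}
        = P_m \, \frac{H}{R} \ll 1 ,
      \qquad
      \beta \equiv \frac{p_{\rm gas}}{p_{\rm mag}} = \frac{8 \pi p_{\rm mid}}{B^2} .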

  18. Online education in a large scale rehabilitation institution.

    PubMed

    Mazzoleni, M Cristina; Rognoni, Carla; Pagani, Marco; Imbriani, Marcello

    2012-01-01

    Large-scale, multiple-venue institutions face problems when delivering education to their healthcare staff. The present study is aimed at evaluating the feasibility of relying on e-learning for at least part of the training of the Salvatore Maugeri Foundation healthcare staff. The paper reports the results of delivering e-learning courses to the personnel over a period of 7 months in order to assess the attitude toward online course attendance, the proportion of online to traditional education administered, and the economic sustainability of the online delivery process. 37% of the total healthcare staff attended online courses, and 46% of nurses proved to be particularly active. The ratios of the total number of credits to the total number of courses for online and traditional education are 18268/5 and 20354/96, respectively. These results show that e-learning is not at all a niche tool used (or usable) by a limited number of people. Economic sustainability, assessed via personnel work-hour savings, has been demonstrated. When distance learning is appropriate, online education is an effective, sustainable, and well-accepted means of supporting and promoting healthcare staff education in a large-scale institution. PMID:22491113
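
    A quick arithmetic check of the credit/course figures quoted above; reading each ratio as "credits delivered per course" is an interpretation made for this sketch, not a statement from the study.

      # Credits and courses for online vs. traditional education, as quoted above.
      online_credits, online_courses = 18268, 5
      traditional_credits, traditional_courses = 20354, 96

      print(f"online:      {online_credits / online_courses:.0f} credits per course")
      print(f"traditional: {traditional_credits / traditional_courses:.0f} credits per course")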

  19. High Speed Networking and Large-scale Simulation in Geodynamics

    NASA Technical Reports Server (NTRS)

    Kuang, Weijia; Gary, Patrick; Seablom, Michael; Truszkowski, Walt; Odubiyi, Jide; Jiang, Weiyuan; Liu, Dong

    2004-01-01

    Large-scale numerical simulation has been one of the most important approaches for understanding global geodynamical processes. In this approach, peta-scale floating-point operations (pflops) are often required to carry out a single physically meaningful numerical experiment. For example, to model convective flow in the Earth's core and the generation of the geomagnetic field (geodynamo), a simulation covering one magnetic free-decay time (approximately 15000 years) with a modest resolution of 150 in three spatial dimensions would require approximately 0.2 pflops. If such a numerical model is used to predict geomagnetic secular variation over decades and longer, e.g. with an ensemble Kalman filter assimilation approach, approximately 30 (and perhaps more) independent simulations of similar scale would be needed for one data assimilation analysis. Such a computation clearly requires computing resources that exceed the capacity of any single facility currently at our disposal. One solution is to utilize very fast networks (e.g. 10 Gb optical networks) and available middleware (e.g. the Globus Toolkit) to allocate available but often heterogeneous resources for such large-scale computing efforts. At NASA GSFC, we are experimenting with such an approach by networking several clusters for geomagnetic data assimilation research. We shall present our initial testing results at the meeting.
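
    A back-of-envelope sketch combining the figures quoted above (≈0.2 peta floating-point operations per run, ≈30 ensemble members per assimilation analysis) into a total workload and a rough wall-clock time; the sustained throughput of the networked clusters is an illustrative assumption.

      # Total work for one ensemble assimilation analysis and a rough runtime.
      PFLOP_PER_RUN = 0.2        # from the abstract
      ENSEMBLE_MEMBERS = 30      # from the abstract
      SUSTAINED_TFLOPS = 10.0    # assumed aggregate sustained rate [Tflop/s]

      total_pflop = PFLOP_PER_RUN * ENSEMBLE_MEMBERS
      seconds = total_pflop * 1.0e15 / (SUSTAINED_TFLOPS * 1.0e12)
      print(f"total work: {total_pflop:.1f} Pflop, about {seconds:.0f} s "
            f"({seconds / 3600.0:.2f} h) at {SUSTAINED_TFLOPS:.0f} Tflop/s sustained")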

  20. Large-Scale Low-Boom Inlet Test Overview

    NASA Technical Reports Server (NTRS)

    Hirt, Stefanie

    2011-01-01

    This presentation provides a high-level overview of the Large-Scale Low-Boom Inlet Test and was presented at the Fundamental Aeronautics 2011 Technical Conference. In October 2010, a low-boom supersonic inlet concept with flow control was tested in the 8'x6' supersonic wind tunnel at NASA Glenn Research Center (GRC). The primary objectives of the test were to evaluate the stability and operability of a large-scale low-boom supersonic inlet concept by acquiring performance and flowfield validation data, and to evaluate simple, passive, bleedless inlet boundary-layer control options. During this effort two models were tested: a dual-stream inlet intended to model potential flight hardware, and a single-stream design to study a zero-degree external cowl angle and to permit surface flow visualization of the vortex-generator flow control on the internal centerbody surface. The tests were conducted by a team of researchers from NASA GRC, Gulfstream Aerospace Corporation, the University of Illinois at Urbana-Champaign, and the University of Virginia.