Science.gov

Sample records for improved large-scale proteomics

  1. Large-scale characterization of the murine cardiac proteome.

    PubMed

    Cosme, Jake; Emili, Andrew; Gramolini, Anthony O

    2013-01-01

    Cardiomyopathies are diseases of the heart that result in impaired cardiac muscle function. This dysfunction can progress to an inability to supply blood to the body. Cardiovascular diseases play a large role in overall global morbidity. Investigating the protein changes in the heart during disease can uncover pathophysiological mechanisms and potential therapeutic targets. Establishing a global protein expression "footprint" can facilitate more targeted studies of diseases of the heart. In this technical review, we present methods to elucidate the heart's proteome through subfractionation of cellular compartments, which reduces sample complexity and improves detection of lower-abundance proteins during multidimensional protein identification technology analysis. Analyzing the cytosolic, microsomal, and mitochondrial subproteomes separately simplifies complex cardiac protein mixtures and is therefore advantageous for characterizing the murine cardiac proteome. In combination with bioinformatic analysis and genome correlation, large-scale protein changes can be identified at the cellular-compartment level in this animal model.

  2. Detecting differential protein expression in large-scale population proteomics

    SciTech Connect

    Ryu, Soyoung; Qian, Weijun; Camp, David G.; Smith, Richard D.; Tompkins, Ronald G.; Davis, Ronald W.; Xiao, Wenzhong

    2014-06-17

    Mass spectrometry-based high-throughput quantitative proteomics shows great potential in clinical biomarker studies, identifying and quantifying thousands of proteins in biological samples. However, methods are needed to appropriately handle issues unique to mass spectrometry data in order to detect as many biomarker proteins as possible. One issue is that different mass spectrometry experiments generate quite different total numbers of quantified peptides, which can result in more missing peptide abundances in an experiment with a smaller total number of quantified peptides. Another issue is that peptide quantification is sometimes absent, especially for less abundant peptides, and such missing values themselves carry information about peptide abundance. Here, we propose a Significance Analysis for Large-scale Proteomics Studies (SALPS) that handles missing peptide intensity values caused by the two mechanisms mentioned above. Our model performs robustly on both simulated data and proteomics data from a large clinical study. Because variation in patients’ sample quality and in instrument performance is unavoidable in clinical studies performed over the course of several years, we believe that our approach will be useful for analyzing large-scale clinical proteomics data.
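
    The abstract emphasizes that missing peptide intensities are informative rather than random. As a minimal sketch of that general idea (not the authors' SALPS model; the detection limit, data, and function name below are hypothetical), missing values can be treated as left-censored observations instead of being dropped:

    ```python
    # Minimal sketch (not the authors' SALPS model): treat missing peptide
    # intensities as left-censored at a detection limit instead of dropping them.
    import numpy as np
    from scipy import stats

    def censored_group_test(group_a, group_b, detection_limit):
        """Welch t-test on log-intensities, with missing values (np.nan)
        imputed at the detection limit to retain their 'low abundance' signal."""
        a = np.where(np.isnan(group_a), detection_limit, group_a)
        b = np.where(np.isnan(group_b), detection_limit, group_b)
        t, p = stats.ttest_ind(a, b, equal_var=False)
        return t, p

    # Hypothetical log2 intensities for one peptide across patient samples.
    healthy = np.array([20.1, 19.8, np.nan, 20.5, 19.9])
    disease = np.array([np.nan, np.nan, 17.2, np.nan, 16.9])
    print(censored_group_test(healthy, disease, detection_limit=16.0))
    ```

    More elaborate approaches model the probability of missingness as a function of abundance, but even this crude imputation illustrates why discarding missing values would erase part of the biomarker signal.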

  3. Analyzing large-scale proteomics projects with latent semantic indexing.

    PubMed

    Klie, Sebastian; Martens, Lennart; Vizcaíno, Juan Antonio; Côté, Richard; Jones, Phil; Apweiler, Rolf; Hinneburg, Alexander; Hermjakob, Henning

    2008-01-01

    Since the advent of public data repositories for proteomics data, readily accessible results from high-throughput experiments have been accumulating steadily. Several large-scale projects in particular have contributed substantially to the amount of identifications available to the community. Despite the considerable body of information amassed, very few successful analyses have been performed and published on these data, leaving the ultimate value of these projects far below their potential. A prominent reason why published proteomics data are seldom reanalyzed lies in the heterogeneous nature of the original sample collection and the subsequent data recording and processing. To illustrate that at least part of this heterogeneity can be compensated for, we here apply latent semantic analysis to the data contributed by the Human Proteome Organization's Plasma Proteome Project (HUPO PPP). Interestingly, despite the broad spectrum of instruments and methodologies applied in the HUPO PPP, our analysis reveals several obvious patterns that can be used to formulate concrete recommendations for optimizing proteomics project planning as well as the choice of technologies used in future experiments. It is clear from these results that the analysis of large bodies of publicly available proteomics data by noise-tolerant algorithms such as latent semantic analysis holds great promise and is currently underexploited.
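
    Latent semantic analysis of repository data amounts to a truncated SVD of a protein-by-experiment matrix. The sketch below illustrates that idea on synthetic toy data (it is not the HUPO PPP data set, and the weighting scheme is one common choice, not necessarily the authors'):

    ```python
    # Sketch: latent semantic analysis of a protein-by-experiment identification
    # matrix via truncated SVD (synthetic toy data, not the HUPO PPP data set).
    import numpy as np

    rng = np.random.default_rng(0)
    # Rows = proteins, columns = experiments; 1 if identified, 0 otherwise.
    X = (rng.random((200, 12)) > 0.7).astype(float)

    # TF-IDF-like weighting, as commonly used before LSA.
    idf = np.log((X.shape[1] + 1) / (X.sum(axis=1, keepdims=True) + 1))
    U, s, Vt = np.linalg.svd(X * idf, full_matrices=False)

    k = 3  # number of latent "topics" to keep
    experiment_coords = (np.diag(s[:k]) @ Vt[:k]).T  # experiments in latent space
    print(experiment_coords.shape)  # (12, 3): cluster these to reveal patterns
    ```

    Experiments that used similar instruments or protocols tend to fall close together in the reduced space, which is the kind of pattern the abstract describes exploiting for project planning.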

  4. Improving Recent Large-Scale Pulsar Surveys

    NASA Astrophysics Data System (ADS)

    Cardoso, Rogerio Fernando; Ransom, S.

    2011-01-01

    Pulsars are unique in that they act as celestial laboratories for precise tests of gravity and other extreme physics (Kramer 2004). There are approximately 2000 known pulsars today, which is less than ten percent of the pulsars in the Milky Way according to theoretical models (Lorimer 2004). Of these 2000 known pulsars, approximately ten percent are millisecond pulsars, objects prized for their period stability and used for detailed physics tests and searches for gravitational radiation (Lorimer 2008). As the field and instrumentation progress, pulsar astronomers attempt to overcome observational biases and detect new pulsars, consequently discovering new millisecond pulsars. We attempt to improve large-scale pulsar surveys by examining three recent pulsar surveys. The first, the Green Bank Telescope 350 MHz Drift Scan, a low-frequency isotropic survey of the northern sky, has yielded a large number of candidates that were visually inspected and identified, resulting in over 34,000 candidates viewed, dozens of detections of known pulsars, and the discovery of a new low-flux pulsar, PSR J1911+22. The second, the PALFA survey, is a high-frequency survey of the galactic plane with the Arecibo telescope. We created a processing pipeline for the PALFA survey at the National Radio Astronomy Observatory in Charlottesville, VA, in addition to making needed modifications upon advice from the PALFA consortium. The third survey examined is a new GBT 820 MHz survey devoted to finding new millisecond pulsars by observing the target-rich environment of unidentified sources in the Fermi LAT catalogue. By approaching these three pulsar surveys at different stages, we seek to improve the success rates of large-scale surveys, and hence the possibility for ground-breaking work in both basic physics and astrophysics.

  5. HiQuant: Rapid Postquantification Analysis of Large-Scale MS-Generated Proteomics Data.

    PubMed

    Bryan, Kenneth; Jarboui, Mohamed-Ali; Raso, Cinzia; Bernal-Llinares, Manuel; McCann, Brendan; Rauch, Jens; Boldt, Karsten; Lynn, David J

    2016-06-03

    Recent advances in mass-spectrometry-based proteomics are now facilitating ambitious large-scale investigations of the spatial and temporal dynamics of the proteome; however, the increasing size and complexity of these data sets are overwhelming current downstream computational methods, specifically those that support the postquantification analysis pipeline. Here we present HiQuant, a novel application that enables the design and execution of a postquantification workflow, including common data-processing steps, such as assay normalization and grouping, and experimental replicate quality control and statistical analysis. HiQuant also enables the interpretation of results generated from large-scale data sets by supporting interactive heatmap analysis and direct export to Cytoscape and Gephi, two leading network analysis platforms. HiQuant may be run via a user-friendly graphical interface and also supports complete one-touch automation via a command-line mode. We evaluate HiQuant's performance by analyzing a large-scale, complex interactome mapping data set and demonstrate a 200-fold improvement in the execution time over current methods. We also demonstrate HiQuant's general utility by analyzing proteome-wide quantification data generated from both a large-scale public tyrosine kinase siRNA knock-down study and an in-house investigation into the temporal dynamics of the KSR1 and KSR2 interactomes. Download HiQuant, sample data sets, and supporting documentation at http://hiquant.primesdb.eu.
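
    The postquantification steps named above (normalization, replicate grouping, statistical testing) can be sketched generically with pandas. This is not HiQuant's own code; the column names and data are hypothetical placeholders:

    ```python
    # Sketch of a generic post-quantification workflow (not HiQuant's own code):
    # median normalization, replicate grouping, and a simple significance test.
    import numpy as np
    import pandas as pd
    from scipy import stats

    # Hypothetical log2 protein intensities; columns are assay replicates.
    df = pd.DataFrame(
        np.random.default_rng(1).normal(20, 1, size=(100, 6)),
        columns=["ctrl_1", "ctrl_2", "ctrl_3", "treat_1", "treat_2", "treat_3"],
    )

    # 1. Normalize each assay to a common median.
    df = df - df.median(axis=0) + df.median(axis=0).mean()

    # 2. Group replicates and compute fold change and p-value per protein.
    ctrl, treat = df.filter(like="ctrl_"), df.filter(like="treat_")
    result = pd.DataFrame({
        "log2_fc": treat.mean(axis=1) - ctrl.mean(axis=1),
        "p_value": stats.ttest_ind(treat, ctrl, axis=1, equal_var=False).pvalue,
    })
    print(result.head())
    ```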

  6. Assembling proteomics data as a prerequisite for the analysis of large scale experiments.

    PubMed

    Schmidt, Frank; Schmid, Monika; Thiede, Bernd; Pleissner, Klaus-Peter; Böhme, Martina; Jungblut, Peter R

    2009-01-23

    Despite the complete determination of the genome sequence of a huge number of bacteria, their proteomes remain relatively poorly defined. Besides new methods to increase the number of identified proteins, new database applications are necessary to store and present results of large-scale proteomics experiments. In the present study, a database concept has been developed to address these issues and to offer complete information via a web interface. In our concept, the Oracle-based data repository system SQL-LIMS plays the central role in the proteomics workflow and was applied to the proteomes of Mycobacterium tuberculosis, Helicobacter pylori, Salmonella typhimurium and protein complexes such as the 20S proteasome. Technical operations of our proteomics labs were used as the standard for SQL-LIMS template creation. By means of a Java-based data parser, post-processed data of different approaches, such as LC/ESI-MS, MALDI-MS and 2-D gel electrophoresis (2-DE), were stored in SQL-LIMS. A minimum set of the proteomics data was transferred to our public 2D-PAGE database using a Java-based interface (Data Transfer Tool) in accordance with the requirements of the PEDRo standardization. Furthermore, the stored proteomics data were extractable from SQL-LIMS via XML. The Oracle-based data repository system SQL-LIMS played the central role in the proteomics workflow concept. Technical operations of our proteomics labs were used as standards for SQL-LIMS templates. Using a Java-based parser, post-processed data of different approaches such as LC/ESI-MS, MALDI-MS, 1-DE and 2-DE were stored in SQL-LIMS. Thus, unique data formats of different instruments were unified and stored in SQL-LIMS tables. Moreover, a unique submission identifier allowed fast access to all experimental data. This was the main advantage compared to multi-software solutions, especially if personnel fluctuations are high. Moreover, large-scale and high-throughput experiments must be managed in a comprehensive repository
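
    The core idea of unifying heterogeneous instrument outputs in one store keyed by a submission identifier can be sketched with SQLite as a stand-in (the paper uses an Oracle-based SQL-LIMS; the table layout, column names, and accessions below are hypothetical):

    ```python
    # Sketch of the general idea only (the paper uses an Oracle-based SQL-LIMS):
    # store post-processed results from different instruments in one relational
    # table, keyed by a unique submission identifier for fast retrieval.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""
        CREATE TABLE identifications (
            submission_id TEXT,    -- unique per experiment submission
            method        TEXT,    -- e.g. 'LC/ESI-MS', 'MALDI-MS', '2-DE'
            protein_acc   TEXT,
            score         REAL
        )
    """)

    # Hypothetical parsed results from two different workflows.
    rows = [
        ("SUB-0001", "LC/ESI-MS", "Rv0350", 87.2),
        ("SUB-0001", "2-DE",      "Rv0440", 64.0),
        ("SUB-0002", "MALDI-MS",  "HP0010", 71.5),
    ]
    conn.executemany("INSERT INTO identifications VALUES (?, ?, ?, ?)", rows)

    # A single submission identifier gives fast access to all experimental data.
    for row in conn.execute(
            "SELECT method, protein_acc, score FROM identifications "
            "WHERE submission_id = ?", ("SUB-0001",)):
        print(row)
    ```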

  7. Large-Scale Proteomic Analysis of the Human Spliceosome

    PubMed Central

    Rappsilber, Juri; Ryder, Ursula; Lamond, Angus I.; Mann, Matthias

    2002-01-01

    In a previous proteomic study of the human spliceosome, we identified 42 spliceosome-associated factors, including 19 novel ones. Using enhanced mass spectrometric tools and improved databases, we now report identification of 311 proteins that copurify with splicing complexes assembled on two separate pre-mRNAs. All known essential human splicing factors were found, and 96 novel proteins were identified, of which 55 contain domains directly linking them to functions in splicing/RNA processing. We also detected 20 proteins related to transcription, which indicates a direct connection between this process and splicing. This investigation provides the most detailed inventory of human spliceosome-associated factors to date, and the data indicate a number of interesting links coordinating splicing with other steps in the gene expression pathway. PMID:12176931

  8. Trans-Proteomic Pipeline, a standardized data processing pipeline for large-scale reproducible proteomics informatics

    PubMed Central

    Deutsch, Eric W.; Mendoza, Luis; Shteynberg, David; Slagel, Joseph; Sun, Zhi; Moritz, Robert L.

    2015-01-01

    Democratization of genomics technologies has enabled the rapid determination of genotypes. More recently the democratization of comprehensive proteomics technologies is enabling the determination of the cellular phenotype and the molecular events that define its dynamic state. Core proteomic technologies include mass spectrometry to define protein sequence, protein:protein interactions, and protein post-translational modifications. Key enabling technologies for proteomics are bioinformatic pipelines to identify, quantitate, and summarize these events. The Trans-Proteomic Pipeline (TPP) is a robust open-source standardized data processing pipeline for large-scale reproducible quantitative mass spectrometry proteomics. It supports all major operating systems and instrument vendors via open data formats. Here we provide a review of the overall proteomics workflow supported by the TPP, its major tools, and how it can be used in its various modes from desktop to cloud computing. We describe new features for the TPP, including data visualization functionality. We conclude by describing some common perils that affect the analysis of tandem mass spectrometry datasets, as well as some major upcoming features. PMID:25631240

  9. Gas-phase purification enables accurate, large-scale, multiplexed proteome quantification with isobaric tagging

    PubMed Central

    Wenger, Craig D; Lee, M Violet; Hebert, Alexander S; McAlister, Graeme C; Phanstiel, Douglas H; Westphall, Michael S; Coon, Joshua J

    2011-01-01

    We describe a mass spectrometry method, QuantMode, which improves the accuracy of isobaric tag–based quantification by alleviating the pervasive problem of precursor interference—co-isolation of impurities—through gas-phase purification. QuantMode analysis of a yeast sample ‘contaminated’ with interfering human peptides showed substantially improved quantitative accuracy compared to a standard scan, with a small loss of spectral identifications. This technique will allow large-scale, multiplexed quantitative proteomics analyses using isobaric tagging. PMID:21963608
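
    A toy calculation makes the precursor-interference problem that QuantMode addresses concrete: co-isolated contaminant peptides at a 1:1 ratio compress the measured reporter-ion ratio toward unity. The numbers below are hypothetical and purely illustrative:

    ```python
    # Toy illustration of precursor interference in isobaric tagging (numbers are
    # hypothetical): co-isolated contaminant peptides compress the measured ratio.
    def observed_ratio(true_ratio, purity, contaminant_ratio=1.0):
        """Reporter-ion ratio measured when only `purity` of the isolated ion
        current comes from the target peptide; the rest is interference."""
        num = purity * true_ratio + (1 - purity) * contaminant_ratio
        den = purity * 1.0 + (1 - purity) * 1.0
        return num / den

    for purity in (1.0, 0.8, 0.5):
        print(f"purity={purity:.1f}  measured ratio={observed_ratio(4.0, purity):.2f}")
    # With 50% interference a true 4:1 ratio is measured as only 2.5:1, which is
    # why gas-phase purification of the precursor improves quantitative accuracy.
    ```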

  10. Large-scale proteomic analysis of membrane proteins

    SciTech Connect

    Ahram, Mamoun; Springer, David L.

    2004-10-01

    Proteomic analysis of membrane proteins holds promise for the identification of novel candidates as drug targets and/or disease biomarkers. Despite notable technological developments, obstacles related to extraction and solubilization of membrane proteins are frequently encountered. A critical discussion of the different preparative methods for membrane proteins is offered in relation to downstream proteomic applications, mainly gel-based analyses and mass spectrometry. Unknown proteins are often identified through high-throughput profiling of membrane proteins. In the search for novel membrane proteins, analysis of protein sequences using computational tools is performed to predict the presence of transmembrane domains. Here, we also present these bioinformatic tools with the human proteome as a case study. Along with technological innovations, advancements in the areas of sample preparation and computational prediction of membrane proteins will lead to exciting discoveries.

  11. The Revolution and Evolution of Shotgun Proteomics for Large-Scale Proteome Analysis

    PubMed Central

    Yates, John R.

    2013-01-01

    Mass spectrometry has evolved at an exponential rate over the last 100 years. Innovations in the development of mass spectrometers have created powerful instruments capable of analyzing a wide range of targets, from rare atoms and molecules to very large molecules such as proteins, protein complexes, and DNA. These performance gains have been driven by sustaining innovations, punctuated by the occasional disruptive innovation. The use of mass spectrometry for proteome analysis was driven by disruptive innovations that created a capability for large-scale analysis of proteins and modifications. PMID:23294060

  12. Large-Scale Proteomics and Phosphoproteomics of Urinary Exosomes

    PubMed Central

    Gonzales, Patricia A.; Pisitkun, Trairak; Hoffert, Jason D.; Tchapyjnikov, Dmitry; Star, Robert A.; Kleta, Robert; Wang, Nam Sun; Knepper, Mark A.

    2009-01-01

    Normal human urine contains large numbers of exosomes, which are 40- to 100-nm vesicles that originate as the internal vesicles in multivesicular bodies from every renal epithelial cell type facing the urinary space. Here, we used LC-MS/MS to profile the proteome of human urinary exosomes. Overall, the analysis identified 1132 proteins unambiguously, including 177 that are represented on the Online Mendelian Inheritance in Man database of disease-related genes, suggesting that exosome analysis is a potential approach to discover urinary biomarkers. We extended the proteomic analysis to phosphoproteomic profiling using neutral loss scanning, and this yielded multiple novel phosphorylation sites, including serine-811 in the thiazide-sensitive Na-Cl co-transporter, NCC. To demonstrate the potential use of exosome analysis to identify a genetic renal disease, we carried out immunoblotting of exosomes from urine samples of patients with a clinical diagnosis of Bartter syndrome type I, showing an absence of the sodium-potassium-chloride co-transporter 2, NKCC2. The proteomic data are publicly accessible at http://dir.nhlbi.nih.gov/papers/lkem/exosome/. PMID:19056867

  13. Large-scale proteomics and phosphoproteomics of urinary exosomes.

    PubMed

    Gonzales, Patricia A; Pisitkun, Trairak; Hoffert, Jason D; Tchapyjnikov, Dmitry; Star, Robert A; Kleta, Robert; Wang, Nam Sun; Knepper, Mark A

    2009-02-01

    Normal human urine contains large numbers of exosomes, which are 40- to 100-nm vesicles that originate as the internal vesicles in multivesicular bodies from every renal epithelial cell type facing the urinary space. Here, we used LC-MS/MS to profile the proteome of human urinary exosomes. Overall, the analysis identified 1132 proteins unambiguously, including 177 that are represented on the Online Mendelian Inheritance in Man database of disease-related genes, suggesting that exosome analysis is a potential approach to discover urinary biomarkers. We extended the proteomic analysis to phosphoproteomic profiling using neutral loss scanning, and this yielded multiple novel phosphorylation sites, including serine-811 in the thiazide-sensitive Na-Cl co-transporter, NCC. To demonstrate the potential use of exosome analysis to identify a genetic renal disease, we carried out immunoblotting of exosomes from urine samples of patients with a clinical diagnosis of Bartter syndrome type I, showing an absence of the sodium-potassium-chloride co-transporter 2, NKCC2. The proteomic data are publicly accessible at http://dir.nhlbi.nih.gov/papers/lkem/exosome/.

  14. Assembling proteomics data as a prerequisite for the analysis of large scale experiments

    PubMed Central

    Schmidt, Frank; Schmid, Monika; Thiede, Bernd; Pleißner, Klaus-Peter; Böhme, Martina; Jungblut, Peter R

    2009-01-01

    Background: Despite the complete determination of the genome sequence of a huge number of bacteria, their proteomes remain relatively poorly defined. Besides new methods to increase the number of identified proteins, new database applications are necessary to store and present results of large-scale proteomics experiments. Results: In the present study, a database concept has been developed to address these issues and to offer complete information via a web interface. In our concept, the Oracle-based data repository system SQL-LIMS plays the central role in the proteomics workflow and was applied to the proteomes of Mycobacterium tuberculosis, Helicobacter pylori, Salmonella typhimurium and protein complexes such as the 20S proteasome. Technical operations of our proteomics labs were used as the standard for SQL-LIMS template creation. By means of a Java-based data parser, post-processed data of different approaches, such as LC/ESI-MS, MALDI-MS and 2-D gel electrophoresis (2-DE), were stored in SQL-LIMS. A minimum set of the proteomics data was transferred to our public 2D-PAGE database using a Java-based interface (Data Transfer Tool) in accordance with the requirements of the PEDRo standardization. Furthermore, the stored proteomics data were extractable from SQL-LIMS via XML. Conclusion: The Oracle-based data repository system SQL-LIMS played the central role in the proteomics workflow concept. Technical operations of our proteomics labs were used as standards for SQL-LIMS templates. Using a Java-based parser, post-processed data of different approaches such as LC/ESI-MS, MALDI-MS, 1-DE and 2-DE were stored in SQL-LIMS. Thus, unique data formats of different instruments were unified and stored in SQL-LIMS tables. Moreover, a unique submission identifier allowed fast access to all experimental data. This was the main advantage compared to multi-software solutions, especially if personnel fluctuations are high. Moreover, large-scale and high-throughput experiments must be managed

  15. Learning networks for sustainable, large-scale improvement.

    PubMed

    McCannon, C Joseph; Perla, Rocco J

    2009-05-01

    Large-scale improvement efforts known as improvement networks offer structured opportunities for exchange of information and insights into the adaptation of clinical protocols to a variety of settings.

  16. Large-scale Top-down Proteomics of the Human Proteome: Membrane Proteins, Mitochondria, and Senescence*

    PubMed Central

    Catherman, Adam D.; Durbin, Kenneth R.; Ahlf, Dorothy R.; Early, Bryan P.; Fellers, Ryan T.; Tran, John C.; Thomas, Paul M.; Kelleher, Neil L.

    2013-01-01

    Top-down proteomics is emerging as a viable method for the routine identification of hundreds to thousands of proteins. In this work we report the largest top-down study to date, with the identification of 1,220 proteins from the transformed human cell line H1299 at a false discovery rate of 1%. Multiple separation strategies were utilized, including the focused isolation of mitochondria, resulting in significantly improved proteome coverage relative to previous work. In all, 347 mitochondrial proteins were identified, including ∼50% of the mitochondrial proteome below 30 kDa and over 75% of the subunits constituting the large complexes of oxidative phosphorylation. Three hundred of the identified proteins were found to be integral membrane proteins containing between 1 and 12 transmembrane helices, requiring no specific enrichment or modified LC-MS parameters. Over 5,000 proteoforms were observed, many harboring post-translational modifications, including over a dozen proteins containing lipid anchors (some previously unknown) and many others with phosphorylation and methylation modifications. Comparison between untreated and senescent H1299 cells revealed several changes to the proteome, including the hyperphosphorylation of HMGA2. This work illustrates the burgeoning ability of top-down proteomics to characterize large numbers of intact proteoforms in a high-throughput fashion. PMID:24023390

  17. PROTEOME-3D: An Interactive Bioinformatics Tool for Large-Scale Data Exploration and Knowledge Discovery*

    PubMed Central

    Lundgren, Deborah H.; Eng, Jimmy; Wright, Michael E.; Han, David K.

    2006-01-01

    Comprehensive understanding of biological systems requires efficient and systematic assimilation of high-throughput datasets in the context of the existing knowledge base. A major limitation in the field of proteomics is the lack of an appropriate software platform that can synthesize a large number of experimental datasets in the context of the existing knowledge base. Here, we describe a software platform, termed PROTEOME-3D, that utilizes three essential features for systematic analysis of proteomics data: creation of a scalable, queryable, customized database for identified proteins from published literature; graphical tools for displaying proteome landscapes and trends from multiple large-scale experiments; and interactive data analysis that facilitates identification of crucial networks and pathways. Thus, PROTEOME-3D offers a standardized platform to analyze high-throughput experimental datasets for the identification of crucial players in co-regulated pathways and cellular processes. PMID:12960178

  18. Improving the Utility of Large-Scale Assessments in Canada

    ERIC Educational Resources Information Center

    Rogers, W. Todd

    2014-01-01

    Principals and teachers do not use large-scale assessment results because the lack of distinct and reliable subtests prevents identifying strengths and weaknesses of students and instruction, the results arrive too late to be used, and principals and teachers need assistance to use the results to improve instruction so as to improve student…

  19. Proteomics studies confirm the presence of alternative protein isoforms on a large scale

    PubMed Central

    Tress, Michael L; Bodenmiller, Bernd; Aebersold, Ruedi; Valencia, Alfonso

    2008-01-01

    Background: Alternative splicing of messenger RNA permits the formation of a wide range of mature RNA transcripts and has the potential to generate a diverse spectrum of functional proteins. Although there is extensive evidence for large scale alternative splicing at the transcript level, there have been no comparable studies demonstrating the existence of alternatively spliced protein isoforms. Results: Recent advances in proteomics technology have allowed us to carry out a comprehensive identification of protein isoforms in Drosophila. The analysis of this proteomic data confirmed the presence of multiple alternative gene products for over a hundred Drosophila genes. Conclusions: We demonstrate that proteomics techniques can detect the expression of stable alternative splice isoforms on a genome-wide scale. Many of these alternative isoforms are likely to have regions that are disordered in solution, and specific proteomics methodologies may be required to identify these peptides. PMID:19017398

  20. Expediting SRM assay development for large-scale targeted proteomics experiments

    SciTech Connect

    Wu, Chaochao; Shi, Tujin; Brown, Joseph N.; He, Jintang; Gao, Yuqian; Fillmore, Thomas L.; Shukla, Anil K.; Moore, Ronald J.; Camp, David G.; Rodland, Karin D.; Qian, Weijun; Liu, Tao; Smith, Richard D.

    2014-08-22

    Due to their high sensitivity and specificity, targeted proteomics measurements, e.g. selected reaction monitoring (SRM), are becoming increasingly popular for biological and translational applications. Selection of optimal transitions and optimization of collision energy (CE) are important assay development steps for achieving sensitive detection and accurate quantification; however, these steps can be labor-intensive, especially for large-scale applications. Herein, we explored several options for accelerating SRM assay development evaluated in the context of a relatively large set of 215 synthetic peptide targets. We first showed that HCD fragmentation is very similar to that of CID in triple quadrupole (QQQ) instrumentation, and that by selection of the top six y fragment ions from HCD spectra, >86% of the top transitions optimized from direct infusion on a QQQ instrument are covered. We also demonstrated that the CE calculated by existing prediction tools was less accurate for 3+ precursors, and that a significant increase in intensity for transitions could be obtained using a new CE prediction equation constructed from the present experimental data. Overall, our study illustrates the feasibility of expediting the development of larger numbers of high-sensitivity SRM assays through automation of transition selection and accurate prediction of optimal CE to improve both SRM throughput and measurement quality.
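
    The two steps being automated here, ranking y-ion transitions from an HCD library spectrum and applying a linear CE-versus-m/z rule per charge state, can be sketched as follows. The coefficients and spectrum are placeholders, not the equation fitted in the paper:

    ```python
    # Sketch: select the top y-ion transitions from an HCD library spectrum and
    # predict collision energy with a linear CE = a * (precursor m/z) + b rule.
    # The coefficients below are placeholders, not the paper's fitted equation.

    def top_y_transitions(fragments, n=6):
        """fragments: dict mapping fragment label (e.g. 'y7') to intensity."""
        y_ions = {k: v for k, v in fragments.items() if k.startswith("y")}
        return sorted(y_ions, key=y_ions.get, reverse=True)[:n]

    def predict_ce(precursor_mz, charge):
        # Hypothetical per-charge linear coefficients (slope, intercept).
        coeffs = {2: (0.034, 3.3), 3: (0.044, 3.5)}
        slope, intercept = coeffs.get(charge, coeffs[2])
        return slope * precursor_mz + intercept

    hcd_spectrum = {"b2": 1200, "y4": 5400, "y5": 8100, "y6": 9800,
                    "y7": 15000, "y8": 4300, "y9": 2600, "y10": 900}
    print(top_y_transitions(hcd_spectrum))          # six most intense y ions
    print(round(predict_ce(652.3, charge=2), 1))    # CE for a 2+ precursor
    ```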

  1. Expediting SRM Assay Development for Large-Scale Targeted Proteomics Experiments

    PubMed Central

    2015-01-01

    Because of its high sensitivity and specificity, selected reaction monitoring (SRM)-based targeted proteomics has become increasingly popular for biological and translational applications. Selection of optimal transitions and optimization of collision energy (CE) are important assay development steps for achieving sensitive detection and accurate quantification; however, these steps can be labor-intensive, especially for large-scale applications. Herein, we explored several options for accelerating SRM assay development evaluated in the context of a relatively large set of 215 synthetic peptide targets. We first showed that HCD fragmentation is very similar to that of CID in triple quadrupole (QQQ) instrumentation and that by selection of the top 6 y fragment ions from HCD spectra, >86% of the top transitions optimized from direct infusion with QQQ instrumentation are covered. We also demonstrated that the CE calculated by existing prediction tools was less accurate for 3+ precursors and that a significant increase in intensity for transitions could be obtained using a new CE prediction equation constructed from the present experimental data. Overall, our study illustrated the feasibility of expediting the development of larger numbers of high-sensitivity SRM assays through automation of transition selection and accurate prediction of optimal CE to improve both SRM throughput and measurement quality. PMID:25145539

  2. Expediting SRM assay development for large-scale targeted proteomics experiments

    DOE PAGES

    Wu, Chaochao; Shi, Tujin; Brown, Joseph N.; ...

    2014-08-22

    Due to their high sensitivity and specificity, targeted proteomics measurements, e.g. selected reaction monitoring (SRM), are becoming increasingly popular for biological and translational applications. Selection of optimal transitions and optimization of collision energy (CE) are important assay development steps for achieving sensitive detection and accurate quantification; however, these steps can be labor-intensive, especially for large-scale applications. Herein, we explored several options for accelerating SRM assay development evaluated in the context of a relatively large set of 215 synthetic peptide targets. We first showed that HCD fragmentation is very similar to that of CID in triple quadrupole (QQQ) instrumentation, and that by selection of the top six y fragment ions from HCD spectra, >86% of the top transitions optimized from direct infusion on a QQQ instrument are covered. We also demonstrated that the CE calculated by existing prediction tools was less accurate for 3+ precursors, and that a significant increase in intensity for transitions could be obtained using a new CE prediction equation constructed from the present experimental data. Overall, our study illustrates the feasibility of expediting the development of larger numbers of high-sensitivity SRM assays through automation of transition selection and accurate prediction of optimal CE to improve both SRM throughput and measurement quality.

  3. Determination of burn patient outcome by large-scale quantitative discovery proteomics

    PubMed Central

    Finnerty, Celeste C.; Jeschke, Marc G.; Qian, Wei-Jun; Kaushal, Amit; Xiao, Wenzhong; Liu, Tao; Gritsenko, Marina A.; Moore, Ronald J.; Camp, David G.; Moldawer, Lyle L.; Elson, Constance; Schoenfeld, David; Gamelli, Richard; Gibran, Nicole; Klein, Matthew; Arnoldo, Brett; Remick, Daniel; Smith, Richard D.; Davis, Ronald; Tompkins, Ronald G.; Herndon, David N.

    2013-01-01

    Objective: Emerging proteomics techniques can be used to establish proteomic outcome signatures and to identify candidate biomarkers for survival following traumatic injury. We applied high-resolution liquid chromatography-mass spectrometry (LC-MS) and multiplex cytokine analysis to profile the plasma proteome of survivors and non-survivors of massive burn injury to determine the proteomic survival signature following a major burn injury. Design: Proteomic discovery study. Setting: Five burn hospitals across the U.S. Patients: Thirty-two burn patients (16 non-survivors and 16 survivors), 19–89 years of age, were admitted within 96 h of injury to the participating hospitals with burns covering >20% of the total body surface area and required at least one surgical intervention. Interventions: None. Measurements and Main Results: We found differences in circulating levels of 43 proteins involved in the acute phase response, hepatic signaling, the complement cascade, inflammation, and insulin resistance. Thirty-two of the proteins identified were not previously known to play a role in the response to burn. IL-4, IL-8, GM-CSF, MCP-1, and β2-microglobulin correlated well with survival and may serve as clinical biomarkers. Conclusions: These results demonstrate the utility of these techniques for establishing proteomic survival signatures and for use as a discovery tool to identify candidate biomarkers for survival. This is the first clinical application of a high-throughput, large-scale LC-MS-based quantitative plasma proteomic approach for biomarker discovery for the prediction of patient outcome following burn, trauma or critical illness. PMID:23507713

  4. Mechanism of Arachidonic Acid Accumulation during Aging in Mortierella alpina: A Large-Scale Label-Free Comparative Proteomics Study.

    PubMed

    Yu, Yadong; Li, Tao; Wu, Na; Ren, Lujing; Jiang, Ling; Ji, Xiaojun; Huang, He

    2016-11-30

    Arachidonic acid (ARA) is an important polyunsaturated fatty acid having various beneficial physiological effects on the human body. The aging of Mortierella alpina has long been known to significantly improve ARA yield, but the exact mechanism is still elusive. Herein, multiple approaches including large-scale label-free comparative proteomics were employed to systematically investigate the mechanism mentioned above. Upon ultrastructural observation, abnormal mitochondria were found to aggregate around shrunken lipid droplets. Proteomics analysis revealed a total of 171 proteins with significant alterations of expression during aging. Pathway analysis suggested that reactive oxygen species (ROS) were accumulated and stimulated the activation of the malate/pyruvate cycle and isocitrate dehydrogenase, which might provide additional NADPH for ARA synthesis. EC 4.2.1.17-hydratase might be a key player in ARA accumulation during aging. These findings provide a valuable resource for efforts to further improve the ARA content in the oil produced by aging M. alpina.

  5. Improving Design Efficiency for Large-Scale Heterogeneous Circuits

    NASA Astrophysics Data System (ADS)

    Gregerson, Anthony

    Despite increases in logic density, many Big Data applications must still be partitioned across multiple computing devices in order to meet their strict performance requirements. Among the most demanding of these applications is high-energy physics (HEP), which uses complex computing systems consisting of thousands of FPGAs and ASICs to process the sensor data created by experiments at particle accelerators such as the Large Hadron Collider (LHC). Designing such computing systems is challenging due to the scale of the systems, the exceptionally high-throughput and low-latency performance constraints that necessitate application-specific hardware implementations, the requirement that algorithms are efficiently partitioned across many devices, and the possible need to update the implemented algorithms during the lifetime of the system. In this work, we describe our research to develop flexible architectures for implementing such large-scale circuits on FPGAs. In particular, this work is motivated by (but not limited in scope to) high-energy physics algorithms for the Compact Muon Solenoid (CMS) experiment at the LHC. To make efficient use of logic resources in multi-FPGA systems, we introduce Multi-Personality Partitioning, a novel form of the graph partitioning problem, and present partitioning algorithms that can significantly improve resource utilization on heterogeneous devices while also reducing inter-chip connections. To reduce the high communication costs of Big Data applications, we also introduce Information-Aware Partitioning, a partitioning method that analyzes the data content of application-specific circuits, characterizes their entropy, and selects circuit partitions that enable efficient compression of data between chips. We employ our information-aware partitioning method to improve the performance of the hardware validation platform for evaluating new algorithms for the CMS experiment. Together, these research efforts help to improve the efficiency

  6. Emerging Affinity-Based Proteomic Technologies for Large-Scale Plasma Profiling in Cardiovascular Disease.

    PubMed

    Smith, J Gustav; Gerszten, Robert E

    2017-04-25

    Plasma biomarkers that reflect molecular states of the cardiovascular system are central for clinical decision making. Routinely used plasma biomarkers include troponins, natriuretic peptides, and lipoprotein particles, yet interrogate only a modest subset of pathways relevant to cardiovascular disease. Systematic profiling of a larger portion of circulating plasma proteins (the plasma proteome) will provide opportunities for unbiased discovery of novel markers to improve diagnostic or predictive accuracy. In addition, proteomic profiling may inform pathophysiological understanding and point to novel therapeutic targets. Obstacles for comprehensive proteomic profiling include the immense size and structural heterogeneity of the proteome, as well as the broad range of protein abundance levels. Proteome-wide, untargeted profiling can be performed in tissues and cells with tandem mass spectrometry. However, applications to plasma are limited by the need for complex preanalytical sample preparation stages, which limits sample throughput. Multiplexing of targeted methods based on capture and detection of specific proteins is therefore receiving increasing attention in plasma proteomics. Immunoaffinity assays are the workhorse for measuring individual proteins but have been limited for proteomic applications by long development times, cross-reactivity preventing multiplexing, specificity issues, and incomplete sensitivity to detect proteins in the lower range of the abundance spectrum (below picograms per milliliter). Emerging technologies to address these issues include nucleotide-labeled immunoassays and aptamer reagents that can be automated for efficient multiplexing of thousands of proteins at high sample throughput, coupling of affinity capture methods to mass spectrometry for improved specificity, and ultrasensitive detection systems to measure low-abundance proteins. In addition, proteomics can now be integrated with modern genomics tools to comprehensively relate

  7. Large scale systematic proteomic quantification from non-metastatic to metastatic colorectal cancer

    NASA Astrophysics Data System (ADS)

    Yin, Xuefei; Zhang, Yang; Guo, Shaowen; Jin, Hong; Wang, Wenhai; Yang, Pengyuan

    2015-07-01

    A systematic proteomic quantification of formalin-fixed, paraffin-embedded (FFPE) colorectal cancer tissues from stage I to stage IIIC was performed on a large scale. 1017 proteins were identified, of which 338 showed quantitative changes by the label-free method, while 341 proteins were quantified with significant expression changes among 6294 proteins by the iTRAQ method. We found that the expression of migration-related proteins increased and that of binding- and adhesion-related proteins decreased during colorectal cancer development, according to gene ontology (GO) annotation and ingenuity pathway analysis (IPA). We focused on integrin alpha 5 (ITA5) of the integrin family, consistent with the metastasis-related pathway. The expression level of ITA5 decreased in metastasis tissues, and this result was further verified by Western blotting. Another two cell-migration-related proteins, vitronectin (VTN) and actin-related protein (ARP3), were also shown to be up-regulated by both mass spectrometry (MS)-based quantification and Western blotting. To date, our results constitute one of the largest datasets in colorectal cancer proteomics research. Our strategy reveals a disease-driven omics pattern for metastatic colorectal cancer.

  8. Large-Scale Label-Free Quantitative Proteomics of the Pea aphid-Buchnera Symbiosis*

    PubMed Central

    Poliakov, Anton; Russell, Calum W.; Ponnala, Lalit; Hoops, Harold J.; Sun, Qi; Douglas, Angela E.; van Wijk, Klaas J.

    2011-01-01

    Many insects are nutritionally dependent on symbiotic microorganisms that have tiny genomes and are housed in specialized host cells called bacteriocytes. The obligate symbiosis between the pea aphid Acyrthosiphon pisum and the γ-proteobacterium Buchnera aphidicola (only 584 predicted proteins) is particularly amenable for molecular analysis because the genomes of both partners have been sequenced. To better define the symbiotic relationship between this aphid and Buchnera, we used large-scale, high accuracy tandem mass spectrometry (nanoLC-LTQ-Orbitrap) to identify aphid and Buchnera proteins in the whole aphid body, purified bacteriocytes, isolated Buchnera cells and the residual bacteriocyte fraction. More than 1900 aphid and 400 Buchnera proteins were identified. All enzymes in amino acid metabolism annotated in the Buchnera genome were detected, reflecting the high (68%) coverage of the proteome and supporting the core function of Buchnera in the aphid symbiosis. Transporters mediating the transport of predicted metabolites were present in the bacteriocyte. Label-free spectral counting combined with hierarchical clustering allowed us to define the quantitative distribution of a subset of these proteins across both symbiotic partners, yielding no evidence for the selective transfer of proteins between the partners in either direction. This is the first quantitative proteome analysis of bacteriocyte symbiosis, providing a wealth of information about molecular function of both the host cell and bacterial symbiont. PMID:21421797
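
    Label-free spectral counting followed by hierarchical clustering, as mentioned above, can be sketched generically with an NSAF-style normalization. This is synthetic toy data, not the aphid-Buchnera measurements, and the normalization choice is one common option rather than necessarily the authors' exact procedure:

    ```python
    # Sketch: label-free spectral counting (NSAF-style normalization) followed by
    # hierarchical clustering, as one generic way to compare protein distributions
    # across fractions. Synthetic toy data, not the aphid/Buchnera measurements.
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    rng = np.random.default_rng(2)
    counts = rng.integers(0, 50, size=(30, 4))      # proteins x fractions
    lengths = rng.integers(100, 1000, size=(30, 1)) # protein lengths (residues)

    saf = counts / lengths                  # spectral abundance factor
    nsaf = saf / saf.sum(axis=0)            # normalize within each fraction

    # Cluster proteins by their distribution profile across fractions.
    profiles = nsaf / (nsaf.sum(axis=1, keepdims=True) + 1e-12)
    tree = linkage(profiles, method="average", metric="euclidean")
    clusters = fcluster(tree, t=5, criterion="maxclust")
    print(clusters)
    ```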

  9. Large-Scale Proteome Comparative Analysis of Developing Rhizomes of the Ancient Vascular Plant Equisetum Hyemale

    PubMed Central

    Balbuena, Tiago Santana; He, Ruifeng; Salvato, Fernanda; Gang, David R.; Thelen, Jay J.

    2012-01-01

    Horsetail (Equisetum hyemale) is a widespread vascular plant species, whose reproduction is mainly dependent on the growth and development of the rhizomes. Due to its key evolutionary position, the identification of factors that could be involved in the existence of the rhizomatous trait may contribute to a better understanding of the role of this underground organ for the successful propagation of this and other plant species. In the present work, we characterized the proteome of E. hyemale rhizomes using a GeLC-MS spectral-counting proteomics strategy. A total of 1,911 and 1,860 non-redundant proteins were identified in the rhizomes apical tip and elongation zone, respectively. Rhizome-characteristic proteins were determined by comparisons of the developing rhizome tissues to developing roots. A total of 87 proteins were found to be up-regulated in both horsetail rhizome tissues in relation to developing roots. Hierarchical clustering indicated a vast dynamic range in the regulation of the 87 characteristic proteins and revealed, based on the regulation profile, the existence of nine major protein groups. Gene ontology analyses suggested an over-representation of the terms involved in macromolecular and protein biosynthetic processes, gene expression, and nucleotide and protein binding functions. Spatial difference analysis between the rhizome apical tip and the elongation zone revealed that only eight proteins were up-regulated in the apical tip including RNA-binding proteins and an acyl carrier protein, as well as a KH domain protein and a T-complex subunit; while only seven proteins were up-regulated in the elongation zone including phosphomannomutase, galactomannan galactosyltransferase, endoglucanase 10 and 25, and mannose-1-phosphate guanyltransferase subunits alpha and beta. This is the first large-scale characterization of the proteome of a plant rhizome. Implications of the findings were discussed in relation to other underground organs and related

  10. Toward Improved Support for Loosely Coupled Large Scale Simulation Workflows

    SciTech Connect

    Boehm, Swen; Elwasif, Wael R; Naughton, III, Thomas J; Vallee, Geoffroy R

    2014-01-01

    High-performance computing (HPC) workloads are increasingly leveraging loosely coupled large scale simulations. Unfortunately, most large-scale HPC platforms, including Cray/ALPS environments, are designed for the execution of long-running jobs based on coarse-grained launch capabilities (e.g., one MPI rank per core on all allocated compute nodes). This assumption limits capability-class workload campaigns that require large numbers of discrete or loosely coupled simulations, and where time-to-solution is an untenable pacing issue. This paper describes the challenges related to the support of fine-grained launch capabilities that are necessary for the execution of loosely coupled large scale simulations on Cray/ALPS platforms. More precisely, we present the details of an enhanced runtime system to support this use case, and report on initial results from early testing on systems at Oak Ridge National Laboratory.

  11. hEIDI: An Intuitive Application Tool To Organize and Treat Large-Scale Proteomics Data.

    PubMed

    Hesse, Anne-Marie; Dupierris, Véronique; Adam, Claire; Court, Magali; Barthe, Damien; Emadali, Anouk; Masselon, Christophe; Ferro, Myriam; Bruley, Christophe

    2016-10-07

    Advances in high-throughput proteomics have led to a rapid increase in the number, size, and complexity of the associated data sets. Managing and extracting reliable information from such large series of data sets require the use of dedicated software organized in a consistent pipeline to reduce, validate, exploit, and ultimately export data. The compilation of multiple mass-spectrometry-based identification and quantification results obtained in the context of a large-scale project represents a real challenge for developers of bioinformatics solutions. In response to this challenge, we developed a dedicated software suite called hEIDI to manage and combine both identifications and semiquantitative data related to multiple LC-MS/MS analyses. This paper describes how, through a user-friendly interface, hEIDI can be used to compile analyses and retrieve lists of nonredundant protein groups. Moreover, hEIDI allows direct comparison of series of analyses on the basis of protein groups, while ensuring consistent protein inference and computing spectral counts. hEIDI ensures that validated results are compliant with MIAPE guidelines, as all information related to samples and results is stored in appropriate databases. Thanks to the database structure, validated results generated within hEIDI can be easily exported in the PRIDE XML format for subsequent publication. hEIDI can be downloaded from http://biodev.extra.cea.fr/docs/heidi.

  12. Large-Scale and Deep Quantitative Proteome Profiling Using Isobaric Labeling Coupled with Two-Dimensional LC-MS/MS.

    PubMed

    Gritsenko, Marina A; Xu, Zhe; Liu, Tao; Smith, Richard D

    2016-01-01

    Comprehensive, quantitative information on abundances of proteins and their posttranslational modifications (PTMs) can potentially provide novel biological insights into disease pathogenesis and therapeutic intervention. Herein, we introduce a quantitative strategy utilizing isobaric stable isotope-labeling techniques combined with two-dimensional liquid chromatography-tandem mass spectrometry (2D-LC-MS/MS) for large-scale, deep quantitative proteome profiling of biological samples or clinical specimens such as tumor tissues. The workflow includes isobaric labeling of tryptic peptides for multiplexed and accurate quantitative analysis, basic reversed-phase LC fractionation and concatenation for reduced sample complexity, and nano-LC coupled to high resolution and high mass accuracy MS analysis for high confidence identification and quantification of proteins. This proteomic analysis strategy has been successfully applied for in-depth quantitative proteomic analysis of tumor samples and can also be used for integrated proteome and PTM characterization, as well as comprehensive quantitative proteomic analysis across samples from large clinical cohorts.
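
    The "fractionation and concatenation" step mentioned in the workflow means pooling every Nth high-pH reversed-phase fraction so that each final sample spans the whole gradient. The fraction counts in this sketch are illustrative, not a prescribed protocol:

    ```python
    # Sketch of basic reversed-phase fraction concatenation: pool every Nth
    # fraction so each concatenated sample spans the whole gradient, reducing
    # sample number while keeping peptide diversity high. Fraction counts are
    # illustrative, not a prescribed protocol.
    def concatenate_fractions(n_collected=96, n_final=24):
        pools = {i: [] for i in range(n_final)}
        for frac in range(n_collected):
            pools[frac % n_final].append(frac)
        return pools

    pools = concatenate_fractions()
    print(pools[0])   # fractions 0, 24, 48, 72 are combined into final fraction 0
    ```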

  13. Discovery of O-GlcNAc-modified Proteins in Published Large-scale Proteome Data*

    PubMed Central

    Hahne, Hannes; Gholami, Amin Moghaddas; Kuster, Bernhard

    2012-01-01

    The attachment of N-acetylglucosamine to serine or threonine residues (O-GlcNAc) is a post-translational modification on nuclear and cytoplasmic proteins with emerging roles in numerous cellular processes, such as signal transduction, transcription, and translation. It is further presumed that O-GlcNAc can exhibit a site-specific, dynamic and possibly functional interplay with phosphorylation. O-GlcNAc proteins are commonly identified by tandem mass spectrometry following some form of biochemical enrichment. In the present study, we assessed whether, and to what extent, O-GlcNAc-modified proteins can be discovered from existing large-scale proteome data sets. To this end, we conceived a straightforward O-GlcNAc identification strategy based on our recently developed Oscore software that automatically analyzes tandem mass spectra for the presence and intensity of O-GlcNAc diagnostic fragment ions. Using the Oscore, we discovered hundreds of O-GlcNAc peptides not initially identified in these studies, most of which have not been described before. Merely re-searching these data extended the number of known O-GlcNAc proteins by almost 100, suggesting that this modification exists even more widely than previously anticipated and is often sufficiently abundant to be detected without enrichment. However, a comparison of O-GlcNAc and phospho-identifications from the very same data indicates that the O-GlcNAc modification is considerably less abundant than phosphorylation. The discovery of numerous doubly modified peptides (i.e. peptides with one or multiple O-GlcNAc or phosphate moieties) suggests that O-GlcNAc and phosphorylation are not necessarily mutually exclusive, but can occur simultaneously at adjacent sites. PMID:22661428
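
    The diagnostic-ion idea behind the Oscore can be sketched very simply: check each MS/MS peak list for the HexNAc oxonium ion near m/z 204.087, a fragment characteristic of O-GlcNAc peptides. The Oscore itself performs a more sophisticated scoring; the tolerance, threshold, and spectrum below are hypothetical:

    ```python
    # Sketch of diagnostic-ion screening only (the Oscore software itself does a
    # more sophisticated scoring): flag spectra containing the HexNAc oxonium ion
    # at m/z ~204.087, a fragment characteristic of O-GlcNAc-modified peptides.
    HEXNAC_OXONIUM_MZ = 204.0867

    def has_oglcnac_signature(peaks, tol_da=0.01, min_rel_intensity=0.05):
        """peaks: list of (m/z, intensity) tuples from one MS/MS spectrum."""
        base = max(intensity for _, intensity in peaks)
        return any(
            abs(mz - HEXNAC_OXONIUM_MZ) <= tol_da and intensity / base >= min_rel_intensity
            for mz, intensity in peaks
        )

    # Hypothetical centroided spectrum.
    spectrum = [(175.119, 900.0), (204.087, 1500.0), (366.14, 300.0), (512.29, 4000.0)]
    print(has_oglcnac_signature(spectrum))  # True
    ```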

  14. Cooperativity within proximal phosphorylation sites is revealed from large-scale proteomics data

    PubMed Central

    2010-01-01

    Background: Phosphorylation is the most prevalent post-translational modification on eukaryotic proteins. Multisite phosphorylation enables a specific combination of phosphosites to determine the speed, specificity and duration of biological response. Until recent years, the lack of high quality data limited the possibility of analyzing the properties of phosphorylation at the proteome scale and in the context of a wide range of conditions. Thanks to advances in mass spectrometry technologies, thousands of phosphosites from in-vivo experiments were identified and archived in the public domain. Such a resource is appropriate for deriving an unbiased view of phosphosite properties in eukaryotes and of their functional relevance. Results: We present statistically rigorous tests on the spatial and functional properties of a collection of ~70,000 reported phosphosites. We show that the distribution of phosphosite positioning along the protein tends to occur as dense clusters of Serine/Threonines (pS/pT) and between Serine/Threonines and Tyrosines, but generally not as much between Tyrosines (pY) only. This phenomenon is more ubiquitous than anticipated and is pertinent for most eukaryotic proteins: for proteins with ≥ 2 phosphosites, 54% of all pS/pT sites are within 4 amino acids of another site. We found a strong tendency for clustered pS/pT to be activated by the same kinase. Large-scale analyses of phosphopeptides are thus consistent with a cooperative function within the cluster. Conclusions: We present evidence supporting the notion that clusters of pS/pT but generally not pY should be considered as the elementary building blocks in phosphorylation regulation. Indeed, closely positioned sites tend to be activated by the same kinase, a signal that overrides the tendency of a protein to be activated by a single or only few kinases. Within these clusters, coordination and positional dependency is evident. We postulate that cellular regulation takes advantage of such
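
    The proximity statistic quoted above (for proteins with at least two phosphosites, the fraction of pS/pT sites within 4 residues of another site) is simple to compute. The site positions in this sketch are hypothetical:

    ```python
    # Sketch of the proximity statistic described above: for proteins with two or
    # more phosphosites, the fraction of pS/pT sites lying within 4 residues of
    # another phosphosite. Site positions below are hypothetical.
    def fraction_clustered(phosphosites_by_protein, window=4):
        clustered = total = 0
        for positions in phosphosites_by_protein.values():
            if len(positions) < 2:
                continue
            for i, pos in enumerate(positions):
                total += 1
                if any(abs(pos - other) <= window
                       for j, other in enumerate(positions) if j != i):
                    clustered += 1
        return clustered / total if total else 0.0

    sites = {"P1": [12, 15, 16, 80], "P2": [5], "P3": [100, 103, 250]}
    print(fraction_clustered(sites))  # 5 of 7 eligible sites are clustered
    ```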

  15. Colloquium on Large Scale Improvement: Implications for AISI

    ERIC Educational Resources Information Center

    McEwen, Nelly, Ed.

    2008-01-01

    The Alberta Initiative for School Improvement (AISI) is a province-wide partnership program whose goal is to improve student learning and performance by fostering initiatives that reflect the unique needs and circumstances of each school authority. It is currently ending its third cycle and ninth year of implementation. "The Colloquium on…

  16. Large-Scale Targeted Proteomics Using Internal Standard Triggered-Parallel Reaction Monitoring (IS-PRM).

    PubMed

    Gallien, Sebastien; Kim, Sang Yoon; Domon, Bruno

    2015-06-01

    Targeted high-resolution and accurate mass analyses performed on fast sequencing mass spectrometers have opened new avenues for quantitative proteomics. More specifically, parallel reaction monitoring (PRM) implemented on quadrupole-orbitrap instruments exhibits exquisite selectivity to discriminate interferences from analytes. Furthermore, the instrument trapping capability enhances the sensitivity of the measurements. The PRM technique, applied to the analysis of limited peptide sets (typically 50 peptides or less) in a complex matrix, resulted in an improved detection and quantification performance as compared with the reference method of selected reaction monitoring performed on triple quadrupole instruments. However, the implementation of PRM for the analysis of large peptide numbers requires the adjustment of mass spectrometry acquisition parameters, which dramatically affects the quality of the generated data, and thus the overall output of an experiment. A newly designed data acquisition scheme enabled the analysis of moderate-to-large peptide numbers while retaining a high performance level. This new method, called internal standard triggered-parallel reaction monitoring (IS-PRM), relies on added internal standards and the on-the-fly adjustment of acquisition parameters to drive, in real time, the measurement of endogenous peptides. The acquisition time management was designed to maximize the effective time devoted to measure the analytes in a time-scheduled targeted experiment. The data acquisition scheme alternates between two PRM modes: a fast low-resolution "watch mode" and a "quantitative mode" using optimized parameters ensuring data quality. The IS-PRM method exhibited a highly effective use of the instrument time. Applied to the analysis of large peptide sets (up to 600) in complex samples, the method showed an unprecedented combination of scale and analytical performance, with limits of quantification in the low amol range. The successful analysis of
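
    The alternating watch/quantitative acquisition logic described in the abstract can be sketched as a simple decision rule. The thresholds and scan interface below are hypothetical; a real implementation runs inside the instrument control software rather than as standalone Python:

    ```python
    # Sketch of the IS-PRM acquisition logic as described in the abstract: a fast,
    # low-resolution "watch mode" monitors spiked internal standards, and detection
    # of a standard triggers a high-quality "quantitative mode" scan on the
    # corresponding endogenous peptide. Thresholds and the scan interface are
    # hypothetical; real implementations run inside the instrument control software.
    WATCH = {"resolution": 17_500, "fill_time_ms": 10}
    QUANT = {"resolution": 70_000, "fill_time_ms": 500}

    def next_scan(standard_intensity, trigger_threshold=1e5):
        """Decide acquisition parameters for the next scan of a target pair."""
        if standard_intensity >= trigger_threshold:
            return ("endogenous_peptide", QUANT)   # spend time where it matters
        return ("internal_standard", WATCH)        # keep watching cheaply

    for intensity in (2e3, 8e4, 3e5):
        target, params = next_scan(intensity)
        print(f"IS intensity {intensity:.0e} -> scan {target} at R={params['resolution']}")
    ```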

  17. Large-Scale Targeted Proteomics Using Internal Standard Triggered-Parallel Reaction Monitoring (IS-PRM)*

    PubMed Central

    Gallien, Sebastien; Kim, Sang Yoon; Domon, Bruno

    2015-01-01

    Targeted high-resolution and accurate mass analyses performed on fast sequencing mass spectrometers have opened new avenues for quantitative proteomics. More specifically, parallel reaction monitoring (PRM) implemented on quadrupole-orbitrap instruments exhibits exquisite selectivity to discriminate interferences from analytes. Furthermore, the instrument trapping capability enhances the sensitivity of the measurements. The PRM technique, applied to the analysis of limited peptide sets (typically 50 peptides or less) in a complex matrix, resulted in an improved detection and quantification performance as compared with the reference method of selected reaction monitoring performed on triple quadrupole instruments. However, the implementation of PRM for the analysis of large peptide numbers requires the adjustment of mass spectrometry acquisition parameters, which dramatically affects the quality of the generated data, and thus the overall output of an experiment. A newly designed data acquisition scheme enabled the analysis of moderate-to-large peptide numbers while retaining a high performance level. This new method, called internal standard triggered-parallel reaction monitoring (IS-PRM), relies on added internal standards and the on-the-fly adjustment of acquisition parameters to drive, in real time, the measurement of endogenous peptides. The acquisition time management was designed to maximize the effective time devoted to measure the analytes in a time-scheduled targeted experiment. The data acquisition scheme alternates between two PRM modes: a fast low-resolution “watch mode” and a “quantitative mode” using optimized parameters ensuring data quality. The IS-PRM method exhibited a highly effective use of the instrument time. Applied to the analysis of large peptide sets (up to 600) in complex samples, the method showed an unprecedented combination of scale and analytical performance, with limits of quantification in the low amol range. The successful

  18. Large-Scale and Deep Quantitative Proteome Profiling Using Isobaric Labeling Coupled with Two-Dimensional LC-MS/MS

    SciTech Connect

    Gritsenko, Marina A.; Xu, Zhe; Liu, Tao; Smith, Richard D.

    2016-02-12

    Comprehensive, quantitative information on abundances of proteins and their post-translational modifications (PTMs) can potentially provide novel biological insights into disease pathogenesis and therapeutic intervention. Herein, we introduce a quantitative strategy utilizing isobaric stable isotope labeling techniques combined with two-dimensional liquid chromatography-tandem mass spectrometry (2D-LC-MS/MS) for large-scale, deep quantitative proteome profiling of biological samples or clinical specimens such as tumor tissues. The workflow includes isobaric labeling of tryptic peptides for multiplexed and accurate quantitative analysis, basic reversed-phase LC fractionation and concatenation for reduced sample complexity, and nano-LC coupled to high resolution and high mass accuracy MS analysis for high confidence identification and quantification of proteins. This proteomic analysis strategy has been successfully applied for in-depth quantitative proteomic analysis of tumor samples, and can also be used for integrated proteome and PTM characterization, as well as comprehensive quantitative proteomic analysis across samples from large clinical cohorts.
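
    One concrete step in the workflow above is the concatenation of basic (high-pH) reversed-phase fractions into a smaller number of pooled fractions, so that each pool spans the full elution range and sample complexity is reduced without losing coverage. A minimal sketch of that pooling scheme follows; the 96-to-24 split is an illustrative assumption, not a parameter taken from this protocol.

    ```python
    # Concatenate high-pH reversed-phase fractions into pools by combining every
    # n-th fraction (e.g., 96 collected fractions -> 24 pooled fractions).
    def concatenate_fractions(n_collected=96, n_pools=24):
        pools = {p: [] for p in range(n_pools)}
        for fraction in range(n_collected):
            pools[fraction % n_pools].append(fraction)
        return pools

    pools = concatenate_fractions()
    print(pools[0])   # fractions 0, 24, 48 and 72 are combined into the first pool
    ```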

  19. A Large-Scale Quantitative Proteomic Approach To Identifying Sulfur Mustard-Induced Protein Phosphorylation Cascades

    DTIC Science & Technology

    2009-07-31

    are no effective treatments for SM-induced injury, current research focuses on understanding the molecular changes upon SM exposure. Indeed, efforts...with immobilized metal affinity chromatography to study the large-scale protein phosphorylation changes resulting from SM exposure in a human... effective at probing individual pathways, they do not put into context the global changes that are occurring in response to SM and how these many

  20. From Peptidome to PRIDE: Public proteomics data migration at a large scale

    PubMed Central

    Csordas, Attila; Wang, Rui; Ríos, Daniel; Reisinger, Florian; Foster, Joseph M; Slotta, Douglas J; Vizcaíno, Juan Antonio; Hermjakob, Henning

    2013-01-01

    The PRIDE database, developed and maintained at the European Bioinformatics Institute (EBI), is one of the most prominent data repositories dedicated to high throughput MS-based proteomics data. Peptidome, developed by the National Center for Biotechnology Information (NCBI) as a sibling resource to PRIDE, was discontinued due to funding constraints in April 2011. A joint effort between the two teams was started soon after the Peptidome closure to ensure that data were not “lost” to the wider proteomics community by exporting it to PRIDE. As a result, data in the low terabyte range have been migrated from Peptidome to PRIDE and made publicly available under experiment accessions 17 900–18 271, representing 54 projects, ∼53 million mass spectra, ∼10 million peptide identifications, ∼650 000 protein identifications, ∼1.1 million biologically relevant protein modifications, and 28 species, from more than 30 different labs. PMID:23533138

  1. Aggregation prone regions in human proteome: Insights from large-scale data analyses.

    PubMed

    Prabakaran, R; Goel, Dhruv; Kumar, Sandeep; Gromiha, M Michael

    2017-06-01

    Protein aggregation leads to several burdensome human maladies, but a molecular level understanding of how the human proteome has tackled the threat of aggregation is currently lacking. In this work, we survey the human proteome for incidence of aggregation prone regions (APRs), by using sequences of experimentally validated amyloid-fibril forming peptides and via computational predictions. While approximately 30 human proteins are currently known to be amyloidogenic, we found that 260 proteins (∼1% of human proteome) contain at least one experimentally validated amyloid-fibril forming segment. Computer predictions suggest that more than 80% of the human proteins contain at least one potential APR and approximately two-thirds (65%) contain two or more APRs, spanning 3-5% of their sequences. Sequence randomizations show that this apparently high incidence of APRs has actually been significantly reduced by the unique amino acid composition and sequence patterning of human proteins. The human proteome has utilized a wide repertoire of sequence-structural optimization strategies, most of them already known, to minimize deleterious consequences due to the presence of APRs while simultaneously taking advantage of their order promoting properties. This survey also found that APRs tend to be located near the active and ligand binding sites in human proteins, but not near the post translational modification sites. The APRs in human proteins are also preferentially found at heterotypic interfaces rather than homotypic ones. Interestingly, this survey reveals that APRs play multiple, often opposing, roles in the human protein sequence-structure-function relationships. Insights gained from this work have several interesting implications towards novel drug discovery and development. Proteins 2017; 85:1099-1118. © 2017 Wiley Periodicals, Inc.
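
    The first step of such a survey, mapping experimentally validated amyloid-fibril forming segments onto protein sequences, amounts to a substring scan. A minimal sketch follows; the two hexapeptides below (from amyloid-beta and tau) are well-known examples used only for illustration, not the curated set used in the study.

    ```python
    # Scan a protein sequence for exact matches to known amyloid-forming segments.
    VALIDATED_SEGMENTS = ["KLVFFA", "VQIVYK"]   # illustrative hexapeptides only

    def find_aprs(sequence, segments=VALIDATED_SEGMENTS):
        hits = []
        for seg in segments:
            start = sequence.find(seg)
            while start != -1:
                hits.append((seg, start))
                start = sequence.find(seg, start + 1)
        return hits

    print(find_aprs("MDAKLVFFAEDVGSNK"))   # [('KLVFFA', 3)]
    ```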

  2. A large scale Plasmodium vivax- Saimiri boliviensis trophozoite-schizont transition proteome

    PubMed Central

    Lapp, Stacey A.; Barnwell, John W.; Galinski, Mary R.

    2017-01-01

    Plasmodium vivax is a complex protozoan parasite with over 6,500 genes and stage-specific differential expression. Much of the unique biology of this pathogen remains unknown, including how it modifies and restructures the host reticulocyte. Using a recently published P. vivax reference genome, we report the proteome from two biological replicates of infected Saimiri boliviensis host reticulocytes undergoing transition from the late trophozoite to early schizont stages. Using five database search engines, we identified a total of 2000 P. vivax and 3487 S. boliviensis proteins, making this the most comprehensive P. vivax proteome to date. PlasmoDB GO-term enrichment analysis of proteins identified at least twice by a search engine highlighted core metabolic processes and molecular functions such as glycolysis, translation and protein folding, cell components such as ribosomes, proteasomes and the Golgi apparatus, and a number of vesicle and trafficking related clusters. Database for Annotation, Visualization and Integrated Discovery (DAVID) v6.8 enriched functional annotation clusters of S. boliviensis proteins highlighted vesicle and trafficking-related clusters, elements of the cytoskeleton, oxidative processes and response to oxidative stress, macromolecular complexes such as the proteasome and ribosome, metabolism, translation, and cell death. Host and parasite proteins potentially involved in cell adhesion were also identified. Over 25% of the P. vivax proteins have no functional annotation; this group includes 45 VIR members of the large PIR family. A number of host and pathogen proteins contained highly oxidized or nitrated residues, extending prior trophozoite-enriched stage observations from S. boliviensis infections, and supporting the possibility of oxidative stress in relation to the disease. This proteome significantly expands the size and complexity of the known P. vivax and Saimiri host iRBC proteomes, and provides in-depth data that will be valuable
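
    The consensus filter mentioned above (proteins retained only if reported by more than one search engine) can be expressed compactly. A sketch under the assumption that each engine's output has been reduced to a set of protein accessions; the identifiers are invented.

    ```python
    # Keep only proteins reported by at least two of the search engines.
    from collections import Counter

    engine_hits = {                       # hypothetical per-engine results
        "engine_1": {"PVX_0001", "PVX_0002"},
        "engine_2": {"PVX_0001", "PVX_0003"},
        "engine_3": {"PVX_0002", "PVX_0003"},
    }

    counts = Counter(p for hits in engine_hits.values() for p in hits)
    consensus = {p for p, n in counts.items() if n >= 2}
    print(sorted(consensus))              # every protein here is seen by two engines
    ```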

  3. Ursgal, Universal Python Module Combining Common Bottom-Up Proteomics Tools for Large-Scale Analysis.

    PubMed

    Kremer, Lukas P M; Leufken, Johannes; Oyunchimeg, Purevdulam; Schulze, Stefan; Fufezan, Christian

    2016-03-04

    Proteomics data integration has become a broad field with a variety of programs offering innovative algorithms to analyze increasing amounts of data. Unfortunately, this software diversity leads to many problems as soon as the data is analyzed using more than one algorithm for the same task. Although it was shown that the combination of multiple peptide identification algorithms yields more robust results, it is only recently that unified approaches are emerging; however, workflows that, for example, aim to optimize search parameters or that employ cascaded-style searches can only be made accessible if data analysis becomes not only unified but also, most importantly, scriptable. Here we introduce Ursgal, a Python interface to many commonly used bottom-up proteomics tools and to additional auxiliary programs. Complex workflows can thus be composed in the Python scripting language with a few lines of code. Ursgal is easily extensible, and we have made several database search engines (X!Tandem, OMSSA, MS-GF+, Myrimatch, MS Amanda), statistical postprocessing algorithms (qvality, Percolator), and one algorithm that combines statistically postprocessed outputs from multiple search engines ("combined FDR") accessible as an interface in Python. Furthermore, we have implemented a new algorithm ("combined PEP") that combines multiple search engines employing elements of "combined FDR", PeptideShaker, and Bayes' theorem.
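
    To illustrate the kind of scriptable, multi-engine workflow the module is built for, here is a generic sketch; `run_search` and `combine_results` are placeholder functions, not Ursgal's actual API (Ursgal exposes this functionality through its own controller object and documented parameters).

    ```python
    # Generic sketch of a scripted multi-engine search with merged results.
    ENGINES = ["xtandem", "omssa", "msgfplus", "myrimatch", "msamanda"]

    def run_search(mzml_file, database, engine):
        """Placeholder: run one engine and return the path to its result file."""
        return f"{mzml_file}.{engine}.csv"

    def combine_results(result_files):
        """Placeholder: merge per-engine results, e.g. via a combined-FDR/PEP step."""
        return "combined_results.csv"

    def analyze(mzml_file, database):
        results = [run_search(mzml_file, database, e) for e in ENGINES]
        return combine_results(results)

    print(analyze("sample.mzML", "human.fasta"))
    ```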

  4. Review of software tools for design and analysis of large scale MRM proteomic datasets.

    PubMed

    Colangelo, Christopher M; Chung, Lisa; Bruce, Can; Cheung, Kei-Hoi

    2013-06-15

    Selective or Multiple Reaction monitoring (SRM/MRM) is a liquid-chromatography (LC)/tandem-mass spectrometry (MS/MS) method that enables the quantitation of specific proteins in a sample by analyzing precursor ions and the fragment ions of their selected tryptic peptides. Instrumentation software has advanced to the point that thousands of transitions (pairs of primary and secondary m/z values) can be measured in a triple quadrupole instrument coupled to an LC, by a well-designed scheduling and selection of m/z windows. The design of a good MRM assay relies on the availability of peptide spectra from previous discovery-phase LC-MS/MS studies. The tedious aspect of manually developing and processing MRM assays involving thousands of transitions has spurred the development of software tools to automate this process. Software packages have been developed for project management, assay development, assay validation, data export, peak integration, quality assessment, and biostatistical analysis. No single tool provides a complete end-to-end solution; thus, this article reviews the current state and discusses future directions of these software tools in order to enable researchers to combine these tools for a comprehensive targeted proteomics workflow.
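
    The basic data structure behind such assays is the transition list, pairing each precursor m/z with a fragment m/z and, for scheduled acquisition, an expected retention-time window. A small sketch of checking how many transitions are concurrently scheduled at a given time; all numeric values are invented for illustration.

    ```python
    # Count how many scheduled transitions are active at a given retention time.
    from dataclasses import dataclass

    @dataclass
    class Transition:
        peptide: str
        precursor_mz: float
        fragment_mz: float
        rt_center: float    # expected retention time (min)
        rt_window: float    # scheduling window width (min)

        def active_at(self, rt):
            return abs(rt - self.rt_center) <= self.rt_window / 2

    transitions = [
        Transition("PEPTIDER", 478.7, 703.4, rt_center=21.5, rt_window=4.0),
        Transition("SAMPLEK", 402.2, 589.3, rt_center=22.0, rt_window=4.0),
    ]
    print(sum(t.active_at(21.8) for t in transitions))   # 2 concurrent transitions
    ```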

  5. Review of Software Tools for Design and Analysis of Large scale MRM Proteomic Datasets

    PubMed Central

    Colangelo, Christopher M.; Chung, Lisa; Bruce, Can; Cheung, Kei-Hoi

    2013-01-01

    Selective or Multiple Reaction monitoring (SRM/MRM) is a liquid-chromatography (LC)/tandem-mass spectrometry (MS/MS) method that enables the quantitation of specific proteins in a sample by analyzing precursor ions and the fragment ions of their selected tryptic peptides. Instrumentation software has advanced to the point that thousands of transitions (pairs of primary and secondary m/z values) can be measured in a triple quadrupole instrument coupled to an LC, by a well-designed scheduling and selection of m/z windows. The design of a good MRM assay relies on the availability of peptide spectra from previous discovery-phase LC-MS/MS studies. The tedious aspect of manually developing and processing MRM assays involving thousands of transitions has spurred the development of software tools to automate this process. Software packages have been developed for project management, assay development, assay validation, data export, peak integration, quality assessment, and biostatistical analysis. No single tool provides a complete end-to-end solution; thus, this article reviews the current state and discusses future directions of these software tools in order to enable researchers to combine these tools for a comprehensive targeted proteomics workflow. PMID:23702368

  6. Large-Scale High School Reform through School Improvement Networks: Exploring Possibilities for "Developmental Evaluation"

    ERIC Educational Resources Information Center

    Peurach, Donald J.; Lenhoff, Sarah Winchell; Glazer, Joshua L.

    2016-01-01

    Recognizing school improvement networks as a leading strategy for large-scale high school reform, this analysis examines developmental evaluation as an approach to examining school improvement networks as "learning systems" able to produce, use, and refine practical knowledge in large numbers of schools. Through a case study of one…

  8. Accuracy improvement in laser stripe extraction for large-scale triangulation scanning measurement system

    NASA Astrophysics Data System (ADS)

    Zhang, Yang; Liu, Wei; Li, Xiaodong; Yang, Fan; Gao, Peng; Jia, Zhenyuan

    2015-10-01

    Large-scale triangulation scanning measurement systems are widely used to measure the three-dimensional profile of large-scale components and parts. The accuracy and speed of the laser stripe center extraction are essential for guaranteeing the accuracy and efficiency of the measuring system. However, in the process of large-scale measurement, multiple factors can cause deviation of the laser stripe center, including the spatial light intensity distribution, material reflectivity characteristics, and spatial transmission characteristics. A center extraction method is proposed for improving the accuracy of the laser stripe center extraction based on image evaluation of Gaussian fitting structural similarity and analysis of the multiple source factors. First, according to the features of the gray distribution of the laser stripe, evaluation of the Gaussian fitting structural similarity is estimated to provide a threshold value for center compensation. Then using the relationships between the gray distribution of the laser stripe and the multiple source factors, a compensation method of center extraction is presented. Finally, measurement experiments for a large-scale aviation composite component are carried out. The experimental results for this specific implementation verify the feasibility of the proposed center extraction method and the improved accuracy for large-scale triangulation scanning measurements.
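
    The baseline operation referred to above, locating the stripe center from its gray-level cross-section by Gaussian fitting, can be sketched as follows; the data are synthetic, and the paper's structural-similarity evaluation and multi-factor compensation are not reproduced here.

    ```python
    # Fit a Gaussian to a laser stripe intensity profile to estimate its center.
    import numpy as np
    from scipy.optimize import curve_fit

    def gaussian(x, amp, mu, sigma, offset):
        return amp * np.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) + offset

    x = np.arange(0, 60, 1.0)                        # pixel index across the stripe
    true_center = 27.3
    y = gaussian(x, 200.0, true_center, 3.5, 10.0)   # synthetic gray-level profile
    y += np.random.default_rng(0).normal(0, 2.0, x.size)

    popt, _ = curve_fit(gaussian, x, y, p0=[y.max(), x[y.argmax()], 3.0, y.min()])
    print(f"estimated center: {popt[1]:.2f} px (true value {true_center} px)")
    ```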

  9. Psychology in an Interdisciplinary Setting: A Large-Scale Project to Improve University Teaching

    ERIC Educational Resources Information Center

    Koch, Franziska D.; Vogt, Joachim

    2015-01-01

    At a German university of technology, a large-scale project was funded as a part of the "Quality Pact for Teaching", a programme launched by the German Federal Ministry of Education and Research to improve the quality of university teaching and study conditions. The project aims at intensifying interdisciplinary networking in teaching,…

  11. Designing a large-scale multilevel improvement initiative: the improving performance in practice program.

    PubMed

    Margolis, Peter A; DeWalt, Darren A; Simon, Janet E; Horowitz, Sheldon; Scoville, Richard; Kahn, Norman; Perelman, Robert; Bagley, Bruce; Miles, Paul

    2010-01-01

    Improving Performance in Practice (IPIP) is a large system intervention designed to align efforts and motivate the creation of a tiered system of improvement at the national, state, practice, and patient levels, assisting primary-care physicians and their practice teams to assess and measurably improve the quality of care for chronic illness and preventive services using a common approach across specialties. The long-term goal of IPIP is to create an ongoing, sustained system across multiple levels of the health care system to accelerate improvement. IPIP core program components include alignment of leadership and leadership accountability, promotion of partnerships to promote health care quality, development of attractive incentives and motivators, regular measurement and transparent sharing of performance data, participation in organized quality improvement efforts using a standardized model, development of enduring collaborative improvement networks, and practice-level support. A prototype of the program was tested in 2 states from March 2006 to February 2008. In 2008, IPIP began to spread to 5 additional states. IPIP uses the leadership of the medical profession to align efforts to achieve large-scale change and to catalyze the development of an infrastructure capable of testing, evaluating, and disseminating effective approaches directly into practice.

  12. Large-scale multiplexed quantitative discovery proteomics enabled by the use of an (18)O-labeled "universal" reference sample.

    PubMed

    Qian, Wei-Jun; Liu, Tao; Petyuk, Vladislav A; Gritsenko, Marina A; Petritis, Brianne O; Polpitiya, Ashoka D; Kaushal, Amit; Xiao, Wenzhong; Finnerty, Celeste C; Jeschke, Marc G; Jaitly, Navdeep; Monroe, Matthew E; Moore, Ronald J; Moldawer, Lyle L; Davis, Ronald W; Tompkins, Ronald G; Herndon, David N; Camp, David G; Smith, Richard D

    2009-01-01

    The quantitative comparison of protein abundances across a large number of biological or patient samples represents an important proteomics challenge that needs to be addressed for proteomics discovery applications. Herein, we describe a strategy that incorporates a stable isotope (18)O-labeled "universal" reference sample as a comprehensive set of internal standards for analyzing large sample sets quantitatively. As a pooled sample, the (18)O-labeled "universal" reference sample is spiked into each individually processed unlabeled biological sample and the peptide/protein abundances are quantified based on (16)O/(18)O isotopic peptide pair abundance ratios that compare each unlabeled sample to the identical reference sample. This approach also allows for the direct application of label-free quantitation across the sample set simultaneously along with the labeling-approach (i.e., dual-quantitation) since each biological sample is unlabeled except for the labeled reference sample that is used as internal standards. The effectiveness of this approach for large-scale quantitative proteomics is demonstrated by its application to a set of 18 plasma samples from severe burn patients. When immunoaffinity depletion and cysteinyl-peptide enrichment-based fractionation with high resolution LC-MS measurements were combined, a total of 312 plasma proteins were confidently identified and quantified with a minimum of two unique peptides per protein. The isotope labeling data was directly compared with the label-free (16)O-MS intensity data extracted from the same data sets. The results showed that the (18)O reference-based labeling approach had significantly better quantitative precision compared to the label-free approach. The relative abundance differences determined by the two approaches also displayed strong correlation, illustrating the complementary nature of the two quantitative methods. The simplicity of including the (18)O-reference for accurate quantitation makes this
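
    A minimal numerical sketch of the reference-based scheme described above: because every sample is measured against the same 18O-labeled reference, two samples can be compared indirectly through their 16O/18O ratios. The peptide intensities below are invented for illustration.

    ```python
    # Reference-based quantitation: each sample is expressed as a ratio to the
    # common 18O-labeled reference, making samples comparable to one another.
    samples_16o = {"patient_A": 4.0e6, "patient_B": 1.0e6}   # invented intensities
    reference_18o = 2.0e6                 # the same spiked reference in every run

    ratios = {s: light / reference_18o for s, light in samples_16o.items()}
    print(ratios)                                      # {'patient_A': 2.0, 'patient_B': 0.5}
    print(ratios["patient_A"] / ratios["patient_B"])   # 4.0-fold difference between samples
    ```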

  13. Large-scale Analysis of Thermo-stable, Mammalian Proteins Provides Insights into the Intrinsically Disordered Proteome

    PubMed Central

    Galea, Charles A.; High, Anthony; Obenauer, John C.; Mishra, Ashutosh; Park, Cheon-Gil; Punta, Marco; Schlessinger, Avner; Ma, Jing; Rost, Burkhard; Slaughter, Clive A.; Kriwacki, Richard W.

    2009-01-01

    Intrinsically disordered proteins are predicted to be highly abundant and play broad biological roles in eukaryotic cells. In particular, by virtue of their structural malleability and propensity to interact with multiple binding partners, disordered proteins are thought to be specialized for roles in signaling and regulation. However, these concepts are based on in silico analyses of translated whole genome sequences, not on large-scale analyses of proteins expressed in living cells. Therefore, whether these concepts broadly apply to expressed proteins is currently unknown. Previous studies have shown that heat treatment of cell extracts leads to partial enrichment of soluble, disordered proteins. Based on this observation, we sought to address the current dearth of knowledge about expressed, disordered proteins by performing a large-scale proteomics study of thermo-stable proteins isolated from mouse fibroblast cells. Using novel multidimensional chromatography methods and mass spectrometry, we identified a total of 1,320 thermo-stable proteins from these cells. Further, we used a variety of bioinformatics methods to analyze the structural and biological properties of these proteins. Interestingly, more than 900 of these expressed proteins were predicted to be substantially disordered. These were divided into two categories, with 514 predicted to be predominantly disordered and 395 predicted to exhibit both disordered and ordered/folded features. In addition, 411 of the thermo-stable proteins were predicted to be folded. Despite the use of heat treatment (60 min. at 98 °C) to partially enrich for disordered proteins, which might have been expected to select for small proteins, the sequences of these proteins exhibited a wide range of lengths (622 ± 555 residues (average length ± standard deviation) for disordered proteins and 569 ± 598 residues for folded proteins). Computational structural analyses revealed several unexpected features of the thermo

  14. Infrastructure for Large-Scale Quality-Improvement Projects: Early Lessons from North Carolina Improving Performance in Practice

    ERIC Educational Resources Information Center

    Newton, Warren P.; Lefebvre, Ann; Donahue, Katrina E.; Bacon, Thomas; Dobson, Allen

    2010-01-01

    Introduction: Little is known regarding how to accomplish large-scale health care improvement. Our goal is to improve the quality of chronic disease care in all primary care practices throughout North Carolina. Methods: Methods for improvement include (1) common quality measures and shared data system; (2) rapid cycle improvement principles; (3)…

  15. Large-Scale, Ion-Current-Based Proteomic Investigation of the Rat Striatal Proteome in a Model of Short- and Long-Term Cocaine Withdrawal.

    PubMed

    Shen, Shichen; Jiang, Xiaosheng; Li, Jun; Straubinger, Robert M; Suarez, Mauricio; Tu, Chengjian; Duan, Xiaotao; Thompson, Alexis C; Qu, Jun

    2016-05-06

    Given the tremendous detriments of cocaine dependence, effective diagnosis and patient stratification are critical for successful intervention yet difficult to achieve due to the largely unknown molecular mechanisms involved. To obtain new insights into cocaine dependence and withdrawal, we employed a reproducible, reliable, and large-scale proteomics approach to investigate the striatal proteomes of rats (n = 40, 10 per group) subjected to chronic cocaine exposure, followed by either short- (WD1) or long- (WD22) term withdrawal. By implementing a surfactant-aided precipitation/on-pellet digestion procedure, a reproducible and sensitive nanoLC-Orbitrap MS analysis, and an optimized ion-current-based MS1 quantification pipeline, >2000 nonredundant proteins were quantified confidently without missing data in any replicate. Although cocaine was cleared from the body, 129/37 altered proteins were observed in WD1/WD22 that are implicated in several biological processes related closely to drug-induced neuroplasticity. Although many of these changes recapitulate the findings from independent studies reported over the last two decades, some novel insights were obtained and further validated by immunoassays. For example, significantly elevated striatal protein kinase C activity persisted over the 22 day cocaine withdrawal. Cofilin-1 activity was up-regulated in WD1 and down-regulated in WD22. These discoveries suggest potentially distinct structural plasticity after short- and long-term cocaine withdrawal. In addition, this study provides compelling evidence that blood vessel narrowing, a long-known effect of cocaine use, occurred after long-term but not short-term withdrawal. In summary, this work developed a well-optimized paradigm for ion-current-based quantitative proteomics in brain tissues and obtained novel insights into molecular alterations in the striatum following cocaine exposure and withdrawal.

  16. Improving operating policies of large-scale surface-groundwater systems through stochastic programming

    NASA Astrophysics Data System (ADS)

    Macian-Sorribes, H.; Tilmant, A.; Pulido-Velazquez, M.

    2017-02-01

    The management of large-scale water resource systems with surface and groundwater resources requires considering stream-aquifer interactions. Optimization models applied to large-scale systems have either employed deterministic optimization (with perfect foreknowledge of future inflows, which hinders their applicability to real-life operations) or stochastic programming (in which stream-aquifer interaction is often neglected due to the computational burden associated with these methods). In this paper, stream-aquifer interaction is integrated in a stochastic programming framework by combining the Stochastic Dual Dynamic Programming (SDDP) optimization algorithm with the Embedded Multireservoir Model (EMM). The resulting extension of the SDDP algorithm, named Combined Surface-Groundwater SDDP (CSG-SDDP), is able to properly represent the stream-aquifer interaction within stochastic optimization models of large-scale surface-groundwater resource systems. The algorithm is applied to build a hydroeconomic model for the Jucar River Basin (Spain), in which stream-aquifer interactions are essential to the characterization of water resources. Besides the uncertainties regarding the economic characterization of the demand functions, the results show that the economic efficiency of the operating policies under the current system can be improved by better management of groundwater and surface resources.

  17. Large-scale proteomic analysis of the grapevine leaf apoplastic fluid reveals mainly stress-related proteins and cell wall modifying enzymes

    PubMed Central

    2013-01-01

    Background The extracellular space or apoplast forms a path through the whole plant and acts as an interface with the environment. The apoplast is composed of plant cell wall and space within which apoplastic fluid provides a means of delivering molecules and facilitates intercellular communications. However, the apoplastic fluid extraction from in planta systems remains challenging and this is particularly true for grapevine (Vitis vinifera L.), a worldwide-cultivated fruit plant. Large-scale proteomic analysis reveals the protein content of the grapevine leaf apoplastic fluid and the free interactive proteome map considerably facilitates the study of the grapevine proteome. Results To obtain a snapshot of the grapevine apoplastic fluid proteome, a vacuum-infiltration-centrifugation method was optimized to collect the apoplastic fluid from non-challenged grapevine leaves. Soluble apoplastic protein patterns were then compared to whole leaf soluble protein profiles by 2D-PAGE analyses. Subsequent MALDI-TOF/TOF mass spectrometry of tryptically digested protein spots was used to identify proteins. This large-scale proteomic analysis established a well-defined proteomic map of whole leaf and leaf apoplastic soluble proteins, with 223 and 177 analyzed spots, respectively. All data arising from proteomic, MS and MS/MS analyses were deposited in the public database world-2DPAGE. Prediction tools revealed a high proportion of (i) classical secreted proteins but also of non-classical secreted proteins namely Leaderless Secreted Proteins (LSPs) in the apoplastic protein content and (ii) proteins potentially involved in stress reactions and/or in cell wall metabolism. Conclusions This approach provides free online interactive reference maps annotating a large number of soluble proteins of the whole leaf and the apoplastic fluid of grapevine leaf. To our knowledge, this is the first detailed proteome study of grapevine apoplastic fluid providing a comprehensive overview of

  18. A community integration strategy based on an improved modularity density increment for large-scale networks

    NASA Astrophysics Data System (ADS)

    Shang, Ronghua; Zhang, Weitong; Jiao, Licheng; Stolkin, Rustam; Xue, Yu

    2017-03-01

    This paper presents a community integration strategy for large-scale networks, based on pre-partitioning, followed by optimization of an improved modularity density increment ΔD. Our proposed method initially searches for local core nodes in the network, i.e. potential community centers, and expands these communities to include neighbor nodes which have sufficiently high similarity with the core node. In this way, we can effectively exploit the information of the node and structure of the network, to accurately pre-partition the network into communities. Next, we arrange these pre-partitioned communities according to their external connections in descending order. In this way, we can ensure that communities with greater influence are prioritized during the process of community integration. At the same time, this paper proposes an improved modularity density increment ΔD, and shows how to use this as an objective function during the community integration optimization process. During the process of community consolidation, those neighbor communities with few external connections are prioritized for merging, thereby avoiding fusion errors. Finally, we incorporate global reasoning into the process of local integration. We calculate and compare the improved modularity density increment of each pair of communities, to determine whether or not they should be integrated, effectively improving the accuracy of community integration. Experimental results show that our proposed algorithm can obtain superior community classification results on 5 large-scale networks, as compared with 8 other well known algorithms from the literature.
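
    As a rough illustration of the merge step, the sketch below greedily merges the pair of communities giving the largest positive gain in an objective; plain Newman-Girvan modularity is used as a stand-in for the paper's improved modularity density increment ΔD, which is not reproduced here, and all community pairs are considered rather than only neighboring ones.

    ```python
    # Greedy pairwise community merging with standard modularity gain as a
    # stand-in objective (simplified relative to the paper's ΔD-based scheme).
    from itertools import combinations
    import networkx as nx
    from networkx.algorithms.community import modularity

    def greedy_merge(G, communities):
        communities = [set(c) for c in communities]
        improved = True
        while improved and len(communities) > 1:
            improved = False
            base = modularity(G, communities)
            best_gain, best_pair = 0.0, None
            for i, j in combinations(range(len(communities)), 2):
                trial = [c for k, c in enumerate(communities) if k not in (i, j)]
                trial.append(communities[i] | communities[j])
                gain = modularity(G, trial) - base
                if gain > best_gain:
                    best_gain, best_pair = gain, (i, j)
            if best_pair is not None:
                i, j = best_pair
                merged = communities[i] | communities[j]
                communities = [c for k, c in enumerate(communities) if k not in (i, j)]
                communities.append(merged)
                improved = True
        return communities

    G = nx.karate_club_graph()
    singletons = [{n} for n in G]     # stand-in for the pre-partitioned communities
    print(len(greedy_merge(G, singletons)))
    ```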

  19. Can limited area NWP and/or RCM models improve on large scales inside their domain?

    NASA Astrophysics Data System (ADS)

    Mesinger, Fedor; Veljovic, Katarina

    2017-04-01

    In a paper in press in Meteorology and Atmospheric Physics at the time this abstract is being written, Mesinger and Veljovic point out four requirements that need to be fulfilled by a limited area model (LAM), be it in NWP or RCM environment, to improve on large scales inside its domain. First, the NWP/RCM model needs to be run on a relatively large domain. Note that domain size is quite inexpensive compared to resolution. Second, the NWP/RCM model should not use more forcing at its boundaries than required by the mathematics of the problem. That means prescribing lateral boundary conditions only at its outside boundary, with one less prognostic variable prescribed at the outflow than at the inflow parts of the boundary. Next, nudging towards the large scales of the driver model must not be used, as it would obviously be nudging in the wrong direction if the nested model can improve on large scales inside its domain. And finally, the NWP/RCM model must have features that enable development of large scales improved compared to those of the driver model. This would typically include higher resolution, but obviously does not have to. Integrations showing improvements in large scales by LAM ensemble members are summarized in the mentioned paper in press. Ensemble members referred to are run using the Eta model, and are driven by ECMWF 32-day ensemble members, initialized 0000 UTC 4 October 2012. The Eta model used is the so-called "upgraded Eta," or "sloping steps Eta," which is free of the Gallus-Klemp problem of weak flow in the lee of bell-shaped topography, a problem that seemed to many to suggest the eta coordinate is ill suited for high resolution models. The "sloping steps" in fact represent a simple version of the cut cell scheme. Accuracy of forecasting the position of jet stream winds, chosen to be those of speeds greater than 45 m/s at 250 hPa, expressed by Equitable Threat (or Gilbert) skill scores adjusted to unit bias (ETSa) was taken to show the skill at large scales
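
    The verification measure mentioned above, the Equitable Threat (Gilbert) skill score, is computed from a 2x2 contingency table of forecast versus observed events; a small worked sketch with invented counts follows (the adjustment to unit bias used in the paper is not included).

    ```python
    # Equitable Threat Score (Gilbert skill score) from a 2x2 contingency table.
    def ets(hits, false_alarms, misses, correct_negatives):
        n = hits + false_alarms + misses + correct_negatives
        hits_random = (hits + misses) * (hits + false_alarms) / n
        return (hits - hits_random) / (hits + misses + false_alarms - hits_random)

    # Invented counts of grid points with wind speed > 45 m/s at 250 hPa:
    print(round(ets(hits=120, false_alarms=40, misses=30, correct_negatives=810), 3))  # 0.578
    ```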

  20. Enablers and Barriers to Large-Scale Uptake of Improved Solid Fuel Stoves: A Systematic Review

    PubMed Central

    Puzzolo, Elisa; Stanistreet, Debbi; Pope, Daniel; Bruce, Nigel G.

    2013-01-01

    Background: Globally, 2.8 billion people rely on household solid fuels. Reducing the resulting adverse health, environmental, and development consequences will involve transitioning through a mix of clean fuels and improved solid fuel stoves (IS) of demonstrable effectiveness. To date, achieving uptake of IS has presented significant challenges. Objectives: We performed a systematic review of factors that enable or limit large-scale uptake of IS in low- and middle-income countries. Methods: We conducted systematic searches through multidisciplinary databases, specialist websites, and consulting experts. The review drew on qualitative, quantitative, and case studies and used standardized methods for screening, data extraction, critical appraisal, and synthesis. We summarized our findings as “factors” relating to one of seven domains—fuel and technology characteristics; household and setting characteristics; knowledge and perceptions; finance, tax, and subsidy aspects; market development; regulation, legislation, and standards; programmatic and policy mechanisms—and also recorded issues that impacted equity. Results: We identified 31 factors influencing uptake from 57 studies conducted in Asia, Africa, and Latin America. All domains matter. Although factors such as offering technologies that meet household needs and save fuel, user training and support, effective financing, and facilitative government action appear to be critical, none guarantee success: All factors can be influential, depending on context. The nature of available evidence did not permit further prioritization. Conclusions: Achieving adoption and sustained use of IS at a large scale requires that all factors, spanning household/community and program/societal levels, be assessed and supported by policy. We propose a planning tool that would aid this process and suggest further research to incorporate an evaluation of effectiveness. Citation: Rehfuess EA, Puzzolo E, Stanistreet D, Pope D, Bruce

  1. Improving Large-scale Storage System Performance via Topology-aware and Balanced Data Placement

    SciTech Connect

    Wang, Feiyi; Oral, H Sarp; Vazhkudai, Sudharshan S

    2014-01-01

    With the advent of big data, the I/O subsystems of large-scale compute clusters are becoming a center of focus, with more applications putting greater demands on end-to-end I/O performance. These subsystems are often complex in design. They comprise multiple hardware and software layers to cope with the increasing capacity, capability and scalability requirements of data intensive applications. The shared nature of storage resources and the intrinsic interactions across these layers make it a great challenge to realize user-level, end-to-end performance gains. We propose a topology-aware resource load balancing strategy to improve per-application I/O performance. We demonstrate the effectiveness of our algorithm on an extreme-scale compute cluster, Titan, at the Oak Ridge Leadership Computing Facility (OLCF). Our experiments with both synthetic benchmarks and a real-world application show that, even under congestion, our proposed algorithm can improve large-scale application I/O performance significantly, resulting in both the reduction of application run times and higher resolution simulation runs.
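
    A toy sketch of the balanced-placement idea: spread a file's stripes over storage targets by always choosing the currently least-loaded target. The topology awareness of the actual algorithm, which also steers traffic away from congested network paths, is not modeled here.

    ```python
    # Least-loaded placement of file stripes across storage targets (toy model).
    import heapq

    def place_stripes(n_stripes, n_targets):
        heap = [(0, t) for t in range(n_targets)]   # (current load, target id)
        heapq.heapify(heap)
        placement = []
        for _ in range(n_stripes):
            load, target = heapq.heappop(heap)
            placement.append(target)
            heapq.heappush(heap, (load + 1, target))
        return placement

    print(place_stripes(n_stripes=10, n_targets=4))   # balanced, round-robin-like layout
    ```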

  2. Statistical validation of peptide identifications in large-scale proteomics using the target-decoy database search strategy and flexible mixture modeling.

    PubMed

    Choi, Hyungwon; Ghosh, Debashis; Nesvizhskii, Alexey I

    2008-01-01

    Reliable statistical validation of peptide and protein identifications is a top priority in large-scale mass spectrometry based proteomics. PeptideProphet is one of the computational tools commonly used for assessing the statistical confidence in peptide assignments to tandem mass spectra obtained using database search programs such as SEQUEST, MASCOT, or X!Tandem. We present two flexible methods, the variable component mixture model and the semiparametric mixture model, that remove the restrictive parametric assumptions in the mixture modeling approach of PeptideProphet. Using a control protein mixture data set generated on a linear ion trap Fourier transform (LTQ-FT) mass spectrometer, we demonstrate that both methods improve parametric models in terms of the accuracy of probability estimates and the power to detect correct identifications controlling the false discovery rate to the same degree. The statistical approaches presented here require that the data set contain a sufficient number of decoy (known to be incorrect) peptide identifications, which can be obtained using the target-decoy database search strategy.
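
    The target-decoy strategy that these mixture models build on can be summarized in a few lines: sort all peptide-spectrum matches by score and, at each threshold, estimate the false discovery rate from the number of decoy hits above it. A sketch with invented scores; real pipelines add further refinements on top of this.

    ```python
    # Estimate q-values from target-decoy search results (scores are invented).
    def q_values(psms):
        """psms: list of (score, is_decoy). Returns [(score, q_value)] for targets."""
        psms = sorted(psms, key=lambda p: p[0], reverse=True)
        fdrs, targets, decoys = [], 0, 0
        for score, is_decoy in psms:
            if is_decoy:
                decoys += 1
            else:
                targets += 1
            fdrs.append((score, is_decoy, decoys / max(targets, 1)))
        # q-value = minimum FDR over this threshold or any more permissive one.
        out, running_min = [], 1.0
        for score, is_decoy, fdr in reversed(fdrs):
            running_min = min(running_min, fdr)
            if not is_decoy:
                out.append((score, running_min))
        return list(reversed(out))

    psms = [(95, False), (92, False), (90, True), (85, False), (70, True), (60, False)]
    print(q_values(psms))   # [(95, 0.0), (92, 0.0), (85, 0.333...), (60, 0.5)]
    ```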

  3. Automatic large-scale classification of bird sounds is strongly improved by unsupervised feature learning.

    PubMed

    Stowell, Dan; Plumbley, Mark D

    2014-01-01

    Automatic species classification of birds from their sound is a computational tool of increasing importance in ecology, conservation monitoring and vocal communication studies. To make classification useful in practice, it is crucial to improve its accuracy while ensuring that it can run at big data scales. Many approaches use acoustic measures based on spectrogram-type data, such as the Mel-frequency cepstral coefficient (MFCC) features which represent a manually-designed summary of spectral information. However, recent work in machine learning has demonstrated that features learnt automatically from data can often outperform manually-designed feature transforms. Feature learning can be performed at large scale and "unsupervised", meaning it requires no manual data labelling, yet it can improve performance on "supervised" tasks such as classification. In this work we introduce a technique for feature learning from large volumes of bird sound recordings, inspired by techniques that have proven useful in other domains. We experimentally compare twelve different feature representations derived from the Mel spectrum (of which six use this technique), using four large and diverse databases of bird vocalisations, classified using a random forest classifier. We demonstrate that in our classification tasks, MFCCs can often lead to worse performance than the raw Mel spectral data from which they are derived. Conversely, we demonstrate that unsupervised feature learning provides a substantial boost over MFCCs and Mel spectra without adding computational complexity after the model has been trained. The boost is particularly notable for single-label classification tasks at large scale. The spectro-temporal activations learned through our procedure resemble spectro-temporal receptive fields calculated from avian primary auditory forebrain. However, for one of our datasets, which contains substantial audio data but few annotations, increased performance is not discernible. We
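
    A compact sketch of the unsupervised feature-learning idea evaluated above: learn a codebook from unlabeled spectrogram frames with k-means, encode each recording as a histogram of frame-to-cluster assignments, and classify the histograms with a random forest. Random arrays stand in for real Mel spectrogram frames, and plain k-means is a simplification of the spherical k-means used in the paper.

    ```python
    # Unsupervised feature learning (k-means codebook) + random forest classifier.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    n_recordings, frames_per_rec, n_mels, k = 40, 100, 40, 16

    # Placeholder "Mel spectrogram" frames and labels for each recording.
    recordings = [rng.random((frames_per_rec, n_mels)) for _ in range(n_recordings)]
    labels = rng.integers(0, 2, n_recordings)

    codebook = KMeans(n_clusters=k, n_init=10, random_state=0)
    codebook.fit(np.vstack(recordings))            # unsupervised: no labels used

    def encode(rec):
        assignments = codebook.predict(rec)
        return np.bincount(assignments, minlength=k) / len(rec)

    X = np.array([encode(r) for r in recordings])
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
    print(clf.score(X, labels))                    # training accuracy on toy data
    ```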

  4. Automatic large-scale classification of bird sounds is strongly improved by unsupervised feature learning

    PubMed Central

    Plumbley, Mark D.

    2014-01-01

    Automatic species classification of birds from their sound is a computational tool of increasing importance in ecology, conservation monitoring and vocal communication studies. To make classification useful in practice, it is crucial to improve its accuracy while ensuring that it can run at big data scales. Many approaches use acoustic measures based on spectrogram-type data, such as the Mel-frequency cepstral coefficient (MFCC) features which represent a manually-designed summary of spectral information. However, recent work in machine learning has demonstrated that features learnt automatically from data can often outperform manually-designed feature transforms. Feature learning can be performed at large scale and “unsupervised”, meaning it requires no manual data labelling, yet it can improve performance on “supervised” tasks such as classification. In this work we introduce a technique for feature learning from large volumes of bird sound recordings, inspired by techniques that have proven useful in other domains. We experimentally compare twelve different feature representations derived from the Mel spectrum (of which six use this technique), using four large and diverse databases of bird vocalisations, classified using a random forest classifier. We demonstrate that in our classification tasks, MFCCs can often lead to worse performance than the raw Mel spectral data from which they are derived. Conversely, we demonstrate that unsupervised feature learning provides a substantial boost over MFCCs and Mel spectra without adding computational complexity after the model has been trained. The boost is particularly notable for single-label classification tasks at large scale. The spectro-temporal activations learned through our procedure resemble spectro-temporal receptive fields calculated from avian primary auditory forebrain. However, for one of our datasets, which contains substantial audio data but few annotations, increased performance is not

  5. Improved methods for GRACE-derived groundwater storage change estimation in large-scale agroecosystems

    NASA Astrophysics Data System (ADS)

    Brena, A.; Kendall, A. D.; Hyndman, D. W.

    2013-12-01

    Large-scale agroecosystems are major providers of agricultural commodities and an important component of the world's food supply. In agroecosystems that depend mainly on groundwater, it is well known that their long-term sustainability can be at risk because of water management strategies and climatic trends. The water balance of groundwater-dependent agroecosystems such as the High Plains aquifer (HPA) is often dominated by pumping and irrigation, which enhance hydrological processes such as evapotranspiration, return flow and recharge in cropland areas. This work provides and validates new quantitative groundwater estimation methods for the HPA that combine satellite-based estimates of terrestrial water storage (GRACE), hydrological data assimilation products (NLDAS-2) and in situ measurements of groundwater levels and irrigation rates. The combined data can be used to elucidate the controls of irrigation on the water balance components of agroecosystems, such as crop evapotranspiration, soil moisture deficit and recharge. Our work covers a decade of continuous observations and model estimates from 2003 to 2013, which includes a significant drought since 2011. This study aims to: (1) test the sensitivity of groundwater storage to soil moisture and irrigation, (2) improve estimates of irrigation and soil moisture deficits, and (3) infer mean values of groundwater recharge across the HPA. The results show (1) significant improvements in GRACE-derived aquifer storage changes using methods that incorporate irrigation and soil moisture deficit data, (2) an acceptable correlation between the observed and estimated aquifer storage time series for the analyzed period, and (3) empirically-estimated annual rates of groundwater recharge that are consistent with previous geochemical and modeling studies. We suggest testing these correction methods in other large-scale agroecosystems with intensive groundwater pumping and irrigation rates.
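
    The core water-balance arithmetic behind GRACE-derived groundwater estimates treats groundwater as the residual of the total water storage anomaly after the other storage components are removed. A minimal sketch with invented monthly anomalies in cm of equivalent water height; the study's irrigation and soil-moisture-deficit corrections would enter through improved versions of these component estimates.

    ```python
    # Groundwater storage anomaly as the residual of the GRACE total water storage
    # anomaly after removing other components (invented numbers, cm of water).
    tws_anomaly = [-2.1, -3.4, -5.0]              # GRACE total water storage anomaly
    soil_moisture_anomaly = [-0.8, -1.1, -1.9]    # e.g., from a land surface model
    snow_anomaly = [0.2, 0.0, 0.0]

    gw_anomaly = [round(tws - sm - sn, 2)
                  for tws, sm, sn in zip(tws_anomaly, soil_moisture_anomaly, snow_anomaly)]
    print(gw_anomaly)    # [-1.5, -2.3, -3.1]
    ```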

  6. Getting It "Better": The Importance of Improving Background Questionnaires in International Large-Scale Assessment

    ERIC Educational Resources Information Center

    Rutkowski, Leslie; Rutkowski, David

    2010-01-01

    In addition to collecting achievement data, international large-scale assessment programmes gather auxiliary information from students and schools regarding the context of teaching and learning. In an effort to clarify some of the opacity surrounding international large-scale assessment programmes and the potential problems associated with less…

  7. Cleanliness improvements of NIF (National Ignition Facility) amplifiers as compared to previous large-scale lasers

    SciTech Connect

    Honig, J

    2004-06-09

    Prior to the recent commissioning of the first NIF (National Ignition Facility) beamline, full-scale laser-amplifier-glass cleanliness experiments were performed. Aerosol measurements and obscuration data acquired using a modified flatbed scanner compare favorably to historical large-scale lasers and indicate that NIF is the cleanest large-scale laser built to date.

  9. Large-scale inference of protein tissue origin in gram-positive sepsis plasma using quantitative targeted proteomics

    PubMed Central

    Malmström, Erik; Kilsgård, Ola; Hauri, Simon; Smeds, Emanuel; Herwald, Heiko; Malmström, Lars; Malmström, Johan

    2016-01-01

    The plasma proteome is highly dynamic and variable, composed of proteins derived from surrounding tissues and cells. To investigate the complex processes that control the composition of the plasma proteome, we developed a mass spectrometry-based proteomics strategy to infer the origin of proteins detected in murine plasma. The strategy relies on the construction of a comprehensive protein tissue atlas from cells and highly vascularized organs using shotgun mass spectrometry. The protein tissue atlas was transformed to a spectral library for highly reproducible quantification of tissue-specific proteins directly in plasma using SWATH-like data-independent mass spectrometry analysis. We show that the method can determine drastic changes of tissue-specific protein profiles in blood plasma from mouse animal models with sepsis. The strategy can be extended to several other species advancing our understanding of the complex processes that contribute to the plasma proteome dynamics. PMID:26732734

  10. Large-scale inference of protein tissue origin in gram-positive sepsis plasma using quantitative targeted proteomics.

    PubMed

    Malmström, Erik; Kilsgård, Ola; Hauri, Simon; Smeds, Emanuel; Herwald, Heiko; Malmström, Lars; Malmström, Johan

    2016-01-06

    The plasma proteome is highly dynamic and variable, composed of proteins derived from surrounding tissues and cells. To investigate the complex processes that control the composition of the plasma proteome, we developed a mass spectrometry-based proteomics strategy to infer the origin of proteins detected in murine plasma. The strategy relies on the construction of a comprehensive protein tissue atlas from cells and highly vascularized organs using shotgun mass spectrometry. The protein tissue atlas was transformed to a spectral library for highly reproducible quantification of tissue-specific proteins directly in plasma using SWATH-like data-independent mass spectrometry analysis. We show that the method can determine drastic changes of tissue-specific protein profiles in blood plasma from mouse animal models with sepsis. The strategy can be extended to several other species advancing our understanding of the complex processes that contribute to the plasma proteome dynamics.

  11. Studying large-scale programmes to improve patient safety in whole care systems: challenges for research.

    PubMed

    Benn, Jonathan; Burnett, Susan; Parand, Anam; Pinto, Anna; Iskander, Sandra; Vincent, Charles

    2009-12-01

    Large-scale national and multi-institutional patient safety improvement programmes are being developed in the health care systems of several countries to address problems in the reliability of care delivered to patients. Drawing upon popular collaborative improvement models, these campaigns are ambitious in their aims to improve patient safety in macro-level systems such as whole health care organisations. This article considers the methodological issues involved in conducting research and evaluation of these programmes. Several specific research challenges are outlined, which result from the complexity of longitudinal, multi-level intervention programmes and the variable, highly sociotechnical care systems, with which they interact. Organisational-level improvement programmes are often underspecified due to local variations in context and organisational readiness for improvement work. The result is variable implementation patterns and local adaptations. Programme effects span levels and other boundaries within a system, vary dynamically or are cumulative over time and are problematic to understand in terms of cause and effect, where concurrent external influences exist and the impact upon study endpoints may be mediated by a range of organisational and social factors. We outline the methodological approach to research in the United Kingdom Safer Patients Initiative, to exemplify how some of the challenges for research in this area can be met through a multi-method, longitudinal research design. Specifically, effective research designs must be sensitive to complex variation, through employing multiple qualitative and quantitative measures, collect data over time to understand change and utilise descriptive techniques to capture specific interactions between programme and context for implementation. When considering the long-term, sustained impact of an improvement programme, researchers must consider how to define and measure the capability for continuous safe and

  12. Designing a Large-Scale Multilevel Improvement Initiative: The Improving Performance in Practice Program

    ERIC Educational Resources Information Center

    Margolis, Peter A.; DeWalt, Darren A.; Simon, Janet E.; Horowitz, Sheldon; Scoville, Richard; Kahn, Norman; Perelman, Robert; Bagley, Bruce; Miles, Paul

    2010-01-01

    Improving Performance in Practice (IPIP) is a large system intervention designed to align efforts and motivate the creation of a tiered system of improvement at the national, state, practice, and patient levels, assisting primary-care physicians and their practice teams to assess and measurably improve the quality of care for chronic illness and…

  14. Development of a 3D Stream Network and Topography for Improved Large-Scale Hydraulic Modeling

    NASA Astrophysics Data System (ADS)

    Saksena, S.; Dey, S.; Merwade, V.

    2016-12-01

    Most digital elevation models (DEMs) used for hydraulic modeling do not include channel bed elevations. As a result, the DEMs are complemented with additional bathymetric data for accurate hydraulic simulations. Existing methods to acquire bathymetric information through field surveys or through conceptual models are limited to reach-scale applications. With an increasing focus on large scale hydraulic modeling of rivers, a framework to estimate and incorporate bathymetry for an entire stream network is needed. This study proposes an interpolation-based algorithm to estimate bathymetry for a stream network by modifying the reach-based empirical River Channel Morphology Model (RCMM). The effect of a 3D stream network that includes river bathymetry is then investigated by creating a 1D hydraulic model (HEC-RAS) and 2D hydrodynamic model (Integrated Channel and Pond Routing) for the Upper Wabash River Basin in Indiana, USA. Results show improved simulation of flood depths and storage in the floodplain. Similarly, the impact of river bathymetry incorporation is more significant in the 2D model as compared to the 1D model.

  15. Large Scale Data Mining to Improve Usability of Data: An Intelligent Archive Testbed

    NASA Technical Reports Server (NTRS)

    Ramapriyan, Hampapuram; Isaac, David; Yang, Wenli; Morse, Steve

    2005-01-01

    Research in certain scientific disciplines - including Earth science, particle physics, and astrophysics - continually faces the challenge that the volume of data needed to perform valid scientific research can at times overwhelm even a sizable research community. The desire to improve utilization of this data gave rise to the Intelligent Archives project, which seeks to make data archives active participants in a knowledge building system capable of discovering events or patterns that represent new information or knowledge. Data mining can automatically discover patterns and events, but it is generally viewed as unsuited for large-scale use in disciplines like Earth science that routinely involve very high data volumes. Dozens of research projects have shown promising uses of data mining in Earth science, but all of these are based on experiments with data subsets of a few gigabytes or less, rather than the terabytes or petabytes typically encountered in operational systems. To bridge this gap, the Intelligent Archives project is establishing a testbed with the goal of demonstrating the use of data mining techniques in an operationally-relevant environment. This paper discusses the goals of the testbed and the design choices surrounding critical issues that arose during testbed implementation.

  16. Vehicle impoundments improve drinking and driving licence suspension outcomes: Large-scale evidence from Ontario.

    PubMed

    Byrne, Patrick A; Ma, Tracey; Elzohairy, Yoassry

    2016-10-01

    Although vehicle impoundment has become a common sanction for various driving offences, large-scale evaluations of its effectiveness in preventing drinking and driving recidivism are almost non-existent in the peer-reviewed literature. One reason is that impoundment programs have typically been introduced simultaneously with other countermeasures, rendering it difficult to disentangle any observed effects. Previous studies of impoundment effectiveness conducted when such programs were implemented in isolation have typically been restricted to small jurisdictions, making high-quality evaluation difficult. In contrast, Ontario's "long-term" and "seven-day" impoundment programs were implemented in relative isolation, but with tight relationships to already existing drinking and driving suspensions. In this work, we used offence data produced by Ontario's population of over 9 million licensed drivers to perform interrupted time series analysis on drinking and driving recidivism and on rates of driving while suspended for drinking and driving. Our results demonstrate two key findings: (1) impoundment, or its threat, improves compliance with drinking and driving licence suspensions; and (2) addition of impoundment to suspension reduces drinking and driving recidivism, possibly through enhanced suspension compliance. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.

  17. Improving seasonal forecast through the state of large-scale climate signals

    NASA Astrophysics Data System (ADS)

    Samale, Chiara; Zimmerman, Brian; Giuliani, Matteo; Castelletti, Andrea; Block, Paul

    2017-04-01

    Increasingly uncertain hydrologic regimes are challenging water systems management worldwide, emphasizing the need for accurate medium- to long-term predictions to prompt timely anticipatory operations. Forecasts are usually skillful over short lead times (hours to days), but predictability tends to decrease at longer lead times. The forecast lead time might be extended by using climate teleconnections, such as the El Nino Southern Oscillation (ENSO). Although the ENSO teleconnection is well defined in some locations, such as the Western USA and Australia, there is no consensus on how it can be detected and used in other river basins, particularly in Europe, Africa, and Asia. In this work, we propose the use of the Nino Index Phase Analysis for capturing the state of multiple large-scale climate signals (i.e., ENSO, North Atlantic Oscillation, Pacific Decadal Oscillation, Atlantic Multidecadal Oscillation, Dipole Mode Index). This climate state information is used for distinguishing the different phases of the climate signals and for identifying relevant teleconnections between the observations of Sea Surface Temperature (SST) that most influence the local hydrologic conditions. The framework is applied to the Lake Como system, a regulated lake in northern Italy which is mainly operated for flood control and irrigation supply. Preliminary results show high correlations between SST and three- to six-month-ahead precipitation in the Lake Como basin. This forecast represents valuable information to partially anticipate the summer water availability, ultimately supporting the improvement of the Lake Como operations.
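
    The core screening step (correlating an SST-based climate index with precipitation observed several months later) can be sketched with a lagged correlation. The series, the 4-month lag, and all numbers below are invented for illustration.

```python
import numpy as np

def lagged_correlation(sst_index, precip, lag_months):
    """Correlate an SST-based climate index with precipitation observed
    `lag_months` later (the index leads precipitation)."""
    if lag_months > 0:
        x, y = sst_index[:-lag_months], precip[lag_months:]
    else:
        x, y = sst_index, precip
    return np.corrcoef(x, y)[0, 1]

# Hypothetical monthly series: an ENSO-like index and basin precipitation with a lagged signal.
rng = np.random.default_rng(1)
idx = rng.normal(size=240)
precip = 0.5 * np.roll(idx, 4) + rng.normal(scale=0.8, size=240)  # 4-month lagged dependence
for lag in range(0, 7):
    print(lag, round(lagged_correlation(idx, precip, lag), 2))     # peaks near lag = 4
```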

  18. Profiling and Improving I/O Performance of a Large-Scale Climate Scientific Application

    NASA Technical Reports Server (NTRS)

    Liu, Zhuo; Wang, Bin; Wang, Teng; Tian, Yuan; Xu, Cong; Wang, Yandong; Yu, Weikuan; Cruz, Carlos A.; Zhou, Shujia; Clune, Tom; hide

    2013-01-01

    Exascale computing systems are soon to emerge, which will pose great challenges on the huge gap between computing and I/O performance. Many large-scale scientific applications play an important role in our daily life. The huge amounts of data generated by such applications require highly parallel and efficient I/O management policies. In this paper, we adopt a mission-critical scientific application, GEOS-5, as a case to profile and analyze the communication and I/O issues that are preventing applications from fully utilizing the underlying parallel storage systems. Through in-detail architectural and experimental characterization, we observe that current legacy I/O schemes incur significant network communication overheads and are unable to fully parallelize the data access, thus degrading applications' I/O performance and scalability. To address these inefficiencies, we redesign its I/O framework along with a set of parallel I/O techniques to achieve high scalability and performance. Evaluation results on the NASA discover cluster show that our optimization of GEOS-5 with ADIOS has led to significant performance improvements compared to the original GEOS-5 implementation.

  19. Improving large-scale image retrieval through robust aggregation of local descriptors.

    PubMed

    Husain, Syed Sameed; Bober, Miroslaw

    2016-09-27

    Visual search and image retrieval underpin numerous applications; however, the task is still challenging, predominantly due to the variability of object appearance and the ever-increasing size of the databases, often exceeding billions of images. Prior art methods rely on aggregation of local scale-invariant descriptors, such as SIFT, via mechanisms including Bag of Visual Words (BoW), Vector of Locally Aggregated Descriptors (VLAD) and Fisher Vectors (FV). However, their performance is still short of what is required. This paper presents a novel method for deriving a compact and distinctive representation of image content called Robust Visual Descriptor with Whitening (RVD-W). It significantly advances the state of the art and delivers world-class performance. In our approach local descriptors are rank-assigned to multiple clusters. Residual vectors are then computed in each cluster, normalized using a direction-preserving normalization function and aggregated based on the neighborhood rank. Importantly, the residual vectors are de-correlated and whitened in each cluster before aggregation, leading to a balanced energy distribution in each dimension and significantly improved performance. We also propose a new post-PCA normalization approach which improves separability between the matching and non-matching global descriptors. This new normalization benefits not only our RVD-W descriptor but also improves existing approaches based on FV and VLAD aggregation. Furthermore, we show that the aggregation framework developed using hand-crafted SIFT features also performs exceptionally well with Convolutional Neural Network (CNN) based features. The RVD-W pipeline outperforms state-of-the-art global descriptors on both the Holidays and Oxford datasets. On the large-scale datasets, Holidays1M and Oxford1M, the SIFT-based RVD-W representation obtains a mAP of 45.1% and 35.1%, while the CNN-based RVD-W achieves a mAP of 63.5% and 44.8%, all yielding superior performance to the state of the art.
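
    The aggregation step described above (per-cluster residuals, whitening, then pooling into a global descriptor) can be sketched as follows. This is a simplified VLAD-style toy, not the published RVD-W: it uses hard nearest-centroid assignment and identity whitening matrices as stand-ins for the paper's rank-based assignment and learned whitening.

```python
import numpy as np

def aggregate_with_whitening(descriptors, centroids, whiteners):
    """Toy VLAD-style aggregation: assign each local descriptor to its nearest
    centroid, whiten the residual with a per-cluster transform, sum per cluster,
    then L2-normalise the concatenated vector."""
    k, d = centroids.shape
    agg = np.zeros((k, d))
    # Hard nearest-centroid assignment for simplicity.
    dists = np.linalg.norm(descriptors[:, None, :] - centroids[None, :, :], axis=2)
    assign = np.argmin(dists, axis=1)
    for c in range(k):
        res = descriptors[assign == c] - centroids[c]    # residual vectors
        if len(res):
            agg[c] = (res @ whiteners[c].T).sum(axis=0)  # whiten, then aggregate
    v = agg.ravel()
    return v / (np.linalg.norm(v) + 1e-12)

rng = np.random.default_rng(0)
desc = rng.normal(size=(500, 64))                 # e.g. SIFT-like local descriptors
cent = rng.normal(size=(8, 64))                   # visual-word centroids
whit = np.stack([np.eye(64) for _ in range(8)])   # identity stands in for learned whitening
print(aggregate_with_whitening(desc, cent, whit).shape)  # (512,)
```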

  20. Improvement of Baltic proper water quality using large-scale ecological engineering.

    PubMed

    Stigebrandt, Anders; Gustafsson, Bo G

    2007-04-01

    Eutrophication of the Baltic proper has led to impaired water quality, demonstrated by, e.g., extensive blooming of cyanobacteria during the premium summer holiday season and severe oxygen deficit in the deepwater. Sustainable improvements in water quality by the reduction of phosphorus (P) supplies will take several decades before giving full effects because of large P storages both in soils in the watershed and in the water column and bottom sediments of the Baltic proper. In this article it is shown that drastically improved water quality may be obtained within a few years using large-scale ecological engineering methods. Natural variations in the Baltic proper during the last decades have demonstrated how rapid improvements may be achieved. The present article describes the basic dynamics of P, organic matter, and oxygen in the Baltic proper. It also briefly discusses the advantages and disadvantages of different classes of methods of ecological engineering aimed at restoring the Baltic proper from eutrophication effects. Preliminary computations show that the P content might be halved within a few years if about 100 kg O2 s(-1) are supplied to the upper deepwater. This would require 100 pump stations, each transporting about 100 m3 s(-1) of oxygen-rich so-called winter water from about 50 to 125 m depth where the water is released as a buoyant jet. Each pump station needs a power supply of 0.6 MW. Offshore wind power technology seems mature enough to provide the power needed by the pump stations. The cost to install 100 wind-powered pump stations, each with 0.6 MW power, at about 125-m depth is about 200 million Euros.
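
    A back-of-envelope consistency check of the pumping figures quoted above (100 kg O2 s(-1) supplied by 100 stations, each moving 100 m3 s(-1)); the implied ~10 mg/L oxygen concentration of the pumped winter water is an inference from these numbers, not a figure from the paper.

```python
# Back-of-envelope check of the pumping figures quoted in the abstract.
o2_target = 100.0         # kg O2 per second supplied to the upper deepwater
stations = 100
flow_per_station = 100.0  # m3/s of oxygen-rich "winter water" per station

total_flow = stations * flow_per_station            # 10,000 m3/s in total
required_o2_conc = o2_target / total_flow * 1000.0  # g O2 per m3, i.e. mg/L
print(required_o2_conc)  # 10 mg/L, roughly consistent with near-saturated cold winter water
```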

  1. Improving urban streamflow forecasting using a high-resolution large scale modeling framework

    NASA Astrophysics Data System (ADS)

    Read, Laura; Hogue, Terri; Gochis, David; Salas, Fernando

    2016-04-01

    Urban flood forecasting is a critical component in effective water management, emergency response, regional planning, and disaster mitigation. As populations across the world continue to move to cities (~1.8% growth per year), and studies indicate that significant flood damages are occurring outside the floodplain in urban areas, the ability to model and forecast flow over the urban landscape becomes critical to maintaining infrastructure and society. In this work, we use the Weather Research and Forecasting-Hydrological (WRF-Hydro) modeling framework as a platform for testing improvements to representation of urban land cover, impervious surfaces, and urban infrastructure. The three improvements we evaluate include: updating the land cover to the latest 30-meter National Land Cover Dataset, routing flow over a high-resolution 30-meter grid, and testing a methodology for integrating an urban drainage network into the routing regime. We evaluate performance of these improvements in the WRF-Hydro model for specific flood events in the Denver-Metro Colorado domain, comparing to historic gaged streamflow for retrospective forecasts. Denver-Metro provides an interesting case study as it is a rapidly growing urban/peri-urban region with an active history of flooding events that have caused significant loss of life and property. Considering that the WRF-Hydro model will soon be implemented nationally in the U.S. to provide flow forecasts on the National Hydrography Dataset Plus river reaches (increasing capability from 3,600 forecast points to 2.7 million), we anticipate that this work will support validation of this service in urban areas for operational forecasting. Broadly, this research aims to provide guidance for integrating complex urban infrastructure with a large-scale, high-resolution coupled land-surface and distributed hydrologic model.

  2. The proteomic landscape of the suprachiasmatic nucleus clock reveals large-scale coordination of key biological processes.

    PubMed

    Chiang, Cheng-Kang; Mehta, Neel; Patel, Abhilasha; Zhang, Peng; Ning, Zhibin; Mayne, Janice; Sun, Warren Y L; Cheng, Hai-Ying M; Figeys, Daniel

    2014-10-01

    The suprachiasmatic nucleus (SCN) acts as the central clock to coordinate circadian oscillations in mammalian behavior, physiology and gene expression. Despite our knowledge of the circadian transcriptome of the SCN, how it impacts genome-wide protein expression is not well understood. Here, we interrogated the murine SCN proteome across the circadian cycle using SILAC-based quantitative mass spectrometry. Of the 2112 proteins that were accurately quantified, 20% (421 proteins) displayed a time-of-day-dependent expression profile. Within this time-of-day proteome, 11% (48 proteins) were further defined as circadian based on a sinusoidal expression pattern with a ∼24 h period. Nine circadianly expressed proteins exhibited 24 h rhythms at the transcript level, with an average time lag that exceeded 8 h. A substantial proportion of the time-of-day proteome exhibited abrupt fluctuations at the anticipated light-to-dark and dark-to-light transitions, and was enriched for proteins involved in several key biological pathways, most notably, mitochondrial oxidative phosphorylation. Additionally, predicted targets of miR-133ab were enriched in specific hierarchical clusters and were inversely correlated with miR133ab expression in the SCN. These insights into the proteomic landscape of the SCN will facilitate a more integrative understanding of cellular control within the SCN clock.
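
    Detecting a sinusoidal expression pattern with a ~24 h period, as described above, is commonly done with a cosinor-style least-squares fit. A minimal sketch on a hypothetical six-point time course (the sampling times, noise level and amplitude below are invented):

```python
import numpy as np

def fit_24h_rhythm(times_h, abundance):
    """Least-squares fit of a 24 h sinusoid (cosinor-style) to a protein's
    time course; returns mesor, amplitude and peak phase in hours."""
    w = 2 * np.pi / 24.0
    X = np.column_stack([np.ones_like(times_h),
                         np.cos(w * times_h),
                         np.sin(w * times_h)])
    mesor, a, b = np.linalg.lstsq(X, abundance, rcond=None)[0]
    amplitude = np.hypot(a, b)
    phase_h = (np.arctan2(b, a) / w) % 24.0
    return mesor, amplitude, phase_h

# Hypothetical SILAC time course sampled every 4 h with a peak near 8 h.
t = np.array([0, 4, 8, 12, 16, 20], dtype=float)
y = 1.0 + 0.3 * np.cos(2 * np.pi / 24 * (t - 8)) \
    + np.random.default_rng(2).normal(0, 0.02, 6)
print(fit_24h_rhythm(t, y))  # amplitude ~0.3, peak phase near 8 h
```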

  3. Improvement of methods for large scale sequencing; application to human Xq28

    SciTech Connect

    Gibbs, R.A.; Andersson, B.; Wentland, M.A.

    1994-09-01

    Sequencing of a one-megabase region of Xq28, spanning the FRAXA and IDS loci, has been undertaken in order to investigate the practicality of the shotgun approach for large-scale sequencing and as a platform to develop improved methods. The efficiency of several steps in the shotgun sequencing strategy has been increased using PCR-based approaches. An improved method for preparation of M13 libraries has been developed. This protocol combines a previously described adaptor-based protocol with the uracil DNA glycosylase (UDG)-cloning procedure. The efficiency of this procedure has been found to be up to 100-fold higher than that of previously used protocols. In addition the novel protocol is more reliable and thus easy to establish in a laboratory. The method has also been adapted for the simultaneous shotgun sequencing of multiple short fragments by concentrating them before library construction. This protocol is suitable for rapid characterization of cDNA clones. A library was constructed from 15 PCR-amplified and concentrated human cDNA inserts, and the insert sequences could easily be identified as separate contigs during the assembly process and the sequence coverage was even along each fragment. Using this strategy, the fine structures of the FRAXA and IDS loci have been revealed and several EST homologies indicating novel expressed sequences have been identified. Use of PCR to close repetitive regions that are difficult to clone was tested by determining the sequence of a cosmid mapping to DXS455 in Xq28, containing a polymorphic VNTR. The region containing the VNTR was not represented in the shotgun library, but by designing PCR primers in the sequences flanking the gap and by cloning and sequencing the PCR product, the fine structure of the VNTR has been determined. It was found to be an AT-rich VNTR with a repeated 25-mer at the center.

  4. School Improvement Networks as a Strategy for Large-Scale Education Reform: The Role of Educational Environments

    ERIC Educational Resources Information Center

    Glazer, Joshua L.; Peurach, Donald J.

    2013-01-01

    The development and scale-up of school improvement networks is among the most important educational innovations of the last decade, and current federal, state, and district efforts attempt to use school improvement networks as a mechanism for supporting large-scale change. The potential of improvement networks, however, rests on the extent to…

  5. Infrastructure for large-scale quality-improvement projects: early lessons from North Carolina Improving Performance in Practice.

    PubMed

    Newton, Warren P; Lefebvre, Ann; Donahue, Katrina E; Bacon, Thomas; Dobson, Allen

    2010-01-01

    Little is known regarding how to accomplish large-scale health care improvement. Our goal is to improve the quality of chronic disease care in all primary care practices throughout North Carolina. Methods for improvement include (1) common quality measures and shared data system; (2) rapid cycle improvement principles; (3) quality-improvement consultants (QICs), or practice facilitators; (4) learning networks; and (5) alignment of incentives. We emphasized a community-based strategy and developing a statewide infrastructure. Results are reported from the first 2 years of the North Carolina Improving Performance in Practice (IPIP) project. A coalition was formed to include professional societies, North Carolina AHEC, Community Care of North Carolina, insurers, and other organizations. Wave One started with 18 practices in 2 of 9 regions of the state. Quality-improvement consultants recruited practices. Over 80 percent of practices attended all quarterly regional meetings. In 9 months, almost all diabetes measures improved, and a bundled asthma measure improved from 33 to 58 percent. Overall, the magnitude of improvement was clinically and statistically significant (P = .001). Quality improvements were maintained on review 1 year later. Wave Two has spread to 103 practices in all 9 regions of the state, with 42 additional practices beginning the enrollment process. Large-scale health care quality improvement is feasible, when broadly supported by statewide leadership and community infrastructure. Practice-collected data and lack of a control group are limitations of the study design. Future priorities include maintaining improved sustainability for practices and communities. Our long-term goal is to transform all 2000 primary-care practices in our state.

  6. Improved design of linear electromagnetic transducers for large-scale vibration energy harvesting

    NASA Astrophysics Data System (ADS)

    Tang, Xiudong; Zuo, Lei; Lin, Teng; Zhang, Peisheng

    2011-03-01

    This paper presents the design and optimization of tubular Linear Electromagnetic Transducers (LETs) with applications to large-scale vibration energy harvesting, such as from vehicle suspensions, tall buildings or long bridges. Four types of LETs are considered and compared, namely, a single-layer configuration using axial magnets, a double-layer configuration using axial magnets, a single-layer configuration using both axial and radial magnets, and a double-layer configuration using both axial and radial magnets. To optimize the LETs, the parameters investigated in this paper include the thickness of the magnets in the axial direction and the thickness of the coils in the radial direction. The finite element method is used to analyze the axisymmetric two-dimensional magnetic fields. Both the magnetic flux density Br [T] in the radial direction and the power density [W/m3] are calculated. It is found that the parameter optimization can increase the power density of LETs to 2.7 times that of the initial design [Zuo et al, Smart Materials and Structures, v19 n4, 2010], and the double-layer configuration with both radial and axial magnets can improve the power density to 4.7 times, approaching the energy dissipation rate of traditional oil dampers. As a case study, we investigate its application to energy-harvesting shock absorbers. For a reasonable retrofit size, the LETs with the double-layer configuration and both axial and radial NdFeB magnets can provide a damping coefficient of 1138 N.s/m while harvesting 35.5 W of power on the external electric load at 0.25 m/s suspension velocity. If the LET is short-circuited, it can dissipate energy at a rate of 142.0 W, providing a damping coefficient of 2276 N.s/m. Practical considerations regarding the number of coil phases are also discussed.
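
    The quoted damping coefficients and powers are mutually consistent under the standard assumption that a linear damper absorbs mechanical power P = c*v^2, with roughly half of it reaching a matched external load; that relation is an assumption used for the check below, not a statement taken from the paper.

```python
# Consistency check of the damping and power figures quoted in the abstract.
v = 0.25           # suspension velocity, m/s
c_loaded = 1138.0  # N*s/m with the external electrical load connected
c_short = 2276.0   # N*s/m with the coils short-circuited

print(c_loaded * v**2)  # ~71 W absorbed; about half (35.5 W) reaching a matched external load
print(c_short * v**2)   # ~142 W dissipated internally when short-circuited
```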

  7. Do large-scale hospital- and system-wide interventions improve patient outcomes: a systematic review.

    PubMed

    Clay-Williams, Robyn; Nosrati, Hadis; Cunningham, Frances C; Hillman, Kenneth; Braithwaite, Jeffrey

    2014-09-03

    While health care services are beginning to implement system-wide patient safety interventions, evidence on the efficacy of these interventions is sparse. We know that uptake can be variable, but we do not know the factors that affect uptake or how the interventions establish change and, in particular, whether they influence patient outcomes. We conducted a systematic review to identify how organisational and cultural factors mediate or are mediated by hospital-wide interventions, and to assess the effects of those factors on patient outcomes. A systematic review was conducted and reported in accordance with Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Database searches were conducted using MEDLINE from 1946, CINAHL from 1991, EMBASE from 1947, Web of Science from 1934, PsycINFO from 1967, and Global Health from 1910 to September 2012. The Lancet, JAMA, BMJ, BMJ Quality and Safety, The New England Journal of Medicine and Implementation Science were also hand searched for relevant studies published over the last 5 years. Eligible studies were required to focus on organisational determinants of hospital- and system-wide interventions, and to provide patient outcome data before and after implementation of the intervention. Empirical, peer-reviewed studies reporting randomised and non-randomised controlled trials, observational, and controlled before and after studies were included in the review. Six studies met the inclusion criteria. Improved outcomes were observed for studies where outcomes were measured at least two years after the intervention. Associations between organisational factors, intervention success and patient outcomes were undetermined: organisational culture and patient outcomes were rarely measured together, and measures for culture and outcome were not standardised. Common findings show the difficulty of introducing large-scale interventions, and that effective leadership and clinical champions, adequate

  8. Improving the local relevance of large scale water demand predictions: the way forward

    NASA Astrophysics Data System (ADS)

    Bernhard, Jeroen; Reynaud, Arnaud; de Roo, Ad

    2016-04-01

    use and water prices. Subsequently, econometric estimates allow us to make a monetary valuation of water and identify the dominant drivers of domestic and industrial water demand per country. Combined with socio-economic, demographic and climate scenarios, we made predictions for future Europe. Since this is a first attempt, we obtained mixed results between countries when it comes to data availability and therefore model uncertainty. For some countries we have been able to develop robust predictions based on vast amounts of data, while other countries proved more challenging. We do feel, however, that large-scale predictions based on regional data are the way forward to provide relevant scientific policy support. In order to improve on our work it is imperative to further expand our database of consistent regional data. We are looking forward to any kind of input and would be very interested in sharing our data to collaborate towards a better understanding of the water use system.

  9. Merging Station Observations with Large-Scale Gridded Data to Improve Hydrological Predictions over Chile

    NASA Astrophysics Data System (ADS)

    Peng, L.; Sheffield, J.; Verbist, K. M. J.

    2016-12-01

    Hydrological predictions at regional-to-global scales are often hampered by the lack of meteorological forcing data. The use of large-scale gridded meteorological data is able to overcome this limitation, but these data are subject to regional biases and unrealistic values at the local scale. This is especially challenging in regions such as Chile, where climate exhibits high spatial heterogeneity as a result of its long latitudinal span and dramatic elevation changes. However, regional station-based observational datasets are not fully exploited and have the potential of constraining biases and spatial patterns. This study aims at adjusting precipitation and temperature estimates from the Princeton University global meteorological forcing (PGF) gridded dataset to improve hydrological simulations over Chile, by assimilating 982 gauges from the Dirección General de Aguas (DGA). To merge station data with the gridded dataset, we use a state-space estimation method to produce optimal gridded estimates, considering both the error of the station measurements and that of the gridded PGF product. The PGF daily precipitation, maximum and minimum temperature at 0.25° spatial resolution are adjusted for the period of 1979-2010. Precipitation and temperature gauges with long and continuous records (>70% temporal coverage) are selected, while the remaining stations are used for validation. The leave-one-out cross validation verifies the robustness of this data assimilation approach. The merged dataset is then used to force the Variable Infiltration Capacity (VIC) hydrological model over Chile at a daily time step, and the simulated streamflow is compared to observations. Our initial results show that the station-merged PGF precipitation effectively captures drizzle and the spatial pattern of storms. Overall the merged dataset has significant improvements compared to the original PGF with reduced biases and stronger inter-annual variability. The invariant spatial pattern of errors between the station
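
    The merging step (weighting station measurements against the gridded PGF estimate according to their respective errors) can be illustrated with a minimal inverse-variance weighting sketch; this is a simplified stand-in for the state-space method actually used, and all values below are hypothetical.

```python
import numpy as np

def merge_station_with_grid(grid_value, grid_var, station_values, station_var):
    """Combine a gridded estimate with co-located gauge observations by
    inverse-variance weighting (a simple stand-in for a state-space update)."""
    weights = np.concatenate(([1.0 / grid_var], 1.0 / np.asarray(station_var)))
    values = np.concatenate(([grid_value], np.asarray(station_values)))
    merged = np.sum(weights * values) / np.sum(weights)
    merged_var = 1.0 / np.sum(weights)        # uncertainty of the merged estimate
    return merged, merged_var

# Hypothetical 0.25-degree cell: the grid says 12 mm/day, two gauges say 8 and 9 mm/day.
print(merge_station_with_grid(12.0, 9.0, [8.0, 9.0], [1.0, 1.5]))
```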

  10. Developmental and Subcellular Organization of Single-Cell C₄ Photosynthesis in Bienertia sinuspersici Determined by Large-Scale Proteomics and cDNA Assembly from 454 DNA Sequencing.

    PubMed

    Offermann, Sascha; Friso, Giulia; Doroshenk, Kelly A; Sun, Qi; Sharpe, Richard M; Okita, Thomas W; Wimmer, Diana; Edwards, Gerald E; van Wijk, Klaas J

    2015-05-01

    Kranz C4 species strictly depend on separation of primary and secondary carbon fixation reactions in different cell types. In contrast, the single-cell C4 (SCC4) species Bienertia sinuspersici utilizes intracellular compartmentation including two physiologically and biochemically different chloroplast types; however, information on identity, localization, and induction of proteins required for this SCC4 system is currently very limited. In this study, we determined the distribution of photosynthesis-related proteins and the induction of the C4 system during development by label-free proteomics of subcellular fractions and leaves of different developmental stages. This was enabled by inferring a protein sequence database from 454 sequencing of Bienertia cDNAs. Large-scale proteome rearrangements were observed as C4 photosynthesis developed during leaf maturation. The proteomes of the two chloroplasts are different with differential accumulation of linear and cyclic electron transport components, primary and secondary carbon fixation reactions, and a triose-phosphate shuttle that is shared between the two chloroplast types. This differential protein distribution pattern suggests the presence of a mRNA or protein-sorting mechanism for nuclear-encoded, chloroplast-targeted proteins in SCC4 species. The combined information was used to provide a comprehensive model for NAD-ME type carbon fixation in SCC4 species.

  11. Fast and Accurate Protein False Discovery Rates on Large-Scale Proteomics Data Sets with Percolator 3.0

    NASA Astrophysics Data System (ADS)

    The, Matthew; MacCoss, Michael J.; Noble, William S.; Käll, Lukas

    2016-11-01

    Percolator is a widely used software tool that increases yield in shotgun proteomics experiments and assigns reliable statistical confidence measures, such as q values and posterior error probabilities, to peptides and peptide-spectrum matches (PSMs) from such experiments. Percolator's processing speed has been sufficient for typical data sets consisting of hundreds of thousands of PSMs. With our new scalable approach, we can now also analyze millions of PSMs in a matter of minutes on a commodity computer. Furthermore, with the increasing awareness for the need for reliable statistics on the protein level, we compared several easy-to-understand protein inference methods and implemented the best-performing method—grouping proteins by their corresponding sets of theoretical peptides and then considering only the best-scoring peptide for each protein—in the Percolator package. We used Percolator 3.0 to analyze the data from a recent study of the draft human proteome containing 25 million spectra (PM:24870542). The source code and Ubuntu, Windows, MacOS, and Fedora binary packages are available from http://percolator.ms/ under an Apache 2.0 license.
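
    The protein inference rule described above (group proteins sharing the same set of theoretical peptides, then score each group by its single best-scoring peptide) can be sketched in a few lines; the database and scores below are hypothetical.

```python
from collections import defaultdict

def infer_proteins(protein_to_peptides, peptide_scores):
    """Sketch of the inference scheme described above: proteins that map to the
    same set of theoretical peptides are grouped, and each group is scored by
    its single best-scoring identified peptide."""
    groups = defaultdict(list)
    for protein, peptides in protein_to_peptides.items():
        groups[frozenset(peptides)].append(protein)
    scored = []
    for peptides, proteins in groups.items():
        hits = [peptide_scores[p] for p in peptides if p in peptide_scores]
        if hits:
            scored.append((tuple(sorted(proteins)), max(hits)))
    return sorted(scored, key=lambda x: -x[1])

# Hypothetical database and peptide-level scores.
db = {"P1": {"pepA", "pepB"}, "P1b": {"pepA", "pepB"}, "P2": {"pepC"}}
scores = {"pepA": 3.1, "pepC": 5.4}
print(infer_proteins(db, scores))  # [(('P2',), 5.4), (('P1', 'P1b'), 3.1)]
```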

  12. Transcriptomic and proteomic responses of Serratia marcescens to spaceflight conditions involve large-scale changes in metabolic pathways

    NASA Astrophysics Data System (ADS)

    Wang, Yajuan; Yuan, Yanting; Liu, Jinwen; Su, Longxiang; Chang, De; Guo, Yinghua; Chen, Zhenhong; Fang, Xiangqun; Wang, Junfeng; Li, Tianzhi; Zhou, Lisha; Fang, Chengxiang; Yang, Ruifu; Liu, Changting

    2014-04-01

    The microgravity environment of spaceflight expeditions has been associated with altered microbial responses. This study explores the characterization of Serratia marcescens grown in a spaceflight environment at the phenotypic, transcriptomic and proteomic levels. From November 1, 2011 to November 17, 2011, a strain of S. marcescens was sent into space for 398 h on the Shenzhou VIII spacecraft, and a ground simulation was performed as a control (LCT-SM213). After the flight, two mutant strains (LCT-SM166 and LCT-SM262) were selected for further analysis. Although no changes in morphology, post-culture growth kinetics, hemolysis or antibiotic sensitivity were observed, the two mutant strains exhibited significant changes in their metabolic profiles after exposure to spaceflight. Enrichment analysis of the transcriptome showed that the differentially expressed genes of the two spaceflight strains and the ground control strain mainly included those involved in metabolism and degradation. The proteome revealed that changes at the protein level were also associated with metabolic functions, such as glycolysis/gluconeogenesis, pyruvate metabolism, arginine and proline metabolism and the degradation of valine, leucine and isoleucine. In summary, S. marcescens showed alterations primarily in genes and proteins associated with metabolism under spaceflight conditions, which provides valuable clues for future research.

  13. Fast and Accurate Protein False Discovery Rates on Large-Scale Proteomics Data Sets with Percolator 3.0.

    PubMed

    The, Matthew; MacCoss, Michael J; Noble, William S; Käll, Lukas

    2016-11-01

    Percolator is a widely used software tool that increases yield in shotgun proteomics experiments and assigns reliable statistical confidence measures, such as q values and posterior error probabilities, to peptides and peptide-spectrum matches (PSMs) from such experiments. Percolator's processing speed has been sufficient for typical data sets consisting of hundreds of thousands of PSMs. With our new scalable approach, we can now also analyze millions of PSMs in a matter of minutes on a commodity computer. Furthermore, with the increasing awareness for the need for reliable statistics on the protein level, we compared several easy-to-understand protein inference methods and implemented the best-performing method (grouping proteins by their corresponding sets of theoretical peptides and then considering only the best-scoring peptide for each protein) in the Percolator package. We used Percolator 3.0 to analyze the data from a recent study of the draft human proteome containing 25 million spectra (PM:24870542). The source code and Ubuntu, Windows, MacOS, and Fedora binary packages are available from http://percolator.ms/ under an Apache 2.0 license.

  14. A framework for scaling up health interventions: lessons from large-scale improvement initiatives in Africa.

    PubMed

    Barker, Pierre M; Reid, Amy; Schall, Marie W

    2016-01-29

    Scaling up complex health interventions to large populations is not a straightforward task. Without intentional, guided efforts to scale up, it can take many years for a new evidence-based intervention to be broadly implemented. For the past decade, researchers and implementers have developed models of scale-up that move beyond earlier paradigms that assumed ideas and practices would successfully spread through a combination of publication, policy, training, and example. Drawing from the previously reported frameworks for scaling up health interventions and our experience in the USA and abroad, we describe a framework for taking health interventions to full scale, and we use two large-scale improvement initiatives in Africa to illustrate the framework in action. We first identified other scale-up approaches for comparison and analysis of common constructs by searching for systematic reviews of scale-up in health care, reviewing those bibliographies, speaking with experts, and reviewing common research databases (PubMed, Google Scholar) for papers in English from peer-reviewed and "gray" sources that discussed models, frameworks, or theories for scale-up from 2000 to 2014. We then analyzed the results of this external review in the context of the models and frameworks developed over the past 20 years by Associates in Process Improvement (API) and the Institute for Healthcare improvement (IHI). Finally, we reflected on two national-scale improvement initiatives that IHI had undertaken in Ghana and South Africa that were testing grounds for early iterations of the framework presented in this paper. The framework describes three core components: a sequence of activities that are required to get a program of work to full scale, the mechanisms that are required to facilitate the adoption of interventions, and the underlying factors and support systems required for successful scale-up. The four steps in the sequence include (1) Set-up, which prepares the ground for

  15. Improvement of determinating seafloor benchmark position with large-scale horizontal heterogeneity in the ocean area

    NASA Astrophysics Data System (ADS)

    Uemura, Y.; Tadokoro, K.; Matsuhiro, K.; Ikuta, R.

    2015-12-01

    The most critical issue reducing the accuracy of the GPS/Acoustic seafloor positioning technique is the large-scale thermal gradient of the sound-speed structure [Muto et al., 2008] caused by ocean currents. For example, the Kuroshio Current, near our observation station, forms such a structure. To improve the accuracy of the seafloor benchmark position (SBP), we need to either measure the structure directly and frequently, or estimate it from travel-time residuals. For the former, we repeatedly measure the sound speed at the Kuroshio axis using an Underway CTD and apply it to the seafloor positioning analysis [Yasuda et al., 2015 AGU meeting]. For the latter, however, we have not yet been able to estimate the structure from travel-time residuals. Accordingly, in this study, we focus on the azimuthal dependence of the Estimated Mean Sound Speed (EMSS), defined as the distance between the vessel position and the estimated SBP divided by the travel time. If a thermal gradient exists and the SBP is correct, the EMSS should show azimuthal dependence under the horizontally layered sound-speed structure assumed in our previous analysis method. We use data from station KMC, located in the central part of the Nankai Trough, Japan, on Jan. 28, 2015, because on that day KMC was on the northern edge of the Kuroshio, where we expect a thermal gradient to exist. In our analysis method, a hyperparameter (μ value) weights the travel-time residual against the rate of change of the sound-speed structure. However, the EMSS derived from the μ value determined by Ikuta et al. [2008] shows no azimuthal dependence, that is, we cannot estimate the thermal gradient, so we expect the SBP to have a large bias. Therefore, in this study, we use another μ value and examine whether the EMSS shows azimuthal dependence. With the μ value of this study, which is one order of magnitude smaller than the previous value, the EMSS shows an azimuthal dependence consistent with the thermal gradient on the observation day. This result shows that we can estimate the thermal gradient adequately. This SBP displaces 25

  16. An Improved Differential Evolution Algorithm and Its Application to Large-Scale Artificial Neural Networks

    NASA Astrophysics Data System (ADS)

    Jong Choi, Tae; Ahn, Chang Wook

    2017-02-01

    A new differential evolution (DE) algorithm is presented in this paper. The proposed algorithm monitors the evolutionary progress of each individual and assigns appropriate control parameters depending on whether the individual evolved successfully or not. We conducted a performance evaluation on the CEC 2014 benchmark problems and confirmed that the proposed algorithm outperformed the conventional DE algorithm. In addition, we applied the proposed DE algorithm as an optimization technique for training a large-scale multilayer perceptron. We conducted a performance evaluation on an artificial neural network with approximately 1,000 weights and confirmed again that the proposed algorithm performed better than the conventional DE algorithm. As a result, we propose a new DE algorithm with better optimization performance for solving large-scale global optimization problems.
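
    A minimal sketch of a differential evolution loop with per-individual control parameters that are re-sampled whenever an individual fails to improve; this mimics the general idea described above but is not the paper's exact adaptation rule, and all parameter ranges are assumptions.

```python
import numpy as np

def adaptive_de(fitness, dim, bounds=(-5.0, 5.0), pop_size=50, gens=200, seed=0):
    """DE/rand/1/bin with per-individual F and CR that are re-sampled when an
    individual fails to improve (a jDE-like sketch, not the paper's exact rule)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pop = rng.uniform(lo, hi, (pop_size, dim))
    fit = np.apply_along_axis(fitness, 1, pop)
    F = np.full(pop_size, 0.5)
    CR = np.full(pop_size, 0.9)
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
            mutant = np.clip(pop[a] + F[i] * (pop[b] - pop[c]), lo, hi)
            cross = rng.random(dim) < CR[i]
            cross[rng.integers(dim)] = True  # ensure at least one gene is taken from the mutant
            trial = np.where(cross, mutant, pop[i])
            f_trial = fitness(trial)
            if f_trial <= fit[i]:            # success: keep the solution and its parameters
                pop[i], fit[i] = trial, f_trial
            else:                            # failure: re-sample this individual's parameters
                F[i] = rng.uniform(0.1, 1.0)
                CR[i] = rng.uniform(0.0, 1.0)
    return pop[np.argmin(fit)], fit.min()

sphere = lambda x: float(np.sum(x ** 2))
print(adaptive_de(sphere, dim=10))
```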

  17. CPTAC researchers report first large-scale integrated proteomic and genomic analysis of a human cancer | Office of Cancer Clinical Proteomics Research

    Cancer.gov

    Investigators from the National Cancer Institute's Clinical Proteomic Tumor Analysis Consortium (CPTAC) who comprehensively analyzed 95 human colorectal tumor samples, have determined how gene alterations identified in previous analyses of the same samples are expressed at the protein level. The integration of proteomic and genomic data, or proteogenomics, provides a more comprehensive view of the biological features that drive cancer than genomic analysis alone and may help identify the most important targets for cancer detection and intervention.

  18. Improved tomographic reconstruction of large-scale real-world data by filter optimization.

    PubMed

    Pelt, Daniël M; De Andrade, Vincent

    2017-01-01

    In advanced tomographic experiments, large detector sizes and large numbers of acquired datasets can make it difficult to process the data in a reasonable time. At the same time, the acquired projections are often limited in some way, for example having a low number of projections or a low signal-to-noise ratio. Direct analytical reconstruction methods are able to produce reconstructions in very little time, even for large-scale data, but the quality of these reconstructions can be insufficient for further analysis in cases with limited data. Iterative reconstruction methods typically produce more accurate reconstructions, but take significantly more time to compute, which limits their usefulness in practice. In this paper, we present the application of the SIRT-FBP method to large-scale real-world tomographic data. The SIRT-FBP method is able to accurately approximate the simultaneous iterative reconstruction technique (SIRT) method by the computationally efficient filtered backprojection (FBP) method, using precomputed experiment-specific filters. We specifically focus on the many implementation details that are important for application on large-scale real-world data, and give solutions to common problems that occur with experimental data. We show that SIRT-FBP filters can be computed in reasonable time, even for large problem sizes, and that precomputed filters can be reused for future experiments. Reconstruction results are given for three different experiments, and are compared with results of popular existing methods. The results show that the SIRT-FBP method is able to accurately approximate iterative reconstructions of experimental data. Furthermore, they show that, in practice, the SIRT-FBP method can produce more accurate reconstructions than standard direct analytical reconstructions with popular filters, without increasing the required computation time.

  19. Performance of a large-scale barrier discharge plume improved by an upstream auxiliary barrier discharge

    NASA Astrophysics Data System (ADS)

    Li, Xuechen; Chu, Jingdi; Zhang, Qi; Zhang, Panpan; Jia, Pengying; Geng, Jinling

    2016-11-01

    Enhanced by an upstream auxiliary dielectric barrier discharge (ADBD), a transverse barrier discharge plume with a fairly large scale is generated downstream of a narrow slit. Electrical and optical characteristics are compared for the two discharges with and without the ADBD. Results indicate that the plume with the ADBD is longer, more uniform, and dissipates a higher power. Moreover, its inception voltage is much lower. High-speed imaging shows that the uniform plasma plume with the ADBD comprises a series of moving micro-discharge filaments in a glow regime, which are much smoother than those without the ADBD.

  20. Maximizing the sensitivity and reliability of peptide identification in large-scale proteomic experiments by harnessing multiple search engines.

    PubMed

    Yu, Wen; Taylor, J Alex; Davis, Michael T; Bonilla, Leo E; Lee, Kimberly A; Auger, Paul L; Farnsworth, Chris C; Welcher, Andrew A; Patterson, Scott D

    2010-03-01

    Despite recent advances in qualitative proteomics, the automatic identification of peptides with optimal sensitivity and accuracy remains a difficult goal. To address this deficiency, a novel algorithm, Multiple Search Engines, Normalization and Consensus is described. The method employs six search engines and a re-scoring engine to search MS/MS spectra against protein and decoy sequences. After the peptide hits from each engine are normalized to error rates estimated from the decoy hits, peptide assignments are then deduced using a minimum consensus model. These assignments are produced in a series of progressively relaxed false-discovery rates, thus enabling a comprehensive interpretation of the data set. Additionally, the estimated false-discovery rate was found to have good concordance with the observed false-positive rate calculated from known identities. Benchmarking against standard proteins data sets (ISBv1, sPRG2006) and their published analysis, demonstrated that the Multiple Search Engines, Normalization and Consensus algorithm consistently achieved significantly higher sensitivity in peptide identifications, which led to increased or more robust protein identifications in all data sets compared with prior methods. The sensitivity and the false-positive rate of peptide identification exhibit an inverse-proportional and linear relationship with the number of participating search engines.
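
    The minimum consensus model described above can be sketched as a simple voting scheme over engine-level results that have already been normalized to decoy-derived error rates; the q-value threshold, the vote threshold and the example peptides below are all hypothetical.

```python
def consensus_peptides(engine_results, min_engines=2, max_fdr=0.01):
    """Sketch of a minimum-consensus model: each engine reports (peptide, q-value)
    pairs already normalised against its own decoy hits; accept a peptide if at
    least `min_engines` engines identify it below the q-value threshold."""
    votes = {}
    for results in engine_results:               # one dict per search engine
        for peptide, q in results.items():
            if q <= max_fdr:
                votes[peptide] = votes.get(peptide, 0) + 1
    return {p for p, n in votes.items() if n >= min_engines}

# Hypothetical q-values reported by three engines for the same spectra.
e1 = {"LMEDIVK": 0.002, "AAGTR": 0.03}
e2 = {"LMEDIVK": 0.009, "AAGTR": 0.005}
e3 = {"LMEDIVK": 0.001}
print(consensus_peptides([e1, e2, e3]))  # {'LMEDIVK'}
```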

  1. Large-scale epigenome imputation improves data quality and disease variant enrichment

    PubMed Central

    Ernst, Jason; Kellis, Manolis

    2015-01-01

    With hundreds of epigenomic maps, the opportunity arises to exploit the correlated nature of epigenetic signals, across both marks and samples, for large-scale prediction of additional datasets. Here, we undertake epigenome imputation by leveraging such correlations through an ensemble of regression trees. We impute 4,315 high-resolution signal maps, of which 26% are also experimentally observed. Imputed signal tracks show overall similarity to observed signals, and surpass experimental datasets in consistency, recovery of gene annotations, and enrichment for disease-associated variants. We use the imputed data to detect low quality experimental datasets, to find genomic sites with unexpected epigenomic signals, to define high-priority marks for new experiments, and to delineate chromatin states in 127 reference epigenomes spanning diverse tissues and cell types. Our imputed datasets provide the most comprehensive human regulatory annotation to date, and our approach and the ChromImpute software constitute a useful complement to large-scale experimental mapping of epigenomic information. PMID:25690853
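
    A minimal sketch of imputing one epigenomic signal track from correlated tracks with an ensemble of regression trees; scikit-learn's RandomForestRegressor stands in for the ChromImpute ensemble, and the feature construction and data below are synthetic.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-in: predict a missing mark in one sample from the same mark in
# related samples plus other marks in the same sample, at matching genomic bins.
rng = np.random.default_rng(0)
n_bins, n_features = 50_000, 12
X = rng.normal(size=(n_bins, n_features))                # correlated signal tracks
y = X[:, :4].mean(axis=1) + rng.normal(0, 0.1, n_bins)   # the target track

train = rng.random(n_bins) < 0.8
model = RandomForestRegressor(n_estimators=50, max_depth=12, n_jobs=-1, random_state=0)
model.fit(X[train], y[train])
imputed = model.predict(X[~train])                       # imputed signal for held-out bins
print(np.corrcoef(imputed, y[~train])[0, 1])
```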

  2. Re-fraction: a machine learning approach for deterministic identification of protein homologues and splice variants in large-scale MS-based proteomics.

    PubMed

    Yang, Pengyi; Humphrey, Sean J; Fazakerley, Daniel J; Prior, Matthew J; Yang, Guang; James, David E; Yang, Jean Yee-Hwa

    2012-05-04

    A key step in the analysis of mass spectrometry (MS)-based proteomics data is the inference of proteins from identified peptide sequences. Here we describe Re-Fraction, a novel machine learning algorithm that enhances deterministic protein identification. Re-Fraction utilizes several protein physical properties to assign proteins to expected protein fractions that comprise large-scale MS-based proteomics data. This information is then used to appropriately assign peptides to specific proteins. This approach is sensitive, highly specific, and computationally efficient. We provide algorithms and source code for the current version of Re-Fraction, which accepts output tables from the MaxQuant environment. Nevertheless, the principles behind Re-Fraction can be applied to other protein identification pipelines where data are generated from samples fractionated at the protein level. We demonstrate the utility of this approach through reanalysis of data from a previously published study and generate lists of proteins deterministically identified by Re-Fraction that were previously only identified as members of a protein group. We find that this approach is particularly useful in resolving protein groups composed of splice variants and homologues, which are frequently expressed in a cell- or tissue-specific manner and may have important biological consequences.

  3. An improved method for large-scale preparation of negatively and positively supercoiled plasmid DNA.

    PubMed

    Barth, Marita; Dederich, Debra; Dedon, Peter

    2009-07-01

    A rigorous understanding of the biological function of superhelical tension in cellular DNA requires the development of new tools and model systems for study. To this end, an ethidium bromide-free method has been developed to prepare large quantities of either negatively or positively supercoiled plasmid DNA. The method is based upon the known effects of ionic strength on the direction of binding of DNA to an archaeal histone, rHMfB, with low and high salt concentrations leading to positive and negative DNA supercoiling, respectively. In addition to fully optimized conditions for large-scale (>500 microg) supercoiling reactions, the method is advantageous in that it avoids the use of mutagenic ethidium bromide, is applicable to chemically modified plasmid DNA substrates, and produces both positively and negatively supercoiled DNA using a single set of reagents.

  4. Improving Performance of Power Systems with Large-scale Variable Generation Additions

    SciTech Connect

    Makarov, Yuri V.; Etingov, Pavel V.; Samaan, Nader A.; Lu, Ning; Ma, Jian; Subbarao, Krishnappa; Du, Pengwei; Kannberg, Landis D.

    2012-07-22

    A power system with large-scale renewable resources, like wind and solar generation, creates significant challenges to system control performance and reliability characteristics because of intermittency and uncertainties associated with variable generation. It is important to quantify these uncertainties, and then incorporate this information into decision-making processes and power system operations. This paper presents three approaches to evaluate the flexibility needed from conventional generators and other resources in the presence of variable generation as well as provide this flexibility from a non-traditional resource – wide area energy storage system. These approaches provide operators with much-needed information on the likelihood and magnitude of ramping and capacity problems, and the ability to dispatch available resources in response to such problems.

  5. DeepMeSH: deep semantic representation for improving large-scale MeSH indexing

    PubMed Central

    Peng, Shengwen; You, Ronghui; Wang, Hongning; Zhai, Chengxiang; Mamitsuka, Hiroshi; Zhu, Shanfeng

    2016-01-01

    Motivation: Medical Subject Headings (MeSH) indexing, which is to assign a set of MeSH main headings to citations, is crucial for many important tasks in biomedical text mining and information retrieval. Large-scale MeSH indexing has two challenging aspects: the citation side and MeSH side. For the citation side, all existing methods, including Medical Text Indexer (MTI) by National Library of Medicine and the state-of-the-art method, MeSHLabeler, deal with text by bag-of-words, which cannot capture semantic and context-dependent information well. Methods: We propose DeepMeSH that incorporates deep semantic information for large-scale MeSH indexing. It addresses the two challenges in both citation and MeSH sides. The citation side challenge is solved by a new deep semantic representation, D2V-TFIDF, which concatenates both sparse and dense semantic representations. The MeSH side challenge is solved by using the ‘learning to rank’ framework of MeSHLabeler, which integrates various types of evidence generated from the new semantic representation. Results: DeepMeSH achieved a Micro F-measure of 0.6323, 2% higher than 0.6218 of MeSHLabeler and 12% higher than 0.5637 of MTI, for BioASQ3 challenge data with 6000 citations. Availability and Implementation: The software is available upon request. Contact: zhusf@fudan.edu.cn Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27307646

  6. DeepMeSH: deep semantic representation for improving large-scale MeSH indexing.

    PubMed

    Peng, Shengwen; You, Ronghui; Wang, Hongning; Zhai, Chengxiang; Mamitsuka, Hiroshi; Zhu, Shanfeng

    2016-06-15

    Medical Subject Headings (MeSH) indexing, which is to assign a set of MeSH main headings to citations, is crucial for many important tasks in biomedical text mining and information retrieval. Large-scale MeSH indexing has two challenging aspects: the citation side and MeSH side. For the citation side, all existing methods, including Medical Text Indexer (MTI) by National Library of Medicine and the state-of-the-art method, MeSHLabeler, deal with text by bag-of-words, which cannot capture semantic and context-dependent information well. We propose DeepMeSH that incorporates deep semantic information for large-scale MeSH indexing. It addresses the two challenges in both citation and MeSH sides. The citation side challenge is solved by a new deep semantic representation, D2V-TFIDF, which concatenates both sparse and dense semantic representations. The MeSH side challenge is solved by using the 'learning to rank' framework of MeSHLabeler, which integrates various types of evidence generated from the new semantic representation. DeepMeSH achieved a Micro F-measure of 0.6323, 2% higher than 0.6218 of MeSHLabeler and 12% higher than 0.5637 of MTI, for BioASQ3 challenge data with 6000 citations. The software is available upon request. zhusf@fudan.edu.cn Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
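
    The citation-side representation described in the two DeepMeSH records above concatenates a sparse lexical vector with a dense semantic vector (D2V-TFIDF). A minimal sketch of that sparse-plus-dense concatenation follows; the dense part in the paper is doc2vec, approximated here by LSA (truncated SVD) purely to keep the example self-contained.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.preprocessing import normalize

docs = [
    "large scale mesh indexing of biomedical citations",
    "deep semantic representation for text categorization",
    "protein expression in cardiac tissue",
]

# Sparse lexical representation (TF-IDF).
tfidf = TfidfVectorizer().fit_transform(docs)

# Dense semantic representation; the paper uses doc2vec, here LSA via truncated
# SVD stands in so the sketch needs no extra dependencies.
dense = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)

# Concatenate the two normalised views; the result would feed a learning-to-rank
# model over candidate MeSH headings.
combined = np.hstack([normalize(tfidf.toarray()), normalize(dense)])
print(combined.shape)
```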

  7. The relationship between large-scale and convective states in the tropics - Towards an improved representation of convection in large-scale models

    SciTech Connect

    Jakob, Christian

    2015-02-26

    This report summarises an investigation into the relationship of tropical thunderstorms to the atmospheric conditions they are embedded in. The study is based on the use of radar observations at the Atmospheric Radiation Measurement site in Darwin run under the auspices of the DOE Atmospheric Systems Research program. Linking the larger scales of the atmosphere with the smaller scales of thunderstorms is crucial for the development of the representation of thunderstorms in weather and climate models, which is carried out by a process termed parametrisation. Through the analysis of radar and wind profiler observations the project made several fundamental discoveries about tropical storms and quantified the relationship of the occurrence and intensity of these storms to the large-scale atmosphere. We were able to show that the rainfall averaged over an area the size of a typical climate model grid-box is largely controlled by the number of storms in the area, and less so by the storm intensity. This allows us to completely rethink the way we represent such storms in climate models. We also found that storms occur in three distinct categories based on their depth and that the transition between these categories is strongly related to the larger scale dynamical features of the atmosphere more so than its thermodynamic state. Finally, we used our observational findings to test and refine a new approach to cumulus parametrisation which relies on the stochastic modelling of the area covered by different convective cloud types.

  8. On the improvement for charging large-scale flexible electrostatic actuators

    NASA Astrophysics Data System (ADS)

    Liao, Hsu-Ching; Chen, Han-Long; Su, Yu-Hao; Chen, Yu-Chi; Ko, Wen-Ching; Liou, Chang-Ho; Wu, Wen-Jong; Lee, Chih-Kung

    2011-04-01

    Recently, the development of flexible electret-based electrostatic actuators has been widely discussed. These devices were shown to offer high sound quality, energy savings, and a flexible structure, and they can be cut to any shape. However, achieving a uniform charge on the electret diaphragm is one of the most critical processes needed to make the speaker ready for large-scale production. In this paper, corona discharge equipment containing multiple corona probes and a grid bias was set up to inject spatial charges into the electret diaphragm. The multi-corona-probe system was adjusted to achieve a uniform charge distribution on the electret diaphragm. The processing conditions include the distance between the corona probes, the voltages of the corona probes and the grid bias, etc. We assembled the flexible electret loudspeakers first and then measured their sound pressure and beam pattern. The uniform charge distribution within the electret diaphragm of the flexible electret loudspeaker provided us with the opportunity to shape the loudspeaker arbitrarily and to tailor the sound distribution to specification. Some potential futuristic applications for this device, such as sound posters, smart clothes, and sound wallpaper, were discussed as well.

  9. Robust nonlinear controller design to improve the stability of a large scale photovoltaic system

    NASA Astrophysics Data System (ADS)

    Islam, Gazi Md. Saeedul

    Recently, interest in photovoltaic (PV) power generation systems has been increasing rapidly, and the installation of large PV systems or large groups of PV systems interconnected with the utility grid is accelerating, despite their high cost and low efficiency, because of environmental issues and the depletion of fossil fuels. Most photovoltaic (PV) applications are grid connected. Existing power systems may face stability problems because of the high penetration of PV systems into the grid. Therefore, more stringent grid codes are being imposed by energy regulatory bodies for grid integration of PV plants. Recent grid codes dictate that PV plants need to stay connected with the power grid during network faults because of their increased power penetration level. This requires the system to have a large disturbance rejection capability to protect the system and provide dynamic grid support. This thesis presents a new control method to enhance the steady-state and transient stabilities of a grid-connected large-scale photovoltaic (PV) system. A new control coordination scheme is also presented to reduce the power mismatch during fault conditions in order to limit the fault currents, which is one of the salient features of this study. The performance of the overall system is analyzed using the laboratory-standard power system simulation software PSCAD/EMTDC.

  10. Substantial improvements in large-scale redocking and screening using the novel HYDE scoring function.

    PubMed

    Schneider, Nadine; Hindle, Sally; Lange, Gudrun; Klein, Robert; Albrecht, Jürgen; Briem, Hans; Beyer, Kristin; Claußen, Holger; Gastreich, Marcus; Lemmen, Christian; Rarey, Matthias

    2012-06-01

    The HYDE scoring function consistently describes hydrogen bonding, the hydrophobic effect and desolvation. It relies on HYdration and DEsolvation terms which are calibrated using octanol/water partition coefficients of small molecules. We do not use affinity data for calibration, therefore HYDE is generally applicable to all protein targets. HYDE reflects the Gibbs free energy of binding while only considering the essential interactions of protein-ligand complexes. The greatest benefit of HYDE is that it yields a very intuitive atom-based score, which can be mapped onto the ligand and protein atoms. This allows the direct visualization of the score and consequently facilitates analysis of protein-ligand complexes during the lead optimization process. In this study, we validated our new scoring function by applying it in large-scale docking experiments. We could successfully predict the correct binding mode in 93% of complexes in redocking calculations on the Astex diverse set, while our performance in virtual screening experiments using the DUD dataset showed significant enrichment values with a mean AUC of 0.77 across all protein targets with little or no structural defects. As part of these studies, we also carried out a very detailed analysis of the data that revealed interesting pitfalls, which we highlight here and which should be addressed in future benchmark datasets.

  11. Substantial improvements in large-scale redocking and screening using the novel HYDE scoring function

    NASA Astrophysics Data System (ADS)

    Schneider, Nadine; Hindle, Sally; Lange, Gudrun; Klein, Robert; Albrecht, Jürgen; Briem, Hans; Beyer, Kristin; Claußen, Holger; Gastreich, Marcus; Lemmen, Christian; Rarey, Matthias

    2012-06-01

    The HYDE scoring function consistently describes hydrogen bonding, the hydrophobic effect and desolvation. It relies on HYdration and DEsolvation terms which are calibrated using octanol/water partition coefficients of small molecules. We do not use affinity data for calibration, therefore HYDE is generally applicable to all protein targets. HYDE reflects the Gibbs free energy of binding while only considering the essential interactions of protein-ligand complexes. The greatest benefit of HYDE is that it yields a very intuitive atom-based score, which can be mapped onto the ligand and protein atoms. This allows the direct visualization of the score and consequently facilitates analysis of protein-ligand complexes during the lead optimization process. In this study, we validated our new scoring function by applying it in large-scale docking experiments. We could successfully predict the correct binding mode in 93% of complexes in redocking calculations on the Astex diverse set, while our performance in virtual screening experiments using the DUD dataset showed significant enrichment values with a mean AUC of 0.77 across all protein targets with little or no structural defects. As part of these studies, we also carried out a very detailed analysis of the data that revealed interesting pitfalls, which we highlight here and which should be addressed in future benchmark datasets.

  12. Improving irrigation efficiency in Italian apple orchards: A large-scale approach

    NASA Astrophysics Data System (ADS)

    Della Chiesa, Stefano; la Cecilia, Daniele; Niedrist, Georg; Hafner, Hansjörg; Thalheimer, Martin; Tappeiner, Ulrike

    2016-04-01

    The North Italian region of South Tyrol is Europe's largest apple-growing area. To achieve economically viable fruit quality and quantity, the relatively dry climate of the region (450-700 mm annual precipitation) is compensated by large-scale irrigation management that until now has followed old, traditional water rights. Due to ongoing climatic changes and rising public sensitivity toward the sustainable use of water resources, irrigation practices are increasingly being discussed critically. In order to establish an objective and quantitative base of information for optimising irrigation practice, 17 existing microclimatic stations were upgraded with soil moisture and soil water potential sensors. As a second information layer, a data set of 20,000 soil analyses was geo-referenced and spatialized using a modern geostatistical method. Finally, to assess whether zones with a shallow aquifer influence soil water availability, data from 70 groundwater depth measuring stations were retrieved. The preliminary results highlight that in many locations, in particular in the valley bottoms, irrigation largely exceeds plant water needs, because either the shallow aquifer provides sufficient water supply by capillary rise into the root zone or irrigation is applied without accounting for the specific soil properties.

  13. Proteomic analysis of breast tumors confirms the mRNA intrinsic molecular subtypes using different classifiers: a large-scale analysis of fresh frozen tissue samples.

    PubMed

    Waldemarson, Sofia; Kurbasic, Emila; Krogh, Morten; Cifani, Paolo; Berggård, Tord; Borg, Åke; James, Peter

    2016-06-29

    Breast cancer is a complex and heterogeneous disease that is usually characterized by histological parameters such as tumor size, cellular arrangements/rearrangements, necrosis, nuclear grade and the mitotic index, leading to a set of around twenty subtypes. Together with clinical markers such as hormone receptor status, this classification has considerable prognostic value, but there is a large variation in patient response to therapy. Gene expression profiling has provided molecular profiles characteristic of distinct subtypes of breast cancer that reflect the divergent cellular origins and degree of progression. Here we present a large-scale proteomic and transcriptomic profiling study of 477 sporadic and hereditary breast cancer tumors with matching mRNA expression analysis. Unsupervised hierarchical clustering was performed, and selected proteins from large-scale tandem mass spectrometry (MS/MS) analysis were transferred into a highly multiplexed targeted selected reaction monitoring assay to classify tumors using hierarchical clustering and a support vector machine with leave-one-out cross-validation. The subgroups formed upon unsupervised clustering agree very well with groups found at the transcriptional level; however, the classifiers (genes or their respective protein products) differ almost entirely between the two datasets. In-depth analysis shows clear differences in pathways unique to each type, which may lie behind their different clinical outcomes. Targeted mass spectrometry analysis and supervised clustering correlate very well with subgroups determined by RNA classification and show convincing agreement with clinical parameters. This work demonstrates the merits of protein expression profiling for breast cancer stratification. These findings have important implications for the use of genomics and expression analysis for the prediction of protein expression, such as receptor status and drug target expression. The highly multiplexed MS assay is easily implemented
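
    As a rough illustration of the classification step described above (a support vector machine with leave-one-out cross-validation on a samples-by-proteins abundance matrix), the following sketch uses scikit-learn on simulated data; it is not the authors' pipeline, and the matrix shapes and injected signal are invented.

    ```python
    # Minimal sketch: classify tumours from a (samples x proteins) abundance matrix
    # with a linear SVM and leave-one-out cross-validation, as a stand-in for the
    # targeted SRM-based classification described above. Data here are simulated.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import LeaveOneOut, cross_val_score

    rng = np.random.default_rng(0)
    n_samples, n_proteins = 60, 40
    X = rng.normal(size=(n_samples, n_proteins))          # protein abundances
    y = rng.integers(0, 2, size=n_samples)                # e.g. two tumour subgroups
    X[y == 1, :5] += 1.0                                  # inject a weak signal

    clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
    acc = cross_val_score(clf, X, y, cv=LeaveOneOut())
    print(f"LOO accuracy: {acc.mean():.2f} over {len(acc)} folds")
    ```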

  14. Large Scale Comparative Proteomics of a Chloroplast Clp Protease Mutant Reveals Folding Stress, Altered Protein Homeostasis, and Feedback Regulation of Metabolism

    PubMed Central

    Zybailov, Boris; Friso, Giulia; Kim, Jitae; Rudella, Andrea; Rodríguez, Verenice Ramírez; Asakura, Yukari; Sun, Qi; van Wijk, Klaas J.

    2009-01-01

    The clpr2-1 mutant is delayed in development due to reduction of the chloroplast ClpPR protease complex. To understand the role of Clp proteases in plastid biogenesis and homeostasis, leaf proteomes of young seedlings of clpr2-1 and wild type were compared using large-scale mass spectrometry-based quantification with an LTQ-Orbitrap and spectral counting, with significance determined by G-tests. Virtually only chloroplast-localized proteins were significantly affected, indicating that the molecular phenotype was confined to the chloroplast. A comparative chloroplast stromal proteome analysis of fully developed plants was used to complement the data set. Chloroplast unfoldase ClpB3 was strongly up-regulated in both young and mature leaves, suggesting widespread and persistent protein folding stress. The importance of ClpB3 in the clpr2-1 mutant was demonstrated by the observation that a CLPR2 and CLPB3 double mutant was seedling-lethal. The observed up-regulation of chloroplast chaperones and protein sorting components further illustrated destabilization of protein homeostasis. Delayed rRNA processing and up-regulation of a chloroplast DEAD box RNA helicase and polynucleotide phosphorylase, but no significant change in accumulation of ribosomal subunits, suggested a bottleneck in ribosome assembly or RNA metabolism. Strong up-regulation of a chloroplast translational regulator TypA/BipA GTPase suggested a specific response in plastid gene expression to the distorted homeostasis. The stromal proteases PreP1,2 were up-regulated, likely constituting compensation for reduced Clp protease activity and possibly shared substrates between the ClpP and PreP protease systems. The thylakoid photosynthetic apparatus was decreased in the seedlings, whereas several structural thylakoid-associated plastoglobular proteins were strongly up-regulated. Two thylakoid-associated reductases involved in isoprenoid and chlorophyll synthesis were up-regulated reflecting feedback from rate
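
    The significance testing named above (G-tests on spectral counts) can be illustrated with a small sketch: a 2x2 likelihood-ratio test of a protein's spectral counts against the remaining spectra in each sample. The counts below are invented and the helper function is hypothetical, not the authors' code.

    ```python
    # Sketch of a per-protein G-test on spectral counts (mutant vs wild type),
    # the kind of significance test named in the abstract. Counts are invented.
    from scipy.stats import chi2_contingency

    def g_test(protein_counts, total_counts):
        """2x2 G-test: spectra of one protein vs all other spectra in each sample."""
        (a, b), (ta, tb) = protein_counts, total_counts
        table = [[a, ta - a], [b, tb - b]]
        g, p, dof, _ = chi2_contingency(table, correction=False,
                                        lambda_="log-likelihood")
        return g, p

    # Example: 85 spectra out of 20000 in the mutant vs 30 out of 21000 in wild type
    g, p = g_test((85, 30), (20000, 21000))
    print(f"G = {g:.1f}, p = {p:.2e}")
    ```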

  15. Large-scale proteome analysis of abscisic acid and ABSCISIC ACID INSENSITIVE3-dependent proteins related to desiccation tolerance in Physcomitrella patens

    SciTech Connect

    Yotsui, Izumi; Serada, Satoshi; Naka, Tetsuji; Saruhashi, Masashi; Taji, Teruaki; Hayashi, Takahisa; Quatrano, Ralph S.; Sakata, Yoichi

    2016-03-18

    tolerance might have evolved in ancestral land plants before the separation of bryophytes and vascular plants. - Highlights: • Large-scale proteomics highlighted proteins related to plant desiccation tolerance. • The proteins were regulated by both the phytohormone ABA and ABI3. • The proteins accumulated in desiccation tolerant cells of both Arabidopsis and moss. • Evolutionary origin of regulatory machinery for desiccation tolerance is proposed.

  16. Leveraging Technology to Improve Developmental Mathematics Course Completion: Evaluation of a Large-Scale Intervention

    ERIC Educational Resources Information Center

    Wladis, Claire; Offenholley, Kathleen; George, Michael

    2014-01-01

    This study hypothesizes that course passing rates in remedial mathematics classes can be improved through early identification of at-risk students using a department-wide midterm, followed by a mandated set of online intervention assignments incorporating immediate and elaborate feedback for all students identified as "at-risk" by their…

  17. Improved Wallis Dodging Algorithm for Large-Scale Super-Resolution Reconstruction Remote Sensing Images

    PubMed Central

    Fan, Chong; Chen, Xushuai; Zhong, Lei; Zhou, Min; Shi, Yun; Duan, Yulin

    2017-01-01

    A sub-block algorithm is usually applied in the super-resolution (SR) reconstruction of images because of limitations in computer memory. However, the sub-block SR images can hardly achieve seamless image mosaicking because of the uneven distribution of brightness and contrast among these sub-blocks. An effectively improved weighted Wallis dodging algorithm is proposed, tailored to the characteristic that SR-reconstructed images are grayscale images of the same size with overlapping regions. This algorithm can achieve consistency of image brightness and contrast. Meanwhile, a weighted adjustment sequence is presented to avoid the spatial propagation and accumulation of errors and the loss of image information caused by excessive computation. A seam-line elimination method distributes the partial dislocation at the seam line over the entire overlapping region with a smooth transition effect. Subsequently, the improved method is employed to remove the uneven illumination for 900 SR reconstructed images of ZY-3. Then, the overlapping image mosaic method is adopted to accomplish a seamless image mosaic based on the optimal seam line. PMID:28335482

  18. Improved Wallis Dodging Algorithm for Large-Scale Super-Resolution Reconstruction Remote Sensing Images.

    PubMed

    Fan, Chong; Chen, Xushuai; Zhong, Lei; Zhou, Min; Shi, Yun; Duan, Yulin

    2017-03-18

    A sub-block algorithm is usually applied in the super-resolution (SR) reconstruction of images because of limitations in computer memory. However, the sub-block SR images can hardly achieve seamless image mosaicking because of the uneven distribution of brightness and contrast among these sub-blocks. An effectively improved weighted Wallis dodging algorithm is proposed, tailored to the characteristic that SR-reconstructed images are grayscale images of the same size with overlapping regions. This algorithm can achieve consistency of image brightness and contrast. Meanwhile, a weighted adjustment sequence is presented to avoid the spatial propagation and accumulation of errors and the loss of image information caused by excessive computation. A seam-line elimination method distributes the partial dislocation at the seam line over the entire overlapping region with a smooth transition effect. Subsequently, the improved method is employed to remove the uneven illumination for 900 SR reconstructed images of ZY-3. Then, the overlapping image mosaic method is adopted to accomplish a seamless image mosaic based on the optimal seam line.
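
    The core of a Wallis-style dodging step, remapping each sub-block's grey-level mean and standard deviation toward common target values, can be sketched in a few lines; the weighted adjustment sequence and seam-line handling described above are omitted, and the parameters below are illustrative only.

    ```python
    # Simplified sketch of a Wallis-style dodging step: each sub-block is remapped
    # so that its grey-level mean and standard deviation approach common target
    # values, which is the core of the brightness/contrast consistency described
    # above (the paper's weighted sequencing and seam-line handling are omitted).
    import numpy as np

    def wallis_adjust(block, target_mean, target_std, brightness=1.0, contrast=1.0):
        m, s = block.mean(), block.std()
        s = max(s, 1e-6)                      # guard against flat blocks
        gain = contrast * target_std / (contrast * s + (1 - contrast) * target_std)
        out = (block - m) * gain + brightness * target_mean + (1 - brightness) * m
        return np.clip(out, 0, 255)

    rng = np.random.default_rng(1)
    dark_block = rng.normal(60, 10, (256, 256))    # a dim, low-contrast sub-block
    fixed = wallis_adjust(dark_block, target_mean=128.0, target_std=40.0)
    print(round(fixed.mean(), 1), round(fixed.std(), 1))   # ~128.0, ~40.0
    ```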

  19. Large-scale inhomogeneities may improve the cosmic concordance of supernovae.

    PubMed

    Amendola, Luca; Kainulainen, Kimmo; Marra, Valerio; Quartin, Miguel

    2010-09-17

    We reanalyze the supernova data from the Union Compilation including the weak-lensing effects caused by inhomogeneities. We compute the lensing probability distribution function for each background solution described by the parameters Ω_M, Ω_Λ, and w in the presence of inhomogeneities, approximately modeled with a single-mass population of halos. We then perform a likelihood analysis in the parameter space of Friedmann-Lemaître-Robertson-Walker models and compare our results with the standard approach. We find that the inclusion of lensing can move the best-fit model significantly towards the cosmic concordance of the flat Lambda-Cold Dark Matter model, improving the agreement with the constraints coming from the cosmic microwave background and baryon acoustic oscillations.

  20. Design of energy storage system to improve inertial response for large scale PV generation

    DOE PAGES

    Wang, Xiaoyu; Yue, Meng

    2016-07-01

    With high-penetration levels of renewable generating sources being integrated into the existing electric power grid, conventional generators are being replaced and grid inertial response is deteriorating. This technical challenge is more severe with photovoltaic (PV) generation than with wind generation because PV generation systems cannot provide inertial response unless special countermeasures are adopted. To enhance the inertial response, this paper proposes to use battery energy storage systems (BESS) as the remediation approach to accommodate the degrading inertial response when high penetrations of PV generation are integrated into the existing power grid. A sample power system was adopted and simulated using PSS/E software. Here, impacts of different penetration levels of PV generation on the system inertial response were investigated and then BESS was incorporated to improve the frequency dynamics.
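
    A back-of-the-envelope illustration of why storage helps: a single swing-equation frequency model with a step generation loss, simulated with and without a BESS contributing droop and synthetic-inertia power. The parameters are invented and this is not the paper's PSS/E study.

    ```python
    # Toy swing-equation frequency response to a step loss of PV generation,
    # with and without a BESS injecting power proportional to the frequency
    # deviation (droop) and its rate of change (synthetic inertia).
    import numpy as np

    def freq_response(H=4.0, D=2.0, f0=60.0, dP=-0.02, K_droop=0.0, K_inertia=0.0,
                      dt=0.01, t_end=10.0):
        """Single-machine model; df is the frequency deviation in per unit."""
        n = int(t_end / dt)
        df = np.zeros(n)
        for k in range(1, n):
            dfdt_prev = (df[k-1] - df[k-2]) / dt if k >= 2 else 0.0
            p_bess = -K_droop * df[k-1] - K_inertia * dfdt_prev   # storage injection
            ddf = (dP + p_bess - D * df[k-1]) / (2.0 * H)
            df[k] = df[k-1] + dt * ddf
        return f0 * (1.0 + df)

    no_bess = freq_response()
    with_bess = freq_response(K_droop=1.0, K_inertia=0.5)
    print(f"frequency nadir without BESS: {no_bess.min():.3f} Hz")
    print(f"frequency nadir with BESS:    {with_bess.min():.3f} Hz")
    ```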

  1. Design of energy storage system to improve inertial response for large scale PV generation

    SciTech Connect

    Wang, Xiaoyu; Yue, Meng

    2016-07-01

    With high-penetration levels of renewable generating sources being integrated into the existing electric power grid, conventional generators are being replaced and grid inertial response is deteriorating. This technical challenge is more severe with photovoltaic (PV) generation than with wind generation because PV generation systems cannot provide inertial response unless special countermeasures are adopted. To enhance the inertial response, this paper proposes to use battery energy storage systems (BESS) as the remediation approach to accommodate the degrading inertial response when high penetrations of PV generation are integrated into the existing power grid. A sample power system was adopted and simulated using PSS/E software. Here, impacts of different penetration levels of PV generation on the system inertial response were investigated and then BESS was incorporated to improve the frequency dynamics.

  3. RNA interference-mediated knockdown of SIRT1 and/or SIRT2 in melanoma: Identification of downstream targets by large-scale proteomics analysis.

    PubMed

    Wilking-Busch, Melissa J; Ndiaye, Mary A; Liu, Xiaoqi; Ahmad, Nihal

    2017-09-05

    Melanoma is the most notorious and fatal of all skin cancers, and the existing treatment options have not been proven to effectively manage this neoplasm, especially the metastatic disease. Sirtuin (SIRT) proteins have been shown to be differentially expressed in melanoma. We have shown that SIRTs 1 and 2 were overexpressed in melanoma and inhibition of SIRT1 imparts anti-proliferative responses in human melanoma cells. To elucidate the impact of SIRTs 1 and/or 2 in melanoma, we created stable knockdowns of SIRTs 1, 2, and their combination using shRNA-mediated RNA interference in A375 human melanoma cells. We found that SIRT1 and SIRT1&2 combination knockdown caused decreased cellular proliferation in melanoma cells. Further, the knockdown of SIRTs 1 and/or 2 resulted in decreased colony formation in melanoma cells. To explore the downstream targets of SIRTs 1 and/or 2, we employed a label-free quantitative nano-LC-MS/MS proteomics analysis using the stable lines. We found aberrant levels of proteins involved in many vital cellular processes, including cytoskeletal organization, ribosomal activity, oxidative stress response, and angiogenesis. These findings provide clear evidence of cellular systems undergoing alterations in response to sirtuin inhibition, and have unveiled several excellent candidates for future study. Melanoma is the deadliest form of skin cancer, due to its aggressive nature, metastatic potential, and a lack of sufficient treatment options for advanced disease. Therefore, detailed investigations into the molecular mechanisms of melanoma growth and progression are needed. In the search for candidate genes to serve as therapeutic targets, the sirtuins show promise as they have been found to be upregulated in melanoma and they regulate a large number of proteins involved in cellular processes known to affect tumor growth, such as DNA damage repair, cell cycle arrest, and apoptosis. In this study, we used a large-scale label-free comparative

  4. Large scale organisational intervention to improve patient safety in four UK hospitals: mixed method evaluation

    PubMed Central

    Benning, Amirta; Ghaleb, Maisoon; Suokas, Anu; Dixon-Woods, Mary; Dawson, Jeremy; Barber, Nick; Franklin, Bryony Dean; Girling, Alan; Hemming, Karla; Carmalt, Martin; Rudge, Gavin; Naicker, Thirumalai; Nwulu, Ugochi; Choudhury, Sopna

    2011-01-01

    Objectives To conduct an independent evaluation of the first phase of the Health Foundation’s Safer Patients Initiative (SPI), and to identify the net additional effect of SPI and any differences in changes in participating and non-participating NHS hospitals. Design Mixed method evaluation involving five substudies, before and after design. Setting NHS hospitals in the United Kingdom. Participants Four hospitals (one in each country in the UK) participating in the first phase of the SPI (SPI1); 18 control hospitals. Intervention The SPI1 was a compound (multi-component) organisational intervention delivered over 18 months that focused on improving the reliability of specific frontline care processes in designated clinical specialties and promoting organisational and cultural change. Results Senior staff members were knowledgeable and enthusiastic about SPI1. There was a small (0.08 points on a 5 point scale) but significant (P<0.01) effect in favour of the SPI1 hospitals in one of 11 dimensions of the staff questionnaire (organisational climate). Qualitative evidence showed only modest penetration of SPI1 at medical ward level. Although SPI1 was designed to engage staff from the bottom up, it did not usually feel like this to those working on the wards, and questions about legitimacy of some aspects of SPI1 were raised. Of the five components to identify patients at risk of deterioration—monitoring of vital signs (14 items); routine tests (three items); evidence based standards specific to certain diseases (three items); prescribing errors (multiple items from the British National Formulary); and medical history taking (11 items)—there was little net difference between control and SPI1 hospitals, except in relation to quality of monitoring of acute medical patients, which improved on average over time across all hospitals. Recording of respiratory rate increased to a greater degree in SPI1 than in control hospitals; in the second six hours after admission

  5. Large scale organisational intervention to improve patient safety in four UK hospitals: mixed method evaluation.

    PubMed

    Benning, Amirta; Ghaleb, Maisoon; Suokas, Anu; Dixon-Woods, Mary; Dawson, Jeremy; Barber, Nick; Franklin, Bryony Dean; Girling, Alan; Hemming, Karla; Carmalt, Martin; Rudge, Gavin; Naicker, Thirumalai; Nwulu, Ugochi; Choudhury, Sopna; Lilford, Richard

    2011-02-03

    To conduct an independent evaluation of the first phase of the Health Foundation's Safer Patients Initiative (SPI), and to identify the net additional effect of SPI and any differences in changes in participating and non-participating NHS hospitals. Mixed method evaluation involving five substudies, before and after design. NHS hospitals in the United Kingdom. Four hospitals (one in each country in the UK) participating in the first phase of the SPI (SPI1); 18 control hospitals. The SPI1 was a compound (multi-component) organisational intervention delivered over 18 months that focused on improving the reliability of specific frontline care processes in designated clinical specialties and promoting organisational and cultural change. Senior staff members were knowledgeable and enthusiastic about SPI1. There was a small (0.08 points on a 5 point scale) but significant (P < 0.01) effect in favour of the SPI1 hospitals in one of 11 dimensions of the staff questionnaire (organisational climate). Qualitative evidence showed only modest penetration of SPI1 at medical ward level. Although SPI1 was designed to engage staff from the bottom up, it did not usually feel like this to those working on the wards, and questions about legitimacy of some aspects of SPI1 were raised. Of the five components to identify patients at risk of deterioration--monitoring of vital signs (14 items); routine tests (three items); evidence based standards specific to certain diseases (three items); prescribing errors (multiple items from the British National Formulary); and medical history taking (11 items)--there was little net difference between control and SPI1 hospitals, except in relation to quality of monitoring of acute medical patients, which improved on average over time across all hospitals. Recording of respiratory rate increased to a greater degree in SPI1 than in control hospitals; in the second six hours after admission recording increased from 40% (93) to 69% (165) in control

  6. Hydrological improvements for nutrient and pollutant emission modeling in large scale catchments

    NASA Astrophysics Data System (ADS)

    Höllering, S.; Ihringer, J.

    2012-04-01

    Estimating emissions and loads of nutrients and pollutants into European water bodies as accurately as possible depends largely on knowledge of the spatially and temporally distributed hydrological runoff patterns. An improved hydrological water balance model for the pollutant emission model MoRE (Modeling of Regionalized Emissions) (IWG, 2011) has been introduced that can form an adequate basis to simulate discharge in a hydrologically differentiated, land-use based way and subsequently provide the required distributed discharge components. First of all, the hydrological model had to comply with requirements of both space and time in order to calculate the water balance with sufficient precision, spatially distributed over sub-catchments and with a higher temporal resolution. Aiming to reproduce seasonal dynamics and the characteristic hydrological regimes of river catchments, a daily (instead of a yearly) time increment was applied, allowing for a more process-oriented simulation of discharge dynamics, volume and therefore water balance. The enhancement of the hydrological model also became necessary to potentially account for the hydrological functioning of catchments under scenarios of e.g. a changing climate or alterations of land use. As a deterministic, partly physically based, conceptual hydrological watershed and water balance model, the Precipitation Runoff Modeling System (PRMS) (USGS, 2009) was selected to improve the hydrological input for MoRE. In PRMS the spatial discretization is implemented with sub-catchments and so-called hydrologic response units (HRUs), which are the hydrotropic, distributed, finite modeling entities, each having a homogeneous runoff reaction to hydro-meteorological events. Spatial structures and heterogeneities in sub-catchments, e.g. urbanity, land use and soil types, were identified to derive hydrological similarities and to classify different urban and rural HRUs. In this way the
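
    The HRU concept can be illustrated with a toy daily bucket water balance in which land-use-dependent parameters change the runoff partitioning; PRMS itself is far more detailed, and the parameters below are invented.

    ```python
    # Minimal daily "bucket" water balance per hydrologic response unit (HRU),
    # only to illustrate land-use-dependent, daily-resolution runoff partitioning.
    def hru_daily_balance(precip, pet, capacity, storage=0.0, runoff_coeff=0.3):
        """Return (runoff, actual_et, new_storage) for one day, all in mm."""
        quickflow = runoff_coeff * precip          # fraction that runs off directly
        storage += precip - quickflow
        aet = min(pet, storage)                    # evapotranspiration limited by storage
        storage -= aet
        overflow = max(0.0, storage - capacity)    # saturation excess
        storage -= overflow
        return quickflow + overflow, aet, storage

    # Two HRUs with different land use: sealed urban surface vs forested soil
    urban = dict(capacity=20.0, runoff_coeff=0.8)
    forest = dict(capacity=120.0, runoff_coeff=0.1)
    for name, hru in [("urban", urban), ("forest", forest)]:
        q, aet, s = hru_daily_balance(precip=30.0, pet=3.0, **hru)
        print(f"{name}: runoff={q:.1f} mm, ET={aet:.1f} mm, storage={s:.1f} mm")
    ```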

  7. Social Aspects of Photobooks: Improving Photobook Authoring from Large-Scale Multimedia Analysis

    NASA Astrophysics Data System (ADS)

    Sandhaus, Philipp; Boll, Susanne

    With photo albums we aim to capture personal events such as weddings, vacations, and parties of family and friends. By arranging photo prints, captions and paper souvenirs such as tickets over the pages of a photobook we tell a story to capture and share our memories. The photo memories captured in such a photobook tell us much about the content and the relevance of the photos for the user. The way in which we select photos and arrange them in the photo album reveals a lot about the events, persons and places on the photos: captions describe content, while the closeness and arrangement of photos express relations between photos and their content, and especially the social relations of the author and the persons present in the album. Nowadays the process of photo album authoring has become digital; photos and texts can be arranged and laid out with the help of authoring tools in a digital photo album which can be printed as a physical photobook. In this chapter we present results of the analysis of a large repository of digitally mastered photobooks to learn about their social aspects. We explore to which degree a social aspect can be identified and how expressive and vivid different classes of photobooks are. The photobooks are anonymized, real-world photobooks from customers of our industry partner CeWe Color. The knowledge gained from this social photobook analysis is meant both to better understand how people author their photobooks and to improve the automatic selection and layout of photobooks.

  8. Improving the local solution accuracy of large-scale digital image-based finite element analyses.

    PubMed

    Charras, G T; Guldberg, R E

    2000-02-01

    Digital image-based finite element modeling (DIBFEM) has become a widely utilized approach for efficiently meshing complex biological structures such as trabecular bone. While DIBFEM can provide accurate predictions of apparent mechanical properties, its application to simulate local phenomena such as tissue failure or adaptation has been limited by high local solution errors at digital model boundaries. Furthermore, refinement of digital meshes does not necessarily reduce local maximum errors. The purpose of this study was to evaluate the potential to reduce local mean and maximum solution errors in digital meshes using a post-processing filtration method. The effectiveness of a three-dimensional, boundary-specific filtering algorithm was found to be mesh size dependent. Mean absolute and maximum errors were reduced for meshes with more than five elements through the diameter of a cantilever beam considered representative of a single trabecula. Furthermore, mesh refinement consistently decreased errors for filtered solutions but not necessarily for non-filtered solutions. Models with more than five elements through the beam diameter yielded absolute mean errors of less than 15% for both Von Mises stress and maximum principal strain. When applied to a high-resolution model of trabecular bone microstructure, boundary filtering produced a more continuous solution distribution and reduced the predicted maximum stress by 30%. Boundary-specific filtering provides a simple means of improving local solution accuracy while retaining the model generation and numerical storage efficiency of the DIBFEM technique.
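
    A rough sketch of the idea of boundary-specific filtering, smoothing a voxel-based stress field only at surface voxels of the digital mesh, is given below; it mimics the concept, not the authors' three-dimensional algorithm, and the geometry and stress values are synthetic.

    ```python
    # Smooth a voxelised stress field only at boundary voxels, leaving interior
    # values untouched, as a stand-in for boundary-specific post-processing.
    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(0)
    mask = np.zeros((20, 20, 20), dtype=bool)
    mask[5:15, 5:15, 5:15] = True                      # voxelised structure
    interior = ndimage.binary_erosion(mask)
    boundary = mask & ~interior

    stress = np.where(mask, 100.0 + rng.normal(0, 5, mask.shape), 0.0)
    spikes = boundary & (rng.random(mask.shape) < 0.05)
    stress[spikes] += 80.0                             # jagged-boundary stress artefacts

    # Neighbourhood mean over structure voxels only (outside-mask voxels are zero)
    frac_inside = ndimage.uniform_filter(mask.astype(float), size=3)
    local_mean = ndimage.uniform_filter(stress, size=3) / np.maximum(frac_inside, 1e-9)

    filtered = stress.copy()
    filtered[boundary] = local_mean[boundary]          # filter boundary voxels only
    print("max stress before:", round(stress[mask].max(), 1),
          " after boundary filtering:", round(filtered[mask].max(), 1))
    ```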

  9. Large scale analysis of the mutational landscape in HT-SELEX improves aptamer discovery

    PubMed Central

    Hoinka, Jan; Berezhnoy, Alexey; Dao, Phuong; Sauna, Zuben E.; Gilboa, Eli; Przytycka, Teresa M.

    2015-01-01

    High-Throughput (HT) SELEX combines SELEX (Systematic Evolution of Ligands by EXponential Enrichment), a method for aptamer discovery, with massively parallel sequencing technologies. This emerging technology provides data for a global analysis of the selection process and for simultaneous discovery of a large number of candidates but currently lacks dedicated computational approaches for their analysis. To close this gap, we developed novel in-silico methods to analyze HT-SELEX data and utilized them to study the emergence of polymerase errors during HT-SELEX. Rather than considering these errors as a nuisance, we demonstrated their utility for guiding aptamer discovery. Our approach builds on two main advancements in aptamer analysis: AptaMut—a novel technique allowing for the identification of polymerase errors conferring an improved binding affinity relative to the ‘parent’ sequence and AptaCluster—an aptamer clustering algorithm which is, to the best of our knowledge, the only currently available tool capable of efficiently clustering entire aptamer pools. We applied these methods to an HT-SELEX experiment developing aptamers against Interleukin 10 receptor alpha chain (IL-10RA) and experimentally confirmed our predictions thus validating our computational methods. PMID:25870409
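
    The core question behind AptaMut, whether a polymerase-error variant gains frequency relative to its parent between selection rounds, can be posed as a simple count comparison; the sketch below uses a Fisher exact test on invented read counts and is not the published implementation.

    ```python
    # Toy sketch: does a polymerase-error variant become enriched relative to its
    # parent sequence between two SELEX rounds? Counts are invented.
    from scipy.stats import fisher_exact

    def mutant_enrichment(parent_counts, mutant_counts):
        """Each argument is (round_k, round_k+1) read counts for one sequence."""
        table = [[mutant_counts[1], mutant_counts[0]],
                 [parent_counts[1], parent_counts[0]]]
        odds, p = fisher_exact(table, alternative="greater")
        return odds, p

    # Mutant grows from 12 to 95 reads while the parent grows from 800 to 2100
    odds, p = mutant_enrichment(parent_counts=(800, 2100), mutant_counts=(12, 95))
    print(f"odds ratio = {odds:.2f}, one-sided p = {p:.3g}")
    ```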

  10. Multimethod study of a large-scale programme to improve patient safety using a harm-free care approach

    PubMed Central

    Power, Maxine; Brewster, Liz; Parry, Gareth; Brotherton, Ailsa; Minion, Joel; Ozieranski, Piotr; McNicol, Sarah; Harrison, Abigail; Dixon-Woods, Mary

    2016-01-01

    Objectives We aimed to evaluate whether a large-scale two-phase quality improvement programme achieved its aims and to characterise the influences on achievement. Setting National Health Service (NHS) in England. Participants NHS staff. Interventions The programme sought to (1) develop a shared national, regional and locally aligned safety focus for 4 high-cost, high volume harms; (2) establish a new measurement system based on a composite measure of ‘harm-free’ care and (3) deliver improved outcomes. Phase I involved a quality improvement collaborative intended to involve 100 organisations; phase II used financial incentives for data collection. Measures Multimethod evaluation of the programme. In phase I, analysis of regional plans and of rates of data submission and clinical outcomes reported to the programme. A concurrent process evaluation was conducted of phase I, but only data on submission rates and clinical outcomes were available for phase II. Results A context of extreme policy-related structural turbulence impacted strongly on phase I. Most regions' plans did not demonstrate full alignment with the national programme; most fell short of recruitment targets and attrition in attendance at the collaborative meetings occurred over time. Though collaborative participants saw the principles underlying the programme as attractive, useful and innovative, they often struggled to convert enthusiasm into change. Developing the measurement system was arduous, yet continued to be met by controversy. Data submission rates remained patchy throughout phase I but improved in reach and consistency in phase II in response to financial incentives. Some evidence of improvement in clinical outcomes over time could be detected but was hard to interpret owing to variability in the denominators. Conclusions These findings offer important lessons for large-scale improvement programmes, particularly when they seek to develop novel concepts and measures. External contexts may

  11. Customized Mobile Apps: Improving data collection methods in large-scale field works in Finnish Lapland

    NASA Astrophysics Data System (ADS)

    Kupila, Juho

    2017-04-01

    Since the 1990s, a huge amount of data related to groundwater and soil has been collected in several regional projects in Finland. The EU-funded project "The coordination of groundwater protection and aggregates industry in Finnish Lapland, phase II" started in July 2016 and covers the last unstudied areas of these projects in Finland. The project is carried out by the Geological Survey of Finland (GTK), the University of Oulu and the Finnish Environment Institute, and its main aim is to reconcile groundwater protection with the extractable use of soil resources in the Lapland area. As before, several kinds of studies are carried out throughout this three-year research and development project. These include e.g. drilling and installation of groundwater observation wells, GPR surveys and many kinds of point-type observations, such as sampling and general mapping in the field. Due to the size of the study area (over 80,000 km2, about one quarter of the total area of Finland), improvement of the field work methods has become essential. For general field observations, GTK has developed specific mobile applications for Android devices. With these apps, data can easily be collected, for example from a certain groundwater area, and then uploaded directly to GTK's database. Collected information may include sampling data, photos, layer observations, groundwater data etc., all linked to the current GPS location. New data are also easily available for post-processing. In this project the benefits of these applications will be field-tested; e.g. ergonomics, economy and general usability will be taken into account and compared with other data collection methods, such as working with heavy fieldwork laptops. Although these apps are designed for use in GTK's projects, they are free to download from Google Play for anyone interested. The Geological Survey of Finland has the main role in this project, with support from national and local authorities and stakeholders. Project is funded

  12. Mutual coupling of hydrologic and hydrodynamic models - a viable approach for improved large-scale inundation estimates?

    NASA Astrophysics Data System (ADS)

    Hoch, Jannis; Winsemius, Hessel; van Beek, Ludovicus; Haag, Arjen; Bierkens, Marc

    2016-04-01

    Due to their increasing occurrence rate and associated economic costs, fluvial floods are large-scale and cross-border phenomena that need to be well understood. Sound information about temporal and spatial variations of flood hazard is essential for adequate flood risk management and climate change adaptation measures. While progress has been made in assessments of flood hazard and risk on the global scale, studies to date have made compromises between spatial resolution on the one hand and local detail that influences their temporal characteristics (rate of rise, duration) on the other. Moreover, global models cannot realistically model flood wave propagation due to a lack of detail in channel and floodplain geometry, and the representation of hydrologic processes influencing the surface water balance such as open water evaporation from inundated water and re-infiltration of water in river banks. To overcome these restrictions and to obtain a better understanding of flood propagation including its spatio-temporal variations at the large scale, yet at a sufficiently high resolution, the present study aims to develop a large-scale modeling tool by coupling the global hydrologic model PCR-GLOBWB and the recently developed hydrodynamic model DELFT3D-FM. The first computes surface water volumes which are routed by the latter, solving the full Saint-Venant equations. With DELFT3D-FM being capable of representing the model domain as a flexible mesh, model accuracy is only improved at relevant locations (river and adjacent floodplain) and the computation time is not unnecessarily increased. This efficiency is very advantageous for large-scale modelling approaches. The model domain is thereby schematized by 2D floodplains, being derived from global data sets (HydroSHEDS and G3WBM, respectively). Since a previous study with 1way-coupling showed good model performance (J.M. Hoch et al., in prep.), this approach was extended to 2way-coupling to fully represent evaporation

  13. Parallel proteomics to improve coverage and confidence in the partially annotated Oryctolagus cuniculus mitochondrial proteome.

    PubMed

    White, Melanie Y; Brown, David A; Sheng, Simon; Cole, Robert N; O'Rourke, Brian; Van Eyk, Jennifer E

    2011-02-01

    The ability to decipher the dynamic protein component of any system is determined by the inherent limitations of the technologies used, the complexity of the sample, and the existence of an annotated genome. In the absence of an annotated genome, large-scale proteomic investigations can be technically difficult. Yet the functional and biological species differences across animal models can lead to selection of partially or nonannotated organisms over those with an annotated genome. The outweighing of biology over technology leads us to investigate the degree to which a parallel approach can facilitate proteome coverage in the absence of complete genome annotation. When studying species without complete genome annotation, a particular challenge is how to ensure high proteome coverage while meeting the bioinformatic stringencies of high-throughput proteomics. A protein inventory of Oryctolagus cuniculus mitochondria was created by overlapping "protein-centric" and "peptide-centric" one-dimensional and two-dimensional liquid chromatography strategies; with additional partitioning into membrane-enriched and soluble fractions. With the use of these five parallel approaches, 2934 unique peptides were identified, corresponding to 558 nonredundant protein groups. 230 of these proteins (41%) were identified by only a single technical approach, confirming the need for parallel techniques to improve annotation. To determine the extent of coverage, a side-by-side comparison with human and mouse cardiomyocyte mitochondrial studies was performed. A nonredundant list of 995 discrete proteins was compiled, of which 244 (25%) were common across species. The current investigation identified 142 unique protein groups, the majority of which were detected here by only one technical approach, in particular peptide- and protein-centric two-dimensional liquid chromatography. Although no single approach achieved more than 40% coverage, the combination of three approaches (protein- and

  14. Improving Disease Prediction by Incorporating Family Disease History in Risk Prediction Models with Large-Scale Genetic Data.

    PubMed

    Gim, Jungsoo; Kim, Wonji; Kwak, Soo Heon; Choi, Hosik; Park, Changyi; Park, Kyong Soo; Kwon, Sunghoon; Park, Taesung; Won, Sungho

    2017-09-12

    Despite the many successes of genome-wide association studies (GWAS), the known susceptibility variants identified by GWAS have modest effect sizes, leading to notable skepticism about the effectiveness of building a risk prediction model from large-scale genetic data. However, in contrast to genetic variants, the family history of diseases has been largely accepted as an important risk factor in clinical diagnosis and risk prediction. Nevertheless, the complicated structures of the family history of diseases have limited their application in clinical practice. Here, we developed a new method that enables incorporation of the general family history of diseases with a liability threshold model, and propose a new analysis strategy for risk prediction with penalized regression analysis that incorporates both large numbers of genetic variants and clinical risk factors. Application of our model to type 2 diabetes (T2D) in the Korean population (1846 cases and 1846 controls) demonstrated that single nucleotide polymorphisms accounted for 32.5% of the variation explained by the predicted risk scores in the test data set, and incorporation of family history led to an additional 6.3% improvement in prediction. Our results illustrate that the family medical history is valuable information on the variation of complex diseases and improves prediction performance. Copyright © 2017, Genetics.
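
    The proposed analysis strategy, penalized regression over many genetic variants plus a covariate derived from family history, can be sketched with scikit-learn; the liability threshold treatment of family history is reduced here to a simple relative count, and all data are simulated.

    ```python
    # Sketch: L1-penalised logistic regression over simulated SNPs plus one
    # family-history covariate, as a stand-in for the combined model above.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(42)
    n, p = 2000, 500
    snps = rng.integers(0, 3, size=(n, p)).astype(float)     # 0/1/2 allele counts
    fam_history = rng.poisson(0.3, size=n).astype(float)     # affected relatives (proxy)
    risk = snps[:, :10] @ rng.normal(0.3, 0.05, 10) + 0.8 * fam_history
    y = (risk + rng.normal(0, 1, n) > np.quantile(risk, 0.7)).astype(int)

    X = np.column_stack([snps, fam_history])
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"test AUC with SNPs + family history: {auc:.2f}")
    ```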

  15. Performance of the improved version of monte Carlo code A3MCNP for large-scale shielding problems.

    PubMed

    Omura, M; Miyake, Y; Hasegawa, T; Ueki, K; Sato, O; Haghighat, A; Sjoden, G E

    2005-01-01

    A3MCNP (Automatic Adjoint Accelerated MCNP) is a revised version of the MCNP Monte Carlo code, which automatically prepares variance reduction parameters for the CADIS (Consistent Adjoint Driven Importance Sampling) methodology. Using a deterministic 'importance' (or adjoint) function, CADIS performs source and transport biasing within the weight-window technique. The current version of A3MCNP uses the three-dimensional (3-D) Sn transport TORT code to determine a 3-D importance function distribution. Based on simulation of several real-life problems, it is demonstrated that A3MCNP provides precise calculation results with a remarkably short computation time by using the proper and objective variance reduction parameters. However, since the first version of A3MCNP provided only a point source configuration option for large-scale shielding problems, such as spent-fuel transport casks, a large amount of memory may be necessary to store enough points to properly represent the source. Hence, we have developed an improved version of A3MCNP (referred to as A3MCNPV) which has a volumetric source configuration option. This paper describes the successful use of A3MCNPV for a concrete cask neutron and gamma-ray shielding problem, and a PWR dosimetry problem.

  16. A large-scale screen for artificial selection in maize identifies candidate agronomic loci for domestication and crop improvement.

    PubMed

    Yamasaki, Masanori; Tenaillon, Maud I; Bi, Irie Vroh; Schroeder, Steve G; Sanchez-Villeda, Hector; Doebley, John F; Gaut, Brandon S; McMullen, Michael D

    2005-11-01

    Maize (Zea mays subsp mays) was domesticated from teosinte (Z. mays subsp parviglumis) through a single domestication event in southern Mexico between 6000 and 9000 years ago. This domestication event resulted in the original maize landrace varieties, which were spread throughout the Americas by Native Americans and adapted to a wide range of environmental conditions. Starting with landraces, 20th century plant breeders selected inbred lines of maize for use in hybrid maize production. Both domestication and crop improvement involved selection of specific alleles at genes controlling key morphological and agronomic traits, resulting in reduced genetic diversity relative to unselected genes. Here, we sequenced 1095 maize genes from a sample of 14 inbred lines and chose 35 genes with zero sequence diversity as potential targets of selection. These 35 genes were then sequenced in a sample of diverse maize landraces and teosintes and tested for selection. Using two statistical tests, we identified eight candidate genes. Extended gene sequencing of these eight candidate loci confirmed that six were selected throughout the gene, and the remaining two exhibited evidence of selection in the 3' portion of each gene. The selected genes have functions consistent with agronomic selection for nutritional quality, maturity, and productivity. Our large-scale screen for artificial selection allows identification of genes of potential agronomic importance even when gene function and the phenotype of interest are unknown.
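
    The screening statistic behind the candidate selection, nucleotide diversity across inbred lines with zero-diversity genes flagged as potential selection targets, can be illustrated with a toy computation; the sequences below are invented and this is not the authors' test suite.

    ```python
    # Toy computation of nucleotide diversity (pi) across aligned gene sequences;
    # genes with pi == 0 across the inbred lines would be flagged as candidates.
    from itertools import combinations

    def nucleotide_diversity(seqs):
        """Average pairwise difference per site over all sequence pairs."""
        pairs = list(combinations(seqs, 2))
        diffs = sum(sum(a != b for a, b in zip(s1, s2)) for s1, s2 in pairs)
        return diffs / (len(pairs) * len(seqs[0]))

    neutral_gene = ["ACGTACGTAC", "ACGTACGTAT", "ACGAACGTAC", "ACGTACCTAC"]
    candidate_gene = ["ACGTACGTAC"] * 4          # identical in all inbred lines
    print("pi (neutral):  ", round(nucleotide_diversity(neutral_gene), 3))
    print("pi (candidate):", round(nucleotide_diversity(candidate_gene), 3))
    ```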

  17. Improving the Communication Pattern in Matrix-Vector Operations for Large Scale-Free Graphs by Disaggregation

    SciTech Connect

    Kuhlemann, Verena; Vassilevski, Panayot S.

    2013-10-28

    Matrix-vector multiplication is the key operation in any Krylov-subspace iteration method. We are interested in Krylov methods applied to problems associated with the graph Laplacian arising from large scale-free graphs. Furthermore, computations with graphs of this type on parallel distributed-memory computers are challenging. This is due to the fact that scale-free graphs have a degree distribution that follows a power law, and currently available graph partitioners are not efficient for such an irregular degree distribution. The lack of a good partitioning leads to excessive interprocessor communication requirements during every matrix-vector product. Here, we present an approach to alleviate this problem based on embedding the original irregular graph into a more regular one by disaggregating (splitting up) vertices in the original graph. The matrix-vector operations for the original graph are performed via a factored triple matrix-vector product involving the embedding graph. And even though the latter graph is larger, we are able to decrease the communication requirements considerably and improve the performance of the matrix-vector product.
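
    The factored triple matrix-vector product can be shown on a toy star graph: the hub vertex is split into two copies, its edges are distributed between them, and A @ x is evaluated through the embedding graph. The paper's additional edges between copies (and the corresponding correction) are omitted in this sketch, and the small matrices are invented for illustration.

    ```python
    # Evaluate y = A @ x as a factored triple product P.T @ (A_emb @ (P @ x)),
    # where the hub vertex 0 of a star graph has been split into two copies.
    import numpy as np
    import scipy.sparse as sp

    # Original adjacency: vertex 0 is a hub connected to 1, 2 and 3.
    A = sp.csr_matrix(np.array([[0, 1, 1, 1],
                                [1, 0, 0, 0],
                                [1, 0, 0, 0],
                                [1, 0, 0, 0]], dtype=float))

    # Embedding graph: copies 0a (index 0) and 0b (index 1) of the hub;
    # 0a keeps the edges to 1 and 2, 0b keeps the edge to 3.
    A_emb = sp.csr_matrix(np.array([[0, 0, 1, 1, 0],
                                    [0, 0, 0, 0, 1],
                                    [1, 0, 0, 0, 0],
                                    [1, 0, 0, 0, 0],
                                    [0, 1, 0, 0, 0]], dtype=float))

    # P maps each copy back to its original vertex (5 copies -> 4 originals).
    P = sp.csr_matrix(np.array([[1, 0, 0, 0],
                                [1, 0, 0, 0],
                                [0, 1, 0, 0],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], dtype=float))

    x = np.arange(1.0, 5.0)
    direct = A @ x
    factored = P.T @ (A_emb @ (P @ x))     # copies could live on different ranks
    print(np.allclose(direct, factored))   # True
    ```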

  18. Large-Scale Proteomics of the Cassava Storage Root and Identification of a Target Gene to Reduce Postharvest Deterioration

    PubMed Central

    Vanderschuren, Hervé; Nyaboga, Evans; Poon, Jacquelyne S.; Baerenfaller, Katja; Grossmann, Jonas; Hirsch-Hoffmann, Matthias; Kirchgessner, Norbert; Nanni, Paolo; Gruissem, Wilhelm

    2014-01-01

    Cassava (Manihot esculenta) is the most important root crop in the tropics, but rapid postharvest physiological deterioration (PPD) of the root is a major constraint to commercial cassava production. We established a reliable method for image-based PPD symptom quantification and used label-free quantitative proteomics to generate an extensive cassava root and PPD proteome. Over 2600 unique proteins were identified in the cassava root, and nearly 300 proteins showed significant abundance regulation during PPD. We identified protein abundance modulation in pathways associated with oxidative stress, phenylpropanoid biosynthesis (including scopoletin), the glutathione cycle, fatty acid α-oxidation, folate transformation, and the sulfate reduction II pathway. Increasing protein abundances and enzymatic activities of glutathione-associated enzymes, including glutathione reductases, glutaredoxins, and glutathione S-transferases, indicated a key role for ascorbate/glutathione cycles. Based on combined proteomics data, enzymatic activities, and lipid peroxidation assays, we identified glutathione peroxidase as a candidate for reducing PPD. Transgenic cassava overexpressing a cytosolic glutathione peroxidase in storage roots showed delayed PPD and reduced lipid peroxidation as well as decreased H2O2 accumulation. Quantitative proteomics data from ethene and phenylpropanoid pathways indicate additional gene candidates to further delay PPD. Cassava root proteomics data are available at www.pep2pro.ethz.ch for easy access and comparison with other proteomics data. PMID:24876255

  19. First Large-Scale Proteogenomic Study of Breast Cancer Provides Insight into Potential Therapeutic Targets | Office of Cancer Clinical Proteomics Research

    Cancer.gov

    News Release: May 25, 2016 — Building on data from The Cancer Genome Atlas (TCGA) project, a multi-institutional team of scientists has completed the first large-scale “proteogenomic” study of breast cancer, linking DNA mutations to protein signaling and helping pinpoint the genes that drive cancer.

  20. NAEP Validity Studies: Improving the Information Value of Performance Items in Large Scale Assessments. Working Paper No. 2003-08

    ERIC Educational Resources Information Center

    Pearson, P. David; Garavaglia, Diane R.

    2003-01-01

    The purpose of this essay is to explore both what is known and what needs to be learned about the information value of performance items "when they are used in large scale assessments." Within the context of the National Assessment of Educational Progress (NAEP), there is substantial motivation for answering these questions. Over the…

  1. High-throughput database search and large-scale negative polarity liquid chromatography-tandem mass spectrometry with ultraviolet photodissociation for complex proteomic samples.

    PubMed

    Madsen, James A; Xu, Hua; Robinson, Michelle R; Horton, Andrew P; Shaw, Jared B; Giles, David K; Kaoud, Tamer S; Dalby, Kevin N; Trent, M Stephen; Brodbelt, Jennifer S

    2013-09-01

    The use of ultraviolet photodissociation (UVPD) for the activation and dissociation of peptide anions is evaluated for broader coverage of the proteome. To facilitate interpretation and assignment of the resulting UVPD mass spectra of peptide anions, the MassMatrix database search algorithm was modified to allow automated analysis of negative polarity MS/MS spectra. The new UVPD algorithms were developed based on the MassMatrix database search engine by adding specific fragmentation pathways for UVPD. The new UVPD fragmentation pathways in MassMatrix were rigorously and statistically optimized using two large data sets with high mass accuracy and high mass resolution for both MS(1) and MS(2) data acquired on an Orbitrap mass spectrometer for complex Halobacterium and HeLa proteome samples. Negative mode UVPD led to the identification of 3663 and 2350 peptides for the Halo and HeLa tryptic digests, respectively, corresponding to 655 and 645 peptides that were unique when compared with electron transfer dissociation (ETD), higher energy collision-induced dissociation, and collision-induced dissociation results for the same digests analyzed in the positive mode. In sum, 805 and 619 proteins were identified via UVPD for the Halobacterium and HeLa samples, respectively, with 49 and 50 unique proteins identified in contrast to the more conventional MS/MS methods. The algorithm also features automated charge determination for low mass accuracy data, precursor filtering (including intact charge-reduced peaks), and the ability to combine both positive and negative MS/MS spectra into a single search, and it is freely open to the public. The accuracy and specificity of the MassMatrix UVPD search algorithm was also assessed for low resolution, low mass accuracy data on a linear ion trap. Analysis of a known mixture of three mitogen-activated kinases yielded similar sequence coverage percentages for UVPD of peptide anions versus conventional collision-induced dissociation of

  2. High-throughput Database Search and Large-scale Negative Polarity Liquid Chromatography–Tandem Mass Spectrometry with Ultraviolet Photodissociation for Complex Proteomic Samples

    PubMed Central

    Madsen, James A.; Xu, Hua; Robinson, Michelle R.; Horton, Andrew P.; Shaw, Jared B.; Giles, David K.; Kaoud, Tamer S.; Dalby, Kevin N.; Trent, M. Stephen; Brodbelt, Jennifer S.

    2013-01-01

    The use of ultraviolet photodissociation (UVPD) for the activation and dissociation of peptide anions is evaluated for broader coverage of the proteome. To facilitate interpretation and assignment of the resulting UVPD mass spectra of peptide anions, the MassMatrix database search algorithm was modified to allow automated analysis of negative polarity MS/MS spectra. The new UVPD algorithms were developed based on the MassMatrix database search engine by adding specific fragmentation pathways for UVPD. The new UVPD fragmentation pathways in MassMatrix were rigorously and statistically optimized using two large data sets with high mass accuracy and high mass resolution for both MS1 and MS2 data acquired on an Orbitrap mass spectrometer for complex Halobacterium and HeLa proteome samples. Negative mode UVPD led to the identification of 3663 and 2350 peptides for the Halo and HeLa tryptic digests, respectively, corresponding to 655 and 645 peptides that were unique when compared with electron transfer dissociation (ETD), higher energy collision-induced dissociation, and collision-induced dissociation results for the same digests analyzed in the positive mode. In sum, 805 and 619 proteins were identified via UVPD for the Halobacterium and HeLa samples, respectively, with 49 and 50 unique proteins identified in contrast to the more conventional MS/MS methods. The algorithm also features automated charge determination for low mass accuracy data, precursor filtering (including intact charge-reduced peaks), and the ability to combine both positive and negative MS/MS spectra into a single search, and it is freely open to the public. The accuracy and specificity of the MassMatrix UVPD search algorithm was also assessed for low resolution, low mass accuracy data on a linear ion trap. Analysis of a known mixture of three mitogen-activated kinases yielded similar sequence coverage percentages for UVPD of peptide anions versus conventional collision-induced dissociation of

  3. Improving predictions of large scale soil carbon dynamics: Integration of fine-scale hydrological and biogeochemical processes, scaling, and benchmarking

    NASA Astrophysics Data System (ADS)

    Riley, W. J.; Dwivedi, D.; Ghimire, B.; Hoffman, F. M.; Pau, G. S. H.; Randerson, J. T.; Shen, C.; Tang, J.; Zhu, Q.

    2015-12-01

    Numerical model representations of decadal- to centennial-scale soil-carbon dynamics are a dominant cause of uncertainty in climate change predictions. Recent attempts by some Earth System Model (ESM) teams to integrate previously unrepresented soil processes (e.g., explicit microbial processes, abiotic interactions with mineral surfaces, vertical transport), poor performance of many ESM land models against large-scale and experimental manipulation observations, and complexities associated with spatial heterogeneity highlight the nascent nature of our community's ability to accurately predict future soil carbon dynamics. I will present recent work from our group to develop a modeling framework to integrate pore-, column-, watershed-, and global-scale soil process representations into an ESM (ACME), and apply the International Land Model Benchmarking (ILAMB) package for evaluation. At the column scale and across a wide range of sites, observed depth-resolved carbon stocks and their 14C derived turnover times can be explained by a model with explicit representation of two microbial populations, a simple representation of mineralogy, and vertical transport. Integrating soil and plant dynamics requires a 'process-scaling' approach, since all aspects of the multi-nutrient system cannot be explicitly resolved at ESM scales. I will show that one approach, the Equilibrium Chemistry Approximation, improves predictions of forest nitrogen and phosphorus experimental manipulations and leads to very different global soil carbon predictions. Translating model representations from the site- to ESM-scale requires a spatial scaling approach that either explicitly resolves the relevant processes, or more practically, accounts for fine-resolution dynamics at coarser scales. To that end, I will present recent watershed-scale modeling work that applies reduced order model methods to accurately scale fine-resolution soil carbon dynamics to coarse-resolution simulations. Finally, we

  4. Large-scale public-private partnership for improving TB-HIV services for high-risk groups in India.

    PubMed

    Kane, S; Dewan, P K; Gupta, D; Wi, T; Das, A; Singh, A; Bitra, G; Chauhan, L S; Dallabetta, G

    2010-08-01

    In India, the Revised National Tuberculosis Control Programme and a large-scale human immunodeficiency virus (HIV) prevention project partnered to deliver enhanced TB screening services for HIV high-risk groups. Between July 2007 and September 2008, 134 non-governmental organisations (NGOs) operating 412 clinics and community-based outreach services, screened 124 371 high-risk individuals and referred 3749 (3.01%) for TB diagnosis. Of these, 849 (23%) were diagnosed with TB. India has translated this model into national policy through a public-sector funded TB-HIV partnership scheme for NGOs serving high-risk groups.

  5. Large scale tracking algorithms

    SciTech Connect

    Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett; Karelitz, David B.; Pitts, Todd Alan; Zollweg, Joshua David; Anderson, Dylan Z.; Nandy, Prabal; Whitlow, Gary L.; Bender, Daniel A.; Byrne, Raymond Harry

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately, will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.

  6. Toward system-level understanding of baculovirus–host cell interactions: from molecular fundamental studies to large-scale proteomics approaches

    PubMed Central

    Monteiro, Francisca; Carinhas, Nuno; Carrondo, Manuel J. T.; Bernal, Vicente; Alves, Paula M.

    2012-01-01

    Baculoviruses are insect viruses extensively exploited as eukaryotic protein expression vectors. Molecular biology studies have provided exciting discoveries on virus–host interactions, but the application of high-throughput omic techniques to the baculovirus–insect cell system has been hampered by the lack of a sequenced host genome. While a broader, systems-level analysis of biological responses to infection is urgently needed, recent advances in proteomic studies have yielded new insights into the impact of infection on the host cell. These works are reviewed and critically assessed in light of current knowledge of the molecular biology of baculoviruses and insect cells. PMID:23162544

  7. Large scale dynamic systems

    NASA Technical Reports Server (NTRS)

    Doolin, B. F.

    1975-01-01

    Classes of large scale dynamic systems were discussed in the context of modern control theory. Specific examples discussed were in the technical fields of aeronautics, water resources and electric power.

  8. Scalable methodology for large scale building energy improvement: Relevance of calibration in model-based retrofit analysis

    SciTech Connect

    Heo, Yeonsook; Augenbroe, Godfried; Graziano, Diane; Muehleisen, Ralph T.; Guzowski, Leah

    2015-05-01

    The increasing interest in retrofitting existing buildings is motivated by the need to make a major contribution to enhancing building energy efficiency and reducing energy consumption and CO2 emissions by the built environment. This paper examines the relevance of calibration in model-based analysis to support decision-making for energy and carbon efficiency retrofits of individual buildings and portfolios of buildings. The authors formulate a set of real retrofit decision-making situations and evaluate the role of calibration by using a case study that compares predictions and decisions from an uncalibrated model with those of a calibrated model. The case study illustrates both the mechanics and outcomes of a practical alternative to the expert- and time-intensive application of dynamic energy simulation models for large-scale retrofit decision-making under uncertainty.

  9. Proteomics: a biotechnology tool for crop improvement.

    PubMed

    Eldakak, Moustafa; Milad, Sanaa I M; Nawar, Ali I; Rohila, Jai S

    2013-01-01

    A sharp decline in the availability of arable land and of sufficient irrigation water, along with a continuous steep increase in food demand, has put pressure on farmers to produce more with fewer resources. A viable solution to relieve this pressure is to speed up the plant breeding process by employing biotechnology in breeding programs. The majority of biotechnological applications rely on information generated from various -omic technologies. The latest improvements in proteomic platforms, together with related advances in plant biotechnology techniques, offer various new ways to encourage the use of these technologies by plant scientists for crop improvement programs. A combinatorial approach of accelerated gene discovery through genomics, proteomics, and other associated -omic branches of biotechnology is proving to be an effective way to speed up crop improvement programs worldwide. In the near future, swift improvements in -omic databases will be critical and demand immediate attention for the effective utilization of these techniques to produce next-generation crops for progressive farmers. Here, we review recent advances in proteomics, as tools of biotechnology, which offer great promise and are leading the path toward crop improvement for sustainable agriculture.

  10. Proteomics: a biotechnology tool for crop improvement

    PubMed Central

    Eldakak, Moustafa; Milad, Sanaa I. M.; Nawar, Ali I.; Rohila, Jai S.

    2013-01-01

    A sharp decline in the availability of arable land and of sufficient irrigation water, along with a continuous steep increase in food demand, has put pressure on farmers to produce more with fewer resources. A viable solution to relieve this pressure is to speed up the plant breeding process by employing biotechnology in breeding programs. The majority of biotechnological applications rely on information generated from various -omic technologies. The latest improvements in proteomic platforms, together with related advances in plant biotechnology techniques, offer various new ways to encourage the use of these technologies by plant scientists for crop improvement programs. A combinatorial approach of accelerated gene discovery through genomics, proteomics, and other associated -omic branches of biotechnology is proving to be an effective way to speed up crop improvement programs worldwide. In the near future, swift improvements in -omic databases will be critical and demand immediate attention for the effective utilization of these techniques to produce next-generation crops for progressive farmers. Here, we review recent advances in proteomics, as tools of biotechnology, which offer great promise and are leading the path toward crop improvement for sustainable agriculture. PMID:23450788

  11. Atomistic Origin of Brittle Failure of Boron Carbide from Large-Scale Reactive Dynamics Simulations: Suggestions toward Improved Ductility

    NASA Astrophysics Data System (ADS)

    An, Qi; Goddard, William A.

    2015-09-01

    Ceramics are strong, but their low fracture toughness prevents extended engineering applications. In particular, boron carbide (B4C), the third hardest material in nature, has not been incorporated into many commercial applications because it exhibits anomalous failure when subjected to hypervelocity impact. To determine the atomistic origin of this brittle failure, we performed large-scale (~200 000 atoms/cell) reactive-molecular-dynamics simulations of shear deformations of B4C, using a quantum-mechanics-derived reactive force field. We examined the (0001)/⟨101̄0⟩ slip system related to deformation twinning and the (011̄1̄)/⟨1̄101⟩ slip system related to amorphous band formation. We find that brittle failure in B4C arises from the formation of higher-density amorphous bands due to fracture of the icosahedra, a unique feature of these boron-based materials. This leads to negative pressure and cavitation, resulting in crack opening. Thus, to design ductile materials based on B4C we propose alloying aimed at promoting shear relaxation through intericosahedral slip that avoids icosahedral fracture.

  12. Large-scale proteomics combined with transgenic experiments demonstrates an important role of jasmonic acid in potassium deficiency response in wheat and rice.

    PubMed

    Li, Gezi; Wu, Yufang; Liu, Guoyu; Xiao, Xianghong; Wang, Pengfei; Gao, Tian; Xu, Mengjun; Han, Qiaoxia; Wang, Yonghua; Guo, Tiancai; Kang, Guozhang

    2017-08-18

    Potassium (K+) is the most abundant inorganic cation in plants, and molecular dissection of the K+ deficiency response has received considerable interest in order to minimize K+ fertilizer input and develop high-quality, K+-efficient crops. However, the molecular mechanism of plant responses to K+ deficiency is still poorly understood. In this study, two-week-old bread wheat seedlings grown hydroponically in Hoagland solution were transferred to K+-free conditions for 8 d, and their root and leaf proteome profiles were assessed using the iTRAQ proteome method. Over 4,000 unique proteins were identified, and 818 K+-responsive protein species showed significant differences in abundance. The differentially expressed protein species were associated with diverse functions and exhibited organ-specific differences. Most of the differentially expressed protein species related to hormone synthesis were involved in jasmonic acid (JA) synthesis, and the increased abundance of JA synthesis-related enzymes could account for the increased JA concentrations. The abundance of allene oxide synthase (TaAOS), a key JA synthesis-related enzyme, was significantly increased in K+-deficient wheat seedlings, and its overexpression markedly increased concentrations of K+ and JA, altered the transcription levels of some genes encoding K+-responsive protein species, and enhanced the tolerance of rice plants to low K+ or K+ deficiency. Moreover, a rice AOS mutant (osaos) was more sensitive to low K+ or K+ deficiency. Our findings highlight the importance of JA in the K+ deficiency response and imply a network of molecular processes underlying plant responses to K+ deficiency. Copyright © 2017, The American Society for Biochemistry and Molecular Biology.
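
    Calling a protein species 'K+-responsive' ultimately comes down to a per-protein comparison of quantitative reporter-ion values between conditions. The study's exact statistical pipeline is not described above, so the following is only a generic sketch (the t-test, Benjamini-Hochberg FDR control, and 1.5-fold cutoff are assumptions) of how differentially abundant proteins can be flagged from log-transformed iTRAQ intensities.

    ```python
    import numpy as np
    from scipy import stats
    from statsmodels.stats.multitest import multipletests

    def differential_proteins(log_control, log_deficient, fdr=0.05, min_fold=1.5):
        """Flag differentially abundant proteins between two conditions.

        log_control, log_deficient : (n_proteins, n_replicates) arrays of
        log2-transformed reporter-ion intensities.
        Returns a boolean mask over proteins.
        """
        _, p = stats.ttest_ind(log_deficient, log_control, axis=1)
        reject, _, _, _ = multipletests(p, alpha=fdr, method="fdr_bh")  # BH FDR control
        log_fc = log_deficient.mean(axis=1) - log_control.mean(axis=1)  # log2 fold change
        return reject & (np.abs(log_fc) >= np.log2(min_fold))
    ```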

  13. Large-scale proteome analysis of abscisic acid and ABSCISIC ACID INSENSITIVE3-dependent proteins related to desiccation tolerance in Physcomitrella patens.

    PubMed

    Yotsui, Izumi; Serada, Satoshi; Naka, Tetsuji; Saruhashi, Masashi; Taji, Teruaki; Hayashi, Takahisa; Quatrano, Ralph S; Sakata, Yoichi

    2016-03-18

    Desiccation tolerance is an ancestral feature of land plants that is still retained in non-vascular plants such as bryophytes and in some vascular plants. However, except in seeds and spores, this trait is absent from the vegetative tissues of vascular plants. Although many studies have used transcriptome and proteome approaches to understand the molecular basis of desiccation tolerance, the critical molecular differences between desiccation-tolerant and non-desiccation-tolerant plants are still not clear. The moss Physcomitrella patens cannot survive rapid desiccation under laboratory conditions, but if cells of the protonemata are treated with the phytohormone abscisic acid (ABA) prior to desiccation, it can survive 24 h of desiccation and regrow after rehydration. The desiccation tolerance induced by ABA (AiDT) is specific to this hormone but also depends on the plant transcription factor ABSCISIC ACID INSENSITIVE3 (ABI3). Here we report a comparative proteomic analysis of AiDT between wild type and an ABI3 deletion mutant (Δabi3) of P. patens using iTRAQ (Isobaric Tags for Relative and Absolute Quantification). Of a total of 1980 unique proteins identified, only 16 were significantly altered in Δabi3 compared to wild type after desiccation following ABA treatment. Among this group, three of the four proteins most severely affected in Δabi3 tissue were orthologues of Arabidopsis genes expressed in maturing seeds under the regulation of ABI3. These included a Group 1 late embryogenesis abundant (LEA) protein, a short-chain dehydrogenase, and a desiccation-related protein. Our results suggest that at least three of these proteins, expressed in desiccation-tolerant cells of both Arabidopsis and the moss, are very likely to play important roles in the acquisition of desiccation tolerance in land plants. Furthermore, our results suggest that the regulatory machinery of ABA- and ABI3-mediated gene expression for desiccation

  14. Large-scale discovery of conopeptides and conoproteins in the injectable venom of a fish-hunting cone snail using a combined proteomic and transcriptomic approach.

    PubMed

    Violette, Aude; Biass, Daniel; Dutertre, Sébastien; Koua, Dominique; Piquemal, David; Pierrat, Fabien; Stöcklin, Reto; Favreau, Philippe

    2012-09-18

    Predatory marine snails of the genus Conus use venom containing a complex mixture of bioactive peptides to subdue their prey. Here we report a comprehensive analysis of the protein content of injectable venom from Conus consors, an Indo-Pacific fish-hunting cone snail. By matching MS/MS data against an extensive set of venom gland transcriptomic mRNA sequences, we identified 105 components out of ~400 molecular masses detected in the venom. Among them, we describe new conotoxins belonging to the A-, M- and O1-superfamilies as well as a novel superfamily of disulphide-free conopeptides. A high proportion of the deduced sequences (36%) corresponded to propeptide regions of the A- and M-superfamilies, raising the question of their putative role in injectable venom. Enzymatic digestion of higher molecular mass components allowed the identification of new conkunitzins (~7 kDa) and of two proteins in the 25 and 50 kDa mass ranges, characterised as actinoporin-like and hyaluronidase-like proteins, respectively. These results provide the most exhaustive and accurate proteomic overview of an injectable cone snail venom to date, and delineate the major protein families present in the delivered venom. This study demonstrates the feasibility of this analytical approach and paves the way for transcriptomics-assisted strategies in drug discovery.

  15. Large-scale phosphotyrosine proteomic profiling of rat renal collecting duct epithelium reveals predominance of proteins involved in cell polarity determination.

    PubMed

    Zhao, Boyang; Knepper, Mark A; Chou, Chung-Lin; Pisitkun, Trairak

    2012-01-01

    Although extensive phosphoproteomic information is available for renal epithelial cells, previous emphasis has been on phosphorylation of serines and threonines, with little focus on tyrosine phosphorylation. Here we have carried out large-scale identification of phosphotyrosine sites in pervanadate-treated native inner medullary collecting ducts of rat, with a view towards identifying physiological processes in epithelial cells that are potentially regulated by tyrosine phosphorylation. The method combined antibody-based affinity purification of tyrosine-phosphorylated peptides with immobilized metal ion chromatography to enrich tyrosine phosphopeptides, which were then identified by LC-MS/MS. A total of 418 unique tyrosine phosphorylation sites in 273 proteins were identified. A large fraction of these sites have not been previously reported in standard phosphoproteomic databases. All results are accessible via an online database: http://helixweb.nih.gov/ESBL/Database/iPY/. Analysis of surrounding sequences revealed four overrepresented motifs: [D/E]xxY*, Y*xxP, DY*, and Y*E, where the asterisk indicates the site of phosphorylation. These motifs, plus contextual information integrated using the NetworKIN tool, suggest that the protein tyrosine kinases involved include members of the insulin- and ephrin-receptor kinase families. Analysis of the gene ontology (GO) terms and KEGG pathways whose protein elements are overrepresented in our data set points to structures involved in epithelial cell-cell and cell-matrix interactions ("adherens junction," "tight junction," and "focal adhesion") and to components of the actin cytoskeleton as major sites of tyrosine phosphorylation in these cells. In general, these findings mesh well with evidence that tyrosine phosphorylation plays a key role in epithelial polarity determination.
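
    Motif overrepresentation of the kind reported above starts from a simple tally of sequence windows centred on each phosphotyrosine. The sketch below is only illustrative: the 15-mer window size, the centre index, and the regular-expression encoding of the motifs are assumptions, and a real analysis would also compare the tallies against a background set of non-phosphorylated tyrosines.

    ```python
    import re
    from collections import Counter

    # Motifs reported above; the asterisk marks the phosphorylated tyrosine.
    # Each entry: (regex, position of the Y inside the motif, motif length).
    MOTIFS = {
        "[D/E]xxY*": (r"[DE]..Y", 3, 4),
        "Y*xxP":     (r"Y..P",    0, 4),
        "DY*":       (r"DY",      1, 2),
        "Y*E":       (r"YE",      0, 2),
    }

    def count_motifs(windows, centre=7):
        """Tally motif matches over sequence windows centred on the phospho-Y.

        windows : iterable of 15-mer sequence windows (phosphosite at index `centre`).
        """
        counts = Counter()
        for w in windows:
            for name, (pattern, y_pos, length) in MOTIFS.items():
                start = centre - y_pos
                segment = w[start:start + length]
                if len(segment) == length and re.fullmatch(pattern, segment):
                    counts[name] += 1
        return counts

    # Example: count_motifs(["AAAADKEYAAPAAAA"]) matches "[D/E]xxY*" and "Y*xxP".
    ```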

  16. Leveraging Genomics Software to Improve Proteomics Results

    SciTech Connect

    Fodor, I K; Nelson, D O

    2005-09-06

    Rigorous data analysis techniques are essential in quantifying the differential expression of proteins in biological samples of interest. Statistical methods from the microarray literature were applied to the analysis of two-dimensional difference gel electrophoresis (2-D DIGE) proteomics experiments, in the context of technical variability studies involving human plasma. Protein expression measurements were corrected to account for observed intensity-dependent biases within gels, and normalized to mitigate observed gel to gel variations. The methods improved upon the results achieved using the best currently available 2-D DIGE proteomics software. The spot-wise protein variance was reduced by 10% and the number of apparently differentially expressed proteins was reduced by over 50%.
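
    One way to correct the intensity-dependent bias described above is an MA/loess normalisation of the kind used for microarrays. The sketch below is only a generic illustration under that assumption; the use of an internal-standard channel as reference and the lowess span are assumptions, since the study's exact procedure is not specified here.

    ```python
    import numpy as np
    from statsmodels.nonparametric.smoothers_lowess import lowess

    def ma_normalize(sample, reference, frac=0.4):
        """Intensity-dependent (MA-style) bias correction for one gel.

        sample, reference : 1-D arrays of spot volumes for the labelled sample and
        a reference channel (e.g., a pooled internal standard) on the same gel.
        Returns bias-corrected log2 ratios.
        """
        M = np.log2(sample) - np.log2(reference)           # log ratio per spot
        A = 0.5 * (np.log2(sample) + np.log2(reference))   # mean log intensity
        trend = lowess(M, A, frac=frac, return_sorted=False)
        return M - trend                                   # remove the intensity trend
    ```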

  17. Large-scale preparation of shape controlled SnO and improved capacitance for supercapacitors: from nanoclusters to square microplates

    NASA Astrophysics Data System (ADS)

    Wang, Lu; Ji, Hongmei; Zhu, Feng; Chen, Zhi; Yang, Yang; Jiang, Xuefan; Pinto, João; Yang, Gang

    2013-07-01

    Here, we first provide a facile ultrasonic-assisted synthesis of SnO using SnCl2 and the organic solvent of ethanolamine (ETA). The moderate alkalinity of ETA and ultrasound play very important roles in the synthesis of SnO. After the hydrolysis of the intermediate of ETA-Sn(ii), the as-synthesized SnO nanoclusters undergo assembly, amalgamation, and preferential growth to microplates in hydrothermal treatment. The as-synthesized SnO was characterized by scanning electron microscopy (SEM), transmission electron microscopy (TEM), high-resolution transmission electron microscopy (HRTEM), ultraviolet-visible absorption spectroscopy (UV-vis) and X-ray diffraction (XRD). To explore its potential applications in energy storage, SnO was fabricated into a supercapacitor electrode and characterized by cyclic voltammetry (CV), electrochemical impedance spectroscopy (EIS), and galvanostatic charge-discharge measurements. The as-synthesized SnO exhibits remarkable pseudocapacitive activity including high specific capacitance (208.9 F g-1 at 0.1 A g-1), good rate capability (65.8 F g-1 at 40 A g-1), and excellent cycling stability (retention 119.3% after 10 000 cycles) for application in supercapacitors. The capacitive behavior of SnO with various crystal morphologies was observed by fitted EIS using an equivalent circuit. The novel synthetic route for SnO is a convenient and potential way to large-scale production of microplates which is expected to be applicable in the synthesis of other metal oxide nanoparticles.

  18. Large scale scientific computing

    SciTech Connect

    Deuflhard, P. ); Engquist, B. )

    1987-01-01

    This book presents papers on large scale scientific computing. It includes: Initial value problems of ODEs and parabolic PDEs; Boundary value problems of ODEs and elliptic PDEs; Hyperbolic PDEs; Inverse problems; Optimization and optimal control problems; and Algorithm adaptation on supercomputers.

  19. Improved Large-Scale Slope Analysis on Mars Based on Correlation of Slopes Derived with Different Baselines

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Wu, B.

    2017-07-01

    The surface slopes of planetary bodies are important factors for exploration missions, such as landing site selection and rover manoeuvring. Generally, high-resolution digital elevation models (DEMs), such as those generated from the HiRISE images of Mars, are preferred for generating detailed slopes with better fidelity to terrain features. Unfortunately, high-resolution datasets normally cover only small areas and are not always available, while lower-resolution datasets, such as MOLA, provide global coverage of the Martian surface. Slopes generated from a low-resolution DEM are based on a large baseline and are smoothed relative to the real terrain. In order to carry out large-scale slope analysis of the Martian surface based on low-resolution data such as MOLA, while alleviating the smoothing caused by its low resolution, this paper presents an amplifying function for slopes derived from low-resolution DEMs based on the relationship between DEM resolution and slope. First, slope maps are derived from the HiRISE DEM (a metre-level-resolution DEM generated from HiRISE images) and from a series of down-sampled HiRISE DEMs; the latter simulate low-resolution DEMs. The high-resolution slope map is then down-sampled to the same resolution as the slope maps from the lower-resolution DEMs, so that a pixel-wise comparison can be conducted. Each pixel of the slope map derived from the lower-resolution DEM can be brought to the value of the down-sampled HiRISE slope by multiplying it by an amplifying factor. Seven sets of HiRISE images with representative terrain types are used for the correlation analysis, which shows that the relationship between the amplifying factors and the original MOLA slopes can be described by an exponential function. Verification using other datasets shows that, after applying the proposed amplifying function, the updated slope maps give better representations of slopes on the Martian surface than the original slopes.
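
    As a purely illustrative sketch of the idea, the amplifying function could be fitted and applied as below. The specific exponential form, the initial parameter guess, and the variable names are assumptions; the abstract only states that the factor-versus-slope relationship is exponential and gives no coefficients.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def amplifier(slope_coarse, a, b, c):
        """Assumed exponential form relating the amplifying factor to the coarse slope (deg)."""
        return a * np.exp(-b * slope_coarse) + c

    def fit_and_apply(slope_train, factor_train, slope_mola):
        """Fit the amplifying function on training pixels, then update a MOLA slope map.

        slope_train  : slopes from down-sampled (simulated low-resolution) DEMs
        factor_train : ratio of down-sampled HiRISE slope to coarse slope at the same pixels
        slope_mola   : coarse-resolution slope map to be amplified
        """
        params, _ = curve_fit(amplifier, slope_train, factor_train, p0=(2.0, 0.5, 1.0))
        return slope_mola * amplifier(slope_mola, *params)
    ```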

  20. Large-Scale Disasters

    NASA Astrophysics Data System (ADS)

    Gad-El-Hak, Mohamed

    "Extreme" events - including climatic events, such as hurricanes, tornadoes, and drought - can cause massive disruption to society, including large death tolls and property damage in the billions of dollars. Events in recent years have shown the importance of being prepared and that countries need to work together to help alleviate the resulting pain and suffering. This volume presents a review of the broad research field of large-scale disasters. It establishes a common framework for predicting, controlling and managing both manmade and natural disasters. There is a particular focus on events caused by weather and climate change. Other topics include air pollution, tsunamis, disaster modeling, the use of remote sensing and the logistics of disaster management. It will appeal to scientists, engineers, first responders and health-care professionals, in addition to graduate students and researchers who have an interest in the prediction, prevention or mitigation of large-scale disasters.

  1. Improving Child Maltreatment Detection Systems: A Large-Scale Case Study Involving Health, Social Services, and School Professionals

    ERIC Educational Resources Information Center

    Cerezo, M.A.; Pons-Salvador, G.

    2004-01-01

    Objectives: The purpose of this 5-year study was to improve detection in two consecutive phases: (a) To close the gap between the number of identified cases and the actual number of cases of child abuse by increasing detection; and (b) To increase the possibility of a broader spectrum of detection. Method: The Balearic Islands (one of the…

  2. Improving Child Maltreatment Detection Systems: A Large-Scale Case Study Involving Health, Social Services, and School Professionals

    ERIC Educational Resources Information Center

    Cerezo, M.A.; Pons-Salvador, G.

    2004-01-01

    Objectives: The purpose of this 5-year study was to improve detection in two consecutive phases: (a) To close the gap between the number of identified cases and the actual number of cases of child abuse by increasing detection; and (b) To increase the possibility of a broader spectrum of detection. Method: The Balearic Islands (one of the…

  3. 30 Days Wild: Development and Evaluation of a Large-Scale Nature Engagement Campaign to Improve Well-Being

    PubMed Central

    Richardson, Miles; Cormack, Adam; McRobert, Lucy; Underhill, Ralph

    2016-01-01

    There is a need to increase people’s engagement with and connection to nature, both for human well-being and for the conservation of nature itself. In order to suggest ways for people to engage with nature and to create a wider social context that normalises nature engagement, The Wildlife Trusts developed a mass engagement campaign, 30 Days Wild. The campaign asked people to engage with nature every day for a month. 12,400 people signed up for 30 Days Wild via an online sign-up, with an estimated 18,500 taking part overall, resulting in an estimated 300,000 engagements with nature by participants. Samples of those taking part were found to have sustained increases in happiness, health, connection to nature and pro-nature behaviours. The improvement in health was predicted by the improvement in happiness, and this relationship was mediated by the change in connection to nature. PMID:26890891

  4. 30 Days Wild: Development and Evaluation of a Large-Scale Nature Engagement Campaign to Improve Well-Being.

    PubMed

    Richardson, Miles; Cormack, Adam; McRobert, Lucy; Underhill, Ralph

    2016-01-01

    There is a need to increase people's engagement with and connection to nature, both for human well-being and for the conservation of nature itself. In order to suggest ways for people to engage with nature and to create a wider social context that normalises nature engagement, The Wildlife Trusts developed a mass engagement campaign, 30 Days Wild. The campaign asked people to engage with nature every day for a month. 12,400 people signed up for 30 Days Wild via an online sign-up, with an estimated 18,500 taking part overall, resulting in an estimated 300,000 engagements with nature by participants. Samples of those taking part were found to have sustained increases in happiness, health, connection to nature and pro-nature behaviours. The improvement in health was predicted by the improvement in happiness, and this relationship was mediated by the change in connection to nature.

  5. Learning from the design and implementation of large-scale programs to improve infant and young child feeding.

    PubMed

    Baker, Jean; Sanghvi, Tina; Hajeebhoy, Nemat; Abrha, Teweldebrhan Hailu

    2013-09-01

    Improving and sustaining infant and young child feeding (IYCF) practices requires multiple interventions reaching diverse target groups over a sustained period of time. These interventions, together with improved maternal nutrition, are the cornerstones for realizing a lifetime of benefits from investing in nutrition during the 1000-day period. Here we summarize major lessons from Alive & Thrive's work to improve IYCF in three diverse settings--Bangladesh, Ethiopia, and Vietnam--drawing on reports, studies, surveys, routine monitoring, and discussions of the drivers of successful design and implementation of IYCF strategies. Teaming up with carefully selected implementing partners with strong commitment is a critical first step. As programs move to implementation at scale, strategic systems strengthening is needed to avoid operational bottlenecks. Adequate IYCF counseling takes more than training; it requires rational task allocation, substantial follow-up, and recognition of frontline workers. Investing in community demand for IYCF services should be prioritized, specifically through social mobilization and relevant media for multiple audiences. The design of behavior change communication and its implementation must be flexible and responsive to shifts in society's use of media and other social changes. Private-sector creative agencies and media companies are well equipped to market IYCF. Scaling up core IYCF interventions and maintaining quality are facilitated by national-level coordinating and information exchange mechanisms using evidence on quality and coverage. It is possible to deliver quality IYCF interventions at scale, while creating new knowledge, tools, and approaches that can be adapted by others.

  6. Improved large-scale hydrological modelling through the assimilation of streamflow and downscaled satellite soil moisture observations

    NASA Astrophysics Data System (ADS)

    López López, Patricia; Wanders, Niko; Schellekens, Jaap; Renzullo, Luigi J.; Sutanudjaja, Edwin H.; Bierkens, Marc F. P.

    2016-07-01

    The coarse spatial resolution of global hydrological models (typically > 0.25°) limits their ability to resolve key water balance processes for many river basins and thus compromises their suitability for water resources management, especially when compared to locally tuned river models. A possible solution to the problem may be to drive the coarse-resolution models with locally available high-spatial-resolution meteorological data as well as to assimilate ground-based and remotely sensed observations of key water cycle variables. While this would improve the resolution of the global model, the impact on prediction accuracy remains largely an open question. In this study, we investigate the impact of assimilating streamflow and satellite soil moisture observations on the accuracy of global hydrological model estimations, when driven by either coarse- or high-resolution meteorological observations in the Murrumbidgee River basin in Australia. To this end, a 0.08° resolution version of the PCR-GLOBWB global hydrological model is forced with downscaled global meteorological data (downscaled from 0.5° to 0.08° resolution) obtained from the WATCH Forcing Data methodology applied to ERA-Interim (WFDEI) and with a local high-resolution, gauging-station-based gridded data set (0.05°). Downscaled satellite-derived soil moisture (downscaled from ~0.5° to 0.08° resolution) from the remote observation system AMSR-E and streamflow observations collected from 23 gauging stations are assimilated using an ensemble Kalman filter. Several scenarios are analysed to explore the added value of data assimilation considering both local and global meteorological data. Results show that the assimilation of soil moisture observations results in the largest improvement of the model estimates of streamflow. The joint assimilation of both streamflow and downscaled soil moisture observations leads to further improvement in streamflow simulations (20 % reduction in RMSE
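
    The assimilation step above relies on an ensemble Kalman filter. The following is a minimal, generic stochastic EnKF analysis step with perturbed observations, not the implementation used in the study; it assumes a linear observation operator and omits the localisation and inflation an operational large-scale setup would typically need.

    ```python
    import numpy as np

    def enkf_update(ensemble, obs, H, obs_error_var, rng=np.random.default_rng(0)):
        """Stochastic EnKF analysis step with perturbed observations.

        ensemble      : (n_state, n_members) forecast states (e.g., soil moisture,
                        channel storage per grid cell)
        obs           : (n_obs,) observed streamflow / soil moisture values
        H             : (n_obs, n_state) linear(ised) observation operator
        obs_error_var : (n_obs,) observation error variances
        Returns the analysis (updated) ensemble.
        """
        n_mem = ensemble.shape[1]
        anomalies = ensemble - ensemble.mean(axis=1, keepdims=True)
        HA = H @ anomalies
        P_xy = anomalies @ HA.T / (n_mem - 1)            # state-observation covariance
        P_yy = HA @ HA.T / (n_mem - 1) + np.diag(obs_error_var)
        K = P_xy @ np.linalg.inv(P_yy)                   # Kalman gain
        # Perturb observations so the analysis ensemble keeps the right spread.
        obs_pert = obs[:, None] + rng.normal(0.0, np.sqrt(obs_error_var)[:, None], (len(obs), n_mem))
        return ensemble + K @ (obs_pert - H @ ensemble)
    ```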

  7. Large Scale Nonlinear Programming.

    DTIC Science & Technology

    1978-06-15

    Keywords: large scale optimization; applications of nonlinear ... NONLINEAR PROGRAMMING, by Garth P. McCormick. 1. Introduction. The general mathematical programming (optimization) problem can be stated in the following form... because the difficulty in solving a general nonlinear optimization problem has as much to do with the nature of the functions involved as it does with the

  8. Improved large-scale hydrological modelling through the assimilation of streamflow and downscaled satellite soil moisture observations.

    NASA Astrophysics Data System (ADS)

    López López, Patricia; Wanders, Niko; Sutanudjaja, Edwin; Renzullo, Luigi; Sterk, Geert; Schellekens, Jaap; Bierkens, Marc

    2015-04-01

    The coarse spatial resolution of global hydrological models (typically > 0.25°) often limits their ability to resolve key water balance processes for many river basins and thus compromises their suitability for water resources management, especially when compared to locally tuned river models. A possible solution to the problem may be to drive the coarse-resolution models with high-resolution meteorological data as well as to assimilate ground-based and remotely sensed observations of key water cycle variables. While this would improve the modelling resolution of the global model, the impact on prediction accuracy remains largely an open question. In this study we investigated the impact that assimilating streamflow and satellite soil moisture observations has on global hydrological model estimation, driven by coarse- and high-resolution meteorological observations, for the Murrumbidgee river basin in Australia. The PCR-GLOBWB global hydrological model is forced with downscaled global climatological data (from 0.5° downscaled to 0.1° resolution) obtained from the WATCH Forcing Data (WFDEI) and with local high-resolution, gauging-station-based gridded datasets (0.05°), sourced from the Australian Bureau of Meteorology. Downscaled satellite-derived soil moisture (from 0.5° downscaled to 0.1° resolution) from AMSR-E and streamflow observations collected from 25 gauging stations are assimilated using an ensemble Kalman filter. Several scenarios are analysed to explore the added value of data assimilation considering both local and global climatological data. Results show that the assimilation of streamflow observations results in the largest improvement of the model estimates. The joint assimilation of both streamflow and downscaled soil moisture observations leads to further improvements in streamflow simulations (10% reduction in RMSE), mainly in the headwater catchments (up to 10,000 km2). Results also show that the added contribution of data assimilation, for both soil

  9. 2D materials advances: from large scale synthesis and controlled heterostructures to improved characterization techniques, defects and applications

    SciTech Connect

    Lin, Zhong; McCreary, Amber; Briggs, Natalie; Subramanian, Shruti; Zhang, Kehao; Sun, Yifan; Li, Xufan; Borys, Nicholas J.; Yuan, Hongtao; Fullerton-Shirey, Susan K.; Chernikov, Alexey; Zhao, Hui; McDonnell, Stephen; Lindenberg, Aaron M.; Xiao, Kai; LeRoy, Brian J.; Drndić, Marija; Hwang, James C. M.; Park, Jiwoong; Chhowalla, Manish; Schaak, Raymond E.; Javey, Ali; Hersam, Mark C.; Robinson, Joshua; Terrones, Mauricio

    2016-12-08

    The rise of two-dimensional (2D) materials research took place following the isolation of graphene in 2004. These new 2D materials include transition metal dichalcogenides, mono-elemental 2D sheets, and several carbide- and nitride-based materials. The number of publications related to these emerging materials has been drastically increasing over the last five years. Thus, through this comprehensive review, we aim to discuss the most recent groundbreaking discoveries as well as emerging opportunities and remaining challenges. This review starts out by delving into the improved methods of producing these new 2D materials via controlled exfoliation, metal organic chemical vapor deposition, and wet chemical means. Here we look into recent studies of doping as well as the optical properties of 2D materials and their heterostructures. Recent advances towards applications of these materials in 2D electronics are also reviewed, and include the tunnel MOSFET and ways to reduce the contact resistance for fabricating high-quality devices. Finally, several unique and innovative applications recently explored are discussed as well as perspectives of this exciting and fast moving field.

  10. 2D materials advances: from large scale synthesis and controlled heterostructures to improved characterization techniques, defects and applications

    DOE PAGES

    Lin, Zhong; McCreary, Amber; Briggs, Natalie; ...

    2016-12-08

    The rise of two-dimensional (2D) materials research took place following the isolation of graphene in 2004. These new 2D materials include transition metal dichalcogenides, mono-elemental 2D sheets, and several carbide- and nitride-based materials. The number of publications related to these emerging materials has been drastically increasing over the last five years. Thus, through this comprehensive review, we aim to discuss the most recent groundbreaking discoveries as well as emerging opportunities and remaining challenges. This review starts out by delving into the improved methods of producing these new 2D materials via controlled exfoliation, metal organic chemical vapor deposition, and wet chemical means. Here we look into recent studies of doping as well as the optical properties of 2D materials and their heterostructures. Recent advances towards applications of these materials in 2D electronics are also reviewed, and include the tunnel MOSFET and ways to reduce the contact resistance for fabricating high-quality devices. Finally, several unique and innovative applications recently explored are discussed as well as perspectives of this exciting and fast moving field.

  11. Large-scale binding ligand prediction by improved patch-based method Patch-Surfer2.0

    PubMed Central

    Zhu, Xiaolei; Xiong, Yi; Kihara, Daisuke

    2015-01-01

    Motivation: Ligand binding is a key aspect of the function of many proteins. Thus, binding ligand prediction provides important insight in understanding the biological function of proteins. Binding ligand prediction is also useful for drug design and examining potential drug side effects. Results: We present a computational method named Patch-Surfer2.0, which predicts binding ligands for a protein pocket. By representing and comparing pockets at the level of small local surface patches that characterize physicochemical properties of the local regions, the method can identify binding pockets of the same ligand even if they do not share globally similar shapes. Properties of local patches are represented by an efficient mathematical representation, the 3D Zernike Descriptor. Patch-Surfer2.0 has significant technical improvements over our previous prototype, including a new feature that captures approximate patch position with a geodesic distance histogram. Moreover, we constructed a large comprehensive database of ligand binding pockets to be searched against by a query. The benchmark shows better performance of Patch-Surfer2.0 over existing methods. Availability and implementation: http://kiharalab.org/patchsurfer2.0/ Contact: dkihara@purdue.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25359888
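
    To make the patch-based comparison concrete, the sketch below scores two pockets by optimally matching their patch descriptor vectors under per-feature weights. It is only an illustration of the general idea; Patch-Surfer2.0's actual descriptors, weighting, and scoring scheme are more elaborate, and the feature layout assumed here (3D Zernike terms plus a geodesic-distance histogram per row) is an assumption.

    ```python
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def pocket_distance(patches_a, patches_b, weights):
        """Compare two binding pockets patch-by-patch.

        patches_a, patches_b : (n_patches, n_features) descriptor arrays; each row
        could hold, e.g., 3D Zernike terms for several surface properties plus a
        geodesic-distance histogram (layout assumed here).
        weights : (n_features,) relative importance of each feature.
        Returns a single dissimilarity score (lower means more similar pockets).
        """
        w = np.sqrt(np.asarray(weights, dtype=float))
        a = patches_a * w
        b = patches_b * w
        # Pairwise weighted Euclidean distances between all patches of A and B.
        cost = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
        rows, cols = linear_sum_assignment(cost)   # one-to-one patch matching
        return cost[rows, cols].mean()
    ```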

  12. 2D materials advances: from large scale synthesis and controlled heterostructures to improved characterization techniques, defects and applications

    NASA Astrophysics Data System (ADS)

    Lin, Zhong; McCreary, Amber; Briggs, Natalie; Subramanian, Shruti; Zhang, Kehao; Sun, Yifan; Li, Xufan; Borys, Nicholas J.; Yuan, Hongtao; Fullerton-Shirey, Susan K.; Chernikov, Alexey; Zhao, Hui; McDonnell, Stephen; Lindenberg, Aaron M.; Xiao, Kai; LeRoy, Brian J.; Drndić, Marija; Hwang, James C. M.; Park, Jiwoong; Chhowalla, Manish; Schaak, Raymond E.; Javey, Ali; Hersam, Mark C.; Robinson, Joshua; Terrones, Mauricio

    2016-12-01

    The rise of two-dimensional (2D) materials research took place following the isolation of graphene in 2004. These new 2D materials include transition metal dichalcogenides, mono-elemental 2D sheets, and several carbide- and nitride-based materials. The number of publications related to these emerging materials has been drastically increasing over the last five years. Thus, through this comprehensive review, we aim to discuss the most recent groundbreaking discoveries as well as emerging opportunities and remaining challenges. This review starts out by delving into the improved methods of producing these new 2D materials via controlled exfoliation, metal organic chemical vapor deposition, and wet chemical means. We look into recent studies of doping as well as the optical properties of 2D materials and their heterostructures. Recent advances towards applications of these materials in 2D electronics are also reviewed, and include the tunnel MOSFET and ways to reduce the contact resistance for fabricating high-quality devices. Finally, several unique and innovative applications recently explored are discussed as well as perspectives of this exciting and fast moving field.

  13. Large-scale Manufacturing of Nanoparticulate-based Lubrication Additives for Improved Energy Efficiency and Reduced Emissions

    SciTech Connect

    Erdemir, Ali

    2013-09-26

    This project was funded under the Department of Energy (DOE) Lab Call on Nanomanufacturing for Energy Efficiency and was directed toward the development of novel boron-based nanocolloidal lubrication additives for improving the friction and wear performance of machine components in a wide range of industrial and transportation applications. Argonne's research team concentrated on the scientific and technical aspects of the project, using a range of state-of-the-art analytical and tribological test facilities. Argonne has extensive past experience and expertise in working with boron-based solid and liquid lubrication additives, and has intellectual property ownership of several. There were two industrial collaborators in this project: Ashland Oil (represented by its Valvoline subsidiary) and Primet Precision Materials, Inc. (a leading nanomaterials company). There was also a sub-contract with the University of Arkansas. The major objectives of the project were to develop novel boron-based nanocolloidal lubrication additives and to optimize and verify their performance under boundary-lubricated sliding conditions. The project also tackled problems related to colloidal dispersion, larger-scale manufacturing and blending of nano-additives with base carrier oils. Other important issues dealt with in the project were determination of the optimum size and concentration of the particles and compatibility with various base fluids and/or additives. Boron-based particulate additives considered in this project included boric acid (H3BO3), hexagonal boron nitride (h-BN), boron oxide, and borax. As part of this project, we also explored a hybrid MoS2 + boric acid formulation approach for more effective lubrication and reported the results. The major motivation behind this work was to reduce energy losses related to friction and wear in a wide spectrum of mechanical systems and thereby reduce our dependence on imported oil. Growing concern over greenhouse gas

  14. MeSHLabeler: improving the accuracy of large-scale MeSH indexing by integrating diverse evidence

    PubMed Central

    Liu, Ke; Peng, Shengwen; Wu, Junqiu; Zhai, Chengxiang; Mamitsuka, Hiroshi; Zhu, Shanfeng

    2015-01-01

    Motivation: Medical Subject Headings (MeSH) are used by the National Library of Medicine (NLM) to index almost all citations in MEDLINE, which greatly facilitates applications in biomedical information retrieval and text mining. To reduce the time and financial cost of manual annotation, NLM has developed a software package, Medical Text Indexer (MTI), for assisting MeSH annotation, which uses k-nearest neighbors (KNN), pattern matching and indexing rules. Other types of information, such as predictions by MeSH classifiers (trained separately), can also be used for automatic MeSH annotation. However, existing methods cannot effectively integrate multiple sources of evidence for MeSH annotation. Methods: We propose a novel framework, MeSHLabeler, to integrate multiple sources of evidence for accurate MeSH annotation by using ‘learning to rank’. Evidence includes numerous predictions from MeSH classifiers, KNN, pattern matching, MTI, the correlation between different MeSH terms, etc. Each MeSH classifier is trained independently, and thus prediction scores from different classifiers are incomparable. To address this issue, we have developed an effective score normalization procedure to improve prediction accuracy. Results: MeSHLabeler won first place in Task 2A of the 2014 BioASQ challenge, achieving a Micro F-measure of 0.6248 for the 9,040 citations provided by the BioASQ challenge. Note that this is around 9.15% higher than the 0.5724 obtained by MTI. Availability and implementation: The software is available upon request. Contact: zhusf@fudan.edu.cn PMID:26072501

  15. MeSHLabeler: improving the accuracy of large-scale MeSH indexing by integrating diverse evidence.

    PubMed

    Liu, Ke; Peng, Shengwen; Wu, Junqiu; Zhai, Chengxiang; Mamitsuka, Hiroshi; Zhu, Shanfeng

    2015-06-15

    Medical Subject Headings (MeSH) are used by the National Library of Medicine (NLM) to index almost all citations in MEDLINE, which greatly facilitates applications in biomedical information retrieval and text mining. To reduce the time and financial cost of manual annotation, NLM has developed a software package, Medical Text Indexer (MTI), for assisting MeSH annotation, which uses k-nearest neighbors (KNN), pattern matching and indexing rules. Other types of information, such as predictions by MeSH classifiers (trained separately), can also be used for automatic MeSH annotation. However, existing methods cannot effectively integrate multiple sources of evidence for MeSH annotation. We propose a novel framework, MeSHLabeler, to integrate multiple sources of evidence for accurate MeSH annotation by using 'learning to rank'. Evidence includes numerous predictions from MeSH classifiers, KNN, pattern matching, MTI, the correlation between different MeSH terms, etc. Each MeSH classifier is trained independently, and thus prediction scores from different classifiers are incomparable. To address this issue, we have developed an effective score normalization procedure to improve prediction accuracy. MeSHLabeler won first place in Task 2A of the 2014 BioASQ challenge, achieving a Micro F-measure of 0.6248 for the 9,040 citations provided by the BioASQ challenge. Note that this is around 9.15% higher than the 0.5724 obtained by MTI. The software is available upon request. © The Author 2015. Published by Oxford University Press.
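
    The core difficulty MeSHLabeler addresses is that scores from independently trained classifiers live on different scales. The sketch below is not the MeSHLabeler algorithm (which uses learning to rank); it only illustrates the underlying idea with an assumed rank-based normalization and a weighted linear combination of evidence sources.

    ```python
    import numpy as np

    def normalize_scores(scores):
        """Map one evidence source's raw scores to [0, 1] by rank, so that scores
        from independently trained classifiers become comparable."""
        scores = np.asarray(scores, dtype=float)
        if scores.size < 2:
            return np.ones_like(scores)
        ranks = np.argsort(np.argsort(scores))
        return ranks / (scores.size - 1)

    def combine_evidence(evidence, weights):
        """Weighted combination of normalized scores from several evidence sources.

        evidence : dict, source name -> raw scores for one citation's candidate MeSH terms
        weights  : dict, source name -> weight (assumed learned elsewhere)
        """
        combined = None
        for name, raw in evidence.items():
            s = weights[name] * normalize_scores(raw)
            combined = s if combined is None else combined + s
        return combined
    ```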

  16. Improved Large-Scale Inundation Modelling by 1D-2D Coupling and Consideration of Hydrologic and Hydrodynamic Processes - a Case Study in the Amazon

    NASA Astrophysics Data System (ADS)

    Hoch, J. M.; Bierkens, M. F.; Van Beek, R.; Winsemius, H.; Haag, A.

    2015-12-01

    Understanding the dynamics of fluvial floods is paramount to accurate flood hazard and risk modeling. Currently, economic losses due to flooding constitute about one third of all damage resulting from natural hazards. Given future projections of climate change, the anticipated increase in the world's population and the associated implications, sound knowledge of flood hazard and related risk is crucial. Fluvial floods are cross-border phenomena that need to be addressed accordingly. Yet only a few studies model floods at the large scale, which is preferable to tiling the output of small-scale models. Most models cannot realistically simulate flood wave propagation due to a lack of either detailed channel and floodplain geometry or hydrologic processes. This study aims to develop a large-scale modeling tool that accounts for both hydrologic and hydrodynamic processes, to find and understand possible sources of error and improvement, and to assess how the added hydrodynamics affect flood wave propagation. Flood wave propagation is simulated by DELFT3D-FM (FM), a hydrodynamic model using a flexible mesh to schematize the study area. It is coupled to PCR-GLOBWB (PCR), a macro-scale hydrological model that has its own simpler 1D routing scheme (DynRout), which has already been used for global inundation modeling and flood risk assessments (GLOFRIS; Winsemius et al., 2013). A number of model set-ups are compared and benchmarked for the simulation period 1986-1996: (0) PCR with DynRout; (1) a FM 2D flexible mesh forced with PCR output; (2) as in (1) but discriminating between 1D channels and 2D floodplains; and, for comparison, (3) and (4) the same set-ups as (1) and (2) but forced with observed GRDC discharge values. Outputs are subsequently validated against observed GRDC data at Óbidos and flood extent maps from the Dartmouth Flood Observatory. The present research constitutes a first step towards a globally applicable approach to fully couple

  17. Large scale traffic simulations

    SciTech Connect

    Nagel, K.; Barrett, C.L. |; Rickert, M. |

    1997-04-01

    Large scale microscopic (i.e. vehicle-based) traffic simulations pose high demands on computational speed in at least two application areas: (i) real-time traffic forecasting, and (ii) long-term planning applications (where repeated "looping" between the microsimulation and the simulated planning of individual persons' behavior is necessary). As a rough number, a real-time simulation of an area such as Los Angeles (ca. 1 million travellers) will need a computational speed of much higher than 1 million "particle" (= vehicle) updates per second. This paper reviews how this problem is approached in different projects and how these approaches depend both on the specific questions and on the prospective user community. The approaches range from highly parallel and vectorizable, single-bit implementations on parallel supercomputers for statistical physics questions, via more realistic implementations on coupled workstations, to more complicated driving dynamics implemented again on parallel supercomputers. 45 refs., 9 figs., 1 tab.

  18. Conformational and Thermal Stability Improvements for the Large-Scale Production of Yeast-Derived Rabbit Hemorrhagic Disease Virus-Like Particles as Multipurpose Vaccine

    PubMed Central

    Méndez, Lídice; González, Nemecio; Parra, Francisco; Martín-Alonso, José M.; Limonta, Miladys; Sánchez, Kosara; Cabrales, Ania; Estrada, Mario P.; Rodríguez-Mallón, Alina; Farnós, Omar

    2013-01-01

    Recombinant virus-like particles (VLP) antigenically similar to rabbit hemorrhagic disease virus (RHDV) were recently expressed at high levels inside Pichia pastoris cells. Based on the potential of RHDV VLP as a platform for diverse vaccination purposes, we undertook the design, development and scale-up of a production process. Conformational and stability issues were addressed to improve process control and optimization. Analyses of the structure, morphology and antigenicity of these multimers were carried out at different pH values during cell disruption and purification by size-exclusion chromatography. Process steps and environmental stresses in which aggregation or conformational instability can be detected were included. These analyses revealed higher stability and recoveries of properly assembled high-purity capsids at acidic and neutral pH in phosphate buffer. The use of stabilizers during long-term storage in solution showed that sucrose, sorbitol, trehalose and glycerol acted as useful aggregation-reducing agents. The VLP emulsified in an oil-based adjuvant were subjected to accelerated thermal stress treatments. No or only slight variations were detected in the stability of the formulations and in the structure of the recovered capsids. A comprehensive analysis of scale-up strategies was accomplished and a nine-step large-scale production process was established. VLP produced after chromatographic separation protected rabbits against a lethal challenge, and the minimum protective dose was identified. Stabilized particles were ultimately assayed as carriers of a foreign viral epitope from another pathogen affecting a larger animal species. For that purpose, a linear protective B-cell epitope from the Classical Swine Fever Virus (CSFV) E2 envelope protein was chemically coupled to RHDV VLP. Conjugates were able to present the E2 peptide fragment for immune recognition and significantly enhanced the peptide-specific antibody response in vaccinated pigs. Overall these results

  19. The transition of a large-scale quality improvement initiative: a bibliometric analysis of the Productive Ward: Releasing Time to Care programme.

    PubMed

    White, Mark; Wells, John S G; Butterworth, Tony

    2014-09-01

    To examine the literature related to a large-scale quality improvement initiative, the 'Productive Ward: Releasing Time to Care', providing a bibliometric profile that tracks the level of interest and the scale of roll-out and adoption, and discussing the implications for sustainability. Productive Ward: Releasing Time to Care (aka Productive Ward) is probably one of the most ambitious quality improvement efforts undertaken by the UK NHS. Politically and financially supported, its main driver was the NHS Institute for Innovation and Improvement. The NHS Institute closed in early 2013, leaving a void of resources, knowledge and expertise. UK roll-out of the initiative is well established and has arguably peaked; international interest in the initiative, however, continues to develop. A comprehensive literature review was undertaken to identify the literature related to the Productive Ward and its implementation (January 2006-June 2013). A bibliometric analysis examined the trends and identified and measured interest, spread and uptake. Overall distribution patterns identify a declining trend of interest, with reduced numbers of grey literature and evaluation publications. However, detailed examination of the data shows no reduction in peer-reviewed outputs. There is some evidence that international uptake of the initiative continues to generate publications and create interest. Sustaining this initiative in the UK will require re-energising, a new focus and financing. The transition period created by the closure of its creator may well contribute to further reduced levels of interest and publication outputs in the UK. However, international implementation, evaluation and associated publications could serve to attract professional/academic interest in this well-established, positively reported quality improvement initiative. This paper provides nurses and ward teams involved in quality improvement programmes with a detailed, current-state examination and analysis of the

  20. Large-scale combining signals from both biomedical literature and the FDA Adverse Event Reporting System (FAERS) to improve post-marketing drug safety signal detection

    PubMed Central

    2014-01-01

    Background Independent data sources can be used to augment post-marketing drug safety signal detection. The vast amount of publicly available biomedical literature contains rich side effect information for drugs at all clinical stages. In this study, we present a large-scale signal boosting approach that combines over 4 million records in the US Food and Drug Administration (FDA) Adverse Event Reporting System (FAERS) and over 21 million biomedical articles. Results The datasets comprise 4,285,097 records from FAERS and 21,354,075 MEDLINE articles. We first extracted all drug-side effect (SE) pairs from FAERS. Our study implemented a total of seven signal ranking algorithms. We then compared these different ranking algorithms before and after they were boosted with signals from MEDLINE sentences or abstracts. Finally, we manually curated all drug-cardiovascular (CV) pairs that appeared in both data sources and investigated whether our approach can detect many true signals that have not been included in FDA drug labels. We extracted a total of 2,787,797 drug-SE pairs from FAERS with a low initial precision of 0.025. The ranking algorithm that combined signals from both FAERS and MEDLINE significantly improved the precision from 0.025 to 0.371 for top-ranked pairs, representing a 13.8-fold elevation in precision. We showed by manual curation that drug-SE pairs that appeared in both data sources were highly enriched with true signals, many of which have not yet been included in FDA drug labels. Conclusions We have developed an efficient and effective drug safety signal ranking and strengthening approach. We demonstrate that combining information from FAERS and the biomedical literature at large scale can significantly contribute to drug safety surveillance. PMID:24428898

  1. Large-scale combining signals from both biomedical literature and the FDA Adverse Event Reporting System (FAERS) to improve post-marketing drug safety signal detection.

    PubMed

    Xu, Rong; Wang, QuanQiu

    2014-01-15

    Independent data sources can be used to augment post-marketing drug safety signal detection. The vast amount of publicly available biomedical literature contains rich side effect information for drugs at all clinical stages. In this study, we present a large-scale signal boosting approach that combines over 4 million records in the US Food and Drug Administration (FDA) Adverse Event Reporting System (FAERS) and over 21 million biomedical articles. The datasets comprise 4,285,097 records from FAERS and 21,354,075 MEDLINE articles. We first extracted all drug-side effect (SE) pairs from FAERS. Our study implemented a total of seven signal ranking algorithms. We then compared these different ranking algorithms before and after they were boosted with signals from MEDLINE sentences or abstracts. Finally, we manually curated all drug-cardiovascular (CV) pairs that appeared in both data sources and investigated whether our approach can detect many true signals that have not been included in FDA drug labels. We extracted a total of 2,787,797 drug-SE pairs from FAERS with a low initial precision of 0.025. The ranking algorithm that combined signals from both FAERS and MEDLINE significantly improved the precision from 0.025 to 0.371 for top-ranked pairs, representing a 13.8-fold elevation in precision. We showed by manual curation that drug-SE pairs that appeared in both data sources were highly enriched with true signals, many of which have not yet been included in FDA drug labels. We have developed an efficient and effective drug safety signal ranking and strengthening approach. We demonstrate that combining information from FAERS and the biomedical literature at large scale can significantly contribute to drug safety surveillance.
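
    The abstracts above do not name the seven ranking algorithms, so the sketch below is only a generic illustration: a standard proportional reporting ratio (PRR) as one possible FAERS-derived score, combined with a hypothetical logarithmic boost from MEDLINE co-mention counts. The boosting formula and the alpha weight are assumptions, not the paper's method.

    ```python
    import numpy as np

    def prr(a, b, c, d):
        """Proportional reporting ratio for one drug-side-effect (SE) pair.

        2x2 table of spontaneous reports:
          a: reports with the drug and the SE     b: the drug, other SEs
          c: other drugs with the SE              d: other drugs, other SEs
        """
        return (a / (a + b)) / (c / (c + d))

    def boosted_score(faers_score, medline_count, alpha=1.0):
        """Boost a FAERS-derived score by the number of MEDLINE sentences or
        abstracts that co-mention the drug and the side effect."""
        return faers_score * (1.0 + alpha * np.log1p(medline_count))
    ```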

  2. Re-annotation, improved large-scale assembly and establishment of a catalogue of noncoding loci for the genome of the model brown alga Ectocarpus.

    PubMed

    Cormier, Alexandre; Avia, Komlan; Sterck, Lieven; Derrien, Thomas; Wucher, Valentin; Andres, Gwendoline; Monsoor, Misharl; Godfroy, Olivier; Lipinska, Agnieszka; Perrineau, Marie-Mathilde; Van De Peer, Yves; Hitte, Christophe; Corre, Erwan; Coelho, Susana M; Cock, J Mark

    2017-04-01

    The genome of the filamentous brown alga Ectocarpus was the first to be completely sequenced from within the brown algal group and has served as a key reference genome both for this lineage and for the stramenopiles. We present a complete structural and functional reannotation of the Ectocarpus genome. The large-scale assembly of the Ectocarpus genome was significantly improved and genome-wide gene re-annotation using extensive RNA-seq data improved the structure of 11 108 existing protein-coding genes and added 2030 new loci. A genome-wide analysis of splicing isoforms identified an average of 1.6 transcripts per locus. A large number of previously undescribed noncoding genes were identified and annotated, including 717 loci that produce long noncoding RNAs. Conservation of lncRNAs between Ectocarpus and another brown alga, the kelp Saccharina japonica, suggests that at least a proportion of these loci serve a function. Finally, a large collection of single nucleotide polymorphism-based markers was developed for genetic analyses. These resources are available through an updated and improved genome database. This study significantly improves the utility of the Ectocarpus genome as a high-quality reference for the study of many important aspects of brown algal biology and as a reference for genomic analyses across the stramenopiles. © 2016 The Authors. New Phytologist © 2016 New Phytologist Trust.

  3. Evaluating a Large-Scale Community-Based Intervention to Improve Pregnancy and Newborn Health Among the Rural Poor in India

    PubMed Central

    Lalwani, Tanya; Dutta, Rahul; Rajaratnam, Julie Knoll; Ruducha, Jenny; Varkey, Leila Caleb; Wunnava, Sita; Menezes, Lysander; Taylor, Catharine; Bernson, Jeff

    2015-01-01

    Objectives. We evaluated the effectiveness of the Sure Start project, which was implemented in 7 districts of Uttar Pradesh, India, to improve maternal and newborn health. Methods. Interventions were implemented at 2 randomly assigned levels of intensity. Forty percent of the areas received a more intense intervention, including community-level meetings with expectant mothers. A baseline survey consisted of 12 000 women who completed pregnancy in 2007; a follow-up survey was conducted for women in 2010 in the same villages. Our quantitative analyses provide an account of the project’s impact. Results. We observed significant health improvements in both intervention areas over time; in the more intensive intervention areas, we found greater improvements in care-seeking and healthy behaviors. The more intensive intervention areas did not experience a significantly greater decline in neonatal mortality. Conclusions. This study demonstrates that community-based efforts, especially mothers’ group meetings designed to increase care-seeking and healthy behaviors, are effective and can be implemented at large scale. PMID:25393175

  4. Health risks from large-scale water pollution: Current trends and implications for improving drinking water quality in the lower Amu Darya drainage basin, Uzbekistan

    NASA Astrophysics Data System (ADS)

    Törnqvist, Rebecka; Jarsjö, Jerker

    2010-05-01

    Safe drinking water is a primary prerequisite to human health, well-being and development. Yet, there are roughly one billion people around the world that lack access to safe drinking water supply. Health risk assessments are effective for evaluating the suitability of using various water sources as drinking water supply. Additionally, knowledge of pollutant transport processes on relatively large scales is needed to identify effective management strategies for improving water resources of poor quality. The lower Amu Darya drainage basin close to the Aral Sea in Uzbekistan suffers from physical water scarcity and poor water quality. This is mainly due to the intensive agricultural production in the region, which requires extensive freshwater withdrawals and use of fertilizers and pesticides. In addition, recurrent droughts in the region affect the surface water availability. On average 20% of the population in rural areas in Uzbekistan lack access to improved drinking water sources, and the situation is even more severe in the lower Amu Darya basin. In this study, we consider health risks related to water-borne contaminants by dividing measured substance concentrations by health-risk-based guideline values from the World Health Organisation (WHO). In particular, we analyse novel results of water quality measurements performed in 2007 and 2008 in the Mejdurechye Reservoir (located in the downstream part of the Amu Darya river basin). We furthermore identify large-scale trends by comparing the Mejdurechye results to reported water quality results from a considerable stretch of the Amu Darya river basin, including drainage water, river water and groundwater. The results show that concentrations of cadmium and nitrite exceed the WHO health-risk-based guideline values in Mejdurechye Reservoir. Furthermore, concentrations of the long-banned and highly toxic pesticides dichlorodiphenyltrichloroethane (DDT) and γ-hexachlorocyclohexane (γ-HCH) were detected in
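
    The screening described above amounts to a hazard-quotient-style ratio of measured concentration to a guideline value, with a ratio above 1 flagging a potential health risk. A minimal sketch with made-up concentrations; the guideline values are rounded placeholders and should be checked against current WHO tables before any real use.

```python
# Risk screening sketch: ratio of measured concentration to a WHO guideline value.
# Concentrations and guideline values below are illustrative placeholders only.

GUIDELINE_MG_PER_L = {    # rounded, for illustration; verify against current WHO tables
    "cadmium": 0.003,
    "nitrite": 3.0,
}

measured_mg_per_l = {
    "cadmium": 0.005,     # made-up sample value
    "nitrite": 4.2,       # made-up sample value
}

for substance, conc in measured_mg_per_l.items():
    ratio = conc / GUIDELINE_MG_PER_L[substance]
    flag = "exceeds guideline" if ratio > 1 else "below guideline"
    print(f"{substance}: ratio = {ratio:.2f} ({flag})")
```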

  5. SpotLight Proteomics: uncovering the hidden blood proteome improves diagnostic power of proteomics

    PubMed Central

    Lundström, Susanna L.; Zhang, Bo; Rutishauser, Dorothea; Aarsland, Dag; Zubarev, Roman A.

    2017-01-01

    The human blood proteome is frequently assessed by protein abundance profiling using a combination of liquid chromatography and tandem mass spectrometry (LC-MS/MS). In a traditional sequence database search, many good-quality MS/MS data remain unassigned. Here we uncover the hidden part of the blood proteome via the novel SpotLight approach. This method combines de novo MS/MS sequencing of enriched antibodies and co-extracted proteins with subsequent label-free quantification of new and known peptides in both enriched and unfractionated samples. In a pilot study on differentiating early stages of Alzheimer’s disease (AD) from Dementia with Lewy Bodies (DLB), at the peptide level the hidden proteome contributed almost as much information to patient stratification as the apparent proteome. Intriguingly, many of the new peptide sequences are attributable to antibody variable regions, and are potentially indicative of disease etiology. When the hidden and apparent proteomes are combined, the accuracy of differentiating AD (n = 97) and DLB (n = 47) increased from ≈85% to ≈95%. The low added burden of SpotLight proteome analysis makes it attractive for use in clinical settings. PMID:28167817
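
    One way to picture the reported gain from adding the "hidden" peptides is as feature-set concatenation before classification. The sketch below is a generic illustration with synthetic data and a plain logistic regression, not the SpotLight statistical pipeline; group sizes and effect sizes are arbitrary.

```python
# Sketch: compare classification using only "apparent" peptide features versus
# apparent + "hidden" features. Synthetic data and a generic classifier; this is
# not the workflow used in the SpotLight study.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_patients, n_apparent, n_hidden = 144, 50, 30
y = rng.integers(0, 2, n_patients)                      # 0/1 class labels (illustrative)
X_apparent = rng.normal(size=(n_patients, n_apparent)) + 0.3 * y[:, None]
X_hidden = rng.normal(size=(n_patients, n_hidden)) + 0.3 * y[:, None]

clf = LogisticRegression(max_iter=1000)
acc_apparent = cross_val_score(clf, X_apparent, y, cv=5).mean()
acc_combined = cross_val_score(clf, np.hstack([X_apparent, X_hidden]), y, cv=5).mean()
print(f"apparent only: {acc_apparent:.2f}, apparent + hidden: {acc_combined:.2f}")
```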

  6. A Large-Scale Screen for Artificial Selection in Maize Identifies Candidate Agronomic Loci for Domestication and Crop Improvement

    PubMed Central

    Yamasaki, Masanori; Tenaillon, Maud I.; Vroh Bi, Irie; Schroeder, Steve G.; Sanchez-Villeda, Hector; Doebley, John F.; Gaut, Brandon S.; McMullen, Michael D.

    2005-01-01

    Maize (Zea mays subsp mays) was domesticated from teosinte (Z. mays subsp parviglumis) through a single domestication event in southern Mexico between 6000 and 9000 years ago. This domestication event resulted in the original maize landrace varieties, which were spread throughout the Americas by Native Americans and adapted to a wide range of environmental conditions. Starting with landraces, 20th century plant breeders selected inbred lines of maize for use in hybrid maize production. Both domestication and crop improvement involved selection of specific alleles at genes controlling key morphological and agronomic traits, resulting in reduced genetic diversity relative to unselected genes. Here, we sequenced 1095 maize genes from a sample of 14 inbred lines and chose 35 genes with zero sequence diversity as potential targets of selection. These 35 genes were then sequenced in a sample of diverse maize landraces and teosintes and tested for selection. Using two statistical tests, we identified eight candidate genes. Extended gene sequencing of these eight candidate loci confirmed that six were selected throughout the gene, and the remaining two exhibited evidence of selection in the 3′ portion of each gene. The selected genes have functions consistent with agronomic selection for nutritional quality, maturity, and productivity. Our large-scale screen for artificial selection allows identification of genes of potential agronomic importance even when gene function and the phenotype of interest are unknown. PMID:16227451
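
    The screening step, picking genes with zero sequence diversity across inbred lines, can be illustrated with a small nucleotide-diversity calculation. The aligned sequences below are made up.

```python
# Sketch: per-gene nucleotide diversity (average pairwise differences per site)
# across a panel of aligned sequences; genes with zero diversity are candidate
# selection targets. Example sequences are made up.
from itertools import combinations

def nucleotide_diversity(aligned_seqs):
    """Average proportion of differing sites over all sequence pairs."""
    length = len(aligned_seqs[0])
    pairs = list(combinations(aligned_seqs, 2))
    diffs = sum(sum(a != b for a, b in zip(s1, s2)) for s1, s2 in pairs)
    return diffs / (len(pairs) * length)

genes = {
    "gene1": ["ATGCATGC", "ATGCATGC", "ATGCATGC"],   # no variation -> candidate
    "gene2": ["ATGCATGC", "ATGAATGC", "ATGCATGA"],   # segregating sites present
}
candidates = [g for g, seqs in genes.items() if nucleotide_diversity(seqs) == 0.0]
print(candidates)   # ['gene1']
```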

  7. Halobacterium salinarum NRC-1 PeptideAtlas: toward strategies for targeted proteomics and improved proteome coverage.

    PubMed

    Van, Phu T; Schmid, Amy K; King, Nichole L; Kaur, Amardeep; Pan, Min; Whitehead, Kenia; Koide, Tie; Facciotti, Marc T; Goo, Young Ah; Deutsch, Eric W; Reiss, David J; Mallick, Parag; Baliga, Nitin S

    2008-09-01

    The relatively small numbers of proteins and fewer possible post-translational modifications in microbes provide a unique opportunity to comprehensively characterize their dynamic proteomes. We have constructed a PeptideAtlas (PA) covering 62.7% of the predicted proteome of the extremely halophilic archaeon Halobacterium salinarum NRC-1 by compiling approximately 636 000 tandem mass spectra from 497 mass spectrometry runs in 88 experiments. Analysis of the PA with respect to biophysical properties of constituent peptides, functional properties of parent proteins of detected peptides, and performance of different mass spectrometry approaches has highlighted plausible strategies for improving proteome coverage and selecting signature peptides for targeted proteomics. Notably, discovery of a significant correlation between absolute abundances of mRNAs and proteins has helped identify low abundance of proteins as the major limitation in peptide detection. Furthermore, we have discovered that iTRAQ labeling for quantitative proteomic analysis introduces a significant bias in peptide detection by mass spectrometry. Therefore, despite identifying at least one proteotypic peptide for almost all proteins in the PA, a context-dependent selection of proteotypic peptides appears to be the most effective approach for targeted proteomics.
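
    Two of the summary quantities mentioned above, proteome coverage and the mRNA-protein abundance correlation, are simple to compute. The sketch below uses placeholder identifiers and abundances (the ORF counts are chosen only to echo the reported 62.7% figure), not the PeptideAtlas data.

```python
# Sketch: proteome coverage and mRNA-protein abundance correlation.
# Identifiers and abundances are placeholders, not the Halobacterium PeptideAtlas data.
from scipy.stats import spearmanr

predicted_proteome = {f"ORF{i:04d}" for i in range(1, 2401)}    # hypothetical predicted ORFs
observed_proteins = {f"ORF{i:04d}" for i in range(1, 1506)}     # proteins with >= 1 detected peptide
coverage = len(observed_proteins & predicted_proteome) / len(predicted_proteome)
print(f"proteome coverage: {coverage:.1%}")

mrna_abundance = [120.0, 35.0, 980.0, 12.0, 260.0]      # placeholder values
protein_abundance = [95.0, 40.0, 700.0, 20.0, 180.0]
rho, p = spearmanr(mrna_abundance, protein_abundance)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```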

  8. Challenges for Large Scale Simulations

    NASA Astrophysics Data System (ADS)

    Troyer, Matthias

    2010-03-01

    With computational approaches becoming ubiquitous, the growing impact of large-scale computing influences both theoretical and experimental work. I will review a few examples in condensed matter physics and quantum optics, including the impact of computer simulations in the search for supersolidity, thermometry in ultracold quantum gases, and the challenging search for novel phases in strongly correlated electron systems. While only a decade ago such simulations needed the fastest supercomputers, many simulations can now be performed on small workstation clusters or even a laptop: what was previously restricted to a few experts can now potentially be used by many. Only part of the gain in computational capabilities is due to Moore's law and improvements in hardware. Equally impressive is the performance gain due to new algorithms - as I will illustrate using some recently developed algorithms. At the same time modern peta-scale supercomputers offer unprecedented computational power and allow us to tackle new problems and address questions that were impossible to solve numerically only a few years ago. While there is a roadmap for future hardware developments to exascale and beyond, the main challenges are on the algorithmic and software infrastructure side. Among the problems that face the computational physicist are: the development of new algorithms that scale to thousands of cores and beyond, a software infrastructure that lifts code development to a higher level and speeds up the development of new simulation programs for large-scale computing machines, tools to analyze the large volume of data obtained from such simulations, and, as an emerging field, provenance-aware software that aims for reproducibility of the complete computational workflow from model parameters to the final figures. Interdisciplinary collaborations and collective efforts will be required, in contrast to the cottage-industry culture currently present in many areas of computational

  9. Exposure to Large-Scale Social and Behavior Change Communication Interventions Is Associated with Improvements in Infant and Young Child Feeding Practices in Ethiopia

    PubMed Central

    Rawat, Rahul; Mwangi, Edina M.; Tesfaye, Roman; Abebe, Yewelsew; Baker, Jean; Frongillo, Edward A.; Ruel, Marie T.; Menon, Purnima

    2016-01-01

    Optimal breastfeeding (BF) practices in Ethiopia are far below the government’s targets, and complementary feeding practices are poor. The Alive & Thrive initiative aimed to improve infant and young child feeding (IYCF) practices through large-scale implementation of social and behavior change communication interventions in four regions of Ethiopia. The study assessed the effects of the interventions on IYCF practices and anthropometry over time in two regions–Southern Nations, Nationalities and Peoples Region and Tigray. A pre- and post-intervention adequacy evaluation design was used; repeated cross-sectional surveys of households with children aged 0–23.9 mo (n = 1481 and n = 1494) and with children aged 24–59.9 mo (n = 1481 and n = 1475) were conducted at baseline (2010) and endline (2014), respectively. Differences in outcomes over time were estimated using regression models, accounting for clustering and covariates. Plausibility analyses included tracing recall of key messages and promoted foods and dose-response analyses. We observed improvements in most WHO-recommended IYCF indicators. Early BF initiation and exclusive BF increased by 13.7 and 9.4 percentage points (pp), respectively. Differences for timely introduction of complementary foods, minimum dietary diversity (MDD), minimum meal frequency (MMF), minimum acceptable diet (MAD), and consumption of iron-rich foods were 22.2, 3.3, 26.2, 3.5, and 2.7 pp, respectively. Timely introduction and intake of foods promoted by the interventions improved significantly, but anthropometric outcomes did not. We also observed a dose-response association between health post visits and early initiation of BF (OR: 1.8); higher numbers of home visits by community volunteers and key messages recalled were associated with 1.8–4.4 times greater odds of achieving MDD, MMF, and MAD, and higher numbers of radio spots heard were associated with 3 times greater odds of achieving MDD and MAD. The interventions were
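
    The dose-response associations above are reported as odds ratios. As a reminder of what those quantities are, the sketch below computes an odds ratio and a Wald 95% confidence interval from a 2x2 exposure-outcome table; the counts and the exposure definition are made up, not the study data.

```python
# Sketch: odds ratio from a 2x2 exposure-outcome table, the kind of quantity
# behind the reported dose-response associations. Counts are made up.
import math

exposed   = dict(yes=180, no=220)    # e.g. received >= N home visits (illustrative)
unexposed = dict(yes=110, no=290)    # keys: outcome achieved (yes) / not achieved (no)

odds_ratio = (exposed["yes"] * unexposed["no"]) / (exposed["no"] * unexposed["yes"])
log_or_se = math.sqrt(sum(1 / x for x in (*exposed.values(), *unexposed.values())))
ci_low, ci_high = (math.exp(math.log(odds_ratio) + z * log_or_se) for z in (-1.96, 1.96))
print(f"OR = {odds_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```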

  10. Immunological metagene signatures derived from immunogenic cancer cell death associate with improved survival of patients with lung, breast or ovarian malignancies: A large-scale meta-analysis

    PubMed Central

    Garg, Abhishek D.; De Ruysscher, Dirk; Agostinis, Patrizia

    2016-01-01

    The emerging role of the cancer cell-immune cell interface in shaping tumorigenesis/anticancer immunotherapy has increased the need to identify prognostic biomarkers. Hence, our primary aim was to identify the immunogenic cell death (ICD)-derived metagene signatures in breast, lung and ovarian cancer that associate with improved patient survival. To this end, we analyzed the prognostic impact of differential gene-expression of 33 pre-clinically-validated ICD-parameters through a large-scale meta-analysis involving 3,983 patients (‘discovery’ dataset) across lung (1,432), breast (1,115) and ovarian (1,436) malignancies. The main results were also substantiated in ‘validation’ datasets consisting of 818 patients of the same cancer types (i.e. 285 breast/274 lung/259 ovarian). The ICD-associated parameters exhibited a highly-clustered and largely cancer type-specific prognostic impact. Interestingly, we delineated ICD-derived consensus-metagene signatures that exhibited a positive prognostic impact that was either cancer type-independent or specific. Importantly, most of these ICD-derived consensus-metagenes acted as attractor-metagenes and thereby ‘attracted’ highly co-expressing sets of genes, or convergent-metagenes. These convergent-metagenes also exhibited positive prognostic impact in the respective cancer types. Remarkably, we found that the cancer type-independent consensus-metagene acted as an ‘attractor’ for cancer-specific convergent-metagenes. This reaffirms that the immunological prognostic landscape of cancer tends to segregate between cancer-independent and cancer-type specific gene signatures. Moreover, this prognostic landscape was largely dominated by the classical T cell activity/infiltration/function-related biomarkers. Interestingly, each cancer type tended to associate with biomarkers representing a specific T cell activity or function rather than pan-T cell biomarkers. Thus, our analysis confirms that ICD can serve as a

  11. The impact of a large-scale quality improvement programme on work engagement: preliminary results from a national cross-sectional-survey of the 'Productive Ward'.

    PubMed

    White, Mark; Wells, John S G; Butterworth, Tony

    2014-12-01

    Quality improvement (QI) Programmes, like the Productive Ward: Releasing-time-to-care initiative, aim to 'engage' and 'empower' ward teams to actively participate, innovate and lead quality improvement at the front line. However, little is known about the relationship and impact that QI work has on the 'engagement' of the clinical teams who participate and vice-versa. This paper explores and examines the impact of a large-scale QI programme, the Productive Ward, on the 'work engagement' of the nurses and ward teams involved. Using the Utrecht Work Engagement Scale (UWES), we surveyed, measured and analysed work engagement in a representative test group of hospital-based ward teams who had recently commenced the latest phase of the national 'Productive Ward' initiative in Ireland and compared them to a control group of similar size and matched (as far as is possible) on variables such as ward size, employment grade and clinical specialty area. 338 individual datasets were recorded, n=180 (53.6%) from the Productive Ward group, and n=158 (46.4%) from the control group; the overall response rate was 67%, and did not differ significantly between the Productive Ward and control groups. The work engagement mean score (±standard deviation) in the Productive group was 4.33(±0.88), and 4.07(±1.06) in the control group, representing a modest but statistically significant between-group difference (p=0.013, independent samples t-test). Similarly modest differences were observed in all three dimensions of the work engagement construct. Employment grade and the clinical specialty area were also significantly related to the work engagement score (p<0.001, general linear model) and (for the most part), to its components, with both clerical and nurse manager grades, and the elderly specialist areas, exhibiting substantially higher scores. The findings demonstrate how QI activities, like those integral to the Productive Ward programme, appear to positively impact on the work
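
    The between-group comparison of mean UWES scores is a standard independent-samples t-test. A minimal sketch with simulated scores whose group sizes, means and standard deviations mirror the figures quoted above; the individual scores are simulated, not the study data.

```python
# Sketch: independent-samples t-test on simulated work-engagement scores.
# Group sizes, means and SDs echo the abstract, but the scores themselves are simulated.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
productive_ward = rng.normal(loc=4.33, scale=0.88, size=180)
control = rng.normal(loc=4.07, scale=1.06, size=158)

t_stat, p_value = ttest_ind(productive_ward, control, equal_var=False)  # Welch's variant
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```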

  12. Working with Secondary School Leadership in a Large-Scale Reform in London, UK: Consultants' Perspectives of Their Role as Agents of School Change and Improvement

    ERIC Educational Resources Information Center

    Cameron, David Hagen

    2010-01-01

    This article uses a cultural and political theoretical framework to examine the relationship between consultants and secondary school leaders within a large-scale consultancy-based reform, the Secondary National Strategy (SNS), in London UK. The SNS follows a cascade model of implementation, in which nationally created initiatives are introduced…

  13. Working with Secondary School Leadership in a Large-Scale Reform in London, UK: Consultants' Perspectives of Their Role as Agents of School Change and Improvement

    ERIC Educational Resources Information Center

    Cameron, David Hagen

    2010-01-01

    This article uses a cultural and political theoretical framework to examine the relationship between consultants and secondary school leaders within a large-scale consultancy-based reform, the Secondary National Strategy (SNS), in London UK. The SNS follows a cascade model of implementation, in which nationally created initiatives are introduced…

  14. Improved wave transformation in a large-scale coastline model to explore the role of wave climate change in driving coastal erosion

    NASA Astrophysics Data System (ADS)

    Whitley, A. E.; McNamara, D.

    2013-12-01

    According to the 2010 U.S. Census, over one third of the United States population lives near the eastern coastline. With such a significant investment in human agency along the coast, it is critical to understand how large-scale coastal morphology will evolve in the coming decades in response to rising sea level and changing storm climates. Previous work has shown that potential changes in wave climate can give rise to a larger coastal erosion signal than that expected due to sea level rise alone. This work utilized a large-scale coastal change model that simulated deep-water wave transformation assuming bathymetric contours were parallel to the shoreline and the model did not incorporate wave crest convergence or divergence. Linear stability analyses that have been performed on large-scale coastline evolution that do not assume parallel bathymetric contours and account for wave convergence and divergence were found to be sensitive to the offshore extent of shore-parallel contours. This study incorporates wave ray tracing into an existing coastline change model to explore finite amplitude development and evolution of large-scale coastal morphology. We will present results that explore the relative contributions of wave climate change and sea level rise to coastal erosion.

  15. Improved wave transformation in a large-scale coastline model to explore the role of wave climate change in driving coastal erosion

    NASA Astrophysics Data System (ADS)

    Whitley, A. E.; McNamara, D.; Murray, A.

    2012-12-01

    According to the 2010 U.S. Census, over one third of the United States population lives near the eastern coastline. With such a significant investment in human agency along the coast, it is critical to understand how large-scale coastal morphology will evolve in the coming decades in response to rising sea level and changing storm climates. Previous work has shown that potential changes in wave climate can give rise to a larger coastal erosion signal than that expected due to sea level rise alone. This work utilized a large-scale coastal change model that simulated deep-water wave transformation assuming bathymetric contours were parallel to the shoreline and the model did not incorporate wave crest convergence or divergence. Linear stability analyses that have been performed on large-scale coastline evolution that do not assume parallel bathymetric contours and account for wave convergence and divergence were found to be sensitive to the offshore extent of shore-parallel contours. This study incorporates wave ray tracing into an existing coastline change model to explore finite amplitude development and evolution of large-scale coastal morphology. We will present results that explore the relative contributions of wave climate change and sea level rise to coastal erosion.

  16. The Value of Large-Scale Randomised Control Trials in System-Wide Improvement: The Case of the Reading Catch-Up Programme

    ERIC Educational Resources Information Center

    Fleisch, Brahm; Taylor, Stephen; Schöer, Volker; Mabogoane, Thabo

    2017-01-01

    This article illustrates the value of large-scale impact evaluations with counterfactual components. It begins by exploring the limitations of small-scale impact studies, which do not allow reliable inference to a wider population or which do not use valid comparison groups. The paper then describes the design features of a recent large-scale…

  17. The Use of Qualitative Methods in Large-Scale Evaluation: Improving the Quality of the Evaluation and the Meaningfulness of the Findings

    ERIC Educational Resources Information Center

    Slayton, Julie; Llosa, Lorena

    2005-01-01

    In light of the current debate over the meaning of "scientifically based research", we argue that qualitative methods should be an essential part of large-scale program evaluations if program effectiveness is to be determined and understood. This article chronicles the challenges involved in incorporating qualitative methods into the large-scale…

  18. Improved parameterization of marine ice dynamics and flow instabilities for simulation of the Austfonna ice cap using a large-scale ice sheet model

    NASA Astrophysics Data System (ADS)

    Dunse, T.; Greve, R.; Schuler, T.; Hagen, J. M.; Navarro, F.; Vasilenko, E.; Reijmer, C.

    2009-12-01

    The Austfonna ice cap covers an area of 8120 km2 and is by far the largest glacier on Svalbard. Almost 30% of the entire area is grounded below sea-level, while the figure is as large as 57% for the known surge-type basins in particular. Marine ice dynamics, as well as flow instabilities, presumably control the flow regime, form and evolution of Austfonna. These issues are our focus in numerical simulations of the ice cap. We employ the thermodynamic, large-scale ice sheet model SICOPOLIS (http://sicopolis.greveweb.net/) which is based on the shallow-ice approximation. We present improved parameterizations of (a) the marine extent and calving and (b) processes that may initiate flow instabilities such as switches from cold to temperate basal conditions, surface steepening and hence, increases in driving stress, enhanced sliding or deformation of unconsolidated marine sediments and diminishing ice thicknesses towards flotation thickness. Space-borne interferometric snapshots of Austfonna revealed a velocity structure of a slow moving polar ice cap (< 10m/a) interrupted by distinct fast flow units with velocities in excess of 100m/a. However, observations of flow variability are scarce. In spring 2008, we established a series of stakes along the centrelines of two fast-flowing units. Repeated DGPS and continuous GPS measurements of the stake positions give insight into the temporal flow variability of these units and provide constraints on the modeled surface velocity field. Austfonna’s thermal structure is described as polythermal. However, direct measurements of the temperature distribution are available only from a single borehole in the summit area. The vertical temperature profile shows that the bulk of the 567m thick ice column is cold, only underlain by a thin temperate basal layer of approximately 20m. To acquire a spatially extended picture of the thermal structure (and bed topography), we used low-frequency (20 MHz) GPR profiling across the ice cap and the
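
    One of the instability ingredients listed above, thinning towards the flotation thickness, follows from simple buoyancy: marine-based ice goes afloat once its thickness drops below (water density / ice density) times the local water depth. The sketch below flags near-flotation cells; the densities and example geometry are generic assumptions, not SICOPOLIS parameters.

```python
# Sketch: flotation-thickness check for grounded, marine-based ice.
# Densities and example numbers are generic assumptions, not SICOPOLIS settings.
RHO_SEAWATER = 1028.0   # kg m^-3
RHO_ICE = 910.0         # kg m^-3

def flotation_thickness(bed_elevation_m: float, sea_level_m: float = 0.0) -> float:
    """Minimum ice thickness that keeps ice grounded where the bed lies below sea level."""
    water_depth = max(sea_level_m - bed_elevation_m, 0.0)
    return (RHO_SEAWATER / RHO_ICE) * water_depth

cells = [  # (bed elevation [m a.s.l.], ice thickness [m]) -- illustrative values
    (-150.0, 180.0),
    (-150.0, 300.0),
    (20.0, 250.0),
]
for bed, thickness in cells:
    h_f = flotation_thickness(bed)
    status = "near/at flotation" if thickness <= 1.1 * h_f else "well grounded"
    print(f"bed {bed:6.1f} m, H = {thickness:5.1f} m, H_f = {h_f:5.1f} m -> {status}")
```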

  19. Large Scale Magnetostrictive Valve Actuator

    NASA Technical Reports Server (NTRS)

    Richard, James A.; Holleman, Elizabeth; Eddleman, David

    2008-01-01

    Marshall Space Flight Center's Valves, Actuators and Ducts Design and Development Branch developed a large scale magnetostrictive valve actuator. The potential advantages of this technology are faster, more efficient valve actuators that consume less power, provide precise position control and deliver higher flow rates than conventional solenoid valves. Magnetostrictive materials change dimensions when a magnetic field is applied; this property is referred to as magnetostriction. Magnetostriction is caused by the alignment of the magnetic domains in the material's crystalline structure and the applied magnetic field lines. Typically, the material changes shape by elongating in the axial direction and constricting in the radial direction, resulting in no net change in volume. All hardware and testing is complete. This paper will discuss the potential applications of the technology; give an overview of the as-built actuator design; describe problems that were uncovered during development testing; review test data and evaluate weaknesses of the design; and discuss areas for improvement for future work. This actuator holds promise as a low power, high load, proportionally controlled actuator for valves requiring 440 to 1500 newtons of load.

  20. Digestion and depletion of abundant proteins improves proteomic coverage

    PubMed Central

    Fonslow, Bryan R.; Stein, Benjamin D.; Webb, Kristofor J.; Xu, Tao; Choi, Jeong; Park, Sung Kyu; Yates, John R.

    2012-01-01

    Two major challenges in proteomics are the large number of proteins and their broad dynamic range within the cell. We exploited the abundance-dependent Michaelis-Menten kinetics of trypsin digestion to selectively digest and deplete abundant proteins with a method we call DigDeAPr. We validated the depletion mechanism with known yeast protein abundances and observed greater than 3-fold improvement in low abundance human protein identification and quantitation metrics. This methodology should be broadly applicable to many organisms, proteases, and proteomic pipelines. PMID:23160281
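
    The depletion principle rests on Michaelis-Menten kinetics, v = Vmax * [S] / (Km + [S]): near-saturating (abundant) substrates are digested at close to Vmax, while low-abundance proteins are digested far more slowly, so a brief digest preferentially consumes the abundant proteins. A toy sketch of that rate difference; the Vmax, Km and concentration values are arbitrary, not parameters from the paper.

```python
# Sketch: abundance-dependent Michaelis-Menten digestion rates.
# Vmax, Km and the protein concentrations are arbitrary illustrative values.
def digestion_rate(substrate_conc_um: float, vmax: float = 10.0, km_um: float = 50.0) -> float:
    """Michaelis-Menten rate v = Vmax * [S] / (Km + [S])."""
    return vmax * substrate_conc_um / (km_um + substrate_conc_um)

for name, conc in [("abundant protein", 500.0), ("low-abundance protein", 0.5)]:
    v = digestion_rate(conc)
    print(f"{name}: [S] = {conc:7.1f} uM -> rate = {v:.3f} (fraction of Vmax = {v / 10.0:.2f})")
```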

  1. Large-scale structural optimization

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.

    1983-01-01

    Problems encountered by aerospace designers in attempting to optimize whole aircraft are discussed, along with possible solutions. Large scale optimization, as opposed to component-by-component optimization, is hindered by computational costs, software inflexibility, concentration on a single, rather than trade-off, design methodology and the incompatibility of large-scale optimization with single program, single computer methods. The software problem can be approached by placing the full analysis outside of the optimization loop. Full analysis is then performed only periodically. Problem-dependent software can be removed from the generic code using a systems programming technique, and then embody the definitions of design variables, objective function and design constraints. Trade-off algorithms can be used at the design points to obtain quantitative answers. Finally, decomposing the large-scale problem into independent subproblems allows systematic optimization of the problems by an organization of people and machines.

  2. Large-scale circuit simulation

    NASA Astrophysics Data System (ADS)

    Wei, Y. P.

    1982-12-01

    The simulation of VLSI (Very Large Scale Integration) circuits falls beyond the capabilities of conventional circuit simulators like SPICE. On the other hand, conventional logic simulators can only give the results of logic levels 1 and 0, with the attendant loss of detail in the waveforms. The aim of developing large-scale circuit simulation is to bridge the gap between conventional circuit simulation and logic simulation. This research investigates new approaches for fast and relatively accurate time-domain simulation of MOS (Metal Oxide Semiconductor), LSI (Large Scale Integration) and VLSI circuits. New techniques and new algorithms are studied in the following areas: (1) analysis sequencing, (2) nonlinear iteration, (3) a modified Gauss-Seidel method, and (4) latency criteria and a timestep control scheme. The developed methods have been implemented in the simulation program PREMOS, which can be used as a design verification tool for MOS circuits.
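
    A stripped-down illustration of the Gauss-Seidel idea with a crude latency test, applied to a tiny linear resistive network, is sketched below. This is a generic teaching example under assumed node values, not the PREMOS algorithms.

```python
# Sketch: Gauss-Seidel iteration on nodal equations G*v = i for a small resistive
# network, with a crude "latency" test that skips nodes whose voltage has settled.
# Generic illustration only, not the PREMOS algorithms described in the abstract.
import numpy as np

G = np.array([[ 3.0, -1.0, -1.0],      # nodal conductance matrix (S), diagonally dominant
              [-1.0,  4.0, -2.0],
              [-1.0, -2.0,  5.0]])
i_src = np.array([1.0, 0.0, 2.0])      # injected node currents (A)

v = np.zeros(3)
latent = np.zeros(3, dtype=bool)
tol = 1e-9
for sweep in range(200):
    max_delta = 0.0
    for n in range(3):
        if latent[n]:
            continue                    # skip nodes that have stopped moving
        v_new = (i_src[n] - G[n] @ v + G[n, n] * v[n]) / G[n, n]
        delta = abs(v_new - v[n])
        v[n] = v_new
        latent[n] = delta < tol
        max_delta = max(max_delta, delta)
    if max_delta < tol:
        break
print(f"converged after {sweep + 1} sweeps: v = {np.round(v, 4)}")
```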

  3. Large Scale Dynamos in Stars

    NASA Astrophysics Data System (ADS)

    Vishniac, Ethan T.

    2015-01-01

    We show that a differentially rotating conducting fluid automatically creates a magnetic helicity flux with components along the rotation axis and in the direction of the local vorticity. This drives a rapid growth in the local density of current helicity, which in turn drives a large scale dynamo. The dynamo growth rate derived from this process is not constant, but depends inversely on the large scale magnetic field strength. This dynamo saturates when buoyant losses of magnetic flux compete with the large scale dynamo, providing a simple prediction for magnetic field strength as a function of Rossby number in stars. Increasing anisotropy in the turbulence produces a decreasing magnetic helicity flux, which explains the flattening of the B/Rossby number relation at low Rossby numbers. We also show that the kinetic helicity is always a subdominant effect. There is no kinematic dynamo in real stars.

  4. Methane emissions on large scales

    NASA Astrophysics Data System (ADS)

    Beswick, K. M.; Simpson, T. W.; Fowler, D.; Choularton, T. W.; Gallagher, M. W.; Hargreaves, K. J.; Sutton, M. A.; Kaye, A.

    with previous results from the area, indicating that this method of data analysis provided good estimates of large scale methane emissions.

  5. Assessment and improvement of statistical tools for comparative proteomics analysis of sparse data sets with few experimental replicates.

    PubMed

    Schwämmle, Veit; León, Ileana Rodríguez; Jensen, Ole Nørregaard

    2013-09-06

    Large-scale quantitative analyses of biological systems are often performed with few replicate experiments, leading to multiple nonidentical data sets due to missing values. For example, mass spectrometry-driven proteomics experiments are frequently performed with few biological or technical replicates due to sample scarcity, duty-cycle or sensitivity constraints, or the limited capacity of the available instrumentation, leading to incomplete results where detection of significant feature changes becomes a challenge. This problem is further exacerbated for the detection of significant changes at the peptide level, for example, in phospho-proteomics experiments. In order to assess the extent of this problem and the implications for large-scale proteome analysis, we investigated and optimized the performance of three statistical approaches by using simulated and experimental data sets with varying numbers of missing values. We applied three tools, the standard t test, the moderated t test (also known as limma) and rank products, for the detection of significantly changing features in simulated and experimental proteomics data sets with missing values. The rank product method was improved to work with data sets containing missing values. Extensive analysis of simulated and experimental data sets revealed that the performance of the statistical analysis tools depended on simple properties of the data sets. High-confidence results were obtained by using the limma and rank products methods for analyses of triplicate data sets that exhibited more than 1000 features and more than 50% missing values. The maximum number of differentially represented features was identified by using limma and rank products methods in a complementary manner. We therefore recommend combined usage of these methods as a novel and optimal way to detect significantly changing features in these data sets. This approach is suitable for large quantitative data sets from stable isotope labeling
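
    A minimal sketch of the rank-product idea adapted to missing values: rank features within each replicate using only the observed values, then take the geometric mean rank over however many replicates actually observed each feature. This is a simplification of the published method, run on made-up data.

```python
# Sketch: rank-product-style scoring that tolerates missing values (NaN).
# Features are ranked within each replicate over observed values only, and the
# geometric mean rank is taken over the replicates in which a feature was seen.
# Simplified illustration with made-up data, not the published implementation.
import numpy as np

# rows = features (e.g. peptides), columns = replicate log2 fold-changes; NaN = missing
data = np.array([
    [ 2.1,  1.8,  np.nan],
    [ 0.1, -0.2,  0.3],
    [ 1.5,  np.nan, 1.9],
    [-1.7, -1.4, -1.6],
])

def rank_product_up(x: np.ndarray) -> np.ndarray:
    """Geometric mean of per-replicate ranks (rank 1 = strongest up-regulation)."""
    n_features, n_reps = x.shape
    ranks = np.full_like(x, np.nan)
    for j in range(n_reps):
        observed = ~np.isnan(x[:, j])
        order = np.argsort(-x[observed, j])           # descending: largest change first
        r = np.empty(order.size)
        r[order] = np.arange(1, order.size + 1)
        ranks[observed, j] = r
    return np.exp(np.nanmean(np.log(ranks), axis=1))  # geometric mean over observed reps

print(np.round(rank_product_up(data), 2))             # small values = consistently up-regulated
```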

  6. Integrating SMOS brightness temperatures with a new conceptual spatially distributed hydrological model for improving flood and drought predictions at large scale.

    NASA Astrophysics Data System (ADS)

    Hostache, Renaud; Rains, Dominik; Chini, Marco; Lievens, Hans; Verhoest, Niko E. C.; Matgen, Patrick

    2017-04-01

    Motivated by climate change and its impact on the scarcity or excess of water in many parts of the world, several agencies and research institutions have taken initiatives in monitoring and predicting the hydrologic cycle at a global scale. Such a monitoring/prediction effort is important for understanding the vulnerability to extreme hydrological events and for providing early warnings. This can be based on an optimal combination of hydro-meteorological models and remote sensing, in which satellite measurements can be used as forcing or calibration data or for regularly updating the model states or parameters. Many advances have been made in these domains and the near future will bring new opportunities with respect to remote sensing as a result of the increasing number of spaceborne sensors enabling the large-scale monitoring of water resources. Besides these advances, there is currently a tendency to refine and further complicate physically-based hydrologic models to better capture the hydrologic processes at hand. However, this may not necessarily be beneficial for large-scale hydrology, as computational costs increase significantly. A novel thematic science question to be investigated is whether a flexible conceptual model can match the performance of a complex physically-based model for hydrologic simulations at large scale. In this context, the main objective of this study is to investigate how innovative techniques that allow for the estimation of soil moisture from satellite data can help in reducing errors and uncertainties in large scale conceptual hydro-meteorological modelling. A spatially distributed conceptual hydrologic model has been set up based on recent developments of the SUPERFLEX modelling framework. As it requires limited computational efforts, this model enables early warnings for large areas. Using as forcings the ERA-Interim public dataset and coupled with the CMEM radiative transfer model

  7. Very Large Scale Integration (VLSI).

    ERIC Educational Resources Information Center

    Yeaman, Andrew R. J.

    Very Large Scale Integration (VLSI), the state-of-the-art production techniques for computer chips, promises such powerful, inexpensive computing that, in the future, people will be able to communicate with computer devices in natural language or even speech. However, before full-scale VLSI implementation can occur, certain salient factors must be…

  8. Galaxy clustering on large scales.

    PubMed Central

    Efstathiou, G

    1993-01-01

    I describe some recent observations of large-scale structure in the galaxy distribution. The best constraints come from two-dimensional galaxy surveys and studies of angular correlation functions. Results from galaxy redshift surveys are much less precise but are consistent with the angular correlations, provided the distortions in mapping between real-space and redshift-space are relatively weak. The galaxy two-point correlation function, rich-cluster two-point correlation function, and galaxy-cluster cross-correlation function are all well described on large scales (≳ 20 h⁻¹ Mpc, where the Hubble constant H0 = 100 h km s⁻¹ Mpc⁻¹; 1 pc = 3.09 × 10¹⁶ m) by the power spectrum of an initially scale-invariant, adiabatic, cold-dark-matter Universe with Γ = Ωh ≈ 0.2. I discuss how this fits in with the Cosmic Background Explorer (COBE) satellite detection of large-scale anisotropies in the microwave background radiation and other measures of large-scale structure in the Universe. PMID:11607400

  9. Improving HIV proteome annotation: new features of BioAfrica HIV Proteomics Resource

    PubMed Central

    Druce, Megan; Hulo, Chantal; Masson, Patrick; Sommer, Paula; Xenarios, Ioannis; Le Mercier, Philippe; De Oliveira, Tulio

    2016-01-01

    The Human Immunodeficiency Virus (HIV) is one of the pathogens that cause the greatest global concern, with approximately 35 million people currently infected with HIV. Extensive HIV research has been performed, generating a large amount of HIV and host genomic data. However, no effective vaccine that protects the host from HIV infection is available and HIV is still spreading at an alarming rate, despite effective antiretroviral (ARV) treatment. In order to develop effective therapies, we need to expand our knowledge of the interaction between HIV and host proteins. In contrast to virus proteins, which often rapidly evolve drug resistance mutations, the host proteins are essentially invariant within all humans. Thus, if we can identify the host proteins needed for virus replication, such as those involved in transporting viral proteins to the cell surface, we have a chance of interrupting viral replication. There is no proteome resource that summarizes this interaction, making research on this subject a difficult enterprise. In order to fill this gap in knowledge, we curated a resource that presents detailed annotation of the interaction between the HIV proteome and host proteins. Our resource was produced in collaboration with ViralZone and used manual curation techniques developed by UniProtKB/Swiss-Prot. Our new website also used previous annotations of the BioAfrica HIV-1 Proteome Resource, which has been accessed by approximately 10 000 unique users a year since its inception in 2005. The novel features include a dedicated new page for each HIV protein, a graphic display of its function and a section on its interaction with host proteins. Our new webpages also add information on the genomic location of each HIV protein and the position of ARV drug resistance mutations. Our improved BioAfrica HIV-1 Proteome Resource fills a gap in current HIV biocuration. Database URL: http://www.bioafrica.net/proteomics/HIVproteome.html PMID:27087306

  10. Large Scale Chemical Cross-linking Mass Spectrometry Perspectives

    PubMed Central

    Zybailov, Boris L.; Glazko, Galina V.; Jaiswal, Mihir; Raney, Kevin D.

    2014-01-01

    The spectacular heterogeneity of a complex protein mixture from biological samples becomes even more difficult to tackle when one’s attention is shifted towards different protein complex topologies, transient interactions, or localization of PPIs. Meticulous protein-by-protein affinity pull-downs and yeast-two-hybrid screens are the two approaches currently used to decipher proteome-wide interaction networks. Another method is to employ chemical cross-linking, which not only gives the identities of interactors but can also provide information on the sites of interactions and interaction interfaces. Despite significant advances in mass spectrometry instrumentation over the last decade, mapping Protein-Protein Interactions (PPIs) using chemical cross-linking remains time consuming and requires substantial expertise, even in the simplest of systems. While robust methodologies and software exist for the analysis of binary PPIs and also for single protein structure refinement using cross-linking-derived constraints, undertaking a proteome-wide cross-linking study is highly complex. Difficulties include i) identification of cross-linkers of the right length and selectivity that can capture the interactions of interest; ii) enrichment of the cross-linked species; and iii) identification and validation of the cross-linked peptides and cross-linked sites. In this review we examine existing literature aimed at large-scale protein cross-linking and discuss possible paths for improvement. We also discuss short-length cross-linkers of broad specificity such as formaldehyde and diazirine-based photo-cross-linkers. These cross-linkers could potentially capture many types of interactions, without strict requirement for a particular amino acid to be present at a given protein-protein interface. How can these short-length, broad-specificity cross-linkers be applied to proteome-wide studies? We will suggest specific advances in methodology, instrumentation and software that are needed to
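
    A common first step in identifying cross-linked peptides is matching an observed precursor mass against the summed masses of two candidate peptides plus the cross-linker. A minimal sketch of that matching step; the peptide masses, the DSS-like linker mass and the tolerance are illustrative assumptions, not values from any specific tool.

```python
# Sketch: enumerate candidate peptide pairs whose combined mass (peptide A +
# peptide B + cross-linker) matches an observed precursor mass within a ppm
# tolerance. Peptide masses and the linker mass are illustrative assumptions.
from itertools import combinations_with_replacement

peptides = {             # hypothetical monoisotopic peptide masses (Da)
    "PEPTIDEK": 927.45,
    "LYSINEKR": 1003.59,
    "SHORTK": 718.39,
}
LINKER_MASS = 138.068    # DSS-like mass addition after reaction (assumed value)
observed_mass = 2069.11  # hypothetical deconvoluted precursor mass (Da)
tolerance_ppm = 20.0

for (a, ma), (b, mb) in combinations_with_replacement(peptides.items(), 2):
    theoretical = ma + mb + LINKER_MASS
    ppm_error = (observed_mass - theoretical) / theoretical * 1e6
    if abs(ppm_error) <= tolerance_ppm:
        print(f"candidate pair: {a} x {b}  ({theoretical:.3f} Da, {ppm_error:+.1f} ppm)")
```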

  11. Optimization of a novel biophysical model using large scale in vivo antisense hybridization data displays improved prediction capabilities of structurally accessible RNA regions.

    PubMed

    Vazquez-Anderson, Jorge; Mihailovic, Mia K; Baldridge, Kevin C; Reyes, Kristofer G; Haning, Katie; Cho, Seung Hee; Amador, Paul; Powell, Warren B; Contreras, Lydia M

    2017-05-19

    Current approaches to design efficient antisense RNAs (asRNAs) rely primarily on a thermodynamic understanding of RNA-RNA interactions. However, these approaches depend on structure predictions and have limited accuracy, arguably due to overlooking important cellular environment factors. In this work, we develop a biophysical model to describe asRNA-RNA hybridization that incorporates in vivo factors using large-scale experimental hybridization data for three model RNAs: a group I intron, CsrB and a tRNA. A unique element of our model is the estimation of the availability of the target region to interact with a given asRNA using a differential entropic consideration of suboptimal structures. We showcase the utility of this model by evaluating its prediction capabilities in four additional RNAs: a group II intron, Spinach II, 2-MS2 binding domain and glgC 5′ UTR. Additionally, we demonstrate the applicability of this approach to other bacterial species by predicting sRNA-mRNA binding regions in two newly discovered, though uncharacterized, regulatory RNAs. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
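
    A simplified way to picture the availability term is as the fraction of suboptimal structures in which the target window is unpaired. The sketch below works from dot-bracket strings for a hypothetical ensemble and is only a crude proxy, not the differential-entropy formulation developed in the paper.

```python
# Sketch: estimate accessibility of a target window as the fraction of suboptimal
# secondary structures (dot-bracket strings) in which every base of the window is
# unpaired. Simplified proxy; the ensemble below is hypothetical.
suboptimal_structures = [
    "..((((.....)))).....",
    "....((.....)).......",
    "..(((.......))).....",
    "....................",
]

def accessibility(structures, start: int, end: int) -> float:
    """Fraction of structures where positions [start, end) are all unpaired ('.')."""
    free = sum(all(ch == "." for ch in s[start:end]) for s in structures)
    return free / len(structures)

print(accessibility(suboptimal_structures, start=4, end=9))   # 0.25 for this ensemble
```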

  12. Optimization of a novel biophysical model using large scale in vivo antisense hybridization data displays improved prediction capabilities of structurally accessible RNA regions

    PubMed Central

    Vazquez-Anderson, Jorge; Mihailovic, Mia K.; Baldridge, Kevin C.; Reyes, Kristofer G.; Haning, Katie; Cho, Seung Hee; Amador, Paul; Powell, Warren B.

    2017-01-01

    Abstract Current approaches to design efficient antisense RNAs (asRNAs) rely primarily on a thermodynamic understanding of RNA–RNA interactions. However, these approaches depend on structure predictions and have limited accuracy, arguably due to overlooking important cellular environment factors. In this work, we develop a biophysical model to describe asRNA–RNA hybridization that incorporates in vivo factors using large-scale experimental hybridization data for three model RNAs: a group I intron, CsrB and a tRNA. A unique element of our model is the estimation of the availability of the target region to interact with a given asRNA using a differential entropic consideration of suboptimal structures. We showcase the utility of this model by evaluating its prediction capabilities in four additional RNAs: a group II intron, Spinach II, 2-MS2 binding domain and glgC 5΄ UTR. Additionally, we demonstrate the applicability of this approach to other bacterial species by predicting sRNA–mRNA binding regions in two newly discovered, though uncharacterized, regulatory RNAs. PMID:28334800

  13. The impact of weather forecast improvements on large scale hydrology: analysing a decade of forecasts of the European Flood Alert System

    NASA Astrophysics Data System (ADS)

    Pappenberger, Florian; Thielen, Jutta; Del Medico, Mauro

    2010-05-01

    The European Flood Alert System (EFAS) provides early flood alerts on a pre-operational basis to national hydrological services. EFAS river discharge forecasts are based on probabilistic techniques, using ensemble and deterministic numerical weather prediction data. The performance of EFAS is regularly analysed with regard to individual flood events and case studies. Although this analysis provides important insight into the strengths and weaknesses of the forecast system, it lacks statistical and independent measures of its long-term performance. In this paper an assessment of EFAS results based on ECMWF weather forecasts over a period of 10 years is presented. EFAS river discharge forecasts have been rerun every week for a period of 10 years using the weather forecast available at the time. These are evaluated for a total of 500 river gauging stations distributed across Europe. The selected stations are sufficiently separated in space to avoid autocorrelation of station time series. Also, analysis is performed with a gap of 3 days between forecasts, which reduces the temporal correlation of the time series of the same station. The data are analysed with regard to the skill, bias and quality of the river discharge forecasts. The 10 year simulations clearly show that the skill of the river discharge forecasts has undergone an evolution linked to the quality of the operational meteorological forecasts. Overall, over the period of 10 years, the skill of the EFAS forecasts has steadily increased. Important hydrological extreme events cannot be clearly identified with the skill score analysis, highlighting the necessity for event-based analysis in addition to statistical long-term assessments for a better understanding of the EFAS system and large scale river discharge predictions in general. The predictability is shown to depend on catchment size and geographical location.
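
    The abstract refers to skill and bias without naming a specific score. One common choice for probabilistic threshold-exceedance forecasts is the Brier skill score against a climatological baseline, sketched below with synthetic ensemble forecasts; the EFAS evaluation may use different metrics.

```python
# Sketch: Brier skill score of ensemble threshold-exceedance forecasts against a
# climatological baseline. Generic verification example with synthetic data.
import numpy as np

rng = np.random.default_rng(7)
n_forecasts, n_members = 300, 51
threshold = 2.0                                      # discharge threshold (arbitrary units)

observations = rng.gamma(shape=2.0, scale=1.0, size=n_forecasts)
# synthetic ensemble forecasts loosely centred on the observations
ensembles = observations[:, None] + rng.normal(scale=0.7, size=(n_forecasts, n_members))

obs_event = (observations > threshold).astype(float)
forecast_prob = (ensembles > threshold).mean(axis=1)         # ensemble exceedance probability
climatology = obs_event.mean()                               # baseline probability

brier = np.mean((forecast_prob - obs_event) ** 2)
brier_ref = np.mean((climatology - obs_event) ** 2)
bss = 1.0 - brier / brier_ref                                # 1 = perfect, 0 = no skill vs climatology
print(f"Brier score = {brier:.3f}, reference = {brier_ref:.3f}, BSS = {bss:.3f}")
```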

  14. USING PROTEOMICS TO IMPROVE RISK ASSESSMENT OF HUMAN EXPOSURE TO ENVIRONMENTAL AGENTS

    EPA Science Inventory

    Using Proteomics to Improve Risk Assessment of Human Exposure to Environmental Agents.
    Authors: Witold M. Winnik
    Key Words (4): Proteomics, LC/MS, Western Blots, 1D and 2D gel electrophoresis, toxicity

    The goal of this project is to use proteomics for the character...

  15. USING PROTEOMICS TO IMPROVE RISK ASSESSMENT OF HUMAN EXPOSURE TO ENVIRONMENTAL AGENTS

    EPA Science Inventory

    Using Proteomics to Improve Risk Assessment of Human Exposure to Environmental Agents.
    Authors: Witold M. Winnik
    Key Words (4): Proteomics, LC/MS, Western Blots, 1D and 2D gel electrophoresis, toxicity

    The goal of this project is to use proteomics for the character...

  16. Proteomics and Metabolomics: Two Emerging Areas for Legume Improvement

    PubMed Central

    Ramalingam, Abirami; Kudapa, Himabindu; Pazhamala, Lekha T.; Weckwerth, Wolfram; Varshney, Rajeev K.

    2015-01-01

    The crop legumes such as chickpea, common bean, cowpea, peanut, pigeonpea, soybean, etc. are important sources of nutrition and contribute to a significant amount of biological nitrogen fixation (>20 million tons of fixed nitrogen) in agriculture. However, the production of legumes is constrained due to abiotic and biotic stresses. It is therefore imperative to understand the molecular mechanisms of plant response to different stresses and identify key candidate genes regulating tolerance which can be deployed in breeding programs. The information obtained from transcriptomics has facilitated the identification of candidate genes for the given trait of interest and utilizing them in crop breeding programs to improve stress tolerance. However, the mechanisms of stress tolerance are complex due to the influence of multi-genes and post-transcriptional regulations. Furthermore, stress conditions greatly affect gene expression which in turn causes modifications in the composition of plant proteomes and metabolomes. Therefore, functional genomics involving various proteomics and metabolomics approaches have been obligatory for understanding plant stress tolerance. These approaches have also been found useful to unravel different pathways related to plant and seed development as well as symbiosis. Proteome and metabolome profiling using high-throughput based systems have been extensively applied in the model legume species, Medicago truncatula and Lotus japonicus, as well as in the model crop legume, soybean, to examine stress signaling pathways, cellular and developmental processes and nodule symbiosis. Moreover, the availability of protein reference maps as well as proteomics and metabolomics databases greatly support research and understanding of various biological processes in legumes. Protein-protein interaction techniques, particularly the yeast two-hybrid system have been advantageous for studying symbiosis and stress signaling in legumes. In this review, several

  17. Proteomics and Metabolomics: Two Emerging Areas for Legume Improvement.

    PubMed

    Ramalingam, Abirami; Kudapa, Himabindu; Pazhamala, Lekha T; Weckwerth, Wolfram; Varshney, Rajeev K

    2015-01-01

    The crop legumes such as chickpea, common bean, cowpea, peanut, pigeonpea, soybean, etc. are important sources of nutrition and contribute to a significant amount of biological nitrogen fixation (>20 million tons of fixed nitrogen) in agriculture. However, the production of legumes is constrained due to abiotic and biotic stresses. It is therefore imperative to understand the molecular mechanisms of plant response to different stresses and identify key candidate genes regulating tolerance which can be deployed in breeding programs. The information obtained from transcriptomics has facilitated the identification of candidate genes for the given trait of interest and utilizing them in crop breeding programs to improve stress tolerance. However, the mechanisms of stress tolerance are complex due to the influence of multi-genes and post-transcriptional regulations. Furthermore, stress conditions greatly affect gene expression which in turn causes modifications in the composition of plant proteomes and metabolomes. Therefore, functional genomics involving various proteomics and metabolomics approaches have been obligatory for understanding plant stress tolerance. These approaches have also been found useful to unravel different pathways related to plant and seed development as well as symbiosis. Proteome and metabolome profiling using high-throughput based systems have been extensively applied in the model legume species, Medicago truncatula and Lotus japonicus, as well as in the model crop legume, soybean, to examine stress signaling pathways, cellular and developmental processes and nodule symbiosis. Moreover, the availability of protein reference maps as well as proteomics and metabolomics databases greatly support research and understanding of various biological processes in legumes. Protein-protein interaction techniques, particularly the yeast two-hybrid system have been advantageous for studying symbiosis and stress signaling in legumes. In this review, several

  18. Cosmology with Large Scale Structure

    NASA Astrophysics Data System (ADS)

    Ho, Shirley; Cuesta, A.; Ross, A.; Seo, H.; DePutter, R.; Padmanabhan, N.; White, M.; Myers, A.; Bovy, J.; Blanton, M.; Hernandez, C.; Mena, O.; Percival, W.; Prada, F.; Ross, N. P.; Saito, S.; Schneider, D.; Skibba, R.; Smith, K.; Slosar, A.; Strauss, M.; Verde, L.; Weinberg, D.; Bachall, N.; Brinkmann, J.; da Costa, L. A.

    2012-01-01

    The Sloan Digital Sky Survey I-III surveyed 14,000 square degrees and delivered over a trillion pixels of imaging data. I present cosmological results from this unprecedented data set, which contains over a million galaxies distributed between redshifts of 0.45 and 0.70. With such a large data set, high-precision cosmological constraints can be obtained, given careful control and understanding of observational systematics. I present a novel treatment of observational systematics and its application to the clustering signals from the data set. I will present cosmological constraints on the dark components of the Universe and the tightest constraints to date on the non-Gaussianity of the early Universe, utilizing large-scale structure.

  19. Large scale biomimetic membrane arrays.

    PubMed

    Hansen, Jesper S; Perry, Mark; Vogel, Jörg; Groth, Jesper S; Vissing, Thomas; Larsen, Marianne S; Geschke, Oliver; Emneús, Jenny; Bohr, Henrik; Nielsen, Claus H

    2009-10-01

    To establish planar biomimetic membranes across large scale partition aperture arrays, we created a disposable single-use horizontal chamber design that supports combined optical-electrical measurements. Functional lipid bilayers could easily and efficiently be established across CO2 laser micro-structured 8 × 8 aperture partition arrays with average aperture diameters of 301 ± 5 µm. We addressed the electro-physical properties of the lipid bilayers established across the micro-structured scaffold arrays by controllable reconstitution of biotechnologically and physiologically relevant membrane peptides and proteins. Next, we tested the scalability of the biomimetic membrane design by establishing lipid bilayers in rectangular 24 × 24 and hexagonal 24 × 27 aperture arrays, respectively. The results presented show that the design is suitable for further developments of sensitive biosensor assays, and furthermore demonstrate that the design can conveniently be scaled up to support planar lipid bilayers in large square-centimeter partition arrays.

  20. Reconstruction of Metabolic Pathways, Protein Expression, and Homeostasis Machineries across Maize Bundle Sheath and Mesophyll Chloroplasts: Large-Scale Quantitative Proteomics Using the First Maize Genome Assembly

    PubMed Central

    Friso, Giulia; Majeran, Wojciech; Huang, Mingshu; Sun, Qi; van Wijk, Klaas J.

    2010-01-01

    Chloroplasts in differentiated bundle sheath (BS) and mesophyll (M) cells of maize (Zea mays) leaves are specialized to accommodate C4 photosynthesis. This study provides a reconstruction of how metabolic pathways, protein expression, and homeostasis functions are quantitatively distributed across BS and M chloroplasts, yielding new insights into cellular specialization. The experimental analysis was based on high-accuracy mass spectrometry, protein quantification by spectral counting, and the first maize genome assembly. A bioinformatics workflow was developed to deal with gene models, protein families, and gene duplications related to the polyploidy of maize; this avoided overidentification of proteins and resulted in more accurate protein quantification. A total of 1,105 proteins were assigned as potential chloroplast proteins, annotated for function, and quantified. Nearly complete coverage of primary carbon, starch, and tetrapyrrole metabolism, as well as excellent coverage for fatty acid synthesis, isoprenoid, sulfur, nitrogen, and amino acid metabolism, was obtained. This showed, for example, quantitative and qualitative cell type-specific specialization in starch biosynthesis, arginine synthesis, nitrogen assimilation, and initial steps in sulfur assimilation. An extensive overview of BS and M chloroplast protein expression and homeostasis machineries (more than 200 proteins) demonstrated qualitative and quantitative differences between M and BS chloroplasts and BS-enhanced levels of the specialized chaperones ClpB3 and HSP90 that suggest active remodeling of the BS proteome. The reconstructed pathways are presented as detailed flow diagrams including annotation, relative protein abundance, and cell-specific expression patterns. Protein annotation and identification data, and the projection of matched peptides on the protein models, are available online through the Plant Proteome Database. PMID:20089766

  1. The role of quality improvement in achieving effective large-scale prevention of mother-to-child transmission of HIV in South Africa.

    PubMed

    Barker, Pierre; Barron, Peter; Bhardwaj, Sanjana; Pillay, Yogan

    2015-07-01

    After a late start and poor initial performance, the South African Prevention of Mother-To-Child Transmission (PMTCT) programme achieved rapid progress towards effective national-scale implementation of a complex intervention across a large number of different geographic and socioeconomic contexts. This study shows how quality-improvement methods played a significant part in PMTCT improvements. The South African rollout of the PMTCT programme underwent significant evolution, from a largely ineffective, context-insensitive, top-down cascaded training approach to a sophisticated bottom-up health systems intervention that used modern adaptive designs. Several demonstration projects used quality-improvement methods to improve the performance of the PMTCT programme. These results prompted a national redesign of key elements of the PMTCT programme, which were rapidly scaled up across the country using a unified, simplified, data-driven approach. The scale-up of the quality-improvement approach contributed to a dramatic fall in the nationally reported rate of mother-to-child transmission of HIV. By 2012, the measured infection rate of HIV-exposed infants at around 6 weeks after birth was 2.6%, close to the transmission rates reported under clinical trial conditions. Quality-improvement methods can be used to improve the reliability of complex treatment programmes delivered at the primary-care level. Rapid scale-up and effective population coverage can be accomplished through a sequence of demonstration, testing, and rapid spread of locally tested implementation strategies, supported by real-time feedback of a simplified indicator dataset and multilevel leadership support.

  2. Large-scale PACS implementation.

    PubMed

    Carrino, J A; Unkel, P J; Miller, I D; Bowser, C L; Freckleton, M W; Johnson, T G

    1998-08-01

    The transition to filmless radiology is a much more formidable task than issuing the request for proposal to purchase a picture archiving and communications system (PACS). The Department of Defense and the Veterans Administration have been pioneers in the transformation of medical diagnostic imaging to the electronic environment. Many civilian sites are expected to implement large-scale PACS in the next five to ten years. This presentation will relate the empirical insights gleaned at our institution from a large-scale PACS implementation. Our PACS was integrated into a fully operational department (not a new hospital) in which work flow had to continue with minimal impact. Impediments to user acceptance will be addressed. The critical components of this enormous task will be discussed. The topics covered during this session will include issues such as phased implementation, DICOM (digital imaging and communications in medicine) standards-based interaction of devices, the hospital information system (HIS)/radiology information system (RIS) interface, user approval, networking, workstation deployment, and backup procedures. The presentation will make specific suggestions regarding the implementation team, operating instructions, quality control (QC), training, and education. The concept of identifying key functional areas is relevant to transitioning the facility to be entirely online. Special attention must be paid to specific functional areas such as the operating rooms and trauma rooms, where the clinical requirements may not match the PACS capabilities. The printing of films may be necessary in certain circumstances. The integration of teleradiology and remote clinics into a PACS is a salient topic with respect to the overall role of radiologists providing rapid consultation. A Web-based server allows a clinician to review images and reports on a desktop (personal) computer and thus reduces the number of dedicated PACS review workstations. This session

  3. Psychological effects of patient surge in large-scale emergencies: a quality improvement tool for hospital and clinic capacity planning and response.

    PubMed

    Meredith, Lisa S; Zazzali, James L; Shields, Sandra; Eisenman, David P; Alsabagh, Halla

    2010-01-01

    Although information is available to guide hospitals and clinics on the medical aspects of disaster surge, there is little guidance on how to manage the expected surge of persons needing psychological assessment and response after a catastrophic event. This neglected area of disaster medicine is addressed by presenting a novel and practical quality improvement tool for hospitals and clinics to use in planning for and responding to the psychological consequences of catastrophic events that create a surge of psychological casualties presenting for health care. Industrial quality improvement processes, already widely adopted in the healthcare sector, translate well when applied to disaster medicine and public health preparedness. This paper describes the development of the tool, presents data on facility preparedness from 31 hospitals and clinics in Los Angeles County, and discusses how the tool can be used as a benchmark for targeting improvement. The tool can serve to increase facility awareness of which components of disaster preparedness and response must be addressed through hospitals' and clinics' existing quality improvement programs. It also can provide information for periodic assessment and evaluation of progress over time.

  4. Large scale cluster computing workshop

    SciTech Connect

    Dane Skow; Alan Silverman

    2002-12-23

    Recent revolutions in computer hardware and software technologies have paved the way for the large-scale deployment of clusters of commodity computers to address problems heretofore the domain of tightly coupled SMP processors. Near-term projects within High Energy Physics and other computing communities will deploy clusters of thousands of processors that will be used by hundreds to thousands of independent users. This will expand the reach in both dimensions by an order of magnitude from the current successful production facilities. The goals of this workshop were: (1) to determine what tools exist that can scale up to the cluster sizes foreseen for the next generation of HENP experiments (several thousand nodes) and, by implication, to identify areas where some investment of money or effort is likely to be needed; (2) to compare and record experiences gained with such tools; (3) to produce a practical guide to all stages of planning, installing, building, and operating a large computing cluster in HENP; and (4) to identify and connect groups with similar interests within HENP and the larger clustering community.

  5. Large-Scale Sequence Comparison.

    PubMed

    Lal, Devi; Verma, Mansi

    2017-01-01

    There are millions of sequences deposited in genomic databases, and it is an important task to categorize them according to their structural and functional roles. Sequence comparison is a prerequisite for proper categorization of both DNA and protein sequences, and helps in assigning a putative or hypothetical structure and function to a given sequence. Various methods are available for comparing sequences, alignment being first and foremost, both for sequences with small numbers of base pairs and for large-scale genome comparison. Various tools are available for performing pairwise comparison of large sequences; the best-known tools either perform global alignment or generate local alignments between the two sequences. In this chapter we first provide basic information regarding sequence comparison. This is followed by a description of the PAM and BLOSUM matrices that form the basis of sequence comparison. We also give a practical overview of currently available methods such as BLAST and FASTA, followed by a description and overview of tools available for genome comparison, including LAGAN, MUMmer, BLASTZ, and AVID.
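
    As a concrete illustration of the global-alignment idea underlying the tools named above, the sketch below implements the textbook Needleman-Wunsch recursion with a toy match/mismatch/gap scoring scheme. It is not how BLAST or FASTA operate internally (they are heuristic and use PAM/BLOSUM substitution matrices), and all parameter values here are arbitrary assumptions.

    ```python
    import numpy as np

    def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-2):
        """Global alignment score (Needleman-Wunsch) with a toy scoring scheme.

        Illustrative only: production tools (BLAST, FASTA, LAGAN) use heuristics
        and empirical substitution matrices such as PAM or BLOSUM."""
        n, m = len(a), len(b)
        F = np.zeros((n + 1, m + 1))
        F[:, 0] = gap * np.arange(n + 1)   # leading gaps in b
        F[0, :] = gap * np.arange(m + 1)   # leading gaps in a
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                s = match if a[i - 1] == b[j - 1] else mismatch
                F[i, j] = max(F[i - 1, j - 1] + s,   # substitution
                              F[i - 1, j] + gap,     # gap in b
                              F[i, j - 1] + gap)     # gap in a
        return F[n, m]

    print(needleman_wunsch("GATTACA", "GCATGCU"))
    ```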

  6. Large Scale Homing in Honeybees

    PubMed Central

    Pahl, Mario; Zhu, Hong; Tautz, Jürgen; Zhang, Shaowu

    2011-01-01

    Honeybee foragers frequently fly several kilometres to and from vital resources, and communicate those locations to their nest mates by a symbolic dance language. Research has shown that they achieve this feat by memorizing landmarks and the skyline panorama, using the sun and polarized skylight as compasses and by integrating their outbound flight paths. In order to investigate the capacity of the honeybees' homing abilities, we artificially displaced foragers to novel release spots at various distances up to 13 km in the four cardinal directions. Returning bees were individually registered by a radio frequency identification (RFID) system at the hive entrance. We found that homing rate, homing speed and the maximum homing distance depend on the release direction. Bees released in the east were more likely to find their way back home, and returned faster than bees released in any other direction, due to the familiarity of global landmarks seen from the hive. Our findings suggest that such large scale homing is facilitated by global landmarks acting as beacons, and possibly the entire skyline panorama. PMID:21602920

  7. Neutrinos and large-scale structure

    SciTech Connect

    Eisenstein, Daniel J.

    2015-07-15

    I review the use of cosmological large-scale structure to measure properties of neutrinos and other relic populations of light relativistic particles. With experiments that measure the anisotropies of the cosmic microwave background and the clustering of matter at low redshift, we have now securely measured a relativistic background with density appropriate to the cosmic neutrino background. Our limits on the mass of the neutrino continue to shrink. Experiments coming in the next decade will greatly improve the available precision on searches for the energy density of novel relativistic backgrounds and the mass of neutrinos.

  8. Dynameomics: data-driven methods and models for utilizing large-scale protein structure repositories for improving fragment-based loop prediction.

    PubMed

    Rysavy, Steven J; Beck, David A C; Daggett, Valerie

    2014-11-01

    Protein function is intimately linked to protein structure and dynamics, yet experimentally determined structures frequently omit regions within a protein because of indeterminate data, which is often due to protein dynamics. We propose that atomistic molecular dynamics simulations provide a diverse sampling of biologically relevant structures for these missing segments (and beyond) to improve structural modeling and structure prediction. Here we make use of the Dynameomics data warehouse, which contains simulations of representatives of essentially all known protein folds. We developed novel computational methods to efficiently identify, rank and retrieve small peptide structures, or fragments, from this database. We also created a novel data model to analyze and compare large repositories of structural data, such as those contained within the Protein Data Bank and the Dynameomics data warehouse. Our evaluation compares these structural repositories for improving loop predictions and analyzes the utility of our methods and models. Using a standard set of loop structures, containing 510 loops, 30 for each loop length from 4 to 20 residues, we find that the inclusion of Dynameomics structures in fragment-based methods improves the quality of the loop predictions without being dependent on sequence homology. Depending on loop length, ∼25-75% of the best predictions came from the Dynameomics set, resulting in lower main-chain root-mean-square deviations for all fragment lengths using the combined fragment library. We also provide specific cases where Dynameomics fragments provide better predictions for NMR loop structures than fragments from crystal structures. Online access to these fragment libraries is available at http://www.dynameomics.org/fragments. © 2014 The Protein Society.
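
    Loop predictions of this kind are typically ranked by main-chain root-mean-square deviation after optimal superposition. The sketch below shows one standard way to compute that metric (Kabsch superposition plus RMSD) on hypothetical coordinate arrays; it is an illustration under those assumptions, not the Dynameomics pipeline code.

    ```python
    import numpy as np

    def kabsch_rmsd(P, Q):
        """RMSD between two (N, 3) coordinate arrays after optimal superposition.

        P, Q: matched main-chain atom coordinates (e.g. N, CA, C) of a candidate
        fragment and the target loop. Illustrative sketch only."""
        P = P - P.mean(axis=0)                    # center both structures
        Q = Q - Q.mean(axis=0)
        H = P.T @ Q                               # 3x3 covariance matrix
        U, S, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflection
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # optimal rotation
        P_rot = P @ R.T
        return np.sqrt(np.mean(np.sum((P_rot - Q) ** 2, axis=1)))

    # toy example with random coordinates standing in for a 12-residue loop
    rng = np.random.default_rng(0)
    loop = rng.normal(size=(12, 3))
    print(kabsch_rmsd(loop, loop + 0.1 * rng.normal(size=(12, 3))))
    ```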

  9. Dynameomics: Data-driven methods and models for utilizing large-scale protein structure repositories for improving fragment-based loop prediction

    PubMed Central

    Rysavy, Steven J; Beck, David AC; Daggett, Valerie

    2014-01-01

    Protein function is intimately linked to protein structure and dynamics, yet experimentally determined structures frequently omit regions within a protein because of indeterminate data, which is often due to protein dynamics. We propose that atomistic molecular dynamics simulations provide a diverse sampling of biologically relevant structures for these missing segments (and beyond) to improve structural modeling and structure prediction. Here we make use of the Dynameomics data warehouse, which contains simulations of representatives of essentially all known protein folds. We developed novel computational methods to efficiently identify, rank and retrieve small peptide structures, or fragments, from this database. We also created a novel data model to analyze and compare large repositories of structural data, such as those contained within the Protein Data Bank and the Dynameomics data warehouse. Our evaluation compares these structural repositories for improving loop predictions and analyzes the utility of our methods and models. Using a standard set of loop structures, containing 510 loops, 30 for each loop length from 4 to 20 residues, we find that the inclusion of Dynameomics structures in fragment-based methods improves the quality of the loop predictions without being dependent on sequence homology. Depending on loop length, ∼25–75% of the best predictions came from the Dynameomics set, resulting in lower main-chain root-mean-square deviations for all fragment lengths using the combined fragment library. We also provide specific cases where Dynameomics fragments provide better predictions for NMR loop structures than fragments from crystal structures. Online access to these fragment libraries is available at http://www.dynameomics.org/fragments. PMID:25142412

  10. A Large Scale Automatic Earthquake Location Catalog in the San Jacinto Fault Zone Area Using An Improved Shear-Wave Detection Algorithm

    NASA Astrophysics Data System (ADS)

    White, M. C. A.; Ross, Z.; Vernon, F.; Ben-Zion, Y.

    2015-12-01

    UC San Diego's ANZA network began archiving event-triggered data in 1982. As a result of improved recording technology, continuous waveform data archives are available starting in 1998. This continuous dataset, from 1998 to the present, represents a wealth of potential insight into spatio-temporal seismicity patterns, earthquake physics, and the mechanics of the San Jacinto Fault Zone. However, the volume of data renders manual analysis costly. In order to investigate the characteristics of the data in space and time, an automatic earthquake location catalog is needed. To this end, we apply standard earthquake signal processing techniques to the continuous data to detect first-arriving P-waves, in combination with a recently developed S-wave detection algorithm. The resulting dataset of arrival time observations is processed using a grid association algorithm to produce initial absolute locations, which are refined using a location inversion method that accounts for 3-D velocity heterogeneities. Precise relative locations are then derived from the refined absolute locations using the HypoDD double-difference algorithm. Moment magnitudes for the events are estimated from multi-taper spectral analysis. A >650% increase in the S:P pick ratio is achieved using the updated S-wave detection algorithm when compared to the currently available catalog for the ANZA network. The increased number of S-wave observations leads to improved earthquake location accuracy and reliability (i.e., fewer false event detections). Various aspects of spatio-temporal seismicity patterns and size distributions are investigated. Updated results will be presented at the meeting.
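
    The "standard earthquake signal processing techniques" mentioned for first-arriving P-waves commonly include an STA/LTA trigger. The sketch below is a minimal, generic STA/LTA detector applied to synthetic data; the window lengths and threshold are placeholders and are not the parameters used for the ANZA catalog.

    ```python
    import numpy as np

    def sta_lta_trigger(trace, fs, sta_win=0.5, lta_win=10.0, threshold=4.0):
        """Classic STA/LTA first-arrival trigger on a single seismogram.

        trace: 1-D array of samples; fs: sampling rate in Hz. Returns the sample
        indices at which the short-term/long-term average ratio exceeds the
        threshold. Window lengths and threshold are illustrative placeholders."""
        nsta, nlta = int(sta_win * fs), int(lta_win * fs)
        energy = np.asarray(trace, float) ** 2
        c = np.concatenate(([0.0], np.cumsum(energy)))
        i = np.arange(nlta, len(trace) + 1)        # window end positions
        sta = (c[i] - c[i - nsta]) / nsta          # short-term average energy
        lta = (c[i] - c[i - nlta]) / nlta          # long-term average energy
        ratio = sta / np.maximum(lta, 1e-12)
        return i[ratio > threshold]

    # synthetic example: background noise with one impulsive "arrival"
    rng = np.random.default_rng(1)
    fs = 100.0
    x = rng.normal(size=6000)
    x[3000:3200] += 10 * rng.normal(size=200)
    picks = sta_lta_trigger(x, fs)
    print(picks[0] / fs if picks.size else "no trigger")   # approx. onset time, s
    ```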

  11. Integrating Remote Sensing Information Into A Distributed Hydrological Model for Improving Water Budget Predictions in Large-scale Basins through Data Assimilation.

    PubMed

    Qin, Changbo; Jia, Yangwen; Su, Z; Zhou, Zuhao; Qiu, Yaqin; Suhui, Shen

    2008-07-29

    This paper investigates whether remote sensing evapotranspiration estimates can be integrated, by means of data assimilation, into a distributed hydrological model to improve the predictions of spatial water distribution over a large river basin with an area of 317,800 km2. A series of available MODIS satellite images over the Haihe River basin in China are used for the year 2005. Evapotranspiration is retrieved from these 1×1 km resolution images using the SEBS (Surface Energy Balance System) algorithm. The physically based distributed model WEP-L (Water and Energy transfer Process in Large river basins) is used to compute the water balance of the Haihe River basin in the same year. Comparison between model-derived and remote-sensing-retrieved basin-averaged evapotranspiration estimates shows a good piecewise linear relationship, but their spatial distributions within the Haihe basin differ: the remote sensing derived evapotranspiration shows variability at finer scales. An extended Kalman filter (EKF) data assimilation algorithm, suitable for non-linear problems, is used. Assimilation results indicate that remote sensing observations have a potentially important role in providing spatial information to the assimilation system for the spatially optimal hydrological parameterization of the model. This is especially important for large basins, such as the Haihe River basin in this study. Combining and integrating the capabilities of and information from model simulation and remote sensing techniques may provide the best spatial and temporal characteristics for hydrological states/fluxes, and would be both appealing and necessary for improving our knowledge of fundamental hydrological processes and for addressing important water resource management problems.
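
    For readers unfamiliar with the assimilation step, the sketch below shows a generic extended Kalman filter update in NumPy with hypothetical state and observation dimensions; it illustrates the EKF equations only and is not the WEP-L/SEBS implementation used in the study.

    ```python
    import numpy as np

    def ekf_update(x_prior, P_prior, z, h, H_jac, R):
        """Generic extended Kalman filter analysis (update) step.

        x_prior : (n,) forecast state (e.g. model water storages)
        P_prior : (n, n) forecast error covariance
        z       : (m,) observation vector (e.g. remotely sensed evapotranspiration)
        h       : observation operator, h(x) -> (m,)
        H_jac   : Jacobian of h evaluated at x_prior, shape (m, n)
        R       : (m, m) observation error covariance
        Dimensions and operators are hypothetical; illustrative sketch only."""
        y = z - h(x_prior)                                   # innovation
        S = H_jac @ P_prior @ H_jac.T + R                    # innovation covariance
        K = P_prior @ H_jac.T @ np.linalg.inv(S)             # Kalman gain
        x_post = x_prior + K @ y                             # updated state
        P_post = (np.eye(len(x_prior)) - K @ H_jac) @ P_prior
        return x_post, P_post

    # toy usage: 3 model states observed through one nonlinear operator
    h = lambda x: np.array([x[0] ** 2 + 0.5 * x[1]])
    H = lambda x: np.array([[2 * x[0], 0.5, 0.0]])
    x0, P0 = np.array([1.0, 2.0, 0.5]), np.eye(3) * 0.1
    x1, P1 = ekf_update(x0, P0, np.array([1.8]), h, H(x0), np.eye(1) * 0.05)
    print(x1)
    ```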

  12. Integrating Remote Sensing Information Into A Distributed Hydrological Model for Improving Water Budget Predictions in Large-scale Basins through Data Assimilation

    PubMed Central

    Qin, Changbo; Jia, Yangwen; Su, Z.(Bob); Zhou, Zuhao; Qiu, Yaqin; Suhui, Shen

    2008-01-01

    This paper investigates whether remote sensing evapotranspiration estimates can be integrated, by means of data assimilation, into a distributed hydrological model to improve the predictions of spatial water distribution over a large river basin with an area of 317,800 km2. A series of available MODIS satellite images over the Haihe River basin in China are used for the year 2005. Evapotranspiration is retrieved from these 1×1 km resolution images using the SEBS (Surface Energy Balance System) algorithm. The physically based distributed model WEP-L (Water and Energy transfer Process in Large river basins) is used to compute the water balance of the Haihe River basin in the same year. Comparison between model-derived and remote-sensing-retrieved basin-averaged evapotranspiration estimates shows a good piecewise linear relationship, but their spatial distributions within the Haihe basin differ: the remote sensing derived evapotranspiration shows variability at finer scales. An extended Kalman filter (EKF) data assimilation algorithm, suitable for non-linear problems, is used. Assimilation results indicate that remote sensing observations have a potentially important role in providing spatial information to the assimilation system for the spatially optimal hydrological parameterization of the model. This is especially important for large basins, such as the Haihe River basin in this study. Combining and integrating the capabilities of and information from model simulation and remote sensing techniques may provide the best spatial and temporal characteristics for hydrological states/fluxes, and would be both appealing and necessary for improving our knowledge of fundamental hydrological processes and for addressing important water resource management problems. PMID:27879946

  13. Large Scale Nanolaminate Deformable Mirror

    SciTech Connect

    Papavasiliou, A; Olivier, S; Barbee, T; Miles, R; Chang, K

    2005-11-30

    This work concerns the development of a technology that uses nanolaminate foils to form lightweight, deformable mirrors that are scalable over a wide range of mirror sizes. While MEMS-based deformable mirrors and spatial light modulators have considerably reduced the cost and increased the capabilities of adaptive optic systems, there has not been a way to utilize the advantages of lithography and batch fabrication to produce large-scale deformable mirrors. This technology is made scalable by using fabrication techniques and lithography that are not limited to the sizes of conventional MEMS devices. Like many MEMS devices, these mirrors use parallel-plate electrostatic actuators. This technology replicates that functionality by suspending a horizontal piece of nanolaminate foil over an electrode on electroplated nickel posts. This actuator is attached, with another post, to a second nanolaminate foil that acts as the mirror surface. Most MEMS devices are produced with integrated-circuit lithography techniques that are capable of very small line widths but are not scalable to large sizes. This technology is very tolerant of lithography errors and can use coarser, printed-circuit-board lithography techniques that can be scaled to very large sizes. These mirrors use small, lithographically defined actuators and thin nanolaminate foils, allowing them to produce deformations over a large area while minimizing weight. This paper will describe a staged program to develop this technology. First-principles models were developed to determine design parameters. Three stages of fabrication will be described, starting with a 3 × 3 device using conventional metal foils and epoxy and progressing to a 10-across, all-metal device with nanolaminate mirror surfaces.
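
    The actuation principle mentioned above follows the standard parallel-plate relations. The sketch below evaluates the first-order electrostatic force and pull-in voltage for a made-up electrode geometry; the numerical values are illustrative assumptions, not the nanolaminate mirror's design parameters.

    ```python
    import numpy as np

    EPS0 = 8.854e-12  # vacuum permittivity, F/m

    def electrostatic_force(voltage, area, gap):
        """First-order parallel-plate force: F = eps0 * A * V^2 / (2 * g^2)."""
        return EPS0 * area * voltage ** 2 / (2.0 * gap ** 2)

    def pull_in_voltage(spring_k, area, gap0):
        """Pull-in voltage of a spring-suspended parallel-plate actuator:
        V_pi = sqrt(8 * k * g0^3 / (27 * eps0 * A)). Textbook relation; the
        geometry used below is hypothetical."""
        return np.sqrt(8.0 * spring_k * gap0 ** 3 / (27.0 * EPS0 * area))

    area = (200e-6) ** 2   # 200 um x 200 um electrode (assumed)
    gap0 = 5e-6            # 5 um initial gap (assumed)
    k = 20.0               # effective spring constant, N/m (assumed)
    print(electrostatic_force(30.0, area, gap0), pull_in_voltage(k, area, gap0))
    ```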

  14. Large-Scale Information Systems

    SciTech Connect

    D. M. Nicol; H. R. Ammerlahn; M. E. Goldsby; M. M. Johnson; D. E. Rhodes; A. S. Yoshimura

    2000-12-01

    Large enterprises are ever more dependent on their Large-Scale Information Systems (LSIS), computer systems that are distinguished architecturally by distributed components--data sources, networks, computing engines, simulations, human-in-the-loop control, and remote access stations. These systems provide such capabilities as workflow, data fusion, and distributed database access. The Nuclear Weapons Complex (NWC) contains many examples of LSIS components, a fact that motivates this research. However, most LSIS in use grew up from collections of separate subsystems that were not designed to be components of an integrated system. For this reason, they are often difficult to analyze and control. The problem is made more difficult by the size of a typical system, its diversity of information sources, and the institutional complexities associated with its geographic distribution across the enterprise. Moreover, there is no integrated approach for analyzing or managing such systems. Indeed, integrated development of LSIS is an active area of academic research. This work developed such an approach by simulating the various components of the LSIS and allowing the simulated components to interact with real LSIS subsystems. This research demonstrated two benefits. First, applying it to a particular LSIS provided a thorough understanding of the interfaces between the system's components. Second, it demonstrated how more rapid and detailed answers could be obtained to questions significant to the enterprise by interacting with the relevant LSIS subsystems through simulated components designed with those questions in mind. In a final, added phase of the project, investigations were made into extending this research to wireless communication networks in support of telemetry applications.

  15. The utilization of gum tragacanth to improve the growth of Rhodotorula aurantiaca and the production of gamma-decalactone in large scale.

    PubMed

    Alchihab, Mohamed; Destain, Jacqueline; Aguedo, Mario; Wathelet, Jean-Paul; Thonart, Philippe

    2010-09-01

    The production of gamma-decalactone and 4-hydroxydecanoic acid by the psychrophilic yeast R. aurantiaca was studied. The effect of both compounds on the growth of R. aurantiaca was also investigated, and our results show that gamma-decalactone must be one of the limiting factors for its own production. The addition of gum tragacanth to the medium at concentrations of 3 and 4 g/l appears to be an adequate strategy to enhance gamma-decalactone production and to reduce its toxicity towards the cell. The production of gamma-decalactone and 4-hydroxydecanoic acid was significantly higher in a 20-l bioreactor than in a 100-l bioreactor. Using 20 g/l of castor oil, 6.5 and 4.5 g/l of gamma-decalactone were extracted after acidification at pH 2.0 and distillation at 100 degrees C for 45 min in the 20- and 100-l bioreactors, respectively. We propose a process at industrial scale that uses a psychrophilic yeast to naturally produce gamma-decalactone from castor oil, which also acts as a detoxifying agent; moreover, the process was improved by adding a natural gum.

  16. Improving AFLP analysis of large-scale patterns of genetic variation--a case study with the Central African lianas Haumania spp (Marantaceae) showing interspecific gene flow.

    PubMed

    Ley, A C; Hardy, O J

    2013-04-01

    AFLP markers are often used to study patterns of population genetic variation and gene flow because they offer good coverage of the nuclear genome, but the reliability of AFLP scoring is critical. To assess interspecific gene flow in two African rainforest liana species (Haumania danckelmaniana, H. liebrechtsiana), where previous evidence of chloroplast captures questioned the importance of hybridization and species boundaries, we developed new AFLP markers and a novel approach to select reliable bands based on their degree of reproducibility. The latter relies on the estimation of the broad-sense heritability of AFLP phenotypes, an improvement over classical scoring error rates, and showed that the polymorphism of most AFLP bands is affected by a substantial nongenetic component. Therefore, using a quantitative genetics framework, we also modified an existing estimator of the pairwise kinship coefficient between individuals, correcting for the limited heritability of the markers. Bayesian clustering confirms the recognition of the two Haumania species. Nevertheless, the decay of relatedness between individuals of distinct species with geographic distance demonstrates that hybridization affects the nuclear genome. In conclusion, although we showed that AFLP markers can be substantially affected by nongenetic factors, their analysis using the new methods developed here considerably advanced our understanding of the pattern of gene flow in our model species.
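
    The reproducibility-based band selection rests on estimating how much of a band's variation is genetic rather than technical. The sketch below computes a one-way intraclass correlation from replicated 0/1 band scores as a stand-in for that broad-sense heritability; it is an illustration of the idea under simplified assumptions, not the estimator derived in the paper.

    ```python
    import numpy as np

    def band_repeatability(scores):
        """One-way random-effects intraclass correlation for a single AFLP band.

        scores: (n_individuals, n_replicates) array of 0/1 band calls. The ICC of
        replicated scores is used here as a proxy for broad-sense heritability of
        the band phenotype; illustrative only."""
        n, k = scores.shape
        grand = scores.mean()
        ind_means = scores.mean(axis=1)
        msb = k * np.sum((ind_means - grand) ** 2) / (n - 1)               # between individuals
        msw = np.sum((scores - ind_means[:, None]) ** 2) / (n * (k - 1))   # within (replicates)
        return (msb - msw) / (msb + (k - 1) * msw)

    # two replicate scorings of 50 individuals with ~10% scoring error (simulated)
    rng = np.random.default_rng(4)
    true_presence = rng.integers(0, 2, size=50)
    calls = np.array([np.where(rng.random(50) < 0.9, true_presence, 1 - true_presence)
                      for _ in range(2)]).T
    print(round(band_repeatability(calls), 2))
    ```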

  17. Large-scale hydrological modeling for calculating water stress indices: implications of improved spatiotemporal resolution, surface-groundwater differentiation, and uncertainty characterization.

    PubMed

    Scherer, Laura; Venkatesh, Aranya; Karuppiah, Ramkumar; Pfister, Stephan

    2015-04-21

    Physical water scarcities can be described by water stress indices. These are often determined at an annual scale and a watershed level; however, such scales mask seasonal fluctuations and spatial heterogeneity within a watershed. In order to account for this level of detail, first and foremost, water availability estimates must be improved and refined. State-of-the-art global hydrological models such as WaterGAP and UNH/GRDC have previously been unable to reliably reflect water availability at the subbasin scale. In this study, the Soil and Water Assessment Tool (SWAT) was tested as an alternative to global models, using the case study of the Mississippi watershed. While SWAT clearly outperformed the global models at the scale of a large watershed, it was judged to be unsuitable for global scale simulations due to the high calibration efforts required. The results obtained in this study show that global assessments miss out on key aspects related to upstream/downstream relations and monthly fluctuations, which are important both for the characterization of water scarcity in the Mississippi watershed and for water footprints. Especially in arid regions, where scarcity is high, these models provide unsatisfying results.

  18. Internationalization Measures in Large Scale Research Projects

    NASA Astrophysics Data System (ADS)

    Soeding, Emanuel; Smith, Nancy

    2017-04-01

    Large-scale research projects (LSRP) often serve as flagships used by universities or research institutions to demonstrate their performance and capability to stakeholders and other interested parties. As global competition among universities for the recruitment of the brightest brains has increased, effective internationalization measures have become hot topics for universities and LSRP alike. Nevertheless, most projects and universities are challenged by little experience of how to conduct these measures and make internationalization a cost-efficient and useful activity. Furthermore, such undertakings continually have to be justified to the project PIs as important, valuable tools to improve the capacity of the project and the research location. There is a variety of measures suited to supporting universities in international recruitment. These include, for example, institutional partnerships, research marketing, a welcome culture, support for science mobility, and an effective alumni strategy. These activities, although often conducted by different university entities, are interlocked and can be very powerful measures if interfaced in an effective way. On this poster we display a number of internationalization measures for various target groups and identify interfaces between project management, university administration, researchers, and international partners, who need to work together, exchange information, and improve processes in order to recruit, support, and keep the brightest heads for your project.

  19. Bayesian Proteoform Modeling Improves Protein Quantification of Global Proteomic Measurements

    SciTech Connect

    Webb-Robertson, Bobbie-Jo M.; Matzke, Melissa M.; Datta, Susmita; Payne, Samuel H.; Kang, Jiyun; Bramer, Lisa M.; Nicora, Carrie D.; Shukla, Anil K.; Metz, Thomas O.; Rodland, Karin D.; Smith, Richard D.; Tardiff, Mark F.; McDermott, Jason E.; Pounds, Joel G.; Waters, Katrina M.

    2014-12-01

    As the capability of mass spectrometry-based proteomics has matured, tens of thousands of peptides can be measured simultaneously, which has the benefit of offering a systems view of protein expression. However, a major challenge is that, with the increase in throughput, protein quantification from the natively measured peptides has become a computational task. A limitation of existing computationally driven protein quantification methods is that most ignore protein variation, such as alternative splicing of the RNA transcript and post-translational modifications or other possible proteoforms, which will affect a significant fraction of the proteome. The consequence of this assumption is that statistical inference at the protein level, and consequently downstream analyses such as network and pathway modeling, have only limited power for biomarker discovery. Here, we describe a Bayesian model (BP-Quant) that uses statistically derived peptide signatures to identify peptides that are outside the dominant pattern, or the existence of multiple over-expressed patterns, to improve relative protein abundance estimates. It is a research-driven approach that utilizes the objectives of the experiment, defined in the context of a standard statistical hypothesis, to identify a set of peptides exhibiting similar statistical behavior relating to a protein. This approach infers that changes in relative protein abundance can be used as a surrogate for changes in function, without necessarily taking into account the effect of differential post-translational modifications, processing, or splicing in altering protein function. We verify the approach using a dilution study of mouse plasma samples and demonstrate that BP-Quant achieves accuracy similar to current state-of-the-art methods at proteoform identification, with significantly better specificity. BP-Quant is available as MATLAB and R packages at https://github.com/PNNL-Comp-Mass-Spec/BP-Quant.
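
    To make the notion of "peptide signatures" concrete, the sketch below groups the peptides of one protein by the sign pattern of their per-comparison test statistics and rolls up only the dominant group. This hard-threshold caricature is only loosely inspired by the idea behind BP-Quant, which instead fits a Bayesian model (see the repository linked above); the cutoff and data are invented.

    ```python
    import numpy as np
    from collections import Counter

    def dominant_signature_rollup(peptide_stats, cutoff=2.0):
        """Toy signature-based roll-up for one protein.

        peptide_stats: (n_peptides, n_comparisons) test statistics. Each peptide
        gets a sign pattern (-1/0/+1 per comparison); peptides sharing the most
        common pattern are kept and averaged. Caricature only, not BP-Quant."""
        signs = np.sign(peptide_stats) * (np.abs(peptide_stats) > cutoff)
        patterns = [tuple(row) for row in signs.astype(int)]
        dominant, _ = Counter(patterns).most_common(1)[0]
        keep = [i for i, p in enumerate(patterns) if p == dominant]
        return peptide_stats[keep].mean(axis=0), keep

    stats = np.array([[3.1, -0.2], [2.8, 0.1], [-2.5, 4.0], [3.3, -0.4]])
    protein_estimate, used = dominant_signature_rollup(stats)
    print(protein_estimate, used)   # third peptide (a putative proteoform) excluded
    ```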

  20. Engineering management of large scale systems

    NASA Technical Reports Server (NTRS)

    Sanders, Serita; Gill, Tepper L.; Paul, Arthur S.

    1989-01-01

    The organization of high-technology and engineering problem solving has given rise to an emerging concept. Reasoning principles for integrating traditional engineering problem solving with systems theory, management science, behavioral decision theory, and planning and design approaches can be incorporated into a methodological approach to solving problems with a long-range perspective. Long-range planning has great potential to improve productivity by using a systematic and organized approach. Thus, efficiency and cost effectiveness are the driving forces in promoting the organization of engineering problems. Aspects of systems engineering that provide an understanding of the management of large-scale systems are broadly covered here. Due to the focus and application of the research, other significant factors (e.g., human behavior, decision making, etc.) are not emphasized but are considered.

  1. Large-scale parametric survival analysis.

    PubMed

    Mittal, Sushil; Madigan, David; Cheng, Jerry Q; Burd, Randall S

    2013-10-15

    Survival analysis has been a topic of active statistical research in the past few decades, with applications spread across several areas. Traditional applications usually consider data with only small numbers of predictors and a few hundred or thousand observations. Recent advances in data acquisition techniques and computation power have led to considerable interest in analyzing very high-dimensional data, where the number of predictor variables and the number of observations range between 10^4 and 10^6. In this paper, we present a tool for performing large-scale regularized parametric survival analysis using a variant of the cyclic coordinate descent method. Through our experiments on two real data sets, we show that applying regularized models to high-dimensional data avoids overfitting and can provide improved predictive performance and calibration over corresponding low-dimensional models.
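
    A minimal sketch of the optimization strategy named in the abstract is given below: one Newton step per coordinate per pass for an L2-penalized exponential (parametric) survival model fitted to simulated right-censored data. The model family, penalty, and data are assumptions chosen for brevity; the authors' tool targets far larger problems and other model choices.

    ```python
    import numpy as np

    def ccd_exponential_survival(X, time, event, lam=1.0, n_passes=50):
        """Cyclic coordinate descent for an L2-penalized exponential survival model.

        The hazard rate for subject i is exp(X[i] @ beta); `event` is 1 for an
        observed failure and 0 for censoring. One Newton step per coordinate per
        pass. Minimal illustrative sketch under these assumptions."""
        n, p = X.shape
        beta = np.zeros(p)
        eta = X @ beta
        for _ in range(n_passes):
            for j in range(p):
                rate = np.exp(eta)
                grad = np.sum((event - time * rate) * X[:, j]) - lam * beta[j]
                hess = -np.sum(time * rate * X[:, j] ** 2) - lam
                step = grad / hess
                beta[j] -= step
                eta -= step * X[:, j]     # keep the linear predictor in sync
        return beta

    # simulated data: 500 subjects, 5 predictors, exponential failure times
    rng = np.random.default_rng(2)
    X = rng.normal(size=(500, 5))
    true_beta = np.array([0.8, -0.5, 0.0, 0.3, 0.0])
    t = rng.exponential(1.0 / np.exp(X @ true_beta))
    c = rng.exponential(2.0, size=500)                     # censoring times
    time, event = np.minimum(t, c), (t <= c).astype(float)
    print(ccd_exponential_survival(X, time, event, lam=0.5).round(2))
    ```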

  2. Large-Scale Parametric Survival Analysis†

    PubMed Central

    Mittal, Sushil; Madigan, David; Cheng, Jerry; Burd, Randall S.

    2013-01-01

    Survival analysis has been a topic of active statistical research in the past few decades, with applications spread across several areas. Traditional applications usually consider data with only small numbers of predictors and a few hundred or thousand observations. Recent advances in data acquisition techniques and computation power have led to considerable interest in analyzing very high-dimensional data, where the number of predictor variables and the number of observations range between 10^4 and 10^6. In this paper, we present a tool for performing large-scale regularized parametric survival analysis using a variant of the cyclic coordinate descent method. Through our experiments on two real data sets, we show that applying regularized models to high-dimensional data avoids overfitting and can provide improved predictive performance and calibration over corresponding low-dimensional models. PMID:23625862

  3. Improved 2D Nano-LC/MS for Proteomics Applications: A Comparative Analysis Using Yeast Proteome

    PubMed Central

    Nägele, E.; Vollmer, M.; Hörth, P.

    2004-01-01

    The most commonly used method for protein identification with two-dimensional (2D) online liquid chromatography-mass spectrometry (LC/MS) involves the elution of digest peptides from a strong cation exchange column by an injected salt step gradient of increasing salt concentration followed by reversed phase separation. However, in this approach ion exchange chromatography does not perform to its fullest extent, primarily because the injected volume of salt solution is not optimized to the SCX column. To improve the performance of strong cation exchange chromatography, we developed a new method for 2D online nano-LC/MS that replaces the injected salt step gradient with an optimized semicontinuous pumped salt gradient. The viability of this method is demonstrated in the results of a comparative analysis of a complex tryptic digest of the yeast proteome using the injected salt solution method and the semicontinuous pump salt method. The semicontinuous pump salt method compares favorably with the commonly used injection method and also with an offline 2D-LC method. PMID:15190086

  4. Rice proteomics: a model system for crop improvement and food security.

    PubMed

    Kim, Sun Tae; Kim, Sang Gon; Agrawal, Ganesh Kumar; Kikuchi, Shoshi; Rakwal, Randeep

    2014-03-01

    Rice proteomics has progressed at a tremendous pace since the year 2000, resulting in the establishment and understanding of the proteomes of tissues, organs, and organelles under both normal and abnormal (adverse) environmental conditions. Established proteomes have also helped in re-annotating the rice genome and revealing new roles of previously known proteins. This progress has made rice proteomics a cornerstone and stepping stone, at least for the cereal crops, and rice remains a model system for crop proteomics owing to its exemplary body of research. Proteomics-based discoveries in rice are likely to be translated into improving crop plants, and vice versa, against ever-changing environmental factors. This review comprehensively covers rice proteomics studies from August 2010 to July 2013, with a major focus on rice responses to diverse abiotic stresses (drought, salt, oxidative, temperature, nutrient, hormone, metal ion, UV radiation, and ozone) as well as various biotic stresses, especially rice-pathogen interactions. The differentially regulated proteins in response to various abiotic stresses in different tissues are also summarized, indicating key metabolic and regulatory pathways. We envision a significant role for rice proteomics in addressing the global ground-level problem of food security, to meet the demands of a human population expected to reach six to nine billion by 2040.

  5. Tools for Interpreting Large-Scale Protein Profiling in Microbiology

    PubMed Central

    Hendrickson, E. L.; Lamont, R. J.; Hackett, M.

    2009-01-01

    Quantitative proteome analysis of microbial systems generates large datasets that can be difficult and time consuming to interpret. Fortunately, many of the data display and gene clustering tools developed to analyze large transcriptome microarray datasets are also applicable to proteomes. Plots of abundance ratio versus total signal or spectral counts can highlight regions of random error and putative change. Displaying data in the physical order of the genes in the genome sequence can highlight potential operons. At a basic level of transcriptional organization, identifying operons can give insights into regulatory pathways as well as provide corroborating evidence for proteomic results. Classification and clustering algorithms can group proteins together by their abundance changes under different conditions, helping to identify interesting expression patterns, but often work poorly with noisy data like that typically generated in a large-scale proteome analysis. Biological interpretation can be aided more directly by overlaying differential protein abundance data onto metabolic pathways, indicating pathways with altered activities. More broadly, ontology tools detect altered levels of protein abundance for different metabolic pathways, molecular functions and cellular localizations. In practice, pathway analysis and ontology are limited by the level of database curation associated with the organism of interest. PMID:18946006
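
    The ratio-versus-total-signal display described above can be produced with a few lines of plotting code. The sketch below uses simulated spectral counts with hypothetical condition labels; it only illustrates the kind of plot meant, not any particular published dataset.

    ```python
    import numpy as np
    import matplotlib.pyplot as plt

    # Simulated spectral counts for two conditions (hypothetical data): most
    # proteins unchanged, about 10% doubled in condition A.
    rng = np.random.default_rng(3)
    base = rng.gamma(shape=1.5, scale=15.0, size=400)          # per-protein level
    fold = rng.choice([1.0, 2.0], size=400, p=[0.9, 0.1])      # a few "changers"
    counts_a = rng.poisson(base * fold)
    counts_b = rng.poisson(base)

    total = counts_a + counts_b                                # overall signal
    log_ratio = np.log2((counts_a + 1) / (counts_b + 1))       # +1 avoids log(0)

    plt.scatter(np.log10(total + 1), log_ratio, s=8, alpha=0.5)
    plt.axhline(0.0, color="grey", linewidth=0.8)
    plt.xlabel("log10(total spectral counts)")
    plt.ylabel("log2 abundance ratio (A / B)")
    plt.title("Wide scatter at low counts marks the region of random error")
    plt.show()
    ```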

  6. Supporting large-scale computational science

    SciTech Connect

    Musick, R., LLNL

    1998-02-19

    Business needs have driven the development of commercial database systems since their inception. As a result, there has been a strong focus on supporting many users, minimizing the potential corruption or loss of data, and maximizing performance metrics like transactions per second, or TPC-C and TPC-D results. It turns out that these optimizations have little to do with the needs of the scientific community, and in particular have little impact on improving the management and use of large-scale high-dimensional data. At the same time, there is an unanswered need in the scientific community for many of the benefits offered by a robust DBMS. For example, tying an ad-hoc query language such as SQL together with a visualization toolkit would be a powerful enhancement to current capabilities. Unfortunately, there has been little emphasis or discussion in the VLDB community on this mismatch over the last decade. The goal of the paper is to identify the specific issues that need to be resolved before large-scale scientific applications can make use of DBMS products. This topic is addressed in the context of an evaluation of commercial DBMS technology applied to the exploration of data generated by the Department of Energy's Accelerated Strategic Computing Initiative (ASCI). The paper describes the data being generated for ASCI as well as current capabilities for interacting with and exploring this data. The attraction of applying standard DBMS technology to this domain is discussed, as well as the technical and business issues that currently make this an infeasible solution.

  7. Large-Scale Reform Comes of Age

    ERIC Educational Resources Information Center

    Fullan, Michael

    2009-01-01

    This article reviews the history of large-scale education reform and makes the case that large-scale or whole-system reform policies and strategies are becoming increasingly evident. The review briefly addresses the pre-1997 period, concluding that while the pressure for reform was mounting, there were very few examples of deliberate or…

  8. Automating large-scale reactor systems

    SciTech Connect

    Kisner, R.A.

    1985-01-01

    This paper conveys a philosophy for developing automated large-scale control systems that behave in an integrated, intelligent, flexible manner. Methods for operating large-scale systems under varying degrees of equipment degradation are discussed, and a design approach that separates the effort into phases is suggested. 5 refs., 1 fig.

  9. Large Scale Metal Additive Techniques Review

    SciTech Connect

    Nycz, Andrzej; Adediran, Adeola I; Noakes, Mark W; Love, Lonnie J

    2016-01-01

    In recent years additive manufacturing has made long strides toward becoming a mainstream production technology. Particularly strong progress has been made in large-scale polymer deposition. However, large-scale metal additive manufacturing has not yet reached parity with large-scale polymer deposition. This paper is a review of metal additive techniques in the context of building large structures. Current commercial devices are capable of printing metal parts on the order of several cubic feet, compared to hundreds of cubic feet on the polymer side. In order to follow the polymer progress path, several factors are considered: potential to scale, economy, environmental friendliness, material properties, feedstock availability, robustness of the process, quality and accuracy, potential for defects, and post-processing, as well as potential applications. This paper surveys the current state of the art of large-scale metal additive technology, with a focus on expanding the geometric limits.

  10. Large-scale regions of antimatter

    SciTech Connect

    Grobov, A. V. Rubin, S. G.

    2015-07-15

    A modified mechanism for the formation of large-scale antimatter regions is proposed. Antimatter appears owing to fluctuations of a complex scalar field that carries a baryon charge in the inflation era.

  11. The Large -scale Distribution of Galaxies

    NASA Astrophysics Data System (ADS)

    Flin, Piotr

    A review of the large-scale structure of the Universe is given. A connection is made with the titanic work of Johannes Kepler in many areas of astronomy and cosmology. Special attention is paid to the spatial distribution of galaxies, voids, and walls (the cellular structure of the Universe). Finally, the author concludes that the large-scale structure of the Universe can be observed on a much greater scale than was thought twenty years ago.

  12. Advances in plant proteomics toward improvement of crop productivity and stress resistance

    PubMed Central

    Hu, Junjie; Rampitsch, Christof; Bykova, Natalia V.

    2015-01-01

    Abiotic and biotic stresses constrain plant growth and development, negatively impacting crop production. Plants have developed stress-specific adaptations as well as simultaneous responses to a combination of various abiotic stresses with pathogen infection. The efficiency of stress-induced adaptive responses is dependent on activation of molecular signaling pathways and intracellular networks by modulating expression, or abundance, and/or post-translational modification (PTM) of proteins primarily associated with defense mechanisms. In this review, we summarize and evaluate the contribution of proteomic studies to our understanding of stress response mechanisms in different plant organs and tissues. Advanced quantitative proteomic techniques have improved the coverage of total proteomes and sub-proteomes from small amounts of starting material, and characterized PTMs as well as protein–protein interactions at the cellular level, providing detailed information on organ- and tissue-specific regulatory mechanisms responding to a variety of individual stresses or stress combinations during the plant life cycle. In particular, we address the tissue-specific signaling networks localized to various organelles that participate in stress-related physiological plasticity and adaptive mechanisms, such as photosynthetic efficiency, symbiotic nitrogen fixation, plant growth, tolerance, and common responses to environmental stresses. We also provide an update on the progress of proteomics with major crop species and discuss the current challenges and limitations inherent to proteomics techniques and data interpretation for non-model organisms. Future directions in proteomics research toward crop improvement are further discussed. PMID:25926838

  13. Automated workflow for large-scale selected reaction monitoring experiments.

    PubMed

    Malmström, Lars; Malmström, Johan; Selevsek, Nathalie; Rosenberger, George; Aebersold, Ruedi

    2012-03-02

    Targeted proteomics allows researchers to study proteins of interest without being drowned in data from other, less interesting proteins or from redundant or uninformative peptides. While the technique is mostly used for smaller, focused studies, there are several reasons to conduct larger targeted experiments. Automated, highly robust software becomes more important in such experiments. In addition, larger experiments are carried out over longer periods of time, requiring strategies to handle the sometimes large shift in retention time often observed. We present a complete proof-of-principle software stack that automates most aspects of selected reaction monitoring workflows, a targeted proteomics technology. The software allows experiments to be easily designed and carried out. The steps automated are the generation of assays, generation of mass spectrometry driver files and methods files, and the import and analysis of the data. All data are normalized to a common retention time scale, the data are then scored using a novel score model, and the error is subsequently estimated. We also show that selected reaction monitoring can be used for label-free quantification. All data generated are stored in a relational database, and the growing resource further facilitates the design of new experiments. We apply the technology to a large-scale experiment studying how Streptococcus pyogenes remodels its proteome under stimulation of human plasma.
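
    The retention-time normalization step can be pictured as fitting a mapping from each run's observed retention times to a common scale using peptides shared across runs. The sketch below fits a simple linear mapping by least squares; the anchor peptides and values are invented, and the actual software may well use a more elaborate (e.g. nonlinear) alignment.

    ```python
    import numpy as np

    def fit_rt_mapping(observed_rt, reference_rt):
        """Least-squares linear mapping from a run's retention times to a common scale.

        observed_rt / reference_rt: retention times (minutes) of the same anchor
        peptides in this run and on the common scale. Values below are invented;
        real workflows use spiked-in or endogenous reference peptides."""
        slope, intercept = np.polyfit(observed_rt, reference_rt, deg=1)
        return lambda rt: slope * np.asarray(rt) + intercept

    # anchor peptides seen both in this run and on the common scale
    run_rt = np.array([12.4, 25.1, 38.7, 52.0, 66.3])
    common_rt = np.array([10.0, 22.5, 35.9, 49.1, 63.8])
    to_common = fit_rt_mapping(run_rt, common_rt)
    print(to_common([30.0, 45.0]))   # project new measurements onto the common scale
    ```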

  14. Food appropriation through large scale land acquisitions

    NASA Astrophysics Data System (ADS)

    Rulli, Maria Cristina; D'Odorico, Paolo

    2014-05-01

    The increasing demand for agricultural products and the uncertainty of international food markets have recently drawn the attention of governments and agribusiness firms toward investments in productive agricultural land, mostly in the developing world. The targeted countries are typically located in regions that have remained only marginally utilized because of a lack of modern technology. It is expected that, in the long run, large-scale land acquisitions (LSLAs) for commercial farming will bring the technology required to close the existing crop yield gaps. While the extent of the acquired land and the associated appropriation of freshwater resources have been investigated in detail, the amount of food this land can produce and the number of people it could feed still need to be quantified. Here we use a unique dataset of land deals to provide a global quantitative assessment of the rates of crop and food appropriation potentially associated with LSLAs. We show that up to 300-550 million people could be fed by crops grown on the acquired land, should these investments in agriculture improve crop production and close the yield gap. In contrast, about 190-370 million people could be supported by this land without closing the yield gap. These numbers raise some concern because the food produced on the acquired land is typically exported to other regions, while the target countries exhibit high levels of malnourishment. Conversely, if used for domestic consumption, the crops harvested on the acquired land could ensure food security for the local populations.

  15. Improved protocol for chromatofocusing on the ProteomeLab PF2D.

    PubMed

    Barré, Olivier; Solioz, Marc

    2006-10-01

    Beckman-Coulter has recently introduced the ProteomeLab PF2D for 2-D liquid separation of protein samples. The system features separation in the first dimension by chromatofocusing, followed by RP chromatography in the second dimension, allowing the analysis of complex proteomics samples. When used with the standard protocol, reproducibility and column lifetimes are limited, making the use of the instrument very costly. Here we present an improved protocol for chromatofocusing, which extends column life at least fivefold.

  16. Improved recovery and identification of membrane proteins from rat hepatic cells using a centrifugal proteomic reactor.

    PubMed

    Zhou, Hu; Wang, Fangjun; Wang, Yuwei; Ning, Zhibin; Hou, Weimin; Wright, Theodore G; Sundaram, Meenakshi; Zhong, Shumei; Yao, Zemin; Figeys, Daniel

    2011-10-01

    Despite their importance in many biological processes, membrane proteins are underrepresented in proteomic analysis because of their poor solubility (hydrophobicity) and often low abundance. We describe a novel approach for the identification of plasma membrane proteins and intracellular microsomal proteins that combines membrane fractionation, a centrifugal proteomic reactor for streamlined protein extraction, protein digestion and fractionation by centrifugation, and high performance liquid chromatography-electrospray ionization-tandem MS. The performance of this approach was illustrated for the study of the proteome of ER and Golgi microsomal membranes in rat hepatic cells. The centrifugal proteomic reactor identified 945 plasma membrane proteins and 955 microsomal membrane proteins, of which 63 and 47% were predicted as bona fide membrane proteins, respectively. Among these proteins, >800 proteins were undetectable by the conventional in-gel digestion approach. The majority of the membrane proteins only identified by the centrifugal proteomic reactor were proteins with ≥ 2 transmembrane segments or proteins with high molecular mass (e.g. >150 kDa) and hydrophobicity. The improved proteomic reactor allowed the detection of a group of endocytic and/or signaling receptor proteins on the plasma membrane, as well as apolipoproteins and glycerolipid synthesis enzymes that play a role in the assembly and secretion of apolipoprotein B100-containing very low density lipoproteins. Thus, the centrifugal proteomic reactor offers a new analytical tool for structure and function studies of membrane proteins involved in lipid and lipoprotein metabolism.

  17. Improved Recovery and Identification of Membrane Proteins from Rat Hepatic Cells using a Centrifugal Proteomic Reactor*

    PubMed Central

    Zhou, Hu; Wang, Fangjun; Wang, Yuwei; Ning, Zhibin; Hou, Weimin; Wright, Theodore G.; Sundaram, Meenakshi; Zhong, Shumei; Yao, Zemin; Figeys, Daniel

    2011-01-01

    Despite their importance in many biological processes, membrane proteins are underrepresented in proteomic analysis because of their poor solubility (hydrophobicity) and often low abundance. We describe a novel approach for the identification of plasma membrane proteins and intracellular microsomal proteins that combines membrane fractionation, a centrifugal proteomic reactor for streamlined protein extraction, protein digestion and fractionation by centrifugation, and high performance liquid chromatography-electrospray ionization-tandem MS. The performance of this approach was illustrated for the study of the proteome of ER and Golgi microsomal membranes in rat hepatic cells. The centrifugal proteomic reactor identified 945 plasma membrane proteins and 955 microsomal membrane proteins, of which 63 and 47% were predicted as bona fide membrane proteins, respectively. Among these proteins, >800 proteins were undetectable by the conventional in-gel digestion approach. The majority of the membrane proteins only identified by the centrifugal proteomic reactor were proteins with ≥2 transmembrane segments or proteins with high molecular mass (e.g. >150 kDa) and hydrophobicity. The improved proteomic reactor allowed the detection of a group of endocytic and/or signaling receptor proteins on the plasma membrane, as well as apolipoproteins and glycerolipid synthesis enzymes that play a role in the assembly and secretion of apolipoprotein B100-containing very low density lipoproteins. Thus, the centrifugal proteomic reactor offers a new analytical tool for structure and function studies of membrane proteins involved in lipid and lipoprotein metabolism. PMID:21749988

  18. Plant plasma membrane proteomics for improving cold tolerance.

    PubMed

    Takahashi, Daisuke; Li, Bin; Nakayama, Takato; Kawamura, Yukio; Uemura, Matsuo

    2013-01-01

    Plants are constantly exposed to various stresses. We have focused on freezing stress, which causes serious problems for agricultural management. When plants suffer freeze-induced damage, the plasma membrane is thought to be the primary site of injury because of its central role in regulating various cellular processes. Cold-tolerant species, however, adapt to such freezing conditions by modifying cellular components and functions (cold acclimation). One of the most important adaptation mechanisms to freezing is the alteration of plasma membrane composition and function. Advanced proteomic technologies have succeeded in identifying many candidates that may play roles in the adaptation of the plasma membrane to freezing stress. Proteomics results suggest that adaptations of plasma membrane functions to low temperature are associated with alterations in protein composition during cold acclimation. The functional roles of some of the proteins identified by these proteomic approaches in freezing tolerance have been further verified. Thus, the accumulation of proteomic data on the plasma membrane is important for molecular breeding efforts to increase cold tolerance in crops.

  19. Large-scale cortical networks and cognition.

    PubMed

    Bressler, S L

    1995-03-01

    The well-known parcellation of the mammalian cerebral cortex into a large number of functionally distinct cytoarchitectonic areas presents a problem for understanding the complex cortical integrative functions that underlie cognition. How do cortical areas having unique individual functional properties cooperate to accomplish these complex operations? Do neurons distributed throughout the cerebral cortex act together in large-scale functional assemblages? This review examines the substantial body of evidence supporting the view that complex integrative functions are carried out by large-scale networks of cortical areas. Pathway tracing studies in non-human primates have revealed widely distributed networks of interconnected cortical areas, providing an anatomical substrate for large-scale parallel processing of information in the cerebral cortex. Functional coactivation of multiple cortical areas has been demonstrated by neurophysiological studies in non-human primates and several different cognitive functions have been shown to depend on multiple distributed areas by human neuropsychological studies. Electrophysiological studies on interareal synchronization have provided evidence that active neurons in different cortical areas may become not only coactive, but also functionally interdependent. The computational advantages of synchronization between cortical areas in large-scale networks have been elucidated by studies using artificial neural network models. Recent observations of time-varying multi-areal cortical synchronization suggest that the functional topology of a large-scale cortical network is dynamically reorganized during visuomotor behavior.

  20. Characterization of quinoa seed proteome combining different protein precipitation techniques: Improvement of knowledge of nonmodel plant proteomics.

    PubMed

    Capriotti, Anna Laura; Cavaliere, Chiara; Piovesana, Susy; Stampachiacchiere, Serena; Ventura, Salvatore; Zenezini Chiozzi, Riccardo; Laganà, Aldo

    2015-03-01

    A shotgun proteomics approach was used to characterize the quinoa seed proteome. To obtain comprehensive proteomic data from quinoa seeds, three different precipitation procedures were employed: MeOH/CHCl3/double-distilled H2O, and acetone either alone or with trichloroacetic acid; the isolated proteins were then digested in solution and the resulting peptides were analyzed by nano-liquid chromatography coupled to tandem mass spectrometry. However, since quinoa is a nonmodel plant species, only a few of its protein sequences are included in the most widely used protein sequence databases. To improve data reliability, a UniProt subdatabase containing only proteins of the order Caryophyllales was used. A total of 352 proteins were identified and evaluated from both a qualitative and a quantitative point of view. This combined approach is certainly useful for increasing the final number of identifications, but no particular class of proteins was preferentially extracted and identified despite the different chemistries of the precipitation protocols. However, the relative quantitative analysis, based on spectral counts, showed that the trichloroacetic acid/acetone protocol was the best of the three procedures for sample handling and quantitative protein extraction. This study could pave the way to further high-throughput studies on Chenopodium quinoa.
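
    The relative quantitation mentioned above rests on spectral counting. As a hedged illustration (not the authors' exact pipeline), the sketch below computes a normalized spectral abundance factor (NSAF) per protein, NSAF_i = (SpC_i/L_i) / sum_j(SpC_j/L_j), so that counts from protocols yielding different total identifications remain comparable; the protein names, lengths, and counts are invented toy data.

```python
# Hedged sketch of spectral-count-based relative quantitation (NSAF); protein names,
# lengths, and counts are invented, and this is not the authors' exact pipeline.
import pandas as pd

def nsaf(df: pd.DataFrame, count_col: str, length_col: str) -> pd.Series:
    """Normalized Spectral Abundance Factor: (SpC/L) / sum_j(SpC_j/L_j)."""
    saf = df[count_col] / df[length_col]
    return saf / saf.sum()

proteins = pd.DataFrame({
    "protein": ["seed storage globulin", "2S albumin", "starch synthase"],
    "length_aa": [480, 150, 610],                    # protein lengths in residues
    "spc_tca_acetone": [120, 45, 18],                # spectral counts, protocol 1
    "spc_chcl3_meoh": [80, 60, 5],                   # spectral counts, protocol 2
})

for col in ["spc_tca_acetone", "spc_chcl3_meoh"]:
    print(col, nsaf(proteins, col, "length_aa").round(3).tolist())
```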

  1. Survey on large scale system control methods

    NASA Technical Reports Server (NTRS)

    Mercadal, Mathieu

    1987-01-01

    The problems inherent to large-scale systems such as power networks, communication networks, and economic or ecological systems were studied. The increase in size and flexibility of future spacecraft has put those dynamical systems into the category of large-scale systems, and tools specific to this class are being sought to design control systems that can guarantee greater stability and better performance. Among several survey papers, reference was found to a thorough investigation of decentralized control methods. Especially helpful was its classification of the different existing approaches for dealing with large-scale systems. A very similar classification is used here, even though the papers surveyed differ somewhat from those reviewed elsewhere. Special attention is given to the applicability of the existing methods to controlling large mechanical systems such as large space structures. Some recent developments are added to this survey.

  2. Large-scale nanophotonic phased array.

    PubMed

    Sun, Jie; Timurdogan, Erman; Yaacobi, Ami; Hosseini, Ehsan Shah; Watts, Michael R

    2013-01-10

    Electromagnetic phased arrays at radio frequencies are well known and have enabled applications ranging from communications to radar, broadcasting and astronomy. The ability to generate arbitrary radiation patterns with large-scale phased arrays has long been pursued. Although it is extremely expensive and cumbersome to deploy large-scale radiofrequency phased arrays, optical phased arrays have a unique advantage in that the much shorter optical wavelength holds promise for large-scale integration. However, the short optical wavelength also imposes stringent requirements on fabrication. As a consequence, although optical phased arrays have been studied with various platforms and recently with chip-scale nanophotonics, all of the demonstrations so far are restricted to one-dimensional or small-scale two-dimensional arrays. Here we report the demonstration of a large-scale two-dimensional nanophotonic phased array (NPA), in which 64 × 64 (4,096) optical nanoantennas are densely integrated on a silicon chip within a footprint of 576 μm × 576 μm with all of the nanoantennas precisely balanced in power and aligned in phase to generate a designed, sophisticated radiation pattern in the far field. We also show that active phase tunability can be realized in the proposed NPA by demonstrating dynamic beam steering and shaping with an 8 × 8 array. This work demonstrates that a robust design, together with state-of-the-art complementary metal-oxide-semiconductor technology, allows large-scale NPAs to be implemented on compact and inexpensive nanophotonic chips. In turn, this enables arbitrary radiation pattern generation using NPAs and therefore extends the functionalities of phased arrays beyond conventional beam focusing and steering, opening up possibilities for large-scale deployment in applications such as communication, laser detection and ranging, three-dimensional holography and biomedical sciences, to name just a few.

  3. Sensitivity technologies for large scale simulation.

    SciTech Connect

    Collis, Samuel Scott; Bartlett, Roscoe Ainsworth; Smith, Thomas Michael; Heinkenschloss, Matthias; Wilcox, Lucas C.; Hill, Judith C.; Ghattas, Omar; Berggren, Martin Olof; Akcelik, Volkan; Ober, Curtis Curry; van Bloemen Waanders, Bart Gustaaf; Keiter, Eric Richard

    2005-01-01

    Sensitivity analysis is critically important to numerous analysis algorithms, including large scale optimization, uncertainty quantification, reduced order modeling, and error estimation. Our research focused on developing tools, algorithms and standard interfaces to facilitate the implementation of sensitivity type analysis into existing code and, equally important, on ways to increase the visibility of sensitivity analysis. We attempt to accomplish the first objective through the development of hybrid automatic differentiation tools, standard linear algebra interfaces for numerical algorithms, time domain decomposition algorithms and two level Newton methods. We attempt to accomplish the second goal by presenting the results of several case studies in which direct sensitivities and adjoint methods have been effectively applied, in addition to an investigation of h-p adaptivity using adjoint based a posteriori error estimation. A mathematical overview is provided of direct sensitivities and adjoint methods for both steady state and transient simulations. Two case studies are presented to demonstrate the utility of these methods. A direct sensitivity method is implemented to solve a source inversion problem for steady state internal flows subject to convection diffusion. Real time performance is achieved using novel decomposition into offline and online calculations. Adjoint methods are used to reconstruct initial conditions of a contamination event in an external flow. We demonstrate an adjoint based transient solution. In addition, we investigated time domain decomposition algorithms in an attempt to improve the efficiency of transient simulations. Because derivative calculations are at the root of sensitivity calculations, we have developed hybrid automatic differentiation methods and implemented this approach for shape optimization for gas dynamics using the Euler equations. The hybrid automatic differentiation method was applied to a first

  4. The large-scale distribution of galaxies

    NASA Technical Reports Server (NTRS)

    Geller, Margaret J.

    1989-01-01

    The spatial distribution of galaxies in the universe is characterized on the basis of the six completed strips of the Harvard-Smithsonian Center for Astrophysics redshift-survey extension. The design of the survey is briefly reviewed, and the results are presented graphically. Vast low-density voids similar to the void in Bootes are found, almost completely surrounded by thin sheets of galaxies. Also discussed are the implications of the results for the survey sampling problem, the two-point correlation function of the galaxy distribution, the possibility of detecting large-scale coherent flows, theoretical models of large-scale structure, and the identification of groups and clusters of galaxies.

  5. Wie können die Ergebnisse von vergleichenden Leistungsstudien systematisch zur Qualitätsverbesserung in Schulen genutzt werden? (How Can the Results of Large Scale Assessments Be Used Systematically for an Improvement of the Quality of Schools?).

    ERIC Educational Resources Information Center

    Terhart, Ewald

    2002-01-01

    Compares the relationship between large scale assessments and positive development of schools and instruction. Discusses strategic orientations to further quality of school systems and outlines possible starting points for the future development of individual schools. Probes the demands of a specific type of research that combines analysis and…

  6. An improved protocol to study the plant cell wall proteome

    PubMed Central

    Printz, Bruno; Dos Santos Morais, Raphaël; Wienkoop, Stefanie; Sergeant, Kjell; Lutts, Stanley; Hausman, Jean-Francois; Renaut, Jenny

    2015-01-01

    Cell wall proteins were extracted from alfalfa stems according to a three-step extraction procedure using sequentially CaCl2, EGTA, and LiCl-complemented buffers. The efficiency of this protocol for extracting cell wall proteins was compared with the two previously published methods optimized for alfalfa stem cell wall protein analysis. Following LC-MS/MS analysis, the three-step extraction procedure resulted in the identification of the highest number of cell wall proteins (242 NCBInr identifiers) and gave the lowest percentage of non-cell wall proteins (about 30%). However, the three protocols are complementary rather than substitutive, since 43% of the identified proteins were specific to one protocol. This three-step protocol was therefore selected for a more detailed proteomic characterization using 2D-gel electrophoresis. With this technique, 75% of the identified proteins were shown to be fraction-specific and 72.7% were predicted as belonging to the cell wall compartment. Although less sensitive than LC-MS/MS approaches in detecting and identifying low-abundance proteins, gel-based approaches are valuable tools for the differentiation and relative quantification of protein isoforms and/or modified proteins. In particular, isoforms having variations in their amino-acid sequence and/or carrying different N-linked glycan chains were detected and characterized. This study highlights how the extraction protocols as well as the analytical techniques devoted to the study of the plant cell wall proteome are complementary and how they may be combined to elucidate the dynamics of the plant cell wall proteome in biological studies. Data are available via ProteomeXchange with identifier PXD001927. PMID:25914713

  7. Large Scale Processes and Extreme Floods in Brazil

    NASA Astrophysics Data System (ADS)

    Ribeiro Lima, C. H.; AghaKouchak, A.; Lall, U.

    2016-12-01

    Persistent large-scale anomalies in the atmospheric circulation and ocean state have been associated with heavy rainfall and extreme floods in water basins of different sizes across the world. Such studies have emerged in recent years as a new tool for improving the traditional, stationarity-based approach to flood frequency analysis and flood prediction. Here we seek to advance previous studies by evaluating the dominance of large-scale processes (e.g. atmospheric rivers/moisture transport) over local processes (e.g. local convection) in producing floods. We consider flood-prone regions in Brazil as case studies, and the role of large-scale climate processes in generating extreme floods in such regions is explored by means of observed streamflow, reanalysis data and machine learning methods. The dynamics of the large-scale atmospheric circulation in the days prior to the flood events are evaluated based on the vertically integrated moisture flux and its divergence field, which are interpreted in a low-dimensional space obtained with machine learning techniques, particularly supervised kernel principal component analysis. In this reduced-dimensional space, clusters are obtained in order to better understand the role of regional moisture recycling or teleconnected moisture in producing floods of a given magnitude. The convective available potential energy (CAPE) is also used as a measure of local convection activity. For individual sites, we investigate the exceedance probability at which large-scale atmospheric fluxes dominate the flood process. Finally, we analyze regional patterns of floods and how the scaling law of floods with drainage area responds to changes in the climate forcing mechanisms (e.g. local vs. large-scale).
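
    As a rough sketch of the dimensionality-reduction step described above, the snippet below projects synthetic stand-ins for the moisture-flux fields into a low-dimensional space with kernel PCA and clusters the pre-flood days. Note that scikit-learn's KernelPCA is unsupervised, whereas the abstract refers to a supervised variant, and all data here are random placeholders for reanalysis fields.

```python
# Sketch only: unsupervised kernel PCA + k-means on synthetic stand-ins for the
# vertically integrated moisture-flux fields in the days before floods.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_days, n_grid = 200, 500                       # pre-flood days x flattened grid cells
X = rng.normal(size=(n_days, n_grid))           # placeholder for reanalysis fields

kpca = KernelPCA(n_components=3, kernel="rbf", gamma=1.0 / n_grid)
Z = kpca.fit_transform(X)                       # low-dimensional representation of each day

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(Z)
print(np.bincount(labels))                      # pre-flood days per circulation cluster
```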

  8. Management of large-scale technology

    NASA Technical Reports Server (NTRS)

    Levine, A.

    1985-01-01

    Two major themes are addressed in this assessment of the management of large-scale NASA programs: (1) how a high-technology agency was managed through a decade marked by a rapid expansion of funds and manpower in the first half and an almost as rapid contraction in the second; and (2) how NASA combined central planning and control with decentralized project execution.

  9. Large-scale multimedia modeling applications

    SciTech Connect

    Droppo, J.G. Jr.; Buck, J.W.; Whelan, G.; Strenge, D.L.; Castleton, K.J.; Gelston, G.M.

    1995-08-01

    Over the past decade, the US Department of Energy (DOE) and other agencies have faced increasing scrutiny for a wide range of environmental issues related to past and current practices. A number of large-scale applications have been undertaken that required analysis of large numbers of potential environmental issues over a wide range of environmental conditions and contaminants. Several of these applications, referred to here as large-scale applications, have addressed long-term public health risks using a holistic approach for assessing impacts from potential waterborne and airborne transport pathways. Multimedia models such as the Multimedia Environmental Pollutant Assessment System (MEPAS) were designed for use in such applications. MEPAS integrates radioactive and hazardous contaminants impact computations for major exposure routes via air, surface water, ground water, and overland flow transport. A number of large-scale applications of MEPAS have been conducted to assess various endpoints for environmental and human health impacts. These applications are described in terms of lessons learned in the development of an effective approach for large-scale applications.

  10. Evaluating Large-Scale Interactive Radio Programmes

    ERIC Educational Resources Information Center

    Potter, Charles; Naidoo, Gordon

    2009-01-01

    This article focuses on the challenges involved in conducting evaluations of interactive radio programmes in South Africa with large numbers of schools, teachers, and learners. It focuses on the role such large-scale evaluation has played during the South African radio learning programme's development stage, as well as during its subsequent…

  12. CPTAC | Office of Cancer Clinical Proteomics Research

    Cancer.gov

    The National Cancer Institute’s Clinical Proteomic Tumor Analysis Consortium (CPTAC) is a national effort to accelerate the understanding of the molecular basis of cancer through the application of large-scale proteome and genome analysis, or proteogenomics.

  13. Functional Module Search in Protein Networks based on Semantic Similarity Improves the Analysis of Proteomics Data*

    PubMed Central

    Boyanova, Desislava; Nilla, Santosh; Klau, Gunnar W.; Dandekar, Thomas; Müller, Tobias; Dittrich, Marcus

    2014-01-01

    The continuously evolving field of proteomics produces increasing amounts of data while improving the quality of protein identifications. Although quantitative measurements are becoming more popular, many proteomic studies are still based on non-quantitative methods for protein identification. These studies result in potentially large sets of identified proteins, where the biological interpretation of proteins can be challenging. Systems biology develops innovative network-based methods, which allow an integrated analysis of these data. Here we present a novel approach, which combines prior knowledge of protein-protein interactions (PPI) with proteomics data using functional similarity measurements of interacting proteins. This integrated network analysis exactly identifies network modules with a maximal consistent functional similarity reflecting biological processes of the investigated cells. We validated our approach on small (H9N2 virus-infected gastric cells) and large (blood constituents) proteomic data sets. Using this novel algorithm, we identified characteristic functional modules in virus-infected cells, comprising key signaling proteins (e.g. the stress-related kinase RAF1), and demonstrate that this method allows a module-based functional characterization of cell types. Analysis of a large proteome data set of blood constituents resulted in clear separation of blood cells according to their developmental origin. A detailed investigation of the T-cell proteome further illustrates how the algorithm partitions large networks into functional subnetworks, each representing specific cellular functions. These results demonstrate that the integrated network approach not only allows a detailed analysis of proteome networks but also yields a functional decomposition of complex proteomic data sets and thereby provides deeper insights into the underlying cellular processes of the investigated system. PMID:24807868

  14. Functional module search in protein networks based on semantic similarity improves the analysis of proteomics data.

    PubMed

    Boyanova, Desislava; Nilla, Santosh; Klau, Gunnar W; Dandekar, Thomas; Müller, Tobias; Dittrich, Marcus

    2014-07-01

    The continuously evolving field of proteomics produces increasing amounts of data while improving the quality of protein identifications. Although quantitative measurements are becoming more popular, many proteomic studies are still based on non-quantitative methods for protein identification. These studies result in potentially large sets of identified proteins, where the biological interpretation of proteins can be challenging. Systems biology develops innovative network-based methods, which allow an integrated analysis of these data. Here we present a novel approach, which combines prior knowledge of protein-protein interactions (PPI) with proteomics data using functional similarity measurements of interacting proteins. This integrated network analysis exactly identifies network modules with a maximal consistent functional similarity reflecting biological processes of the investigated cells. We validated our approach on small (H9N2 virus-infected gastric cells) and large (blood constituents) proteomic data sets. Using this novel algorithm, we identified characteristic functional modules in virus-infected cells, comprising key signaling proteins (e.g. the stress-related kinase RAF1), and demonstrate that this method allows a module-based functional characterization of cell types. Analysis of a large proteome data set of blood constituents resulted in clear separation of blood cells according to their developmental origin. A detailed investigation of the T-cell proteome further illustrates how the algorithm partitions large networks into functional subnetworks, each representing specific cellular functions. These results demonstrate that the integrated network approach not only allows a detailed analysis of proteome networks but also yields a functional decomposition of complex proteomic data sets and thereby provides deeper insights into the underlying cellular processes of the investigated system.
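
    The module search described in the two records above combines a PPI network with functional similarity between interacting proteins. The sketch below is a heavily simplified stand-in: it scores edges by the Jaccard similarity of hypothetical GO annotations and grows a module greedily from a seed, whereas the published method identifies maximally consistent modules exactly.

```python
# Heavily simplified stand-in for the published module search: edges of a small PPI
# graph are scored by the Jaccard similarity of hypothetical GO annotations, and a
# module is grown greedily from a seed while the best connecting edge stays similar.
import networkx as nx

go = {  # hypothetical GO term sets per identified protein
    "RAF1":   {"GO:0000165", "GO:0043408"},
    "MAP2K1": {"GO:0000165", "GO:0043408", "GO:0006468"},
    "MAPK1":  {"GO:0000165", "GO:0006468"},
    "ALB":    {"GO:0006810"},
}

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

G = nx.Graph()
for u, v in [("RAF1", "MAP2K1"), ("MAP2K1", "MAPK1"), ("MAPK1", "ALB")]:
    G.add_edge(u, v, sim=jaccard(go[u], go[v]))

def best_link(graph, node, module):
    """Highest edge similarity between a candidate node and the current module."""
    return max(graph[node][m]["sim"] for m in module if graph.has_edge(node, m))

def grow_module(graph, seed, min_sim=0.5):
    module = {seed}
    frontier = set(graph.neighbors(seed))
    while frontier:
        candidate = max(frontier, key=lambda n: best_link(graph, n, module))
        if best_link(graph, candidate, module) < min_sim:
            break
        module.add(candidate)
        frontier = {n for m in module for n in graph.neighbors(m)} - module
    return module

print(grow_module(G, "RAF1"))   # MAPK-cascade proteins; ALB is left out
```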

  15. Batch effects correction improves the sensitivity of significance tests in spectral counting-based comparative discovery proteomics.

    PubMed

    Gregori, Josep; Villarreal, Laura; Méndez, Olga; Sánchez, Alex; Baselga, José; Villanueva, Josep

    2012-07-16

    Shotgun proteomics has become the standard proteomics technique for the large-scale measurement of protein abundances in biological samples. Although quantitative proteomics has usually been performed using label-based approaches, label-free quantitation offers advantages related to the avoidance of labeling steps, no limitation on the number of samples to be compared, and a gain in protein detection sensitivity. However, since samples are analyzed separately, experimental design becomes critical. The exploration of spectral-counting quantitation based on LC-MS presented here gathers experimental evidence of the influence of batch effects on comparative proteomics. The batch effects shown with spiking experiments clearly interfere with the biological signal. In order to minimize the interference from batch effects, a statistical correction is proposed and implemented. Our results show that batch effects can be attenuated statistically when a proper experimental design is used. Furthermore, the batch-effect correction implemented leads to a substantial increase in the sensitivity of statistical tests. Finally, the applicability of our batch-effect correction is shown on two different biomarker discovery projects involving cancer secretomes. We think that our findings will allow the design and execution of better comparative proteomics projects and will help to avoid reaching false conclusions in the field of proteomics biomarker discovery. Copyright © 2012 Elsevier B.V. All rights reserved.
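
    A minimal sketch of the kind of batch-effect correction discussed above is given below: spectral counts are log-transformed and the per-protein batch means are removed while the overall protein mean is retained. This is a generic mean-centering correction on invented toy data (run and batch labels are hypothetical), not necessarily the exact statistical model used in the study.

```python
# Generic mean-centering batch correction for spectral counts (toy data, hypothetical
# run/batch labels); not necessarily the exact statistical model used in the study.
import numpy as np
import pandas as pd

counts = pd.DataFrame(
    [[20, 22, 35, 33],
     [ 5,  4, 12, 11],
     [40, 38, 60, 58]],
    index=["P1", "P2", "P3"],                      # proteins
    columns=["run1", "run2", "run3", "run4"],      # LC-MS runs
)
batches = pd.Series(["A", "A", "B", "B"], index=counts.columns)

log_counts = np.log2(counts + 1)                   # stabilize variance
batch_means = log_counts.T.groupby(batches).transform("mean").T
grand_mean = log_counts.mean(axis=1)

# Remove the per-protein batch mean, then restore the overall protein mean.
corrected = (log_counts - batch_means).add(grand_mean, axis=0)
print(corrected.round(2))
```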

  16. Large-scale silicon optical switches for optical interconnection

    NASA Astrophysics Data System (ADS)

    Qiao, Lei; Tang, Weijie; Chu, Tao

    2016-11-01

    Large-scale optical switches are in great demand for building optical interconnections in data centers and high-performance computers (HPCs). Silicon optical switches have the advantages of being compact and CMOS-process compatible, so they can easily be monolithically integrated. However, it is difficult to construct silicon optical switches with large port counts. One difficulty is the non-uniformity of the switch units in large-scale silicon optical switches, which arises from fabrication errors and complicates finding the optimum operating point of each unit. In this paper, we propose a method to detect the optimum operating point in a large-scale switch with a limited number of built-in power monitors. We also propose methods for improving the unbalanced crosstalk of the cross/bar states and the insertion losses of silicon electro-optic MZI switches. Our recent progress in large-scale silicon optical switches, including 64 × 64 thermo-optic and 32 × 32 electro-optic switches, will be introduced. To the best of our knowledge, both are the largest-scale silicon optical switches of their respective types. The switches were fabricated on 340-nm SOI substrates with CMOS 180-nm processes. The crosstalk of the 32 × 32 electro-optic switch was -19.2 dB to -25.1 dB, while that of the 64 × 64 thermo-optic switch was -30 dB to -48.3 dB.

  17. Moon-based Earth Observation for Large Scale Geoscience Phenomena

    NASA Astrophysics Data System (ADS)

    Guo, Huadong; Liu, Guang; Ding, Yixing

    2016-07-01

    The capability of Earth observation for large, global-scale natural phenomena needs to be improved, and new observing platforms are expected. In recent years we have studied the concept of the Moon as an Earth-observation platform. Compared with man-made satellite platforms, Moon-based Earth observation can obtain multi-spherical, full-band, active and passive information, and it has the following advantages: a large observation range, variable view angles, long-term continuous observation, and an extra-long life cycle, with the characteristics of longevity, consistency, integrity, stability and uniqueness. Moon-based Earth observation is suitable for monitoring large-scale geoscience phenomena, including large-scale atmospheric change, large-scale ocean change, large-scale land-surface dynamic change, solid-Earth dynamic change, etc. For the purpose of establishing a Moon-based Earth-observation platform, we plan to study the following five aspects: mechanisms and models of Moon-based observation of macroscopic Earth-science phenomena; optimization of sensor parameters and methods for Moon-based Earth observation; site selection and environment of Moon-based Earth observation; the Moon-based Earth-observation platform; and a fundamental scientific framework for Moon-based Earth observation.

  18. Condition Monitoring of Large-Scale Facilities

    NASA Technical Reports Server (NTRS)

    Hall, David L.

    1999-01-01

    This document provides a summary of the research conducted for the NASA Ames Research Center under grant NAG2-1182 (Condition-Based Monitoring of Large-Scale Facilities). The information includes copies of view graphs presented at NASA Ames in the final Workshop (held during December of 1998), as well as a copy of a technical report provided to the COTR (Dr. Anne Patterson-Hine) subsequent to the workshop. The material describes the experimental design, collection of data, and analysis results associated with monitoring the health of large-scale facilities. In addition to this material, a copy of the Pennsylvania State University Applied Research Laboratory data fusion visual programming tool kit was also provided to NASA Ames researchers.

  19. Large-scale Advanced Propfan (LAP) program

    NASA Technical Reports Server (NTRS)

    Sagerser, D. A.; Ludemann, S. G.

    1985-01-01

    The propfan is an advanced propeller concept which maintains the high efficiencies traditionally associated with conventional propellers at the higher aircraft cruise speeds associated with jet transports. The large-scale advanced propfan (LAP) program extends the research done on 2 ft diameter propfan models to a 9 ft diameter article. The program includes design, fabrication, and testing of both an eight bladed, 9 ft diameter propfan, designated SR-7L, and a 2 ft diameter aeroelastically scaled model, SR-7A. The LAP program is complemented by the propfan test assessment (PTA) program, which takes the large-scale propfan and mates it with a gas generator and gearbox to form a propfan propulsion system and then flight tests this system on the wing of a Gulfstream 2 testbed aircraft.

  20. Large-scale fibre-array multiplexing

    SciTech Connect

    Cheremiskin, I V; Chekhlova, T K

    2001-05-31

    The possibility of creating a fibre multiplexer/demultiplexer with large-scale multiplexing without any basic restrictions on the number of channels and the spectral spacing between them is shown. The operating capacity of a fibre multiplexer based on a four-fibre array ensuring a spectral spacing of 0.7 pm (≈ 10 GHz) between channels is demonstrated. (laser applications and other topics in quantum electronics)

  1. Modeling Human Behavior at a Large Scale

    DTIC Science & Technology

    2012-01-01

    Modeling Human Behavior at a Large Scale, by Adam Sadilek. Submitted in partial fulfillment of the requirements for the degree Doctor of Philosophy.

  2. Large-Scale Aerosol Modeling and Analysis

    DTIC Science & Technology

    2008-09-30

    aerosol species up to six days in advance anywhere on the globe. NAAPS and COAMPS are particularly useful for forecasts of dust storms in areas...impact cloud processes globally. With increasing dust storms due to climate change and land use changes in desert regions, the impact of the...bacteria in large-scale dust storms is expected to significantly impact warm ice cloud formation, human health, and ecosystems globally. In Niemi et al

  3. Large-scale instabilities of helical flows

    NASA Astrophysics Data System (ADS)

    Cameron, Alexandre; Alexakis, Alexandros; Brachet, Marc-Étienne

    2016-10-01

    Large-scale hydrodynamic instabilities of periodic helical flows of a given wave number K are investigated using three-dimensional Floquet numerical computations. In the Floquet formalism the unstable field is expanded in modes of different spatial periodicity. This allows us (i) to clearly distinguish large-scale from small-scale instabilities and (ii) to study modes of wave number q at arbitrarily large scale separation q ≪ K. Different flows are examined, including flows that exhibit small-scale turbulence. The growth rate σ of the most unstable mode is measured as a function of the scale separation q/K ≪ 1 and the Reynolds number Re. It is shown that the growth rate follows the scaling σ ∝ q if an AKA effect [Frisch et al., Physica D: Nonlinear Phenomena 28, 382 (1987), 10.1016/0167-2789(87)90026-1] is present, or a negative eddy-viscosity scaling σ ∝ q² in its absence. This holds both for the Re ≪ 1 regime, where previously derived asymptotic results are verified, and for Re = O(1), which is beyond their range of validity. Furthermore, for values of Re above a critical value Re_Sc beyond which small-scale instabilities are present, the growth rate becomes independent of q and the energy of the perturbation at large scales decreases with scale separation. The behavior of these large-scale instabilities is also examined in the nonlinear regime, where the largest scales of the system are found to be the most dominant energetically. These results are interpreted using low-order models.

  4. Economically viable large-scale hydrogen liquefaction

    NASA Astrophysics Data System (ADS)

    Cardella, U.; Decker, L.; Klein, H.

    2017-02-01

    The liquid hydrogen demand, particularly driven by clean energy applications, will rise in the near future. As industrial large scale liquefiers will play a major role within the hydrogen supply chain, production capacity will have to increase by a multiple of today’s typical sizes. The main goal is to reduce the total cost of ownership for these plants by increasing energy efficiency with innovative and simple process designs, optimized in capital expenditure. New concepts must ensure a manageable plant complexity and flexible operability. In the phase of process development and selection, a dimensioning of key equipment for large scale liquefiers, such as turbines and compressors as well as heat exchangers, must be performed iteratively to ensure technological feasibility and maturity. Further critical aspects related to hydrogen liquefaction, e.g. fluid properties, ortho-para hydrogen conversion, and coldbox configuration, must be analysed in detail. This paper provides an overview on the approach, challenges and preliminary results in the development of efficient as well as economically viable concepts for large-scale hydrogen liquefaction.

  5. Large-Scale Visual Data Analysis

    NASA Astrophysics Data System (ADS)

    Johnson, Chris

    2014-04-01

    Modern high performance computers have speeds measured in petaflops and handle data set sizes measured in terabytes and petabytes. Although these machines offer enormous potential for solving very large-scale realistic computational problems, their effectiveness will hinge upon the ability of human experts to interact with their simulation results and extract useful information. One of the greatest scientific challenges of the 21st century is to effectively understand and make use of the vast amount of information being produced. Visual data analysis will be among our most important tools in helping to understand such large-scale information. Our research at the Scientific Computing and Imaging (SCI) Institute at the University of Utah has focused on innovative, scalable techniques for large-scale 3D visual data analysis. In this talk, I will present state-of-the-art visualization techniques, including scalable visualization algorithms and software, cluster-based visualization methods, and innovative visualization techniques applied to problems in computational science, engineering, and medicine. I will conclude with an outline of future high performance visualization research challenges and opportunities.

  6. Large-scale neuromorphic computing systems

    NASA Astrophysics Data System (ADS)

    Furber, Steve

    2016-10-01

    Neuromorphic computing covers a diverse range of approaches to information processing all of which demonstrate some degree of neurobiological inspiration that differentiates them from mainstream conventional computing systems. The philosophy behind neuromorphic computing has its origins in the seminal work carried out by Carver Mead at Caltech in the late 1980s. This early work influenced others to carry developments forward, and advances in VLSI technology supported steady growth in the scale and capability of neuromorphic devices. Recently, a number of large-scale neuromorphic projects have emerged, taking the approach to unprecedented scales and capabilities. These large-scale projects are associated with major new funding initiatives for brain-related research, creating a sense that the time and circumstances are right for progress in our understanding of information processing in the brain. In this review we present a brief history of neuromorphic engineering then focus on some of the principal current large-scale projects, their main features, how their approaches are complementary and distinct, their advantages and drawbacks, and highlight the sorts of capabilities that each can deliver to neural modellers.

  7. Improved False Discovery Rate Estimation Procedure for Shotgun Proteomics

    PubMed Central

    2016-01-01

    Interpreting the potentially vast number of hypotheses generated by a shotgun proteomics experiment requires a valid and accurate procedure for assigning statistical confidence estimates to identified tandem mass spectra. Despite the crucial role such procedures play in most high-throughput proteomics experiments, the scientific literature has not reached a consensus about the best confidence estimation methodology. In this work, we evaluate, using theoretical and empirical analysis, four previously proposed protocols for estimating the false discovery rate (FDR) associated with a set of identified tandem mass spectra: two variants of the target-decoy competition protocol (TDC) of Elias and Gygi and two variants of the separate target-decoy search protocol of Käll et al. Our analysis reveals significant biases in the two separate target-decoy search protocols. Moreover, the one TDC protocol that provides an unbiased FDR estimate among the target PSMs does so at the cost of forfeiting a random subset of high-scoring spectrum identifications. We therefore propose the mix-max procedure to provide unbiased, accurate FDR estimates in the presence of well-calibrated scores. The method avoids biases associated with the two separate target-decoy search protocols and also avoids the propensity for target-decoy competition to discard a random subset of high-scoring target identifications. PMID:26152888
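
    For readers unfamiliar with target-decoy competition (TDC), the sketch below estimates q-values from simulated target and decoy scores using the standard (#decoys + 1)/#targets estimator. It illustrates plain TDC only, not the mix-max procedure proposed in the paper, and the score distributions are invented.

```python
# Plain target-decoy competition (TDC) FDR estimation on simulated scores; this is
# not the mix-max procedure from the paper, and the score model is invented.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
# Target PSM scores: the better of a "correct" and an "incorrect" match per spectrum.
target = np.maximum(rng.normal(2.0, 1.0, n), rng.normal(0.0, 1.0, n))
decoy = rng.normal(0.0, 1.0, n)

target_wins = target >= decoy                     # competition: keep the better match
scores = np.where(target_wins, target, decoy)
is_decoy = ~target_wins

order = np.argsort(-scores)                       # best-scoring PSMs first
decoys_above = np.cumsum(is_decoy[order])
targets_above = np.cumsum(~is_decoy[order])
fdr = (decoys_above + 1) / np.maximum(targets_above, 1)
qvals = np.minimum.accumulate(fdr[::-1])[::-1]    # enforce monotone q-values

accepted = targets_above[qvals <= 0.01]
print("target PSMs accepted at 1% FDR:", accepted[-1] if accepted.size else 0)
```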

  8. Redox Proteomics and Platelet Activation: Understanding the Redox Proteome to Improve Platelet Quality for Transfusion

    PubMed Central

    Sonego, Giona; Abonnenc, Mélanie; Tissot, Jean-Daniel; Prudent, Michel; Lion, Niels

    2017-01-01

    Blood banks use pathogen inactivation (PI) technologies to increase the safety of platelet concentrates (PCs). The characteristics of PI-treated PCs slightly differ from those of untreated PCs, but the underlying reasons are not well understood. One possible cause is the generation of oxidative stress during the PI process. This is of great interest since reactive oxygen species (ROS) act as second messengers in platelet functions. Furthermore, there are links between protein oxidation and phosphorylation, another mechanism that is critical for cell regulation. Current research efforts focus on understanding the underlying mechanisms and identifying new target proteins. Proteomics technologies represent powerful tools for investigating signaling pathways involving ROS and post-translational modifications such as phosphorylation, while quantitative techniques enable the comparison of the platelet resting state versus the stimulated state. In particular, redox cysteine is a key player in platelet activation upon stimulation by different agonists. This review highlights the experiments that have provided insights into the roles of ROS in platelet function and the implications for platelet transfusion, and potentially in diseases such as inflammation and platelet hyperactivity. The review also describes the implication of redox mechanism in platelet storage considerations. PMID:28208668

  9. Complete solubilization of formalin-fixed, paraffin-embedded tissue may improve proteomic studies.

    PubMed

    Shi, Shan-Rong; Taylor, Clive R; Fowler, Carol B; Mason, Jeffrey T

    2013-04-01

    Tissue-based proteomic approaches (tissue proteomics) are essential for discovering and evaluating biomarkers for personalized medicine. In any proteomics study, the most critical issue is sample extraction and preparation. This problem is especially difficult when recovering proteins from formalin-fixed, paraffin-embedded (FFPE) tissue sections. However, improving and standardizing protein extraction from FFPE tissue is a critical need because of the millions of archival FFPE tissues available in tissue banks worldwide. Recent progress in the application of heat-induced antigen retrieval principles for protein extraction from FFPE tissue has resulted in a number of published FFPE tissue proteomics studies. However, there is currently no consensus on the optimal protocol for protein extraction from FFPE tissue or accepted standards for quantitative evaluation of the extracts. Standardization is critical to ensure the accurate evaluation of FFPE protein extracts by proteomic methods such as reverse phase protein arrays, which is now in clinical use. In our view, complete solubilization of FFPE tissue samples is the best way to achieve the goal of standardizing the recovery of proteins from FFPE tissues. However, further studies are recommended to develop standardized protein extraction methods to ensure quantitative and qualitative reproducibility in the recovery of proteins from FFPE tissues.

  10. Complete Solubilization of Formalin-Fixed, Paraffin-Embedded Tissue May Improve Proteomic Studies

    PubMed Central

    Shi, Shan-Rong; Taylor, Clive R; Fowler, Carol B; Mason, Jeffrey T

    2013-01-01

    Tissue-based proteomic approaches (tissue proteomics) are essential for discovering and evaluating biomarkers for personalized medicine. In any proteomics study, the most critical issue is sample extraction and preparation. This problem is especially difficult when recovering proteins from formalin-fixed, paraffin-embedded (FFPE) tissue sections. However, improving and standardizing protein extraction from FFPE tissue is a critical need because of the millions of archival FFPE tissues available in tissue banks worldwide. Recent progress in the application of heat-induced antigen retrieval (AR) principles for protein extraction from FFPE tissue has resulted in a number of published FFPE tissue proteomics studies. However, there is currently no consensus on the optimal protocol for protein extraction from FFPE tissue or accepted standards for quantitative evaluation of the extracts. Standardization is critical to ensure the accurate evaluation of FFPE protein extracts by proteomic methods such as reverse phase protein arrays (RPPA), which is now in clinical use. In our view, complete solubilization of FFPE tissue samples is the best way to achieve the goal of standardizing the recovery of proteins from FFPE tissues. However, further studies are recommended to develop standardized protein extraction methods to ensure quantitative and qualitative reproducibility in the recovery of proteins from FFPE tissues. PMID:23339100

  11. Improving Proteome Coverage on a LTQ-Orbitrap Using Design of Experiments

    NASA Astrophysics Data System (ADS)

    Andrews, Genna L.; Dean, Ralph A.; Hawkridge, Adam M.; Muddiman, David C.

    2011-04-01

    Design of experiments (DOE) was used to determine improved settings for a LTQ-Orbitrap XL to maximize proteome coverage of Saccharomyces cerevisiae. A total of nine instrument parameters were evaluated with the best values affording an increase of approximately 60% in proteome coverage. Utilizing JMP software, 2 DOE screening design tables were generated and used to specify parameter values for instrument methods. DOE 1, a fractional factorial design, required 32 methods fully resolving the investigation of six instrument parameters involving only half the time necessary for a full factorial design of the same resolution. It was advantageous to complete a full factorial design for the analysis of three additional instrument parameters. Measured with a maximum of 1% false discovery rate, protein groups, unique peptides, and spectral counts gauged instrument performance. Randomized triplicate nanoLC-LTQ-Orbitrap XL MS/MS analysis of the S. cerevisiae digest demonstrated that the following five parameters significantly influenced proteome coverage of the sample: (1) maximum ion trap ionization time; (2) monoisotopic precursor selection; (3) number of MS/MS events; (4) capillary temperature; and (5) tube lens voltage. Minimal influence on the proteome coverage was observed for the remaining four parameters (dynamic exclusion duration, resolving power, minimum count threshold to trigger a MS/MS event, and normalized collision energy). The DOE approach represents a time- and cost-effective method for empirically optimizing MS-based proteomics workflows including sample preparation, LC conditions, and multiple instrument platforms.

  12. Improving proteome coverage on a LTQ-Orbitrap using design of experiments.

    PubMed

    Andrews, Genna L; Dean, Ralph A; Hawkridge, Adam M; Muddiman, David C

    2011-04-01

    Design of experiments (DOE) was used to determine improved settings for a LTQ-Orbitrap XL to maximize proteome coverage of Saccharomyces cerevisiae. A total of nine instrument parameters were evaluated with the best values affording an increase of approximately 60% in proteome coverage. Utilizing JMP software, 2 DOE screening design tables were generated and used to specify parameter values for instrument methods. DOE 1, a fractional factorial design, required 32 methods fully resolving the investigation of six instrument parameters involving only half the time necessary for a full factorial design of the same resolution. It was advantageous to complete a full factorial design for the analysis of three additional instrument parameters. Measured with a maximum of 1% false discovery rate, protein groups, unique peptides, and spectral counts gauged instrument performance. Randomized triplicate nanoLC-LTQ-Orbitrap XL MS/MS analysis of the S. cerevisiae digest demonstrated that the following five parameters significantly influenced proteome coverage of the sample: (1) maximum ion trap ionization time; (2) monoisotopic precursor selection; (3) number of MS/MS events; (4) capillary temperature; and (5) tube lens voltage. Minimal influence on the proteome coverage was observed for the remaining four parameters (dynamic exclusion duration, resolving power, minimum count threshold to trigger a MS/MS event, and normalized collision energy). The DOE approach represents a time- and cost-effective method for empirically optimizing MS-based proteomics workflows including sample preparation, LC conditions, and multiple instrument platforms. © American Society for Mass Spectrometry, 2011

  13. Improving Proteome Coverage on a LTQ-Orbitrap Using Design of Experiments

    PubMed Central

    Andrews, Genna L.; Dean, Ralph A.; Hawkridge, Adam M.; Muddiman, David C.

    2011-01-01

    Design of experiments (DOE) was used to determine improved settings for a LTQ-Orbitrap XL to maximize proteome coverage of Saccharomyces cerevisiae. A total of nine instrument parameters were evaluated with the best values affording an increase of approximately 60% in proteome coverage. Utilizing JMP software, 2 DOE screening design tables were generated and used to specify parameter values for instrument methods. DOE 1, a fractional factorial design, required 32 methods fully resolving the investigation of six instrument parameters involving only half the time necessary for a full factorial design of the same resolution. It was advantageous to complete a full factorial design for the analysis of three additional instrument parameters. Measured with a maximum of 1% false discovery rate, protein groups, unique peptides, and spectral counts gauged instrument performance. Randomized triplicate nanoLC-LTQ-Orbitrap XL MS/MS analysis of the S. cerevisiae digest demonstrated that the following five parameters significantly influenced proteome coverage of the sample: (1) maximum ion trap ionization time; (2) monoisotopic precursor selection; (3) number of MS/MS events; (4) capillary temperature; and (5) tube lens voltage. Minimal influence on the proteome coverage was observed for the remaining four parameters (dynamic exclusion duration, resolving power, minimum count threshold to trigger a MS/MS event, and normalized collision energy). The DOE approach represents a time- and cost-effective method for empirically optimizing MS-based proteomics workflows including sample preparation, LC conditions, and multiple instrument platforms. PMID:21472614
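
    The screening table in DOE 1 above is a two-level fractional factorial design: 32 runs covering six parameters at half the cost of a full 2^6 factorial. As a hedged illustration, the sketch below generates such a 2^(6-1) design by taking a full factorial in five base columns and defining the sixth as their product; the factor names are hypothetical stand-ins, since the exact JMP-generated table is not reproduced in the abstract.

```python
# Hedged illustration: a 2^(6-1) two-level fractional factorial screening design
# (32 runs, 6 factors) built by taking a full factorial in five base columns and
# setting the sixth to their product. Factor names are hypothetical stand-ins for
# the instrument parameters; the actual JMP-generated table may differ.
from itertools import product
import pandas as pd

base = ["max_ion_time", "monoiso_sel", "n_msms_events", "capillary_temp", "tube_lens_v"]
rows = []
for levels in product([-1, 1], repeat=len(base)):
    sixth = 1
    for x in levels:
        sixth *= x                                   # generator: F = A*B*C*D*E
    rows.append(list(levels) + [sixth])

design = pd.DataFrame(rows, columns=base + ["dyn_exclusion"])
print(design.shape)                                  # (32, 6): 32 methods, 6 parameters
print(design.head())
```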

  14. Toward Increasing Fairness in Score Scale Calibrations Employed in International Large-Scale Assessments

    ERIC Educational Resources Information Center

    Oliveri, Maria Elena; von Davier, Matthias

    2014-01-01

    In this article, we investigate the creation of comparable score scales across countries in international assessments. We examine potential improvements to current score scale calibration procedures used in international large-scale assessments. Our approach seeks to improve fairness in scoring international large-scale assessments, which often…

  16. Experimental Simulations of Large-Scale Collisions

    NASA Technical Reports Server (NTRS)

    Housen, Kevin R.

    2002-01-01

    This report summarizes research on the effects of target porosity on the mechanics of impact cratering. Impact experiments conducted on a centrifuge provide direct simulations of large-scale cratering on porous asteroids. The experiments show that large craters in porous materials form mostly by compaction, with essentially no deposition of material into the ejecta blanket that is a signature of cratering in less-porous materials. The ratio of ejecta mass to crater mass is shown to decrease with increasing crater size or target porosity. These results are consistent with the observation that large closely-packed craters on asteroid Mathilde appear to have formed without degradation to earlier craters.

  17. What is a large-scale dynamo?

    NASA Astrophysics Data System (ADS)

    Nigro, G.; Pongkitiwanichakul, P.; Cattaneo, F.; Tobias, S. M.

    2017-01-01

    We consider kinematic dynamo action in a sheared helical flow at moderate to high values of the magnetic Reynolds number (Rm). We find exponentially growing solutions which, for large enough shear, take the form of a coherent part embedded in incoherent fluctuations. We argue that at large Rm large-scale dynamo action should be identified by the presence of structures coherent in time, rather than those at large spatial scales. We further argue that although the growth rate is determined by small-scale processes, the period of the coherent structures is set by mean-field considerations.

  18. Large-scale brightenings associated with flares

    NASA Technical Reports Server (NTRS)

    Mandrini, Cristina H.; Machado, Marcos E.

    1992-01-01

    It is shown that large-scale brightenings (LSBs) associated with solar flares, similar to the 'giant arches' discovered by Svestka et al. (1982) in images obtained by the SMM HXIS hours after the onset of two-ribbon flares, can also occur in association with confined flares in complex active regions. For these events, a link between the LSB and the underlying flare is clearly evident from the active-region magnetic field topology. The implications of these findings are discussed within the framework of interacting flare loops and the giant arch phenomenology.

  19. Large scale phononic metamaterials for seismic isolation

    SciTech Connect

    Aravantinos-Zafiris, N.; Sigalas, M. M.

    2015-08-14

    In this work, we numerically examine structures that could be characterized as large-scale phononic metamaterials. When their dimensions are chosen appropriately, these novel structures can have band gaps in the frequency spectrum of seismic waves, suggesting that they could be serious candidates for seismic isolation structures. Different, easy-to-fabricate structures made from construction materials such as concrete and steel were examined. The well-known finite-difference time-domain method is used in our calculations to compute the band structures of the proposed metamaterials.

  20. Large-scale planar lightwave circuits

    NASA Astrophysics Data System (ADS)

    Bidnyk, Serge; Zhang, Hua; Pearson, Matt; Balakrishnan, Ashok

    2011-01-01

    By leveraging advanced wafer processing and flip-chip bonding techniques, we have succeeded in hybrid integrating a myriad of active optical components, including photodetectors and laser diodes, with our planar lightwave circuit (PLC) platform. We have combined hybrid integration of active components with monolithic integration of other critical functions, such as diffraction gratings, on-chip mirrors, mode-converters, and thermo-optic elements. Further process development has led to the integration of polarization controlling functionality. Most recently, all these technological advancements have been combined to create large-scale planar lightwave circuits that comprise hundreds of optical elements integrated on chips less than a square inch in size.

  1. Large-Scale PV Integration Study

    SciTech Connect

    Lu, Shuai; Etingov, Pavel V.; Diao, Ruisheng; Ma, Jian; Samaan, Nader A.; Makarov, Yuri V.; Guo, Xinxin; Hafen, Ryan P.; Jin, Chunlian; Kirkham, Harold; Shlatz, Eugene; Frantzis, Lisa; McClive, Timothy; Karlson, Gregory; Acharya, Dhruv; Ellis, Abraham; Stein, Joshua; Hansen, Clifford; Chadliev, Vladimir; Smart, Michael; Salgo, Richard; Sorensen, Rahn; Allen, Barbara; Idelchik, Boris

    2011-07-29

    This research effort evaluates the impact of large-scale photovoltaic (PV) and distributed generation (DG) output on NV Energy’s electric grid system in southern Nevada. It analyzes the ability of NV Energy’s generation to accommodate increasing amounts of utility-scale PV and DG, and the resulting cost of integrating variable renewable resources. The study was jointly funded by the United States Department of Energy and NV Energy, and conducted by a project team comprised of industry experts and research scientists from Navigant Consulting Inc., Sandia National Laboratories, Pacific Northwest National Laboratory and NV Energy.

  2. Colloquium: Large scale simulations on GPU clusters

    NASA Astrophysics Data System (ADS)

    Bernaschi, Massimo; Bisson, Mauro; Fatica, Massimiliano

    2015-06-01

    Graphics processing units (GPUs) are currently used as a cost-effective platform for computer simulations and big-data processing. Large-scale applications require that multiple GPUs work together, but the efficiency obtained with clusters of GPUs is, at times, sub-optimal because the GPU features are not exploited at their best. We describe how it is possible to achieve excellent efficiency for applications in statistical mechanics, particle dynamics and network analysis by using suitable memory access patterns and mechanisms like CUDA streams, profiling tools, etc. Similar concepts and techniques may also be applied to other problems, such as the solution of partial differential equations.
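
    As a minimal illustration of one mechanism named above, the sketch below uses CuPy to queue two independent matrix products on separate CUDA streams so the device can overlap them. It assumes a CUDA-capable GPU and the cupy package, and it is not taken from the simulation codes discussed in the colloquium.

```python
# Minimal sketch of overlapping independent GPU work with CUDA streams using CuPy
# (assumes a CUDA-capable GPU and the cupy package; illustrative only).
import cupy as cp

a = cp.random.random((4096, 4096)).astype(cp.float32)
b = cp.random.random((4096, 4096)).astype(cp.float32)

s1 = cp.cuda.Stream(non_blocking=True)
s2 = cp.cuda.Stream(non_blocking=True)

with s1:
    c1 = a @ b          # queued on stream 1
with s2:
    c2 = b @ a          # queued on stream 2; may overlap with stream 1 on the device

s1.synchronize()
s2.synchronize()
print(float(c1[0, 0]), float(c2[0, 0]))
```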

  3. Large-scale Heterogeneous Network Data Analysis

    DTIC Science & Technology

    2012-07-31

    “Data for Multi-Player Influence Maximization on Social Networks.” KDD 2012 (Demo). Po-Tzu Chang, Yen-Chieh Huang, Cheng-Lun Yang, Shou-De Lin, Pu-Jen Cheng. “Learning-Based Time-Sensitive Re-Ranking for Web Search.” SIGIR 2012 (poster). Hung-Che Lai, Cheng-Te Li, Yi-Chen Lo, and Shou-De Lin. “Exploiting and Evaluating MapReduce for Large-Scale Graph Mining.” ASONAM 2012 (Full, 16% acceptance ratio). Hsun-Ping Hsieh, Cheng-Te Li, and Shou-De Lin.

  4. Proteomics: Challenges, Techniques and Possibilities to Overcome Biological Sample Complexity

    PubMed Central

    Chandramouli, Kondethimmanahalli; Qian, Pei-Yuan

    2009-01-01

    Proteomics is the large-scale study of the structure and function of proteins in complex biological samples. Such an approach has the potential to elucidate the complex nature of the organism. Current proteomic tools allow large-scale, high-throughput analyses for the detection, identification, and functional investigation of the proteome. Advances in protein fractionation and labeling techniques have improved protein identification to include the least abundant proteins. In addition, proteomics has been complemented by the analysis of posttranslational modifications and techniques for the quantitative comparison of different proteomes. However, the major limitation of proteomic investigations remains the complexity of biological structures and physiological processes, rendering the path of exploration paved with various difficulties and pitfalls. The quantity of data that is acquired with new techniques places new challenges on data processing and analysis. This article provides a brief overview of currently available proteomic techniques and their applications, followed by a detailed description of their advantages and technical challenges. Some solutions to circumvent technical difficulties are proposed. PMID:20948568

  5. Local gravity and large-scale structure

    NASA Technical Reports Server (NTRS)

    Juszkiewicz, Roman; Vittorio, Nicola; Wyse, Rosemary F. G.

    1990-01-01

    The magnitude and direction of the observed dipole anisotropy of the galaxy distribution can in principle constrain the amount of large-scale power present in the spectrum of primordial density fluctuations. This paper confronts the data, provided by a recent redshift survey of galaxies detected by the IRAS satellite, with the predictions of two cosmological models with very different levels of large-scale power: the biased Cold Dark Matter dominated model (CDM) and a baryon-dominated model (BDM) with isocurvature initial conditions. Model predictions are investigated for the Local Group peculiar velocity, v(R), induced by mass inhomogeneities distributed out to a given radius, R, for R less than about 10,000 km/s. Several convergence measures for v(R) are developed, which can become powerful cosmological tests when deep enough samples become available. For the present data sets, the CDM and BDM predictions are indistinguishable at the 2 sigma level and both are consistent with observations. A promising discriminant between cosmological models is the misalignment angle between v(R) and the apex of the dipole anisotropy of the microwave background.

  6. Large-scale Intelligent Transporation Systems simulation

    SciTech Connect

    Ewing, T.; Canfield, T.; Hannebutte, U.; Levine, D.; Tentner, A.

    1995-06-01

    A prototype computer system has been developed which defines a high-level architecture for a large-scale, comprehensive, scalable simulation of an Intelligent Transportation System (ITS) capable of running on massively parallel computers and distributed (networked) computer systems. The prototype includes the modelling of instrumented "smart" vehicles with in-vehicle navigation units capable of optimal route planning, and Traffic Management Centers (TMC). The TMC has probe vehicle tracking capabilities (displaying the position and attributes of instrumented vehicles), and can provide 2-way interaction with traffic to provide advisories and link times. Both the in-vehicle navigation module and the TMC feature detailed graphical user interfaces to support human-factors studies. The prototype has been developed on a distributed system of networked UNIX computers but is designed to run on ANL's IBM SP-X parallel computer system for large-scale problems. A novel feature of our design is that vehicles will be represented by autonomous computer processes, each with a behavior model which performs independent route selection and reacts to external traffic events much like real vehicles. With this approach, one will be able to take advantage of emerging massively parallel processor (MPP) systems.

  8. Large-scale Globally Propagating Coronal Waves.

    PubMed

    Warmuth, Alexander

    Large-scale, globally propagating wave-like disturbances have been observed in the solar chromosphere and by inference in the corona since the 1960s. However, detailed analysis of these phenomena has only been conducted since the late 1990s. This was prompted by the availability of high-cadence coronal imaging data from numerous space-based instruments, which routinely show spectacular globally propagating bright fronts. Coronal waves, as these perturbations are usually referred to, have now been observed in a wide range of spectral channels, yielding a wealth of information. Many findings have supported the "classical" interpretation of the disturbances: fast-mode MHD waves or shocks that are propagating in the solar corona. However, observations that seemed inconsistent with this picture have stimulated the development of alternative models in which "pseudo waves" are generated by magnetic reconfiguration in the framework of an expanding coronal mass ejection. This has resulted in a vigorous debate on the physical nature of these disturbances. This review focuses on demonstrating how the numerous observational findings of the last one and a half decades can be used to constrain our models of large-scale coronal waves, and how a coherent physical understanding of these disturbances is finally emerging.

  9. Channel capacity of next generation large scale MIMO systems

    NASA Astrophysics Data System (ADS)

    Alshammari, A.; Albdran, S.; Matin, M.

    2016-09-01

    The information rate that can be transferred over a given bandwidth is limited by information theory. Capacity depends on many factors such as the signal-to-noise ratio (SNR), channel state information (CSI) and the spatial correlation in the propagation environment. Increasing spectral efficiency is essential to meet the growing demand for wireless services. Multiple-input multiple-output (MIMO) technology has therefore been developed and applied in most wireless standards, and it has been very successful in increasing capacity and reliability. As demand keeps increasing, attention is now shifting towards large-scale MIMO, which has the potential to bring orders-of-magnitude improvements in spectral and energy efficiency. It has been shown that users' channels decorrelate as the number of antennas increases; as a result, inter-user interference can be avoided since energy can be focused in precise directions. This paper investigates the limits of channel capacity for large-scale MIMO. We study the relation between spectral efficiency and the number of antennas N. We use a time-division duplex (TDD) system in order to obtain CSI from training sequences in the uplink; the same CSI is used for the downlink because the channel is reciprocal. Spectral efficiency is measured for a channel model that accounts for small-scale fading while ignoring large-scale fading. It is shown that spectral efficiency can be improved significantly compared to single-antenna systems under ideal circumstances.
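    A minimal numerical sketch of the capacity scaling discussed here (not the paper's own simulation): for an i.i.d. Rayleigh channel matrix H with N_r receive and N_t transmit antennas and equal power allocation, C = log2 det(I + (SNR/N_t) H H^H), and averaging over channel realizations gives the ergodic capacity. The antenna counts and SNR below are illustrative.

```python
# Ergodic capacity of an i.i.d. Rayleigh MIMO channel with equal power allocation.
import numpy as np

def ergodic_capacity(n_r, n_t, snr_db, trials=500, rng=np.random.default_rng(0)):
    snr = 10 ** (snr_db / 10)
    caps = []
    for _ in range(trials):
        h = (rng.standard_normal((n_r, n_t))
             + 1j * rng.standard_normal((n_r, n_t))) / np.sqrt(2)
        m = np.eye(n_r) + (snr / n_t) * h @ h.conj().T
        caps.append(np.linalg.slogdet(m)[1] / np.log(2))   # bits/s/Hz
    return np.mean(caps)

for n in (1, 4, 16, 64, 128):
    print(f"{n:4d} antennas per side: {ergodic_capacity(n, n, 10):6.1f} bit/s/Hz")
```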

  10. The Phoenix series large scale LNG pool fire experiments.

    SciTech Connect

    Simpson, Richard B.; Jensen, Richard Pearson; Demosthenous, Byron; Luketa, Anay Josephine; Ricks, Allen Joseph; Hightower, Marion Michael; Blanchat, Thomas K.; Helmick, Paul H.; Tieszen, Sheldon Robert; Deola, Regina Anne; Mercier, Jeffrey Alan; Suo-Anttila, Jill Marie; Miller, Timothy J.

    2010-12-01

    The increasing demand for natural gas could increase the number and frequency of Liquefied Natural Gas (LNG) tanker deliveries to ports across the United States. Because of the increasing number of shipments and the number of possible new facilities, concerns about the safety of the public and property in the event of accidental, and even more importantly intentional, spills have increased. While improvements have been made over the past decade in assessing hazards from LNG spills, the existing experimental data is much smaller in size and scale than many postulated large accidental and intentional spills. Since the physics and hazards from a fire change with fire size, there are concerns about the adequacy of current hazard prediction techniques for large LNG spills and fires. To address these concerns, Congress funded the Department of Energy (DOE) in 2008 to conduct a series of laboratory and large-scale LNG pool fire experiments at Sandia National Laboratories (Sandia) in Albuquerque, New Mexico. This report presents the test data and results of both sets of fire experiments. A series of five reduced-scale (gas burner) tests (yielding 27 sets of data) were conducted in 2007 and 2008 at Sandia's Thermal Test Complex (TTC) to assess flame height to fire diameter ratios as a function of nondimensional heat release rates for extrapolation to large-scale LNG fires. The large-scale LNG pool fire experiments were conducted in a 120 m diameter pond specially designed and constructed in Sandia's Area III large-scale test complex. Two fire tests of LNG spills of 21 and 81 m in diameter were conducted in 2009 to improve the understanding of flame height, smoke production, and burn rate and therefore the physics and hazards of large LNG spills and fires.

  11. Investigating the Role of Large-Scale Domain Dynamics in Protein-Protein Interactions

    PubMed Central

    Delaforge, Elise; Milles, Sigrid; Huang, Jie-rong; Bouvier, Denis; Jensen, Malene Ringkjøbing; Sattler, Michael; Hart, Darren J.; Blackledge, Martin

    2016-01-01

    Intrinsically disordered linkers provide multi-domain proteins with degrees of conformational freedom that are often essential for function. These highly dynamic assemblies represent a significant fraction of all proteomes, and deciphering the physical basis of their interactions represents a considerable challenge. Here we describe the difficulties associated with mapping the large-scale domain dynamics and describe two recent examples where solution state methods, in particular NMR spectroscopy, are used to investigate conformational exchange on very different timescales. PMID:27679800

  12. Efficient, large scale separation of coal macerals

    SciTech Connect

    Dyrkacz, G.R.; Bloomquist, C.A.A.

    1988-01-01

    The authors believe that the separation of macerals by continuous flow centrifugation offers a simple technique for the large scale separation of macerals. With relatively little cost (approximately $10K), it provides an opportunity for obtaining quite pure maceral fractions. Although they have not completely worked out all the nuances of this separation system, they believe that the problems they have indicated can be minimized to pose only minor inconvenience. It cannot be said that this system completely bypasses the disagreeable tedium or time involved in separating macerals, nor will it by itself overcome the mental inertia required to make maceral separation an accepted necessary fact in fundamental coal science. However, they find their particular brand of continuous flow centrifugation is considerably faster than sink/float separation, can provide a good quality product with even one separation cycle, and permits the handling of more material than a conventional sink/float centrifuge separation.

  13. Primer design for large scale sequencing.

    PubMed Central

    Haas, S; Vingron, M; Poustka, A; Wiemann, S

    1998-01-01

    We have developed PRIDE, a primer design program that automatically designs primers in single contigs or whole sequencing projects to extend the already known sequence and to double strand single-stranded regions. The program is fully integrated into the Staden package (GAP4) and accessible with a graphical user interface. PRIDE uses a fuzzy logic-based system to calculate primer qualities. The computational performance of PRIDE is enhanced by using suffix trees to store the huge amount of data being produced. A test set of 110 sequencing primers and 11 PCR primer pairs has been designed on genomic templates, cDNAs and sequences containing repetitive elements to analyze PRIDE's success rate. The high performance of PRIDE, combined with its minimal requirement of user interaction and its fast algorithm, make this program useful for the large scale design of primers, especially in large sequencing projects. PMID:9611248
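    A toy scoring function in the same spirit as PRIDE's quality calculation (though not its actual fuzzy-logic rules) is sketched below: each candidate primer receives a score in [0, 1] from soft penalties on GC content, melting temperature (Wallace rule) and length. All thresholds are illustrative assumptions.

```python
# Toy primer quality score with soft (fuzzy-like) penalties.
def gc_content(seq):
    return (seq.count("G") + seq.count("C")) / len(seq)

def wallace_tm(seq):
    # Tm ~= 2*(A+T) + 4*(G+C); a crude rule of thumb for short oligos
    return 2 * (seq.count("A") + seq.count("T")) + 4 * (seq.count("G") + seq.count("C"))

def soft(value, low, high, width):
    """1.0 inside [low, high], tapering linearly to 0 over `width` outside."""
    if value < low:
        return max(0.0, 1.0 - (low - value) / width)
    if value > high:
        return max(0.0, 1.0 - (value - high) / width)
    return 1.0

def primer_quality(seq):
    return (soft(gc_content(seq), 0.40, 0.60, 0.15)
            * soft(wallace_tm(seq), 52, 60, 8)
            * soft(len(seq), 18, 24, 4))

for p in ["ATGCGTACGTTAGCCTAGGA", "AAAAAAAATTTTTTTTAAAA", "GCGCGCGCGCGCGCGCGCGC"]:
    print(p, round(primer_quality(p), 2))
```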

  14. Grid sensitivity capability for large scale structures

    NASA Technical Reports Server (NTRS)

    Nagendra, Gopal K.; Wallerstein, David V.

    1989-01-01

    The considerations and the resultant approach used to implement design sensitivity capability for grids into a large scale, general purpose finite element system (MSC/NASTRAN) are presented. The design variables are grid perturbations with a rather general linking capability. Moreover, shape and sizing variables may be linked together. The design is general enough to facilitate geometric modeling techniques for generating design variable linking schemes in an easy and straightforward manner. Test cases have been run and validated by comparison with the overall finite difference method. The linking of a design sensitivity capability for shape variables in MSC/NASTRAN with an optimizer would give designers a powerful, automated tool to carry out practical optimization design of real life, complicated structures.

  15. Large-Scale Organization of Glycosylation Networks

    NASA Astrophysics Data System (ADS)

    Kim, Pan-Jun; Lee, Dong-Yup; Jeong, Hawoong

    2009-03-01

    Glycosylation is a highly complex process to produce a diverse repertoire of cellular glycans that are frequently attached to proteins and lipids. Glycans participate in fundamental biological processes including molecular trafficking and clearance, cell proliferation and apoptosis, developmental biology, immune response, and pathogenesis. N-linked glycans found on proteins are formed by sequential attachments of monosaccharides with the help of a relatively small number of enzymes. Many of these enzymes can accept multiple N-linked glycans as substrates, thus generating a large number of glycan intermediates and their intermingled pathways. Motivated by the quantitative methods developed in complex network research, we investigate the large-scale organization of such N-glycosylation pathways in a mammalian cell. The uncovered results give the experimentally-testable predictions for glycosylation process, and can be applied to the engineering of therapeutic glycoproteins.

  16. Large-scale optimization of neuron arbors

    NASA Astrophysics Data System (ADS)

    Cherniak, Christopher; Changizi, Mark; Won Kang, Du

    1999-05-01

    At the global as well as local scales, some of the geometry of types of neuron arbors (both dendrites and axons) appears to be self-organizing: their morphogenesis behaves like flowing water, that is, fluid dynamically; waterflow in branching networks in turn acts like a tree composed of cords under tension, that is, vector mechanically. Branch diameters, angles and junction sites conform significantly to this model. The result is that such neuron tree samples globally minimize their total volume, rather than, for example, surface area or branch length. In addition, the arbors perform well at generating the cheapest topology interconnecting their terminals: their large-scale layouts are among the best of all such possible connecting patterns, approaching 5% of optimum. This model also applies comparably to arterial and river networks.

  17. Large scale cryogenic fluid systems testing

    NASA Technical Reports Server (NTRS)

    1992-01-01

    NASA Lewis Research Center's Cryogenic Fluid Systems Branch (CFSB) within the Space Propulsion Technology Division (SPTD) has the ultimate goal of enabling the long term storage and in-space fueling/resupply operations for spacecraft and reusable vehicles in support of space exploration. Using analytical modeling, ground based testing, and on-orbit experimentation, the CFSB is studying three primary categories of fluid technology: storage, supply, and transfer. The CFSB is also investigating fluid handling, advanced instrumentation, and tank structures and materials. Ground based testing of large-scale systems is done using liquid hydrogen as a test fluid at the Cryogenic Propellant Tank Facility (K-site) at Lewis' Plum Brook Station in Sandusky, Ohio. A general overview of tests involving liquid transfer, thermal control, pressure control, and pressurization is given.

  18. Large scale preparation of pure phycobiliproteins.

    PubMed

    Padgett, M P; Krogmann, D W

    1987-01-01

    This paper describes simple procedures for the purification of large amounts of phycocyanin and allophycocyanin from the cyanobacterium Microcystis aeruginosa. A homogeneous natural bloom of this organism provided hundreds of kilograms of cells. Large samples of cells were broken by freezing and thawing. Repeated extraction of the broken cells with distilled water released phycocyanin first, then allophycocyanin, and provides supporting evidence for the current models of phycobilisome structure. The very low ionic strength of the aqueous extracts allowed allophycocyanin release in a particulate form so that this protein could be easily concentrated by centrifugation. Other proteins in the extract were enriched and concentrated by large scale membrane filtration. The biliproteins were purified to homogeneity by chromatography on DEAE cellulose. Purity was established by HPLC and by N-terminal amino acid sequence analysis. The proteins were examined for stability at various pHs and exposures to visible light.

  19. Primer design for large scale sequencing.

    PubMed

    Haas, S; Vingron, M; Poustka, A; Wiemann, S

    1998-06-15

    We have developed PRIDE, a primer design program that automatically designs primers in single contigs or whole sequencing projects to extend the already known sequence and to double strand single-stranded regions. The program is fully integrated into the Staden package (GAP4) and accessible with a graphical user interface. PRIDE uses a fuzzy logic-based system to calculate primer qualities. The computational performance of PRIDE is enhanced by using suffix trees to store the huge amount of data being produced. A test set of 110 sequencing primers and 11 PCR primer pairs has been designed on genomic templates, cDNAs and sequences containing repetitive elements to analyze PRIDE's success rate. The high performance of PRIDE, combined with its minimal requirement of user interaction and its fast algorithm, make this program useful for the large scale design of primers, especially in large sequencing projects.

  20. Large-scale synthesis of peptides.

    PubMed

    Andersson, L; Blomberg, L; Flegel, M; Lepsa, L; Nilsson, B; Verlander, M

    2000-01-01

    Recent advances in the areas of formulation and delivery have rekindled the interest of the pharmaceutical community in peptides as drug candidates, which, in turn, has provided a challenge to the peptide industry to develop efficient methods for the manufacture of relatively complex peptides on scales of up to metric tons per year. This article focuses on chemical synthesis approaches for peptides, and presents an overview of the methods available and in use currently, together with a discussion of scale-up strategies. Examples of the different methods are discussed, together with solutions to some specific problems encountered during scale-up development. Finally, an overview is presented of issues common to all manufacturing methods, i.e., methods used for the large-scale purification and isolation of final bulk products and regulatory considerations to be addressed during scale-up of processes to commercial levels. Copyright 2000 John Wiley & Sons, Inc. Biopolymers (Pept Sci) 55: 227-250, 2000

  1. Large Scale Quantum Simulations of Nuclear Pasta

    NASA Astrophysics Data System (ADS)

    Fattoyev, Farrukh J.; Horowitz, Charles J.; Schuetrumpf, Bastian

    2016-03-01

    Complex and exotic nuclear geometries collectively referred to as "nuclear pasta" are expected to naturally exist in the crust of neutron stars and in supernova matter. Using a set of self-consistent microscopic nuclear energy density functionals, we present the first results of large-scale quantum simulations of pasta phases at baryon densities 0.03 < ρ < 0.10 fm^-3 and proton fractions 0.05

  2. Jovian large-scale stratospheric circulation

    NASA Technical Reports Server (NTRS)

    West, R. A.; Friedson, A. J.; Appleby, J. F.

    1992-01-01

    An attempt is made to diagnose the annual-average mean meridional residual Jovian large-scale stratospheric circulation from observations of the temperature and of reflected sunlight that reveal the morphology of the aerosol heating. The annual mean solar heating, total radiative flux divergence, mass stream function, and Eliassen-Palm flux divergence are shown. The stratospheric radiative flux divergence is dominated at high latitudes by aerosol absorption. Between the 270 and 100 mbar pressure levels, where there is no aerosol heating in the model, the structure of the circulation at low to midlatitudes is governed by the meridional variation of infrared cooling in association with the variation of zonal mean temperatures observed by IRIS. The principal features of the vertical velocity profile found by Gierasch et al. (1986) are recovered in the present calculation.

  3. Large scale study of tooth enamel

    SciTech Connect

    Bodart, F.; Deconninck, G.; Martin, M.Th.

    1981-04-01

    Human tooth enamel contains traces of foreign elements. The presence of these elements is related to the history and the environment of the human body and can be considered as the signature of perturbations which occur during the growth of a tooth. A map of the distribution of these traces on a large scale sample of the population will constitute a reference for further investigations of environmental effects. One hundred eighty samples of teeth were first analysed using PIXE, backscattering and nuclear reaction techniques. The results were analysed using statistical methods. Correlations between O, F, Na, P, Ca, Mn, Fe, Cu, Zn, Pb and Sr were observed and cluster analysis was in progress. The techniques described in the present work have been developed in order to establish a method for the exploration of very large samples of the Belgian population.

  4. The challenge of large-scale structure

    NASA Astrophysics Data System (ADS)

    Gregory, S. A.

    1996-03-01

    The tasks that I have assumed for myself in this presentation include three separate parts. The first, appropriate to the particular setting of this meeting, is to review the basic work of the founding of this field; the appropriateness comes from the fact that W. G. Tifft made immense contributions that are not often realized by the astronomical community. The second task is to outline the general tone of the observational evidence for large scale structures. (Here, in particular, I cannot claim to be complete. I beg forgiveness from any workers who are left out by my oversight for lack of space and time.) The third task is to point out some of the major aspects of the field that may represent the clues by which some brilliant sleuth will ultimately figure out how galaxies formed.

  5. Modeling the Internet's large-scale topology

    PubMed Central

    Yook, Soon-Hyung; Jeong, Hawoong; Barabási, Albert-László

    2002-01-01

    Network generators that capture the Internet's large-scale topology are crucial for the development of efficient routing protocols and modeling Internet traffic. Our ability to design realistic generators is limited by the incomplete understanding of the fundamental driving forces that affect the Internet's evolution. By combining several independent databases capturing the time evolution, topology, and physical layout of the Internet, we identify the universal mechanisms that shape the Internet's router and autonomous system level topology. We find that the physical layout of nodes forms a fractal set, determined by population density patterns around the globe. The placement of links is driven by competition between preferential attachment and linear distance dependence, a marked departure from the currently used exponential laws. The universal parameters that we extract significantly restrict the class of potentially correct Internet models and indicate that the networks created by all available topology generators are fundamentally different from the current Internet. PMID:12368484

  6. Geospatial Optimization of Siting Large-Scale Solar Projects

    SciTech Connect

    Macknick, J.; Quinby, T.; Caulfield, E.; Gerritsen, M.; Diffendorfer, J.; Haines, S.

    2014-03-01

    Recent policy and economic conditions have encouraged a renewed interest in developing large-scale solar projects in the U.S. Southwest. However, siting large-scale solar projects is complex. In addition to the quality of the solar resource, solar developers must take into consideration many environmental, social, and economic factors when evaluating a potential site. This report describes a proof-of-concept, Web-based Geographical Information Systems (GIS) tool that evaluates multiple user-defined criteria in an optimization algorithm to inform discussions and decisions regarding the locations of utility-scale solar projects. Existing siting recommendations for large-scale solar projects from governmental and non-governmental organizations are not consistent with each other, are often not transparent in methods, and do not take into consideration the differing priorities of stakeholders. The siting assistance GIS tool we have developed improves upon the existing siting guidelines by being user-driven, transparent, interactive, capable of incorporating multiple criteria, and flexible. This work provides the foundation for a dynamic siting assistance tool that can greatly facilitate siting decisions among multiple stakeholders.
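    As a rough sketch of the weighted multi-criteria overlay that such siting tools perform (not the actual algorithm or data of this study), the example below normalizes a few toy raster layers, combines them with user-chosen weights, masks excluded cells, and reports the best-scoring cell. Layer names, weights and the exclusion mask are placeholders.

```python
# Toy weighted-overlay siting score on synthetic raster layers.
import numpy as np

rng = np.random.default_rng(4)
shape = (200, 300)                          # toy raster grid
layers = {
    "solar_resource": rng.uniform(4, 8, shape),   # kWh/m^2/day (higher is better)
    "slope":          rng.uniform(0, 20, shape),  # percent (lower is better)
    "dist_to_line":   rng.uniform(0, 50, shape),  # km to transmission (lower is better)
}
weights = {"solar_resource": 0.5, "slope": 0.2, "dist_to_line": 0.3}
higher_is_better = {"solar_resource": True, "slope": False, "dist_to_line": False}
protected = rng.random(shape) < 0.1         # exclusion mask (e.g. protected areas)

score = np.zeros(shape)
for name, layer in layers.items():
    norm = (layer - layer.min()) / (layer.max() - layer.min())
    if not higher_is_better[name]:
        norm = 1.0 - norm
    score += weights[name] * norm
score[protected] = np.nan                   # excluded cells never win

best = np.unravel_index(np.nanargmax(score), shape)
print("best candidate cell:", best, "score:", round(float(score[best]), 3))
```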

  7. Robust regression for large-scale neuroimaging studies.

    PubMed

    Fritsch, Virgile; Da Mota, Benoit; Loth, Eva; Varoquaux, Gaël; Banaschewski, Tobias; Barker, Gareth J; Bokde, Arun L W; Brühl, Rüdiger; Butzek, Brigitte; Conrod, Patricia; Flor, Herta; Garavan, Hugh; Lemaitre, Hervé; Mann, Karl; Nees, Frauke; Paus, Tomas; Schad, Daniel J; Schümann, Gunter; Frouin, Vincent; Poline, Jean-Baptiste; Thirion, Bertrand

    2015-05-01

    Multi-subject datasets used in neuroimaging group studies have a complex structure, as they exhibit non-stationary statistical properties across regions and display various artifacts. While studies with small sample sizes can rarely be shown to deviate from standard hypotheses (such as the normality of the residuals) due to the poor sensitivity of normality tests with low degrees of freedom, large-scale studies (e.g. >100 subjects) exhibit more obvious deviations from these hypotheses and call for more refined models for statistical inference. Here, we demonstrate the benefits of robust regression as a tool for analyzing large neuroimaging cohorts. First, we use an analytic test based on robust parameter estimates; based on simulations, this procedure is shown to provide an accurate statistical control without resorting to permutations. Second, we show that robust regression yields more detections than standard algorithms using as an example an imaging genetics study with 392 subjects. Third, we show that robust regression can avoid false positives in a large-scale analysis of brain-behavior relationships with over 1500 subjects. Finally we embed robust regression in the Randomized Parcellation Based Inference (RPBI) method and demonstrate that this combination further improves the sensitivity of tests carried out across the whole brain. Altogether, our results show that robust procedures provide important advantages in large-scale neuroimaging group studies. Copyright © 2015 Elsevier Inc. All rights reserved.
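    As a small, self-contained illustration of the robust-fit idea advocated here (not the authors' pipeline), the sketch below compares ordinary least squares with a Huber M-estimator from statsmodels on synthetic data containing a few gross outliers; the variable names and data are invented for the example.

```python
# OLS vs. Huber robust regression on outlier-contaminated synthetic data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
genotype = rng.integers(0, 3, n).astype(float)        # e.g. minor-allele count
brain_measure = 0.5 * genotype + rng.normal(0, 1, n)  # true slope = 0.5
outliers = np.where(genotype == 0)[0][:5]
brain_measure[outliers] += 15.0                       # artifact-driven outliers in one group

X = sm.add_constant(genotype)
ols = sm.OLS(brain_measure, X).fit()
robust = sm.RLM(brain_measure, X, M=sm.robust.norms.HuberT()).fit()

print("OLS slope   :", round(ols.params[1], 3))
print("Huber slope :", round(robust.params[1], 3))    # much closer to 0.5
```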

  8. Reliability assessment for components of large scale photovoltaic systems

    NASA Astrophysics Data System (ADS)

    Ahadi, Amir; Ghadimi, Noradin; Mirabbasi, Davar

    2014-10-01

    Photovoltaic (PV) systems have significantly shifted from independent power generation systems to large-scale grid-connected generation systems in recent years. The power output of PV systems is affected by the reliability of various components in the system. This study proposes an analytical approach to evaluate the reliability of large-scale, grid-connected PV systems. The fault tree method with an exponential probability distribution function is used to analyze the components of large-scale PV systems. The system is analyzed over various sequential and parallel fault combinations in order to find all realistic ways in which the top or undesired events can occur. Additionally, the analysis can identify areas on which planned maintenance should focus. By monitoring the critical components of a PV system, it is possible not only to improve the reliability of the system, but also to optimize the maintenance costs. The latter is achieved by informing the operators about the status of system components. This approach can be used to ensure secure operation of the system thanks to its flexibility in monitoring system applications. The implementation demonstrates that the proposed method is effective and efficient and can conveniently incorporate more system maintenance plans and diagnostic strategies.
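    A toy version of the exponential-reliability bookkeeping described above is sketched below; the component failure rates and gate structure are invented for illustration and do not reproduce the paper's model. Each component survives with probability R(t) = exp(-lambda*t); an OR gate (series-critical parts) multiplies reliabilities, an AND gate (redundant paths) multiplies failure probabilities.

```python
# Toy fault-tree reliability calculation with exponentially distributed lifetimes.
import numpy as np

def reliability(lam, t):
    return np.exp(-lam * t)                 # R(t) for constant failure rate lam

def or_gate(*component_rel):                # top event occurs if ANY input fails
    out = 1.0
    for r in component_rel:
        out *= r                            # survival requires all inputs to survive
    return out

def and_gate(*component_rel):               # top event occurs only if ALL inputs fail
    fail = 1.0
    for r in component_rel:
        fail *= (1.0 - r)
    return 1.0 - fail

t_hours = 8760.0                            # one year
lam = {"pv_module": 5e-7, "dc_cable": 1e-7, "inverter": 4e-6, "transformer": 1e-6}

string_rel = or_gate(reliability(lam["pv_module"], t_hours),
                     reliability(lam["dc_cable"], t_hours))
inverters = and_gate(*(reliability(lam["inverter"], t_hours) for _ in range(2)))
system_rel = or_gate(string_rel, inverters, reliability(lam["transformer"], t_hours))
print(f"system reliability over one year: {system_rel:.4f}")
```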

  9. Multiresolution comparison of precipitation datasets for large-scale models

    NASA Astrophysics Data System (ADS)

    Chun, K. P.; Sapriza Azuri, G.; Davison, B.; DeBeer, C. M.; Wheater, H. S.

    2014-12-01

    Gridded precipitation datasets are crucial for driving large-scale models used in weather forecasting and climate research. However, the quality of precipitation products is usually validated individually. Comparisons between gridded precipitation products along with ground observations provide another avenue for investigating how precipitation uncertainty affects the performance of large-scale models. In this study, using data from a set of precipitation gauges over British Columbia and Alberta, we evaluate several widely used North American gridded products, including the Canadian Gridded Precipitation Anomalies (CANGRD), the National Centers for Environmental Prediction (NCEP) reanalysis, the Water and Global Change (WATCH) project, the thin-plate spline smoothing algorithm (ANUSPLIN) and the Canadian Precipitation Analysis (CaPA). Based on verification criteria for various temporal and spatial scales, the results provide an assessment of possible applications for the various precipitation datasets. For long-term climate variation studies (~100 years), CANGRD, NCEP, WATCH and ANUSPLIN have different comparative advantages in terms of resolution and accuracy. For synoptic and mesoscale precipitation patterns, CaPA provides appealing spatial coherence. In addition to the product comparison, various downscaling methods are also surveyed to explore new verification and bias-reduction methods for improving gridded precipitation outputs for large-scale models.

  10. Equivalent common path method in large-scale laser comparator

    NASA Astrophysics Data System (ADS)

    He, Mingzhao; Li, Jianshuang; Miao, Dongjing

    2015-02-01

    The large-scale laser comparator is the main standard device providing accurate, reliable and traceable measurements for high-precision large-scale line and 3D measurement instruments. It is mainly composed of a guide rail, a motion control system, an environmental parameter monitoring system and a displacement measurement system. In the laser comparator, the main error sources are the temperature distribution, the straightness of the guide rail, and the pitch and yaw of the measuring carriage. To minimize the measurement uncertainty, an equivalent common optical path scheme is proposed and implemented. Three laser interferometers are adjusted to be parallel with the guide rail. The displacement in an arbitrary virtual optical path is calculated using the three measured displacements, without knowledge of the carriage orientations at the start and end positions. The orientation of the air-floating carriage is calculated from the displacements of the three optical paths and the positions of the three retroreflectors, which are precisely measured by a laser tracker. A fourth laser interferometer is placed in the virtual optical path as a reference to verify this compensation method. This paper also analyzes the effect of rail straightness on the displacement measurement. The proposed method, through experimental verification, can reduce the measurement uncertainty of the large-scale laser comparator.
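    The sketch below illustrates the interpolation step implied by the method, under the assumption of a rigid carriage and small angles (it is not the authors' implementation): with three beams parallel to the rail at known transverse positions, the measured displacements define a plane from which the displacement along any virtual parallel path follows. Coordinates and readings are invented for illustration.

```python
# Interpolate the displacement of a virtual optical path from three parallel beams.
import numpy as np

beam_yz = np.array([[0.00, 0.00],      # transverse (y, z) positions of the three
                    [0.30, 0.00],      # retroreflectors (m), e.g. from a
                    [0.00, 0.20]])     # laser-tracker survey
readings = np.array([1.000012, 1.000020, 1.000007])   # interferometer displacements (m)

# Solve d(y, z) = a + b*y + c*z for the plane coefficients.
A = np.column_stack([np.ones(3), beam_yz])
a, b, c = np.linalg.solve(A, readings)

virtual_yz = np.array([0.15, 0.10])    # transverse position of the virtual path
d_virtual = a + b * virtual_yz[0] + c * virtual_yz[1]
print(f"displacement along the virtual path: {d_virtual:.6f} m")
# b and c also encode the change in carriage yaw and pitch between the two
# carriage positions (small-angle approximation).
```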

  11. Large-scale sequential quadratic programming algorithms

    SciTech Connect

    Eldersveld, S.K.

    1992-09-01

    The problem addressed is the general nonlinear programming problem: finding a local minimizer for a nonlinear function subject to a mixture of nonlinear equality and inequality constraints. The methods studied are in the class of sequential quadratic programming (SQP) algorithms, which have previously proved successful for problems of moderate size. Our goal is to devise an SQP algorithm that is applicable to large-scale optimization problems, using sparse data structures and storing less curvature information but maintaining the property of superlinear convergence. The main features are: 1. The use of a quasi-Newton approximation to the reduced Hessian of the Lagrangian function. Only an estimate of the reduced Hessian matrix is required by our algorithm. The impact of not having available the full Hessian approximation is studied and alternative estimates are constructed. 2. The use of a transformation matrix Q. This allows the QP gradient to be computed easily when only the reduced Hessian approximation is maintained. 3. The use of a reduced-gradient form of the basis for the null space of the working set. This choice of basis is more practical than an orthogonal null-space basis for large-scale problems. The continuity condition for this choice is proven. 4. The use of incomplete solutions of quadratic programming subproblems. Certain iterates generated by an active-set method for the QP subproblem are used in place of the QP minimizer to define the search direction for the nonlinear problem. An implementation of the new algorithm has been obtained by modifying the code MINOS. Results and comparisons with MINOS and NPSOL are given for the new algorithm on a set of 92 test problems.
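    The abstract describes a specialized reduced-Hessian SQP code built by modifying MINOS; as a generic illustration of the problem class only, the sketch below solves a small nonlinear program with mixed equality and inequality constraints using SciPy's off-the-shelf SQP routine (SLSQP), which is not the algorithm studied here. The objective and constraints are made up.

```python
# Small nonlinear program solved with an off-the-shelf SQP method (SLSQP).
import numpy as np
from scipy.optimize import minimize

def objective(x):
    return (x[0] - 1) ** 2 + (x[1] - 2.5) ** 2 + np.exp(x[2])

constraints = [
    {"type": "eq",   "fun": lambda x: x[0] + x[1] + x[2] - 3.0},   # equality
    {"type": "ineq", "fun": lambda x: x[0] * x[1] - 0.5},          # must be >= 0
]
result = minimize(objective, x0=np.zeros(3), method="SLSQP",
                  constraints=constraints, bounds=[(-5, 5)] * 3)
print("x* =", np.round(result.x, 4), " f(x*) =", round(result.fun, 4))
```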

  12. Introducing Large-Scale Innovation in Schools

    NASA Astrophysics Data System (ADS)

    Sotiriou, Sofoklis; Riviou, Katherina; Cherouvis, Stephanos; Chelioti, Eleni; Bogner, Franz X.

    2016-08-01

    Education reform initiatives tend to promise higher effectiveness in classrooms especially when emphasis is given to e-learning and digital resources. Practical changes in classroom realities or school organization, however, are lacking. A major European initiative entitled Open Discovery Space (ODS) examined the challenge of modernizing school education via a large-scale implementation of an open-scale methodology in using technology-supported innovation. The present paper describes this innovation scheme which involved schools and teachers all over Europe, embedded technology-enhanced learning into wider school environments and provided training to teachers. Our implementation scheme consisted of three phases: (1) stimulating interest, (2) incorporating the innovation into school settings and (3) accelerating the implementation of the innovation. The scheme's impact was monitored for a school year using five indicators: leadership and vision building, ICT in the curriculum, development of ICT culture, professional development support, and school resources and infrastructure. Based on about 400 schools, our study produced four results: (1) The growth in digital maturity was substantial, even for previously high scoring schools. This was even more important for indicators such as "vision and leadership" and "professional development." (2) The evolution of networking is presented graphically, showing the gradual growth of connections achieved. (3) These communities became core nodes, involving numerous teachers in sharing educational content and experiences: One out of three registered users (36 %) has shared his/her educational resources in at least one community. (4) Satisfaction scores ranged from 76 % (offer of useful support through teacher academies) to 87 % (good environment to exchange best practices). Initiatives such as ODS add substantial value to schools on a large scale.

  13. Supporting large-scale computational science

    SciTech Connect

    Musick, R

    1998-10-01

    A study has been carried out to determine the feasibility of using commercial database management systems (DBMSs) to support large-scale computational science. Conventional wisdom in the past has been that DBMSs are too slow for such data. Several events over the past few years have muddied the clarity of this mindset: (1) several commercial DBMS systems have demonstrated storage and ad-hoc query access to terabyte data sets; (2) several large-scale science teams, such as EOSDIS [NAS91], high energy physics [MM97] and human genome [Kin93], have adopted (or make frequent use of) commercial DBMS systems as the central part of their data management scheme; (3) several major DBMS vendors have introduced their first object-relational products (ORDBMSs), which have the potential to support large, array-oriented data; and (4) in some cases, performance is a moot issue, in particular if the performance of legacy applications is not reduced while new, albeit slow, capabilities are added to the system. The basic assessment is still that DBMSs do not scale to large computational data. However, many of the reasons have changed, and there is an expiration date attached to that prognosis. This document expands on this conclusion, identifies the advantages and disadvantages of various commercial approaches, and describes the studies carried out in exploring this area. The document is meant to be brief, technical and informative, rather than a motivational pitch. The conclusions within are very likely to become outdated within the next 5-7 years, as market forces will have a significant impact on the state of the art in scientific data management over the next decade.

  14. Voids in the Large-Scale Structure

    NASA Astrophysics Data System (ADS)

    El-Ad, Hagai; Piran, Tsvi

    1997-12-01

    Voids are the most prominent feature of the large-scale structure of the universe. Still, their incorporation into quantitative analysis of it has been relatively recent, owing essentially to the lack of an objective tool to identify the voids and to quantify them. To overcome this, we present here the VOID FINDER algorithm, a novel tool for objectively quantifying voids in the galaxy distribution. The algorithm first classifies galaxies as either wall galaxies or field galaxies. Then, it identifies voids in the wall-galaxy distribution. Voids are defined as continuous volumes that do not contain any wall galaxies. The voids must be thicker than an adjustable limit, which is refined in successive iterations. In this way, we identify the same regions that would be recognized as voids by the eye. Small breaches in the walls are ignored, avoiding artificial connections between neighboring voids. We test the algorithm using Voronoi tessellations. By appropriate scaling of the parameters with the selection function, we apply it to two redshift surveys, the dense SSRS2 and the full-sky IRAS 1.2 Jy. Both surveys show similar properties: ~50% of the volume is filled by voids. The voids have a scale of at least 40 h-1 Mpc and an average underdensity of -0.9. Faint galaxies do not fill the voids, but they do populate them more than bright ones. These results suggest that both optically and IRAS-selected galaxies delineate the same large-scale structure. Comparison with the recovered mass distribution further suggests that the observed voids in the galaxy distribution correspond well to underdense regions in the mass distribution. This confirms the gravitational origin of the voids.
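    A heavily simplified sketch of the two-step idea (not the published VOID FINDER code) is given below: galaxies are split into wall and field objects by a nearest-neighbour criterion, and grid nodes whose empty-sphere radius exceeds an adjustable limit are flagged as lying inside voids. The mock catalogue, the 80th-percentile wall cut and the 8 h^-1 Mpc limit are arbitrary choices for illustration.

```python
# Toy void identification on a random mock catalogue.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(2)
box = 100.0                                   # Mpc/h, toy box (no selection function)
galaxies = rng.uniform(0, box, size=(2000, 3))

tree = cKDTree(galaxies)
d3 = tree.query(galaxies, k=4)[0][:, 3]       # distance to 3rd-nearest neighbour
wall = galaxies[d3 < np.percentile(d3, 80)]   # densest 80% -> wall galaxies

wall_tree = cKDTree(wall)
grid = np.stack(np.meshgrid(*[np.linspace(0, box, 25)] * 3), -1).reshape(-1, 3)
radius = wall_tree.query(grid)[0]             # radius of the empty sphere at each node
r_min = 8.0                                   # adjustable thickness limit
void_seeds = grid[radius > r_min]
print(f"{len(void_seeds)} grid nodes sit in holes larger than {r_min} Mpc/h")
```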

  15. Pinstripe: a suite of programs for integrating transcriptomic and proteomic datasets identifies novel proteins and improves differentiation of protein-coding and non-coding genes.

    PubMed

    Gascoigne, Dennis K; Cheetham, Seth W; Cattenoz, Pierre B; Clark, Michael B; Amaral, Paulo P; Taft, Ryan J; Wilhelm, Dagmar; Dinger, Marcel E; Mattick, John S

    2012-12-01

    Comparing transcriptomic data with proteomic data to identify protein-coding sequences is a long-standing challenge in molecular biology, one that is exacerbated by the increasing size of high-throughput datasets. To address this challenge, and thereby to improve the quality of genome annotation and understanding of genome biology, we have developed an integrated suite of programs, called Pinstripe. We demonstrate its application, utility and discovery power using transcriptomic and proteomic data from publicly available datasets. To demonstrate the efficacy of Pinstripe for large-scale analysis, we applied Pinstripe's reverse peptide mapping pipeline to a transcript library including de novo assembled transcriptomes from the human Illumina Body Atlas (IBA2) and GENCODE v10 gene annotations, and the EBI Proteomics Identifications Database (PRIDE) peptide database. This analysis identified 736 canonical open reading frames (ORFs) supported by three or more PRIDE peptide fragments that are positioned outside any known coding DNA sequence (CDS). Because of the unfiltered nature of the PRIDE database and the high probability of false discovery, we further refined this list using independent evidence for translation, including the presence of a Kozak sequence or functional domains, synonymous/non-synonymous substitution ratios and ORF length. Using this integrative approach, we observed evidence of translation from a previously unknown let-7e primary transcript, the archetypical lncRNA H19, and a homolog of RD3. Reciprocally, by exclusion of transcripts with mapped peptides or significant ORFs (>80 codons), we identify 32,187 loci with RNAs longer than 2000 nt that are unlikely to encode proteins. Pinstripe (pinstripe.matticklab.com) is freely available as source code or a Mono binary. Pinstripe is written in C# and runs under the Mono framework on Linux or Mac OS X, and under both Mono and .NET on Windows. Contact: m.dinger@garvan.org.au or j.mattick@garvan.org.au
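    A stripped-down sketch of the reverse peptide mapping step is shown below (not Pinstripe itself, which is written in C#): each transcript is translated in its three forward frames, exact peptide matches are located, and hits are flagged when they fall outside the annotated CDS. The toy transcript, CDS span and peptide list are fabricated, and Biopython is assumed for translation; the real pipeline works on GENCODE/IBA2 transcripts and PRIDE identifications.

```python
# Toy reverse peptide mapping: find peptide hits outside the annotated CDS.
from Bio.Seq import Seq

transcript = ("ATGGCTGCTAAAGGTCTTTAAACCCGGGTTT"
              "ATGCCTGAACGTATCGGTAAAGAATTCTGA")      # fabricated sequence
annotated_cds = (0, 21)                              # known CDS, 0-based [start, end)
peptides = ["PERIGKEF", "AAKGL"]                     # e.g. PRIDE identifications

hits = []
for frame in range(3):
    sub = transcript[frame:]
    sub = sub[: len(sub) - len(sub) % 3]             # trim to whole codons
    protein = str(Seq(sub).translate())              # '*' marks stop codons
    for pep in peptides:
        pos = protein.find(pep)
        if pos >= 0:
            nt_start = frame + 3 * pos               # transcript coordinates
            nt_end = nt_start + 3 * len(pep)
            outside = nt_end <= annotated_cds[0] or nt_start >= annotated_cds[1]
            hits.append((pep, frame, nt_start, nt_end, outside))

for pep, frame, s, e, outside in hits:
    print(f"{pep}: frame {frame}, nt {s}-{e}, outside known CDS: {outside}")
```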

  16. Statistical Measures of Large-Scale Structure

    NASA Astrophysics Data System (ADS)

    Vogeley, Michael; Geller, Margaret; Huchra, John; Park, Changbom; Gott, J. Richard

    1993-12-01

    To quantify clustering in the large-scale distribution of galaxies and to test theories for the formation of structure in the universe, we apply statistical measures to the CfA Redshift Survey. This survey is complete to m_B(0) = 15.5 over two contiguous regions which cover one-quarter of the sky and include ~11,000 galaxies. The salient features of these data are voids with diameter 30-50 h^-1 Mpc and coherent dense structures with a scale of ~100 h^-1 Mpc. Comparison with N-body simulations rules out the "standard" CDM model (Omega = 1, b = 1.5, sigma_8 = 1) at the 99% confidence level because this model has insufficient power on scales lambda > 30 h^-1 Mpc. An unbiased open-universe CDM model (Omega h = 0.2) and a biased CDM model with non-zero cosmological constant (Omega h = 0.24, lambda_0 = 0.6) match the observed power spectrum. The amplitude of the power spectrum depends on the luminosity of galaxies in the sample; bright (L > L*) galaxies are more strongly clustered than faint galaxies. The paucity of bright galaxies in low-density regions may explain this dependence. To measure the topology of large-scale structure, we compute the genus of isodensity surfaces of the smoothed density field. On scales in the "non-linear" regime, <= 10 h^-1 Mpc, the high- and low-density regions are multiply connected over a broad range of density threshold, as in a filamentary net. On smoothing scales > 10 h^-1 Mpc, the topology is consistent with statistics of a Gaussian random field. Simulations of CDM models fail to produce the observed coherence of structure on non-linear scales (>95% confidence level). The underdensity probability (the frequency of regions with density contrast δρ/ρ̄ = -0.8) depends strongly on the luminosity of galaxies; underdense regions are significantly more common (>2 sigma) in bright (L > L*) galaxy samples than in samples which include fainter galaxies.

  17. Management of large-scale multimedia conferencing

    NASA Astrophysics Data System (ADS)

    Cidon, Israel; Nachum, Youval

    1998-12-01

    The goal of this work is to explore management strategies and algorithms for large-scale multimedia conferencing over a communication network. Since the use of multimedia conferencing is still limited, the management of such systems has not yet been studied in depth. A well organized and human friendly multimedia conference management should utilize efficiently and fairly its limited resources as well as take into account the requirements of the conference participants. The ability of the management to enforce fair policies and to quickly take into account the participants preferences may even lead to a conference environment that is more pleasant and more effective than a similar face to face meeting. We suggest several principles for defining and solving resource sharing problems in this context. The conference resources which are addressed in this paper are the bandwidth (conference network capacity), time (participants' scheduling) and limitations of audio and visual equipment. The participants' requirements for these resources are defined and translated in terms of Quality of Service requirements and the fairness criteria.

  18. Large-scale wind turbine structures

    NASA Technical Reports Server (NTRS)

    Spera, David A.

    1988-01-01

    The purpose of this presentation is to show how structural technology was applied in the design of modern wind turbines, which were recently brought to an advanced stage of development as sources of renewable power. Wind turbine structures present many difficult problems because they are relatively slender and flexible; subject to vibration and aeroelastic instabilities; acted upon by loads which are often nondeterministic; operated continuously with little maintenance in all weather; and dominated by life-cycle cost considerations. Progress in horizontal-axis wind turbines (HAWT) development was paced by progress in the understanding of structural loads, modeling of structural dynamic response, and designing of innovative structural response. During the past 15 years a series of large HAWTs was developed. This has culminated in the recent completion of the world's largest operating wind turbine, the 3.2 MW Mod-5B power plane installed on the island of Oahu, Hawaii. Some of the applications of structures technology to wind turbine will be illustrated by referring to the Mod-5B design. First, a video overview will be presented to provide familiarization with the Mod-5B project and the important components of the wind turbine system. Next, the structural requirements for large-scale wind turbines will be discussed, emphasizing the difficult fatigue-life requirements. Finally, the procedures used to design the structure will be presented, including the use of the fracture mechanics approach for determining allowable fatigue stresses.

  19. Large-scale tides in general relativity

    NASA Astrophysics Data System (ADS)

    Ip, Hiu Yan; Schmidt, Fabian

    2017-02-01

    Density perturbations in cosmology, i.e. spherically symmetric adiabatic perturbations of a Friedmann-Lemaître-Robertson-Walker (FLRW) spacetime, are locally exactly equivalent to a different FLRW solution, as long as their wavelength is much larger than the sound horizon of all fluid components. This fact is known as the "separate universe" paradigm. However, no such relation is known for anisotropic adiabatic perturbations, which correspond to an FLRW spacetime with large-scale tidal fields. Here, we provide a closed, fully relativistic set of evolutionary equations for the nonlinear evolution of such modes, based on the conformal Fermi coordinate (CFC) frame. We show explicitly that the tidal effects are encoded by the Weyl tensor, and are hence entirely different from an anisotropic Bianchi I spacetime, where the anisotropy is sourced by the Ricci tensor. In order to close the system, certain higher derivative terms have to be dropped. We show that this approximation is equivalent to the local tidal approximation of Hui and Bertschinger [1]. We also show that this very simple set of equations matches the exact evolution of the density field at second order, but fails at third and higher order. This provides a useful, easy-to-use framework for computing the fully relativistic growth of structure at second order.

  20. Large scale mechanical metamaterials as seismic shields

    NASA Astrophysics Data System (ADS)

    Miniaci, Marco; Krushynska, Anastasiia; Bosia, Federico; Pugno, Nicola M.

    2016-08-01

    Earthquakes represent one of the most catastrophic natural events affecting mankind. At present, a universally accepted risk mitigation strategy for seismic events remains to be proposed. Most approaches are based on vibration isolation of structures rather than on the remote shielding of incoming waves. In this work, we propose a novel approach to the problem and discuss the feasibility of a passive isolation strategy for seismic waves based on large-scale mechanical metamaterials, including, for the first time, numerical analysis of both surface and guided waves, soil dissipation effects, and full 3D simulations. The study focuses on realistic structures that can be effective in frequency ranges of interest for seismic waves, and optimal design criteria are provided, exploring different metamaterial configurations, combining phononic crystals and locally resonant structures and different ranges of mechanical properties. Dispersion analysis and full-scale 3D transient wave transmission simulations are carried out on finite-size systems to assess the seismic wave amplitude attenuation in realistic conditions. Results reveal that both surface and bulk seismic waves can be considerably attenuated, making this strategy viable for the protection of civil structures against seismic risk. The proposed remote shielding approach could open up new perspectives in the field of seismology and in related areas of low-frequency vibration damping or blast protection.

  1. Large scale structure of the sun's corona

    NASA Astrophysics Data System (ADS)

    Kundu, Mukul R.

    Results concerning the large-scale structure of the solar corona obtained by observations at meter-decameter wavelengths are reviewed. Coronal holes observed on the disk at multiple frequencies show the radial and azimuthal geometry of the hole. At the base of the hole there is good correspondence to the chromospheric signature in He I 10,830 A, but at greater heights the hole may show departures from symmetry. Two-dimensional imaging of weak-type III bursts simultaneously with the HAO SMM coronagraph/polarimeter measurements indicate that these bursts occur along elongated features emanating from the quiet sun, corresponding in position angle to the bright coronal streamers. It is shown that the densest regions of streamers and the regions of maximum intensity of type II bursts coincide closely. Non-flare-associated type II/type IV bursts associated with coronal streamer disruption events are studied along with correlated type II burst emissions originating from distant centers on the sun.

  2. Large-scale carbon fiber tests

    NASA Technical Reports Server (NTRS)

    Pride, R. A.

    1980-01-01

    A realistic release of carbon fibers was established by burning a minimum of 45 kg of carbon fiber composite aircraft structural components in each of five large scale, outdoor aviation jet fuel fire tests. This release was quantified by several independent assessments with various instruments developed specifically for these tests. The most likely values for the mass of single carbon fibers released ranged from 0.2 percent of the initial mass of carbon fiber for the source tests (zero wind velocity) to a maximum of 0.6 percent of the initial carbon fiber mass for dissemination tests (5 to 6 m/s wind velocity). Mean fiber lengths for fibers greater than 1 mm in length ranged from 2.5 to 3.5 mm. Mean diameters ranged from 3.6 to 5.3 micrometers which was indicative of significant oxidation. Footprints of downwind dissemination of the fire released fibers were measured to 19.1 km from the fire.

  3. Large-scale clustering of cosmic voids

    NASA Astrophysics Data System (ADS)

    Chan, Kwan Chuen; Hamaus, Nico; Desjacques, Vincent

    2014-11-01

    We study the clustering of voids using N -body simulations and simple theoretical models. The excursion-set formalism describes fairly well the abundance of voids identified with the watershed algorithm, although the void formation threshold required is quite different from the spherical collapse value. The void cross bias bc is measured and its large-scale value is found to be consistent with the peak background split results. A simple fitting formula for bc is found. We model the void auto-power spectrum taking into account the void biasing and exclusion effect. A good fit to the simulation data is obtained for voids with radii ≳30 Mpc h-1 , especially when the void biasing model is extended to 1-loop order. However, the best-fit bias parameters do not agree well with the peak-background results. Being able to fit the void auto-power spectrum is particularly important not only because it is the direct observable in galaxy surveys, but also our method enables us to treat the bias parameters as nuisance parameters, which are sensitive to the techniques used to identify voids.

  4. Large-scale autostereoscopic outdoor display

    NASA Astrophysics Data System (ADS)

    Reitterer, Jörg; Fidler, Franz; Saint Julien-Wallsee, Ferdinand; Schmid, Gerhard; Gartner, Wolfgang; Leeb, Walter; Schmid, Ulrich

    2013-03-01

    State-of-the-art autostereoscopic displays are often limited in size, effective brightness, number of 3D viewing zones, and maximum 3D viewing distances, all of which are mandatory requirements for large-scale outdoor displays. Conventional autostereoscopic indoor concepts like lenticular lenses or parallax barriers cannot simply be adapted for these screens due to the inherent loss of effective resolution and brightness, which would reduce both image quality and sunlight readability. We have developed a modular autostereoscopic multi-view laser display concept with sunlight readable effective brightness, theoretically up to several thousand 3D viewing zones, and maximum 3D viewing distances of up to 60 meters. For proof-of-concept purposes a prototype display with two pixels was realized. Due to various manufacturing tolerances each individual pixel has slightly different optical properties, and hence the 3D image quality of the display has to be calculated stochastically. In this paper we present the corresponding stochastic model, we evaluate the simulation and measurement results of the prototype display, and we calculate the achievable autostereoscopic image quality to be expected for our concept.

  5. Large Scale EOF Analysis of Climate Data

    NASA Astrophysics Data System (ADS)

    Prabhat, M.; Gittens, A.; Kashinath, K.; Cavanaugh, N. R.; Mahoney, M.

    2016-12-01

    We present a distributed approach towards extracting EOFs from 3D climate data. We implement the method in Apache Spark, and process multi-TB sized datasets on O(1000-10,000) cores. We apply this method to latitude-weighted ocean temperature data from CFSR, a 2.2 terabyte-sized data set comprising ocean and subsurface reanalysis measurements collected at 41 levels in the ocean, at 6 hour intervals over 31 years. We extract the first 100 EOFs of this full data set and compare to the EOFs computed simply on the surface temperature field. Our analyses provide evidence of Kelvin and Rossby waves and components of large-scale modes of oscillation including the ENSO and PDO that are not visible in the usual SST EOFs. Further, they provide information on the most influential parts of the ocean, such as the thermocline, that exist below the surface. Work is ongoing to understand the factors determining the depth-varying spatial patterns observed in the EOFs. We will experiment with weighting schemes to appropriately account for the differing depths of the observations. We also plan to apply the same distributed approach to the analysis of 3D atmospheric climate data sets, including multiple variables. Because the atmosphere changes on a quicker time-scale than the ocean, we expect that the results will demonstrate an even greater advantage to computing 3D EOFs in lieu of 2D EOFs.
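    The distributed EOF extraction described above reduces, at its core, to an SVD of latitude-weighted anomalies. The sketch below illustrates that reduction on a small synthetic array using NumPy; the array sizes, weighting scheme, and synthetic data are illustrative assumptions, not the record's Spark pipeline over the full CFSR fields.

    ```python
    import numpy as np

    # Minimal in-memory stand-in for 3D EOF analysis: latitude-weight the
    # anomalies, flatten space, and take a truncated SVD.
    rng = np.random.default_rng(0)
    n_time, n_lat, n_lon = 120, 45, 90
    field = rng.standard_normal((n_time, n_lat, n_lon))         # stand-in temperature field

    lats = np.linspace(-88.0, 88.0, n_lat)
    weights = np.sqrt(np.cos(np.deg2rad(lats)))[None, :, None]  # latitude area weighting
    anom = (field - field.mean(axis=0)) * weights                # remove time mean, weight

    X = anom.reshape(n_time, -1)                                 # time x space matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)

    k = 10
    eofs = Vt[:k].reshape(k, n_lat, n_lon)        # leading spatial patterns (EOFs)
    pcs = U[:, :k] * s[:k]                        # corresponding PC time series
    explained = s[:k] ** 2 / np.sum(s ** 2)       # fraction of variance per EOF
    print(explained.round(3))
    ```

    At the multi-terabyte scale described in the record, the same decomposition would be computed with a distributed iterative or randomized SVD rather than a dense in-memory one.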

  6. Numerical Modeling for Large Scale Hydrothermal System

    NASA Astrophysics Data System (ADS)

    Sohrabi, Reza; Jansen, Gunnar; Malvoisin, Benjamin; Mazzini, Adriano; Miller, Stephen A.

    2017-04-01

    Moderate-to-high enthalpy systems are driven by multiphase and multicomponent processes, fluid and rock mechanics, and heat transport processes, all of which present challenges in developing realistic numerical models of the underlying physics. The objective of this work is to present an approach, and some initial results, for modeling and understanding the dynamics of the birth of large scale hydrothermal systems. Numerical modeling of such complex systems must take into account a variety of coupled thermal, hydraulic, mechanical and chemical processes, which is numerically challenging. To provide first estimates of the behavior of these deep, complex systems, geological structures must be constrained, and the fluid dynamics, mechanics and heat transport need to be investigated in three dimensions. Modeling these processes numerically at adequate resolution and reasonable computation times requires a suite of tools that we are developing and/or utilizing to investigate such systems. Our long-term goal is to develop 3D numerical models, based on geological models, which couple mechanics with the hydraulic and thermal processes driving the hydrothermal system. Our first results from the Lusi hydrothermal system in East Java, Indonesia provide a basis for more sophisticated studies, eventually in 3D, and we introduce a workflow necessary to achieve these objectives. Future work focuses on parallelization suitable for High Performance Computing (HPC). Such developments are necessary to achieve high-resolution simulations to more fully understand the complex dynamics of hydrothermal systems.

  7. Large scale digital atlases in neuroscience

    NASA Astrophysics Data System (ADS)

    Hawrylycz, M.; Feng, D.; Lau, C.; Kuan, C.; Miller, J.; Dang, C.; Ng, L.

    2014-03-01

    Imaging in neuroscience has revolutionized our current understanding of brain structure, architecture and increasingly its function. Many characteristics of morphology, cell type, and neuronal circuitry have been elucidated through methods of neuroimaging. Combining these data in a meaningful, standardized, and accessible manner is the scope and goal of the digital brain atlas. Digital brain atlases are used today in neuroscience to characterize the spatial organization of neuronal structures, for planning and guidance during neurosurgery, and as a reference for interpreting other data modalities such as gene expression and connectivity data. The field of digital atlases is extensive and, in addition to atlases of the human brain, includes high quality atlases of the mouse, rat, rhesus macaque, and other model organisms. Using techniques based on histology, structural and functional magnetic resonance imaging as well as gene expression data, modern digital atlases use probabilistic and multimodal techniques, as well as sophisticated visualization software, to form an integrated product. Toward this goal, brain atlases form a common coordinate framework for summarizing, accessing, and organizing this knowledge and will undoubtedly remain a key technology in neuroscience in the future. Since the development of its flagship project of a genome-wide image-based atlas of the mouse brain, the Allen Institute for Brain Science has used imaging as a primary data modality for many of its large scale atlas projects. We present an overview of Allen Institute digital atlases in neuroscience, with a focus on the challenges and opportunities for image processing and computation.

  8. Large-Scale SRM Screen of Urothelial Bladder Cancer Candidate Biomarkers in Urine.

    PubMed

    Duriez, Elodie; Masselon, Christophe D; Mesmin, Cédric; Court, Magali; Demeure, Kevin; Allory, Yves; Malats, Núria; Matondo, Mariette; Radvanyi, François; Garin, Jérôme; Domon, Bruno

    2017-04-07

    Urothelial bladder cancer is a condition associated with high recurrence and substantial morbidity and mortality. Noninvasive urinary tests that would detect bladder cancer and tumor recurrence are required to significantly improve patient care. Over the past decade, numerous bladder cancer candidate biomarkers have been identified in the context of extensive proteomics or transcriptomics studies. To translate these findings into clinically useful biomarkers, the systematic evaluation of these candidates remains the bottleneck. Such evaluation involves large-scale quantitative LC-SRM (liquid chromatography-selected reaction monitoring) measurements, targeting hundreds of signature peptides by monitoring thousands of transitions in a single analysis. The design of highly multiplexed SRM analyses is driven by several factors: throughput, robustness, selectivity and sensitivity. Because of the complexity of the samples to be analyzed, some measurements (transitions) can suffer interference from coeluting isobaric species, resulting in biased or inconsistent estimated peptide/protein levels. Thus, assessing the quality of SRM data is critical so that such inconsistent measurements can be flagged. We describe an efficient and robust method to process large SRM data sets, including the processing of the raw data, the detection of low-quality measurements, the normalization of the signals for each protein, and the estimation of protein levels. Using this methodology, a variety of proteins previously associated with bladder cancer have been assessed through the analysis of urine samples from a large cohort of cancer patients and corresponding controls in an effort to establish a priority list of the most promising candidates to guide subsequent clinical validation studies.
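    The record describes detecting low-quality transitions and estimating protein levels from large SRM data sets. The sketch below shows one simple way such a check can be framed, comparing each transition's across-sample profile to a consensus profile on a log scale; the data values, threshold, and flagging rule are illustrative assumptions, not the authors' published algorithm.

    ```python
    import numpy as np

    # Illustrative interference check for the SRM transitions of one protein.
    log_i = np.array([
        [17.0, 17.4, 16.8, 18.1, 17.2],   # transition 1, log2 intensity in 5 samples
        [16.1, 16.5, 15.9, 17.2, 16.3],   # transition 2
        [15.2, 15.6, 15.0, 16.3, 15.4],   # transition 3
        [14.9, 15.3, 14.7, 19.5, 15.1],   # transition 4 (interfered in sample 4)
    ])

    centered = log_i - np.median(log_i, axis=1, keepdims=True)  # remove transition level
    consensus = np.median(centered, axis=0)                      # protein's sample profile
    deviation = np.max(np.abs(centered - consensus), axis=1)     # worst-sample deviation
    flagged = deviation > 1.0                                    # crude quality flag

    # Protein-level estimate per sample from the retained transitions
    protein_level = np.median(centered[~flagged], axis=0) + np.median(log_i[~flagged])
    print(flagged, protein_level.round(2))
    ```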

  9. Potential for geophysical experiments in large scale tests

    SciTech Connect

    Dieterich, J.H.

    1981-07-01

    Potential research applications for large-specimen geophysical experiments include measurements of scale dependence of physical parameters and examination of interactions with heterogeneities, especially flaws such as cracks. In addition, increased specimen size provides opportunities for improved recording resolution and greater control of experimental variables. Large-scale experiments using a special-purpose low-stress (<40 MPa) biaxial apparatus demonstrate that a minimum fault length is required to generate confined shear instabilities along pre-existing faults. Experimental analysis of source interactions for simulated earthquakes consisting of confined shear instabilities on a fault with gouge appears to require large specimens (approx. 1 m) and high confining pressures (>100 MPa).

  10. Nanoscale Proteomics

    SciTech Connect

    Shen, Yufeng; Tolic, Nikola; Masselon, Christophe D.; Pasa-Tolic, Liljiana; Camp, David G.; Anderson, Gordon A.; Smith, Richard D.; Lipton, Mary S.

    2004-02-01

    This paper describes efforts to develop a liquid chromatography (LC)/mass spectrometry (MS) technology for ultra-sensitive proteomics studies, i.e. nanoscale proteomics. The approach combines high-efficiency nano-scale LC with advanced MS, including high sensitivity and high resolution Fourier transform ion cyclotron resonance (FTICR) MS, to perform both single-stage MS and tandem MS (MS/MS) proteomic analyses. The technology developed enables large-scale protein identification from nanogram-size proteomic samples and characterization of more abundant proteins from sub-picogram size complex samples. Protein identification in such studies using MS is feasible from <75 zeptomoles of a protein, and the average proteome measurement throughput is >200 proteins/h and ~3 h/sample. Higher throughput (>1000 proteins/h) and more sensitive detection limits can be obtained using an “accurate mass and time” tag approach developed at our laboratory. These capabilities lay the foundation for studies from single or limited numbers of cells.

  11. Large Scale Flame Spread Environmental Characterization Testing

    NASA Technical Reports Server (NTRS)

    Clayman, Lauren K.; Olson, Sandra L.; Gokoghi, Suleyman A.; Brooker, John E.; Ferkul, Paul V.; Kacher, Henry F.

    2013-01-01

    Under the Advanced Exploration Systems (AES) Spacecraft Fire Safety Demonstration Project (SFSDP), as a risk mitigation activity in support of the development of a large-scale fire demonstration experiment in microgravity, flame-spread tests were conducted in normal gravity on thin, cellulose-based fuels in a sealed chamber. The primary objective of the tests was to measure pressure rise in the chamber as sample material, burning direction (upward/downward), total heat release, heat release rate, and heat loss mechanisms were varied between tests. A Design of Experiments (DOE) method was imposed to produce an array of tests from a fixed set of constraints, and a coupled response model was developed. Supplementary tests were run without experimental design to additionally vary select parameters such as initial chamber pressure. The starting chamber pressure for each test was set below atmospheric to prevent chamber overpressure. Bottom ignition, or upward propagating burns, produced rapid acceleratory turbulent flame spread. Pressure rise in the chamber increases as the amount of fuel burned increases, mainly because of the larger amount of heat generation and, to a much smaller extent, because of the increase in the number of moles of gas. Top ignition, or downward propagating burns, produced a steady flame spread with a very small flat flame across the burning edge. Steady-state pressure is achieved during downward flame spread as the pressure rises and plateaus. This indicates that the heat generation by the flame matches the heat loss to the surroundings during the longer, slower downward burns. One heat loss mechanism included mounting a heat exchanger directly above the burning sample in the path of the plume to act as a heat sink and more efficiently dissipate the heat due to the combustion event. This proved an effective means for chamber overpressure mitigation for those tests producing the most total heat release and thus was determined to be a feasible mitigation
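    The attribution of the chamber pressure rise mainly to heating, with a smaller contribution from the change in the amount of gas, can be read off the ideal gas law for a sealed chamber of fixed volume. A back-of-the-envelope relation (an illustrative reading of the abstract, not the paper's coupled response model):

    \[
    PV = nRT \quad\Rightarrow\quad \Delta P \;\approx\; \frac{R}{V}\,\bigl(n\,\Delta T + T\,\Delta n\bigr),
    \]

    so the heat-release-driven temperature term n ΔT dominates unless the change in the number of moles Δn is comparatively large.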

  12. Differential proteomics analysis of Bacillus amyloliquefaciens and its genome-shuffled mutant for improving surfactin production.

    PubMed

    Zhao, Junfeng; Cao, Lin; Zhang, Chong; Zhong, Lei; Lu, Jing; Lu, Zhaoxin

    2014-10-31

    Genome shuffling technology was used as a novel whole-genome engineering approach to rapidly improve the antimicrobial lipopeptide yield of Bacillus amyloliquefaciens. Comparative proteomic analysis of the parental ES-2-4 and genome-shuffled FMB38 strains was conducted to examine the differentially expressed proteins. The proteome was separated by 2-DE (two-dimensional electrophoresis) and analyzed by MS (mass spectrometry). In the shuffled strain FMB38, 51 differentially expressed protein spots with greater than two-fold differences in spot density were detected by gel image comparison. Forty-six protein spots were detectable by silver staining and further MS analysis. The results demonstrated that among the 46 protein spots specifically induced in the genome-shuffled mutant, 15 were related to metabolism, five to DNA replication, recombination and repair, six to translation and post-translational modifications, one to cell secretion and signal transduction mechanisms, three to surfactin synthesis, two to energy production and conversion, and 14 to others. All these indicated that the metabolic capability of the mutant was improved by the genome shuffling. The study will enable future detailed investigation of gene expression and function linked with surfactin synthesis. The results of proteome analysis may provide information for metabolic engineering of Bacillus amyloliquefaciens for overproduction of surfactin.

  13. Analysis and Management of Large-Scale Activities Based on Interface

    NASA Astrophysics Data System (ADS)

    Yang, Shaofan; Ji, Jingwei; Lu, Ligang; Wang, Zhiyi

    The concepts of system safety engineering, life cycle, and interface drawn from the American system safety standard MIL-STD-882E are applied to the risk analysis and management of large-scale activities. The personnel, departments, funds, and other elements involved throughout the life cycle of a large-scale activity are identified. The ultimate risk sources of people, objects, and environment in large-scale activities are recognized and classified from the perspective of interfaces. An accident-cause analysis model is put forward based on accidents at previous large-scale activities, combined with analysis of the risk-source interfaces. The risks at each interface are analyzed and the various types of risk faced by large-scale activities are summarized. Finally, improvements are proposed for risk-management awareness, policies and regulations, risk control, and supervisory departments.

  14. Synchronization of coupled large-scale Boolean networks

    SciTech Connect

    Li, Fangfei

    2014-03-15

    This paper investigates the complete synchronization and partial synchronization of two large-scale Boolean networks. First, the aggregation algorithm towards large-scale Boolean network is reviewed. Second, the aggregation algorithm is applied to study the complete synchronization and partial synchronization of large-scale Boolean networks. Finally, an illustrative example is presented to show the efficiency of the proposed results.

  15. The School Principal's Role in Large-Scale Assessment

    ERIC Educational Resources Information Center

    Newton, Paul; Tunison, Scott; Viczko, Melody

    2010-01-01

    This paper reports on an interpretive study in which 25 elementary principals were asked about their assessment knowledge, the use of large-scale assessments in their schools, and principals' perceptions on their roles with respect to large-scale assessments. Principals in this study suggested that the current context of large-scale assessment and…

  16. Synchronization of coupled large-scale Boolean networks

    NASA Astrophysics Data System (ADS)

    Li, Fangfei

    2014-03-01

    This paper investigates the complete synchronization and partial synchronization of two large-scale Boolean networks. First, the aggregation algorithm towards large-scale Boolean network is reviewed. Second, the aggregation algorithm is applied to study the complete synchronization and partial synchronization of large-scale Boolean networks. Finally, an illustrative example is presented to show the efficiency of the proposed results.
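    As a toy illustration of what complete synchronization of a drive-response Boolean pair means, the sketch below enumerates every initial condition of a two-node pair and checks that the states eventually coincide. The node update rules are invented for illustration; the aggregation algorithm reviewed in the record is not implemented here.

    ```python
    from itertools import product

    # Toy drive-response Boolean pair: the response applies the drive's update
    # rule to the drive's current state, so the trajectories coincide after one
    # step from any initial condition (complete synchronization).
    def drive(x):
        x1, x2 = x
        return (x2, x1 and x2)

    def response(y, x):
        return drive(x)   # coupled through the drive's state

    def completely_synchronizes(steps=16):
        for x0, y0 in product(product([0, 1], repeat=2), repeat=2):
            x, y = x0, y0
            for _ in range(steps):
                x, y = drive(x), response(y, x)
            if x != y:
                return False
        return True

    print(completely_synchronizes())   # True for this trivially coupled pair
    ```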

  17. Ecohydrological modeling for large-scale environmental impact assessment.

    PubMed

    Woznicki, Sean A; Nejadhashemi, A Pouyan; Abouali, Mohammad; Herman, Matthew R; Esfahanian, Elaheh; Hamaamin, Yaseen A; Zhang, Zhen

    2016-02-01

    Ecohydrological models are frequently used to assess the biological integrity of unsampled streams. These models vary in complexity and scale, and their utility depends on their final application. Tradeoffs are usually made in model scale, where large-scale models are useful for determining broad impacts of human activities on biological conditions, and regional-scale (e.g. watershed or ecoregion) models provide stakeholders greater detail at the individual stream reach level. Given these tradeoffs, the objective of this study was to develop large-scale stream health models with reach-level accuracy similar to regional-scale models, thereby allowing for impact assessments and improved decision-making capabilities. To accomplish this, four measures of biological integrity (Ephemeroptera, Plecoptera, and Trichoptera taxa (EPT), Family Index of Biotic Integrity (FIBI), Hilsenhoff Biotic Index (HBI), and fish Index of Biotic Integrity (IBI)) were modeled based on four thermal classes (cold, cold-transitional, cool, and warm) of streams that broadly dictate the distribution of aquatic biota in Michigan. The Soil and Water Assessment Tool (SWAT) was used to simulate streamflow and water quality in seven watersheds and the Hydrologic Index Tool was used to calculate 171 ecologically relevant flow regime variables. Unique variables were selected for each thermal class using a Bayesian variable selection method. The variables were then used in development of adaptive neuro-fuzzy inference systems (ANFIS) models of EPT, FIBI, HBI, and IBI. ANFIS model accuracy improved when accounting for stream thermal class rather than developing a global model.

  18. Exact-Differential Large-Scale Traffic Simulation

    SciTech Connect

    Hanai, Masatoshi; Suzumura, Toyotaro; Theodoropoulos, Georgios; Perumalla, Kalyan S

    2015-01-01

    Analyzing large-scale traffic by simulation requires repeated execution with many variations of scenarios or parameters. Such repeated execution introduces substantial redundancy because the change from one scenario to the next is usually minor, for example blocking a single road or changing the speed limit on several roads. In this paper, we propose a new redundancy reduction technique, called exact-differential simulation, which simulates only the changed portions of a scenario in later executions while producing exactly the same results as a full simulation. The paper consists of two main efforts: (i) the key idea and algorithm of exact-differential simulation, and (ii) a method to build large-scale traffic simulation on top of exact-differential simulation. In experiments on a Tokyo traffic simulation, exact-differential simulation improves elapsed time by a factor of 7.26 on average, and by a factor of 2.26 even in the worst case, compared with running the whole simulation.
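    The intuition of rerunning only what a scenario change affects can be illustrated with a toy memoized simulation. The partitioning by road segment, the fake cost model, and the cache key below are illustrative assumptions and not the paper's algorithm, which additionally guarantees exact equivalence when partitions interact.

    ```python
    from functools import lru_cache

    # Toy stand-in for per-partition simulation: each segment's result depends
    # only on its own parameters here, so unchanged segments are reused from
    # cache when a new scenario alters only a few segments.
    @lru_cache(maxsize=None)
    def simulate_segment(segment_id, speed_limit, blocked):
        # pretend this is expensive: compute a fake mean travel time
        return 1000.0 / speed_limit * (10.0 if blocked else 1.0) + segment_id * 0.01

    def run_scenario(scenario):
        return sum(simulate_segment(s, p["speed_limit"], p["blocked"])
                   for s, p in sorted(scenario.items()))

    base = {s: {"speed_limit": 60, "blocked": False} for s in range(10000)}
    print(run_scenario(base))                           # full run populates the cache

    variant = dict(base)
    variant[42] = {"speed_limit": 60, "blocked": True}  # change a single road
    print(run_scenario(variant))                        # only segment 42 is recomputed
    ```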

  19. Scalable WIM: effective exploration in large-scale astrophysical environments.

    PubMed

    Li, Yinggang; Fu, Chi-Wing; Hanson, Andrew J

    2006-01-01

    Navigating through large-scale virtual environments such as simulations of the astrophysical Universe is difficult. The huge spatial range of astronomical models and the dominance of empty space make it hard for users to travel across cosmological scales effectively, and the problem of wayfinding further impedes the user's ability to acquire reliable spatial knowledge of astronomical contexts. We introduce a new technique called the scalable world-in-miniature (WIM) map as a unifying interface to facilitate travel and wayfinding in a virtual environment spanning gigantic spatial scales: Power-law spatial scaling enables rapid and accurate transitions among widely separated regions; logarithmically mapped miniature spaces offer a global overview mode when the full context is too large; 3D landmarks represented in the WIM are enhanced by scale, positional, and directional cues to augment spatial context awareness; a series of navigation models are incorporated into the scalable WIM to improve the performance of travel tasks posed by the unique characteristics of virtual cosmic exploration. The scalable WIM user interface supports an improved physical navigation experience and assists pragmatic cognitive understanding of a visualization context that incorporates the features of large-scale astronomy.
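    The logarithmic compression of distances that the record describes for the scalable WIM can be sketched as a simple coordinate transform. The function name, parameters, and the choice of a purely radial log mapping below are assumptions for illustration, not the published interface.

    ```python
    import numpy as np

    # Log-compress radial distance from the viewer so that both nearby and
    # cosmologically distant objects fit inside one miniature sphere.
    def to_miniature(points, viewer, r_min=1.0, r_max=1e9, map_radius=1.0):
        """Map world-space points (N, 3) into a miniature sphere of radius map_radius."""
        offsets = points - viewer
        r = np.linalg.norm(offsets, axis=1, keepdims=True)
        r = np.clip(r, r_min, r_max)
        scale = np.log(r / r_min) / np.log(r_max / r_min) * map_radius  # log-scaled radius
        return offsets / r * scale                                       # keep direction

    viewer = np.zeros(3)
    pts = np.array([[10.0, 0.0, 0.0], [1e6, 0.0, 0.0], [0.0, 1e9, 0.0]])
    print(to_miniature(pts, viewer))
    ```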

  20. Soybean seed proteome rebalancing

    PubMed Central

    Herman, Eliot M.

    2014-01-01

    The soybean seed’s protein content and composition are regulated by both genetics and physiology. Overt seed protein content is specified by the genotype’s genetic framework and is selectable as a breeding trait. Within the genotype-specified protein content phenotype, soybeans have the capacity to rebalance protein composition to create differing proteomes. Soybeans possess a relatively standardized proteome, but mutation or targeted engineering can induce large-scale proteome rebalancing. Proteome rebalancing shows that the output traits of seed content and composition result from two major types of regulation: genotype and post-transcriptional control of the proteome composition. Understanding the underlying mechanisms that specify the seed proteome can enable engineering new phenotypes for the production of a high-quality plant protein source for food, feed, and industrial proteins. PMID:25232359

  1. Large scale dynamics of protoplanetary discs

    NASA Astrophysics Data System (ADS)

    Béthune, William

    2017-08-01

    Planets form in the gaseous and dusty disks orbiting young stars. These protoplanetary disks are dispersed in a few million years, being accreted onto the central star or evaporated into the interstellar medium. To explain the observed accretion rates, it is commonly assumed that matter is transported through the disk by turbulence, although the mechanism sustaining turbulence is uncertain. On the other hand, irradiation by the central star could heat up the disk surface and trigger a photoevaporative wind, but thermal effects cannot account for the observed acceleration and collimation of the wind into a narrow jet perpendicular to the disk plane. Both issues can be solved if the disk is sensitive to magnetic fields. Weak fields lead to the magnetorotational instability, whose outcome is a state of sustained turbulence. Strong fields can slow down the disk, causing it to accrete while launching a collimated wind. However, the coupling between the disk gas and the magnetic field is mediated by electric charges, each of which is outnumbered by several billion neutral molecules. The imperfect coupling between the magnetic field and the neutral gas is described in terms of "non-ideal" effects, introducing new dynamical behaviors. This thesis is devoted to the transport processes happening inside weakly ionized and weakly magnetized accretion disks; the role of microphysical effects on the large-scale dynamics of the disk is of primary importance. As a first step, I exclude the wind and examine the impact of non-ideal effects on the turbulent properties near the disk midplane. I show that the flow can spontaneously organize itself if the ionization fraction is low enough; in this case, accretion is halted and the disk exhibits axisymmetric structures, with possible consequences on planetary formation. As a second step, I study the launching of disk winds via a global model of stratified disk embedded in a warm atmosphere. This model is the first to compute non-ideal effects from

  2. Large-Scale Spacecraft Fire Safety Tests

    NASA Technical Reports Server (NTRS)

    Urban, David; Ruff, Gary A.; Ferkul, Paul V.; Olson, Sandra; Fernandez-Pello, A. Carlos; T'ien, James S.; Torero, Jose L.; Cowlard, Adam J.; Rouvreau, Sebastien; Minster, Olivier

    2014-01-01

    An international collaborative program is underway to address open issues in spacecraft fire safety. Because of limited access to long-term low-gravity conditions and the small volume generally allotted for these experiments, there have been relatively few experiments that directly study spacecraft fire safety under low-gravity conditions. Furthermore, none of these experiments have studied sample sizes and environmental conditions typical of those expected in a spacecraft fire. The major constraint has been the size of the sample, with prior experiments limited to samples of the order of 10 cm in length and width or smaller. This lack of experimental data forces spacecraft designers to base their designs and safety precautions on 1-g understanding of flame spread, fire detection, and suppression. However, low-gravity combustion research has demonstrated substantial differences in flame behavior in low gravity. This, combined with the differences caused by the confined spacecraft environment, necessitates practical-scale spacecraft fire safety research to mitigate risks for future space missions. To address this issue, a large-scale spacecraft fire experiment is under development by NASA and an international team of investigators. This poster presents the objectives, status, and concept of this collaborative international project (Saffire). The project plan is to conduct fire safety experiments on three sequential flights of an unmanned ISS re-supply spacecraft (the Orbital Cygnus vehicle) after they have completed their delivery of cargo to the ISS and have begun their return journeys to Earth. On two flights (Saffire-1 and Saffire-3), the experiment will consist of a flame spread test involving a meter-scale sample ignited in the pressurized volume of the spacecraft and allowed to burn to completion while measurements are made. On one of the flights (Saffire-2), 9 smaller (5 x 30 cm) samples will be tested to evaluate NASA's material flammability screening tests

  3. Large scale simulations of Brownian suspensions

    NASA Astrophysics Data System (ADS)

    Viera, Marc Nathaniel

    Particle suspensions occur in a wide variety of natural and engineering materials. Some examples are colloids, polymers, paints, and slurries. These materials exhibit complex behavior owing to the forces which act among the particles and are transmitted through the fluid medium. Depending on the application, particle sizes range from large macroscopic molecules of 100 µm to smaller colloidal particles in the range of 10 nm to 1 µm. Particles of this size interact through interparticle forces such as electrostatic and van der Waals forces, as well as hydrodynamic forces transmitted through the fluid medium. Additionally, the particles are subjected to random thermal fluctuations in the fluid giving rise to Brownian motion. The central objective of our research is to develop efficient numerical algorithms for the large scale dynamic simulation of particle suspensions. While previous methods have incurred a computational cost of O(N^3), where N is the number of particles, we have developed a novel algorithm capable of solving this problem in O(N ln N) operations. This has allowed us to perform dynamic simulations with up to 64,000 particles and Monte Carlo realizations of up to 1 million particles. Our algorithm follows a Stokesian dynamics formulation by evaluating many-body hydrodynamic interactions using a far-field multipole expansion combined with a near-field lubrication correction. The breakthrough O(N ln N) scaling is obtained by employing a Particle-Mesh-Ewald (PME) approach whereby near-field interactions are evaluated directly and far-field interactions are evaluated using a grid-based velocity computed with FFTs. This approach is readily extended to include the effects of Brownian motion. For interacting particles, the fluctuation-dissipation theorem requires that the individual Brownian forces satisfy a correlation based on the N-body resistance tensor R. The accurate modeling of these forces requires the computation of a matrix square root R^(1/2) for matrices up

  4. Proteomics of Saccharomyces cerevisiae Organelles*

    PubMed Central

    Wiederhold, Elena; Veenhoff, Liesbeth M.; Poolman, Bert; Slotboom, Dirk Jan

    2010-01-01

    Knowledge of the subcellular localization of proteins is indispensable to understand their physiological roles. In the past decade, 18 studies have been performed to analyze the protein content of isolated organelles from Saccharomyces cerevisiae. Here, we integrate the data sets and compare them with other large scale studies on protein localization and abundance. We evaluate the completeness and reliability of the organelle proteomics studies. Reliability depends on the purity of the organelle preparations, which unavoidably contain (small) amounts of contaminants from different locations. Quantitative proteomics methods can be used to distinguish between true organellar constituents and contaminants. Completeness is compromised when loosely or dynamically associated proteins are lost during organelle preparation and also depends on the sensitivity of the analytical methods for protein detection. There is a clear trend in the data from the 18 organelle proteomics studies showing that proteins of low abundance frequently escape detection. Proteins with unknown function or cellular abundance are also infrequently detected, indicating that these proteins may not be expressed under the conditions used. We discuss that the yeast organelle proteomics studies provide powerful lead data for further detailed studies and that methodological advances in organelle preparation and in protein detection may help to improve the completeness and reliability of the data. PMID:19955081

  5. Improved metabolites of pharmaceutical ingredient grade Ginkgo biloba and the correlated proteomics analysis.

    PubMed

    Zheng, Wen; Li, Ximin; Zhang, Lin; Zhang, Yanzhen; Lu, Xiaoping; Tian, Jingkui

    2015-06-01

    Ginkgo biloba is an attractive and traditional medicinal plant, and has been widely used as a phytomedicine in the prevention and treatment of cardiovascular and cerebrovascular diseases. Flavonoids and terpene lactones are the major bioactive components of Ginkgo, whereas the ginkgolic acids (GAs) with strong allergenic properties are strictly controlled. In this study, we tested the content of flavonoids and GAs under ultraviolet-B (UV-B) treatment and performed comparative proteomic analyses to determine the differentially expressed proteins that occur upon UV-B radiation and that might play a crucial role in producing flavonoids and GAs. Our phytochemical analyses demonstrated that UV-B irradiation significantly increased the content of active flavonoids and decreased the content of toxic GAs. We conducted comparative proteomic analysis of both whole-leaf and chloroplast proteins. In total, 27 differential proteins in the whole leaf and 43 differential proteins in the chloroplast were positively identified and functionally annotated. The proteomic data suggested that enhanced UV-B radiation exposure activated antioxidants and stress-responsive proteins as well as reduced the rate of photosynthesis. We demonstrate that UV-B irradiation pharmaceutically improved the metabolic ingredients of Ginkgo, particularly in terms of reducing GAs. With their high UV absorption properties and antioxidant activities, the flavonoids were likely highly induced as protective molecules following UV-B irradiation.

  6. A comparison of the effectiveness of three parenting programmes in improving parenting skills, parent mental well-being and children's behaviour when implemented on a large scale in community settings in 18 English local authorities: the parenting early intervention pathfinder (PEIP)

    PubMed Central

    2011-01-01

    Background: There is growing evidence that parenting programmes can improve parenting skills and thereby the behaviour of children exhibiting or at risk of developing antisocial behaviour. Given the high prevalence of childhood behaviour problems the task is to develop large scale application of effective programmes. The aim of this study was to evaluate the UK government funded implementation of the Parenting Early Intervention Pathfinder (PEIP). This involved the large scale rolling out of three programmes to parents of children 8-13 years in 18 local authorities (LAs) over a 2 year period. Methods: The UK government's Department for Education allocated each programme (Incredible Years, Triple P and Strengthening Families Strengthening Communities) to six LAs which then developed systems to intervene using parenting groups. Implementation fidelity was supported by the training of group facilitators by staff of the appropriate parenting programme supplemented by supervision. Parents completed measures of parenting style, efficacy, satisfaction, and mental well-being, and also child behaviour. Results: A total of 1121 parents completed pre- and post-course measures. There were significant improvements on all measures for each programme; effect sizes (Cohen's d) ranged across the programmes from 0.57 to 0.93 for parenting style; 0.33 to 0.77 for parenting satisfaction and self-efficacy; and from 0.49 to 0.88 for parental mental well-being. Effectiveness varied between programmes: Strengthening Families Strengthening Communities was significantly less effective than both the other two programmes in improving parental efficacy, satisfaction and mental well-being. Improvements in child behaviour were found for all programmes: effect sizes for reduction in conduct problems ranged from -0.44 to -0.71 across programmes, with Strengthening Families Strengthening Communities again having significantly lower reductions than Incredible Years. Conclusions: Evidence-based parenting

  7. A comparison of the effectiveness of three parenting programmes in improving parenting skills, parent mental well-being and children's behaviour when implemented on a large scale in community settings in 18 English local authorities: the parenting early intervention pathfinder (PEIP).

    PubMed

    Lindsay, Geoff; Strand, Steve; Davis, Hilton

    2011-12-30

    There is growing evidence that parenting programmes can improve parenting skills and thereby the behaviour of children exhibiting or at risk of developing antisocial behaviour. Given the high prevalence of childhood behaviour problems the task is to develop large scale application of effective programmes. The aim of this study was to evaluate the UK government funded implementation of the Parenting Early Intervention Pathfinder (PEIP). This involved the large scale rolling out of three programmes to parents of children 8-13 years in 18 local authorities (LAs) over a 2 year period. The UK government's Department for Education allocated each programme (Incredible Years, Triple P and Strengthening Families Strengthening Communities) to six LAs which then developed systems to intervene using parenting groups. Implementation fidelity was supported by the training of group facilitators by staff of the appropriate parenting programme supplemented by supervision. Parents completed measures of parenting style, efficacy, satisfaction, and mental well-being, and also child behaviour. A total of 1121 parents completed pre- and post-course measures. There were significant improvements on all measures for each programme; effect sizes (Cohen's d) ranged across the programmes from 0.57 to 0.93 for parenting style; 0.33 to 0.77 for parenting satisfaction and self-efficacy; and from 0.49 to 0.88 for parental mental well-being. Effectiveness varied between programmes: Strengthening Families Strengthening Communities was significantly less effective than both the other two programmes in improving parental efficacy, satisfaction and mental well-being. Improvements in child behaviour were found for all programmes: effect sizes for reduction in conduct problems ranged from -0.44 to -0.71 across programmes, with Strengthening Families Strengthening Communities again having significantly lower reductions than Incredible Years. Evidence-based parenting programmes can be implemented

  8. Recent Developments in Quantitative Proteomics

    PubMed Central

    Becker, Christopher H.; Bern, Marshall

    2010-01-01

    Proteomics is the study of proteins on a large scale, encompassing the many interests scientists and physicians have in their expression and physical properties. Proteomics continues to be a rapidly expanding field, with a wealth of reports regularly appearing on technology enhancements and scientific studies using these new tools. This review focuses primarily on the quantitative aspect of protein expression and the associated computational machinery for making large-scale identifications of proteins and their post-translational modifications. The primary emphasis is on the combination of liquid chromatography-mass spectrometry (LC-MS) methods and associated tandem mass spectrometry (LC-MS/MS). Tandem mass spectrometry, or MS/MS, involves a second analysis within the instrument after a molecular dissociative event in order to obtain structural information including but not limited to sequence information. This review further focuses primarily on the study of in vitro digested proteins known as bottom-up or shotgun proteomics. A brief discussion of recent instrumental improvements precedes a discussion on affinity enrichment and depletion of proteins, followed by a review of the major approaches (label-free and isotope-labeling) to making protein expression measurements quantitative, especially in the context of profiling large numbers of proteins. Then a discussion follows on the various computational techniques used to identify peptides and proteins from LC-MS/MS data. This review article then includes a short discussion of LC-MS approaches to three-dimensional structure determination and concludes with a section on statistics and data mining for proteomics, including comments on properly powering clinical studies and avoiding over-fitting with large data sets. PMID:20620221

  9. Recent developments in quantitative proteomics.

    PubMed

    Becker, Christopher H; Bern, Marshall

    2011-06-17

    Proteomics is the study of proteins on a large scale, encompassing the many interests scientists and physicians have in their expression and physical properties. Proteomics continues to be a rapidly expanding field, with a wealth of reports regularly appearing on technology enhancements and scientific studies using these new tools. This review focuses primarily on the quantitative aspect of protein expression and the associated computational machinery for making large-scale identifications of proteins and their post-translational modifications. The primary emphasis is on the combination of liquid chromatography-mass spectrometry (LC-MS) methods and associated tandem mass spectrometry (LC-MS/MS). Tandem mass spectrometry, or MS/MS, involves a second analysis within the instrument after a molecular dissociative event in order to obtain structural information including but not limited to sequence information. This review further focuses primarily on the study of in vitro digested proteins known as bottom-up or shotgun proteomics. A brief discussion of recent instrumental improvements precedes a discussion on affinity enrichment and depletion of proteins, followed by a review of the major approaches (label-free and isotope-labeling) to making protein expression measurements quantitative, especially in the context of profiling large numbers of proteins. Then a discussion follows on the various computational techniques used to identify peptides and proteins from LC-MS/MS data. This review article then includes a short discussion of LC-MS approaches to three-dimensional structure determination and concludes with a section on statistics and data mining for proteomics, including comments on properly powering clinical studies and avoiding over-fitting with large data sets. Copyright © 2010 Elsevier B.V. All rights reserved.
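    As a concrete illustration of the label-free approach the review describes, the sketch below aggregates peptide intensities to protein level and compares two runs. The protein and peptide names, intensity values, and the simple median normalization are assumptions chosen for clarity, not a specific tool's workflow.

    ```python
    import numpy as np
    import pandas as pd

    # Minimal label-free quantification sketch: sum peptide MS1 intensities per
    # protein in each run, median-normalize runs, and compare on a log2 scale.
    peptides = pd.DataFrame({
        "protein": ["ALBU", "ALBU", "TRFE", "TRFE", "TRFE"],
        "peptide": ["LVNEVTEFAK", "QTALVELVK", "SVIPSDGPSVACVK", "HQTVPQNTGGK", "DGAGDVAFVK"],
        "control": [4.2e8, 3.1e8, 9.5e7, 6.4e7, 8.8e7],
        "treated": [4.0e8, 2.9e8, 1.9e8, 1.4e8, 1.6e8],
    })

    runs = ["control", "treated"]
    proteins = peptides.groupby("protein")[runs].sum()      # roll peptides up to proteins

    log_p = np.log2(proteins)
    log_p = log_p - log_p.median(axis=0) + log_p.median(axis=0).mean()  # median normalization

    log_p["log2_fold_change"] = log_p["treated"] - log_p["control"]
    print(log_p)
    ```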

  10. Population generation for large-scale simulation

    NASA Astrophysics Data System (ADS)

    Hannon, Andrew C.; King, Gary; Morrison, Clayton; Galstyan, Aram; Cohen, Paul

    2005-05-01

    Computer simulation is used to research phenomena ranging from the structure of the space-time continuum to population genetics and future combat [1-3]. Multi-agent simulations in particular are now commonplace in many fields [4, 5]. By modeling populations whose complex behavior emerges from individual interactions, these simulations help to answer questions about effects where closed form solutions are difficult to solve or impossible to derive [6]. To be useful, simulations must accurately model the relevant aspects of the underlying domain. In multi-agent simulation, this means that the modeling must include both the agents and their relationships. Typically, each agent can be modeled as a set of attributes drawn from various distributions (e.g., height, morale, intelligence and so forth). Though these can interact - for example, agent height is related to agent weight - they are usually independent. Modeling relations between agents, on the other hand, adds a new layer of complexity, and tools from graph theory and social network analysis are finding increasing application [7, 8]. Recognizing the role and proper use of these techniques, however, remains the subject of ongoing research. We recently encountered these complexities while building large scale social simulations [9-11]. One of these, the Hats Simulator, is designed to be a lightweight proxy for intelligence analysis problems. Hats models a "society in a box" consisting of many simple agents, called hats. Hats gets its name from the classic spaghetti western, in which the heroes and villains are known by the color of the hats they wear. The Hats society also has its heroes and villains, but the challenge is to identify which color hat they should be wearing based on how they behave. There are three types of hats: benign hats, known terrorists, and covert terrorists. Covert terrorists look just like benign hats but act like terrorists. Population structure can make covert hat identification significantly more
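    The record's description of agents as sets of attributes drawn from distributions, with a separate relationship layer, maps directly onto a small generator like the sketch below. The attribute names, distributions, role weights, and random-graph model are invented for illustration and are not the Hats Simulator's actual schema.

    ```python
    import random

    # Toy multi-agent population generator: independent attribute draws per agent,
    # plus a sparse random relationship graph.
    random.seed(7)

    N = 1000
    agents = [{
        "id": i,
        "height_cm": random.gauss(170, 10),
        "morale": random.uniform(0, 1),
        "role": random.choices(["benign", "known", "covert"], weights=[0.95, 0.02, 0.03])[0],
    } for i in range(N)]

    # Relationship layer: each agent links to a handful of random others
    edges = set()
    for a in agents:
        for _ in range(random.randint(1, 4)):
            b = random.randrange(N)
            if b != a["id"]:
                edges.add((min(a["id"], b), max(a["id"], b)))

    print(len(agents), "agents,", len(edges), "relationships")
    ```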

  11. Large-scale Fractal Motion of Clouds

    NASA Image and Video Library

    2017-09-27

    waters surrounding the island.) The “swallowed” gulps of clear island air get carried along within the vortices, but these are soon mixed into the surrounding clouds. Landsat is unique in its ability to image both the small-scale eddies that mix clear and cloudy air, down to the 30 meter pixel size of Landsat, but also having a wide enough field-of-view, 180 km, to reveal the connection of the turbulence to large-scale flows such as the subtropical oceanic gyres. Landsat 7, with its new onboard digital recorder, has extended this capability away from the few Landsat ground stations to remote areas such as Alejandro Island, and thus is gradually providing a global dynamic picture of evolving human-scale phenomena. For more details on von Karman vortices, refer to climate.gsfc.nasa.gov/~cahalan. Image and caption courtesy Bob Cahalan, NASA GSFC Instrument: Landsat 7 - ETM+ Credit: NASA/GSFC/Landsat

  12. Large-scale assembly of colloidal particles

    NASA Astrophysics Data System (ADS)

    Yang, Hongta

    This study reports a simple, roll-to-roll compatible coating technology for producing three-dimensional highly ordered colloidal crystal-polymer composites, colloidal crystals, and macroporous polymer membranes. A vertically beveled doctor blade is utilized to shear align silica microsphere-monomer suspensions to form large-area composites in a single step. The polymer matrix and the silica microspheres can be selectively removed to create colloidal crystals and self-standing macroporous polymer membranes. The thickness of the shear-aligned crystal is correlated with the viscosity of the colloidal suspension and the coating speed, and the correlations can be qualitatively explained by adapting the mechanisms developed for conventional doctor blade coating. Five important research topics related to the application of large-scale three-dimensional highly ordered macroporous films by doctor blade coating are covered in this study. The first topic describes the invention in large area and low cost color reflective displays. This invention is inspired by the heat pipe technology. The self-standing macroporous polymer films exhibit brilliant colors which originate from the Bragg diffractive of visible light form the three-dimensional highly ordered air cavities. The colors can be easily changed by tuning the size of the air cavities to cover the whole visible spectrum. When the air cavities are filled with a solvent which has the same refractive index as that of the polymer, the macroporous polymer films become completely transparent due to the index matching. When the solvent trapped in the cavities is evaporated by in-situ heating, the sample color changes back to brilliant color. This process is highly reversible and reproducible for thousands of cycles. The second topic reports the achievement of rapid and reversible vapor detection by using 3-D macroporous photonic crystals. Capillary condensation of a condensable vapor in the interconnected macropores leads to the

  13. Brief Mental Training Reorganizes Large-Scale Brain Networks

    PubMed Central

    Tang, Yi-Yuan; Tang, Yan; Tang, Rongxiang; Lewis-Peacock, Jarrod A.

    2017-01-01

    Emerging evidence has shown that one form of mental training, mindfulness meditation, can improve attention, emotion regulation and cognitive performance through changing brain activity and structural connectivity. However, whether and how short-term mindfulness meditation alters large-scale brain networks is not well understood. Here, we applied a novel data-driven technique, multivariate pattern analysis (MVPA), to resting-state fMRI (rsfMRI) data to identify changes in brain activity patterns and assess the neural mechanisms induced by a brief mindfulness training, integrative body-mind training (IBMT), which was previously reported in our series of randomized studies. Whole brain rsfMRI was performed on an undergraduate group who received 2 weeks of IBMT with 30 min per session (5 h training in total). Classifiers were trained on measures of functional connectivity in this fMRI data, and they were able to reliably differentiate (with 72% accuracy) patterns of connectivity from before vs. after the IBMT training. After training, an increase in positive functional connections (60 connections) was detected, primarily involving bilateral superior/middle occipital gyrus, bilateral frontal operculum, bilateral superior temporal gyrus, right superior temporal pole, bilateral insula, caudate and cerebellum. These results suggest that brief mental training alters the functional connectivity of large-scale brain networks at rest that may involve a portion of the neural circuitry supporting attention, cognitive and affective processing, awareness and sensory integration and reward processing. PMID:28293180

  14. Brief Mental Training Reorganizes Large-Scale Brain Networks.

    PubMed

    Tang, Yi-Yuan; Tang, Yan; Tang, Rongxiang; Lewis-Peacock, Jarrod A

    2017-01-01

    Emerging evidence has shown that one form of mental training, mindfulness meditation, can improve attention, emotion regulation and cognitive performance through changing brain activity and structural connectivity. However, whether and how short-term mindfulness meditation alters large-scale brain networks is not well understood. Here, we applied a novel data-driven technique, multivariate pattern analysis (MVPA), to resting-state fMRI (rsfMRI) data to identify changes in brain activity patterns and assess the neural mechanisms induced by a brief mindfulness training, integrative body-mind training (IBMT), which was previously reported in our series of randomized studies. Whole brain rsfMRI was performed on an undergraduate group who received 2 weeks of IBMT with 30 min per session (5 h training in total). Classifiers were trained on measures of functional connectivity in this fMRI data, and they were able to reliably differentiate (with 72% accuracy) patterns of connectivity from before vs. after the IBMT training. After training, an increase in positive functional connections (60 connections) was detected, primarily involving bilateral superior/middle occipital gyrus, bilateral frontal operculum, bilateral superior temporal gyrus, right superior temporal pole, bilateral insula, caudate and cerebellum. These results suggest that brief mental training alters the functional connectivity of large-scale brain networks at rest that may involve a portion of the neural circuitry supporting attention, cognitive and affective processing, awareness and sensory integration and reward processing.
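    The classification analysis described above, training classifiers on functional-connectivity features and reporting cross-validated accuracy, has the general shape of the sketch below. The synthetic data, linear SVM choice, and cross-validation settings are assumptions for illustration, not the study's actual pipeline.

    ```python
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    # Classify "pre-training" vs "post-training" scans from vectorized
    # connectivity matrices using cross-validated SVM accuracy (synthetic data).
    rng = np.random.default_rng(0)
    n_subjects, n_rois = 40, 30
    n_edges = n_rois * (n_rois - 1) // 2

    pre = rng.standard_normal((n_subjects, n_edges))
    post = rng.standard_normal((n_subjects, n_edges)) + 0.3   # small simulated training effect

    X = np.vstack([pre, post])
    y = np.array([0] * n_subjects + [1] * n_subjects)

    clf = SVC(kernel="linear", C=1.0)
    scores = cross_val_score(clf, X, y, cv=5)
    print("mean cross-validated accuracy:", scores.mean())
    ```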

  15. IP over optical multicasting for large-scale video delivery

    NASA Astrophysics Data System (ADS)

    Jin, Yaohui; Hu, Weisheng; Sun, Weiqiang; Guo, Wei

    2007-11-01

    In IPTV systems, multicasting will play a crucial role in the delivery of high-quality video services, as it can significantly improve bandwidth efficiency. However, the scalability and signal quality of current IPTV can barely compete with existing broadcast digital TV systems, since it is difficult to implement large-scale multicasting with end-to-end guaranteed quality of service (QoS) in a packet-switched IP network. The China 3TNet project aimed to build a high-performance broadband trial network to support large-scale concurrent streaming media and interactive multimedia services. The innovative idea of 3TNet is that an automatically switched optical network (ASON) with the capability of dynamic point-to-multipoint (P2MP) connections replaces the conventional IP multicasting network in the transport core, while the edge remains an IP multicasting network. In this paper, we introduce the network architecture and discuss challenges in such IP over optical multicasting for video delivery.

  16. Large-scale structure of randomly jammed spheres

    NASA Astrophysics Data System (ADS)

    Ikeda, Atsushi; Berthier, Ludovic; Parisi, Giorgio

    2017-05-01

    We numerically analyze the density field of three-dimensional randomly jammed packings of monodisperse soft frictionless spherical particles, paying special attention to fluctuations occurring at large length scales. We study in detail the two-point static structure factor at low wave vectors in Fourier space. We also analyze the nature of the density field in real space by studying the large-distance behavior of the two-point pair correlation function, of density fluctuations in subsystems of increasing sizes, and of the direct correlation function. We show that such real space analysis can be greatly improved by introducing a coarse-grained density field to disentangle genuine large-scale correlations from purely local effects. Our results confirm that both Fourier and real space signatures of vanishing density fluctuations at large scale are absent, indicating that randomly jammed packings are not hyperuniform. In addition, we establish that the pair correlation function displays a surprisingly complex structure at large distances, which is however not compatible with the long-range negative correlation of hyperuniform systems but fully compatible with an analytic form for the structure factor. This implies that the direct correlation function is short ranged, as we also demonstrate directly. Our results reveal that density fluctuations in jammed packings do not follow the behavior expected for random hyperuniform materials, but display instead a more complex behavior.
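    For orientation, the two-point quantities the record analyzes are related by a Fourier transform, and hyperuniformity is a statement about the low-k limit of the structure factor. These are standard definitions stated here for reference, not expressions taken from the paper:

    \[
    S(k) = 1 + \rho \int \bigl[ g(r) - 1 \bigr] e^{-i\,\mathbf{k}\cdot\mathbf{r}} \, d^{3}r,
    \qquad
    \text{hyperuniformity} \;\Longleftrightarrow\; \lim_{k \to 0} S(k) = 0 .
    \]

    The record's conclusion that jammed packings are not hyperuniform corresponds to S(k) approaching a finite nonzero value at small k, consistent with the short-ranged direct correlation function it reports.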

  17. An Approach to Measuring the Performance of a Large-Scale Collaboration

    NASA Astrophysics Data System (ADS)

    Beckett, Ronald C.

    Large-scale collaborations such as business networks and clusters are being promoted worldwide, but some OECD studies suggest that measuring the performance of such collaborations can be problematic. In this paper a grounded theory approach leads to the proposition that important attributes of a large-scale collaboration are its dimensions, maturity and relative heterogeneity of participants; whilst critical outcomes from a large-scale collaboration initiative are balanced housekeeping/beneficial transactions and improved market access/competitiveness. This proposition is used to demonstrate business process frameworks for characterizing and measuring the performance of such collaborations.

  18. Multitree Algorithms for Large-Scale Astrostatistics

    NASA Astrophysics Data System (ADS)

    March, William B.; Ozakin, Arkadas; Lee, Dongryeol; Riegel, Ryan; Gray, Alexander G.

    2012-03-01

    this number every week, resulting in billions of objects. At such scales, even linear-time analysis operations present challenges, particularly since statistical analyses are inherently interactive processes, requiring that computations complete within some reasonable human attention span. The quadratic (or worse) runtimes of straightforward implementations become quickly unbearable. Examples of applications. These analysis subroutines occur ubiquitously in astrostatistical work. We list just a few examples. The need to cross-match objects across different catalogs has led to various algorithms, which at some point perform an AllNN computation. 2-point and higher-order spatial correlations form the basis of spatial statistics and are utilized in astronomy to compare the spatial structures of two datasets, such as an observed sample and a theoretical sample, for example, forming the basis for two-sample hypothesis testing. Friends-of-friends clustering is often used to identify halos in data from astrophysical simulations. Minimum spanning tree properties have also been proposed as statistics of large-scale structure. Comparison of the distributions of different kinds of objects requires accurate density estimation, for which KDE is the overall statistical method of choice. The prediction of redshifts from optical data requires accurate regression, for which kernel regression is a powerful method. The identification of objects of various types in astronomy, such as stars versus galaxies, requires accurate classification, for which KDA is a powerful method. Overview. In this chapter, we will briefly sketch the main ideas behind recent fast algorithms which achieve, for example, linear runtimes for pairwise-distance problems, or similarly dramatic reductions in computational growth. In some cases, the runtime orders for these algorithms are mathematically provable statements, while in others we have only conjectures backed by experimental observations for the time being
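    One of the kernels listed above, pair counting for the 2-point correlation function, is commonly accelerated with spatial trees. The sketch below uses a k-d tree on synthetic points with a naive natural estimator; the box size, binning, and estimator choice are illustrative assumptions, and production analyses typically use the Landy-Szalay estimator and proper survey geometry.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    # Tree-based pair counting for a simple two-point correlation estimate.
    rng = np.random.default_rng(1)
    data = rng.uniform(0.0, 100.0, size=(20000, 3))   # synthetic positions
    rand = rng.uniform(0.0, 100.0, size=(20000, 3))   # random comparison catalog

    bins = np.linspace(1.0, 20.0, 11)                 # separation bin edges
    dt, rt = cKDTree(data), cKDTree(rand)

    # Cumulative pair counts within each radius, differenced to get per-bin counts
    dd = np.diff(dt.count_neighbors(dt, bins))
    rr = np.diff(rt.count_neighbors(rt, bins))

    xi = dd / rr - 1.0                                # simple "natural" estimator
    print(np.round(xi, 3))
    ```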

  19. A First Look at the Head Start CARES Demonstration: Large-Scale Implementation of Programs to Improve Children's Social-Emotional Competence. OPRE Report 2013-47

    ERIC Educational Resources Information Center

    Mattera, Shira Kolnik; Lloyd, Chrishana M.; Fishman, Mike; Bangser, Michael

    2013-01-01

    Low-income preschool children face many risks to their social-emotional development that can affect their school experience and social outcomes for years to come. Although there are some promising approaches to improving young children's social-emotional competence, the evidence base is limited, particularly on the effectiveness of these…

  20. Evaluation of three-dimensional gel electrophoresis to improve quantitative profiling of complex proteomes.

    PubMed

    Colignon, Bertrand; Raes, Martine; Dieu, Marc; Delaive, Edouard; Mauro, Sergio

    2013-07-01

    Two-dimensional gel electrophoresis remains one of the main experimental approaches in proteome analysis. However, comigration of proteins leads to several limitations: reduced accuracy in protein identification, impaired comparative quantification, and hindered PTM detection. We have optimized a third, additional step of in-gel separation to alleviate comigration-associated drawbacks. Spot resolution is strikingly improved by this simple and rapid method, and the positive impact on protein and peptide identification from MS/MS data, on the analysis of relative changes in protein abundance, and on the detection of PTMs is described.

  1. Improved Proteomic Analysis Following Trichloroacetic Acid Extraction of Bacillus anthracis Spore Proteins

    SciTech Connect

    Kaiser, Brooke LD; Wunschel, David S.; Sydor, Michael A.; Warner, Marvin G.; Wahl, Karen L.; Hutchison, Janine R.

    2015-08-07

    Proteomic analysis of bacterial samples provides valuable information about cellular responses and functions under different environmental pressures. Proteomic analysis depends on efficient extraction of proteins from bacterial samples without introducing bias toward particular protein classes. While no single method can recover 100% of the bacterial proteins, selected protocols can improve overall protein isolation, peptide recovery, or enrichment for certain classes of proteins. The method presented here is technically simple and does not require specialized equipment such as a mechanical disrupter. Our data reveal that for particularly challenging samples, such as B. anthracis Sterne spores, trichloroacetic acid extraction improved the number of proteins identified within a sample compared to bead beating (714 vs 660, respectively). Further, TCA extraction enriched for 103 known spore-specific proteins, whereas bead beating yielded 49 unique proteins. Analysis of C. botulinum samples grown for 5 days, composed of vegetative biomass and spores, showed a similar trend, with improved protein yields and identification using our method compared to bead beating. Interestingly, easily lysed samples, such as B. anthracis vegetative cells, were processed equally effectively by TCA extraction and bead beating, but TCA extraction remains the easier and more cost-effective option. As with all assays, implementation of a supplemental or alternative preparation method may provide additional insight into the protein biology of the bacteria being studied.

  2. Superconducting materials for large scale applications

    SciTech Connect

    Scanlan, Ronald M.; Malozemoff, Alexis P.; Larbalestier, David C.

    2004-05-06

    Significant improvements in the properties of superconducting materials have occurred recently. These improvements are being incorporated into the latest generation of wires, cables, and tapes that are being used in a broad range of prototype devices. These devices include new, high field accelerator and NMR magnets, magnets for fusion power experiments, motors, generators, and power transmission lines. These prototype magnets are joining a wide array of existing applications that utilize the unique capabilities of superconducting magnets: accelerators such as the Large Hadron Collider, fusion experiments such as ITER, 930 MHz NMR, and 4 Tesla MRI. In addition, promising new materials such as MgB2 have been discovered and are being studied in order to assess their potential for new applications. In this paper, we will review the key developments that are leading to these new applications for superconducting materials. In some cases, the key factor is improved understanding or development of materials with significantly improved properties. An example of the former is the development of Nb3Sn for use in high field magnets for accelerators. In other cases, the development is being driven by the application. The aggressive effort to develop HTS tapes is being driven primarily by the need for materials that can operate at temperatures of 50 K and higher. The implications of these two drivers for further developments will be discussed. Finally, we will discuss the areas where further improvements are needed in order for new applications to be realized.

  3. Urine proteomics for discovery of improved diagnostic markers of Kawasaki disease

    PubMed Central

    Kentsis, Alex; Shulman, Andrew; Ahmed, Saima; Brennan, Eileen; Monuteaux, Michael C; Lee, Young-Ho; Lipsett, Susan; Paulo, Joao A; Dedeoglu, Fatma; Fuhlbrigge, Robert; Bachur, Richard; Bradwin, Gary; Arditi, Moshe; Sundel, Robert P; Newburger, Jane W; Steen, Hanno; Kim, Susan

    2013-01-01

    Kawasaki disease (KD) is a systemic vasculitis of unknown etiology. Absence of definitive diagnostic markers limits the accuracy of clinical evaluations of suspected KD with significant increases in morbidity. In turn, incomplete understanding of its molecular pathogenesis hinders the identification of rational targets needed to improve therapy. We used high-accuracy mass spectrometry proteomics to analyse over 2000 unique proteins in clinical urine specimens of patients with KD. We discovered that urine proteomes of patients with KD, but not those with mimicking conditions, were enriched for markers of cellular injury such as filamin and talin, immune regulators such as complement regulator CSMD3, immune pattern recognition receptor muclin, and immune cytokine protease meprin A. Significant elevations of filamin C and meprin A were detected in both the serum and urine in two independent cohorts of patients with KD, comprised of a total of 236 patients. Meprin A and filamin C exhibited superior diagnostic performance as compared to currently used markers of disease in a blinded case-control study of 107 patients with suspected KD, with receiver operating characteristic areas under the curve of 0.98 (95% confidence intervals [CI] of 0.97–1 and 0.95–1, respectively). Notably, meprin A was enriched in the coronary artery lesions of a mouse model of KD. In all, urine proteome profiles revealed novel candidate molecular markers of KD, including filamin C and meprin A that exhibit excellent diagnostic performance. These disease markers may improve the diagnostic accuracy of clinical evaluations of children with suspected KD, lead to the identification of novel therapeutic targets, and allow the development of a biological classification of Kawasaki disease. PMID:23281308

  4. A large-scale evaluation of computational protein function prediction.

    PubMed

    Radivojac, Predrag; Clark, Wyatt T; Oron, Tal Ronnen; Schnoes, Alexandra M; Wittkop, Tobias; Sokolov, Artem; Graim, Kiley; Funk, Christopher; Verspoor, Karin; Ben-Hur, Asa; Pandey, Gaurav; Yunes, Jeffrey M; Talwalkar, Ameet S; Repo, Susanna; Souza, Michael L; Piovesan, Damiano; Casadio, Rita; Wang, Zheng; Cheng, Jianlin; Fang, Hai; Gough, Julian; Koskinen, Patrik; Törönen, Petri; Nokso-Koivisto, Jussi; Holm, Liisa; Cozzetto, Domenico; Buchan, Daniel W A; Bryson, Kevin; Jones, David T; Limaye, Bhakti; Inamdar, Harshal; Datta, Avik; Manjari, Sunitha K; Joshi, Rajendra; Chitale, Meghana; Kihara, Daisuke; Lisewski, Andreas M; Erdin, Serkan; Venner, Eric; Lichtarge, Olivier; Rentzsch, Robert; Yang, Haixuan; Romero, Alfonso E; Bhat, Prajwal; Paccanaro, Alberto; Hamp, Tobias; Kaßner, Rebecca; Seemayer, Stefan; Vicedo, Esmeralda; Schaefer, Christian; Achten, Dominik; Auer, Florian; Boehm, Ariane; Braun, Tatjana; Hecht, Maximilian; Heron, Mark; Hönigschmid, Peter; Hopf, Thomas A; Kaufmann, Stefanie; Kiening, Michael; Krompass, Denis; Landerer, Cedric; Mahlich, Yannick; Roos, Manfred; Björne, Jari; Salakoski, Tapio; Wong, Andrew; Shatkay, Hagit; Gatzmann, Fanny; Sommer, Ingolf; Wass, Mark N; Sternberg, Michael J E; Škunca, Nives; Supek, Fran; Bošnjak, Matko; Panov, Panče; Džeroski, Sašo; Šmuc, Tomislav; Kourmpetis, Yiannis A I; van Dijk, Aalt D J; ter Braak, Cajo J F; Zhou, Yuanpeng; Gong, Qingtian; Dong, Xinran; Tian, Weidong; Falda, Marco; Fontana, Paolo; Lavezzo, Enrico; Di Camillo, Barbara; Toppo, Stefano; Lan, Liang; Djuric, Nemanja; Guo, Yuhong; Vucetic, Slobodan; Bairoch, Amos; Linial, Michal; Babbitt, Patricia C; Brenner, Steven E; Orengo, Christine; Rost, Burkhard; Mooney, Sean D; Friedberg, Iddo

    2013-03-01

    Automated annotation of protein function is challenging. As the number of sequenced genomes rapidly grows, the overwhelming majority of protein products can only be annotated computationally. If computational predictions are to be relied upon, it is crucial that the accuracy of these methods be high. Here we report the results from the first large-scale community-based critical assessment of protein function annotation (CAFA) experiment. Fifty-four methods representing the state of the art for protein function prediction were evaluated on a target set of 866 proteins from 11 organisms. Two findings stand out: (i) today's best protein function prediction algorithms substantially outperform widely used first-generation methods, with large gains on all types of targets; and (ii) although the top methods perform well enough to guide experiments, there is considerable need for improvement of currently available tools.
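
    A simplified sketch of the protein-centric F-max style of evaluation used in assessments of this kind is given below; it sweeps a score threshold over predicted terms and reports the best harmonic mean of precision and recall. The prediction and ground-truth dictionaries are toy placeholders, and the exact CAFA averaging rules are not reproduced.

      # Simplified F-max evaluation sketch for protein function predictions.
      import numpy as np

      def f_max(predictions, truth, thresholds=np.linspace(0.01, 1.0, 100)):
          """predictions: {protein: {term: score}}, truth: {protein: set(terms)}."""
          best = 0.0
          for t in thresholds:
              precisions, recalls = [], []
              for protein, term_scores in predictions.items():
                  predicted = {term for term, s in term_scores.items() if s >= t}
                  if not predicted:
                      continue
                  true_terms = truth.get(protein, set())
                  tp = len(predicted & true_terms)
                  precisions.append(tp / len(predicted))
                  recalls.append(tp / len(true_terms) if true_terms else 0.0)
              if precisions:
                  p, r = np.mean(precisions), np.mean(recalls)
                  if p + r > 0:
                      best = max(best, 2 * p * r / (p + r))
          return best

      # Toy usage with invented proteins and GO terms:
      preds = {"P1": {"GO:0001": 0.9, "GO:0002": 0.4}, "P2": {"GO:0003": 0.7}}
      truth = {"P1": {"GO:0001"}, "P2": {"GO:0002", "GO:0003"}}
      print(f_max(preds, truth))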

  5. Exploiting large-scale correlations to detect continuous gravitational waves.

    PubMed

    Pletsch, Holger J; Allen, Bruce

    2009-10-30

    Fully coherent searches (over realistic ranges of parameter space and year-long observation times) for unknown sources of continuous gravitational waves are computationally prohibitive. Less expensive hierarchical searches divide the data into shorter segments which are analyzed coherently, then detection statistics from different segments are combined incoherently. The novel method presented here solves the long-standing problem of how best to do the incoherent combination. The optimal solution exploits large-scale parameter-space correlations in the coherent detection statistic. Application to simulated data shows dramatic sensitivity improvements compared with previously available (ad hoc) methods, increasing the spatial volume probed by more than 2 orders of magnitude at lower computational cost.
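
    The following toy sketch (an assumption-laden stand-in, not the published method) shows only the generic structure of a hierarchical search: a coherent detection statistic computed per data segment on a template grid is summed incoherently across segments and the loudest template is selected. The per-segment statistics here are random placeholders.

      # Generic structure of an incoherent combination step in a hierarchical search.
      import numpy as np

      rng = np.random.default_rng(2)
      n_segments, n_templates = 50, 10_000

      # Stand-in for the per-segment coherent statistic (e.g. an F-statistic),
      # chi^2-distributed with 4 degrees of freedom under the noise hypothesis.
      per_segment_stat = rng.chisquare(df=4, size=(n_segments, n_templates))

      # Incoherent combination: sum the per-segment statistics template by template.
      semicoherent_stat = per_segment_stat.sum(axis=0)

      best_template = int(np.argmax(semicoherent_stat))
      print("loudest template:", best_template,
            "statistic:", semicoherent_stat[best_template])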

  6. Large scale rigidity-based flexibility analysis of biomolecules

    PubMed Central

    Streinu, Ileana

    2016-01-01

    KINematics And RIgidity (KINARI) is an ongoing project for in silico flexibility analysis of proteins. The new version of the software, Kinari-2, extends the functionality of our free web server KinariWeb, incorporates advanced web technologies, emphasizes the reproducibility of its experiments, and makes substantially improved tools available to the user. It is designed specifically for large-scale experiments, in particular for (a) very large molecules, including bioassemblies with a high degree of symmetry such as viruses and crystals, (b) large collections of related biomolecules, such as those obtained through simulated dilutions, mutations, or conformational changes from various types of dynamics simulations, and (c) the large, idiosyncratic, publicly available repository of biomolecules, the Protein Data Bank, on which it is intended to work as seamlessly as possible. We describe the system design, along with the main data processing, computational, mathematical, and validation challenges underlying this phase of the KINARI project. PMID:26958583

  7. Large Scale Bacterial Colony Screening of Diversified FRET Biosensors

    PubMed Central

    Litzlbauer, Julia; Schifferer, Martina; Ng, David; Fabritius, Arne; Thestrup, Thomas; Griesbeck, Oliver

    2015-01-01

    Biosensors based on Förster Resonance Energy Transfer (FRET) between fluorescent protein mutants have started to revolutionize physiology and biochemistry. However, many types of FRET biosensors show relatively small FRET changes, making measurements with these probes challenging when used under sub-optimal experimental conditions. Thus, a major effort in the field currently lies in designing new optimization strategies for these types of sensors. Here we describe procedures for optimizing FRET changes by large scale screening of mutant biosensor libraries in bacterial colonies. We describe optimization of biosensor expression, permeabilization of bacteria, software tools for analysis, and screening conditions. The procedures reported here may help in improving FRET changes in multiple suitable classes of biosensors. PMID:26061878

  8. Large scale ocean circulation from the GRACE GGM01 Geoid

    NASA Astrophysics Data System (ADS)

    Tapley, B. D.; Chambers, D. P.; Bettadpur, S.; Ries, J. C.

    2003-11-01

    The GRACE Gravity Model 01 (GGM01), computed from 111 days of GRACE K-band ranging (KBR) data, is differenced from a global mean sea surface (MSS) computed from a decade of satellite altimetry to determine a mean dynamic ocean topography (DOT). As a test of the GGM01 gravity model, large-scale zonal and meridional surface geostrophic currents are computed from the topography and are compared with those derived from a mean hydrographic surface. Reduction in residual RMS between the two by 30-60% (and increased correlation) indicates that the GGM01 geoid represents a dramatic improvement over older geoid models, which were developed from multiple satellite tracking data, altimetry, and surface gravity measurements. For the first time, all major current systems are clearly observed in the DOT from space-based measurements.
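
    The standard geostrophic balance underlying such a comparison relates surface currents to gradients of the dynamic ocean topography, u = -(g/f) ∂η/∂y and v = (g/f) ∂η/∂x with f = 2Ω sin(lat). The sketch below applies these formulas to a synthetic gridded topography; it is illustrative only and uses no GGM01 data.

      # Surface geostrophic currents from a (synthetic) dynamic ocean topography grid.
      import numpy as np

      g = 9.81                  # gravitational acceleration, m/s^2
      omega = 7.2921e-5         # Earth's rotation rate, rad/s
      earth_radius = 6.371e6    # m

      lat = np.linspace(-60, 60, 121)            # degrees
      lon = np.linspace(0, 360, 361)[:-1]
      lat2d, lon2d = np.meshgrid(lat, lon, indexing="ij")
      eta = 0.5 * np.sin(np.radians(lat2d)) * np.cos(np.radians(lon2d))  # toy DOT, m

      # Metric factors converting the degree spacing to metres.
      dy = np.radians(lat[1] - lat[0]) * earth_radius
      dx = np.radians(lon[1] - lon[0]) * earth_radius * np.cos(np.radians(lat2d))

      f = 2.0 * omega * np.sin(np.radians(lat2d))
      f[np.abs(lat2d) < 5] = np.nan              # geostrophy breaks down near the equator

      deta_dy = np.gradient(eta, axis=0) / dy
      deta_dx = np.gradient(eta, axis=1) / dx
      u = -(g / f) * deta_dy                     # zonal geostrophic current, m/s
      v = (g / f) * deta_dx                      # meridional geostrophic current, m/s
      print(np.nanmax(np.abs(u)), np.nanmax(np.abs(v)))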

  9. A large-scale evaluation of computational protein function prediction

    PubMed Central

    Radivojac, Predrag; Clark, Wyatt T; Ronnen Oron, Tal; Schnoes, Alexandra M; Wittkop, Tobias; Sokolov, Artem; Graim, Kiley; Funk, Christopher; Verspoor, Karin; Ben-Hur, Asa; Pandey, Gaurav; Yunes, Jeffrey M; Talwalkar, Ameet S; Repo, Susanna; Souza, Michael L; Piovesan, Damiano; Casadio, Rita; Wang, Zheng; Cheng, Jianlin; Fang, Hai; Gough, Julian; Koskinen, Patrik; Törönen, Petri; Nokso-Koivisto, Jussi; Holm, Liisa; Cozzetto, Domenico; Buchan, Daniel W A; Bryson, Kevin; Jones, David T; Limaye, Bhakti; Inamdar, Harshal; Datta, Avik; Manjari, Sunitha K; Joshi, Rajendra; Chitale, Meghana; Kihara, Daisuke; Lisewski, Andreas M; Erdin, Serkan; Venner, Eric; Lichtarge, Olivier; Rentzsch, Robert; Yang, Haixuan; Romero, Alfonso E; Bhat, Prajwal; Paccanaro, Alberto; Hamp, Tobias; Kassner, Rebecca; Seemayer, Stefan; Vicedo, Esmeralda; Schaefer, Christian; Achten, Dominik; Auer, Florian; Böhm, Ariane; Braun, Tatjana; Hecht, Maximilian; Heron, Mark; Hönigschmid, Peter; Hopf, Thomas; Kaufmann, Stefanie; Kiening, Michael; Krompass, Denis; Landerer, Cedric; Mahlich, Yannick; Roos, Manfred; Björne, Jari; Salakoski, Tapio; Wong, Andrew; Shatkay, Hagit; Gatzmann, Fanny; Sommer, Ingolf; Wass, Mark N; Sternberg, Michael J E; Škunca, Nives; Supek, Fran; Bošnjak, Matko; Panov, Panče; Džeroski, Sašo; Šmuc, Tomislav; Kourmpetis, Yiannis A I; van Dijk, Aalt D J; ter Braak, Cajo J F; Zhou, Yuanpeng; Gong, Qingtian; Dong, Xinran; Tian, Weidong; Falda, Marco; Fontana, Paolo; Lavezzo, Enrico; Di Camillo, Barbara; Toppo, Stefano; Lan, Liang; Djuric, Nemanja; Guo, Yuhong; Vucetic, Slobodan; Bairoch, Amos; Linial, Michal; Babbitt, Patricia C; Brenner, Steven E; Orengo, Christine; Rost, Burkhard; Mooney, Sean D; Friedberg, Iddo

    2013-01-01

    Automated annotation of protein function is challenging. As the number of sequenced genomes rapidly grows, the overwhelming majority of protein products can only be annotated computationally. If computational predictions are to be relied upon, it is crucial that the accuracy of these methods be high. Here we report the results from the first large-scale community-based Critical Assessment of protein Function Annotation (CAFA) experiment. Fifty-four methods representing the state-of-the-art for protein function prediction were evaluated on a target set of 866 proteins from eleven organisms. Two findings stand out: (i) today’s best protein function prediction algorithms significantly outperformed widely-used first-generation methods, with large gains on all types of targets; and (ii) although the top methods perform well enough to guide experiments, there is significant need for improvement of currently available tools. PMID:23353650

  10. Testing Inflation with Large Scale Structure: Connecting Hopes with Reality

    SciTech Connect

    Alvarez, Marcello; Baldauf, T.; Bond, J. Richard; Dalal, N.; Putter, R. D.; Dore, O.; Green, Daniel; Hirata, Chris; Huang, Zhiqi; Huterer, Dragan; Jeong, Donghui; Johnson, Matthew C.; Krause, Elisabeth; Loverde, Marilena; Meyers, Joel; Meeburg, Daniel; Senatore, Leonardo; Shandera, Sarah; Silverstein, Eva; Slosar, Anze; Smith, Kendrick; Zaldarriaga, Matias; Assassi, Valentin; Braden, Jonathan; Hajian, Amir; Kobayashi, Takeshi; Stein, George; Engelen, Alexander van

    2014-12-15

    The statistics of primordial curvature fluctuations are our window into the period of inflation, when these fluctuations were generated. To date, the cosmic microwave background has been the dominant source of information about these perturbations. Large-scale structure, however, is where dramatic improvements should originate. In this paper, we explain the theoretical motivations for pursuing such measurements and the challenges that lie ahead. In particular, we discuss and identify theoretical targets for the measurement of primordial non-Gaussianity. We argue that when quantified in terms of the local (equilateral) template amplitude $f_{\rm NL}^{\rm loc}$ ($f_{\rm NL}^{\rm eq}$), natural target levels of sensitivity are $\Delta f_{\rm NL}^{\rm loc,eq} \simeq 1$. We highlight that such levels are within reach of future surveys by measuring 2-, 3- and 4-point statistics of the galaxy spatial distribution. This paper summarizes a workshop held at CITA (University of Toronto) on October 23-24, 2014.

  11. Galaxy clustering and the origin of large-scale flows

    NASA Technical Reports Server (NTRS)

    Juszkiewicz, R.; Yahil, A.

    1989-01-01

    Peebles's 'cosmic virial theorem' is extended from its original range of validity at small separations, where hydrostatic equilibrium holds, to large separations, in which linear gravitational stability theory applies. The rms pairwise velocity difference at separation r is shown to depend on the spatial galaxy correlation function xi(x) only for x less than r. Gravitational instability theory can therefore be tested by comparing the two up to the maximum separation for which both can reliably be determined, and there is no dependence on the poorly known large-scale density and velocity fields. With the expected improvement in the data over the next few years, however, this method should yield a reliable determination of omega.

  12. A large-scale computer facility for computational aerodynamics

    SciTech Connect

    Bailey, F.R.; Balhaus, W.F.

    1985-02-01

    The combination of computer system technology and numerical modeling has advanced to the point that computational aerodynamics has emerged as an essential element in aerospace vehicle design methodology. To provide for further advances in modeling of aerodynamic flow fields, NASA has initiated at the Ames Research Center the Numerical Aerodynamic Simulation (NAS) Program. The objective of the Program is to develop a leading-edge, large-scale computer facility, and make it available to NASA, DoD, other Government agencies, industry and universities as a necessary element in ensuring continuing leadership in computational aerodynamics and related disciplines. The Program will establish an initial operational capability in 1986 and systematically enhance that capability by incorporating evolving improvements in state-of-the-art computer system technologies as required to maintain a leadership role. This paper briefly reviews the present and future requirements for computational aerodynamics and discusses the Numerical Aerodynamic Simulation Program objectives, computational goals, and implementation plans.

  13. Studies on Editing Patterns in Large-scale Wikis

    NASA Astrophysics Data System (ADS)

    Boulain, Philip; Shadbolt, Nigel; Gibbins, Nicholas

    Wiki systems have developed over the past years as lightweight, community-editable, web-based hypertext systems. With the emergence of Semantic Wikis, these collections of interlinked documents have also gained a dual role as ad-hoc RDF graphs. However, their roots lie in the limited hypertext capabilities of the World Wide Web: embedded links, without support for composite objects or transclusion. In this chapter, we present experimental evidence that hyperstructure changes, as opposed to content changes, form a substantial proportion of editing effort on a large-scale wiki. We then follow this with an in-depth experiment, studying how individual editors work to edit articles on the wiki. These experiments are set in the wider context of a study of how the technologies developed during decades of hypertext research may be applied to improve management of wiki document structure and, with semantic wikis, knowledge structure.

  14. Experiment study of large-scale magnetorheological fluid damper

    NASA Astrophysics Data System (ADS)

    Guan, Xinchun; Li, Jinhai; Ou, Jinping

    2005-05-01

    Due to their low power requirements, rapid response, and large force capacity, dampers based on the special rheological behavior of magnetorheological fluid (MRF) have been shown to be an ideal type of semi-active vibration control device for civil engineering structures and vehicles. In this paper, the magnetic circuit of an MRF damper was first studied; based on these results, a large-scale MRF damper with an adjustable multiple of about 16 and a maximum damping force of about 170 kN was then designed and tested. Experimental results show that, under lower electrical current, winding the multiple coils on the piston in the same or opposite current directions does not influence the damping performance of the MRF damper; under higher electrical current, however, inverse connection of adjacent coils tends to improve the damping force of the MRF damper.

  15. Large-scale elucidation of drug response pathways in humans.

    PubMed

    Silberberg, Yael; Gottlieb, Assaf; Kupiec, Martin; Ruppin, Eytan; Sharan, Roded

    2012-02-01

    Elucidating signaling pathways is a fundamental step in understanding cellular processes and developing new therapeutic strategies. Here we introduce a method for the large-scale elucidation of signaling pathways involved in cellular response to drugs. Combining drug targets, drug response expression profiles, and the human physical interaction network, we infer 99 human drug response pathways and study their properties. Based on the newly inferred pathways, we develop a pathway-based drug-drug similarity measure and compare it to two common, gold standard drug-drug similarity measures. Remarkably, our measure provides better correspondence to these gold standards than similarity measures that are based on associations between drugs and known pathways, or on drug-specific gene expression profiles. It further improves the prediction of drug side effects and indications, elucidating specific response pathways that may be associated with these drug properties. Supplementary Material for this article is available at www.liebertonline.com/cmb.
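
    The abstract does not spell out the exact pathway-based similarity measure, so the sketch below is only a generic stand-in: a Jaccard index over the union of genes in each drug's inferred response pathways, with made-up gene sets.

      # Generic pathway-based drug-drug similarity sketch (hypothetical gene sets).
      def pathway_similarity(pathways_a, pathways_b):
          """Jaccard similarity between the union of genes in two drugs' pathways."""
          genes_a = set().union(*pathways_a) if pathways_a else set()
          genes_b = set().union(*pathways_b) if pathways_b else set()
          if not genes_a and not genes_b:
              return 0.0
          return len(genes_a & genes_b) / len(genes_a | genes_b)

      drug1_pathways = [{"EGFR", "GRB2", "SOS1"}, {"TP53", "MDM2"}]
      drug2_pathways = [{"EGFR", "KRAS", "RAF1"}]
      print(pathway_similarity(drug1_pathways, drug2_pathways))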

  16. Contractual Duration and Investment Incentives: Evidence from Large Scale Production Units in China

    NASA Astrophysics Data System (ADS)

    Li, Fang; Feng, Shuyi; D'Haese, Marijke; Lu, Hualiang; Qu, Futian

    2017-04-01

    Large Scale Production Units have become important forces in the supply of agricultural commodities and in agricultural modernization in China. Contractual duration in farmland transfer to Large Scale Production Units can be considered to reflect land tenure security. Theoretically, long-term tenancy contracts can encourage Large Scale Production Units to increase long-term investments by ensuring land rights stability or favoring access to credit. Using a unique Large Scale Production Unit- and plot-level field survey dataset from Jiangsu and Jiangxi Province, this study aims to examine the effect of contractual duration on Large Scale Production Units' soil conservation behaviours. An instrumental variable (IV) method is applied to take into account the endogeneity of contractual duration and unobserved household heterogeneity. Results indicate that farmland transfer contract duration significantly and positively affects land-improving investments. Policies aimed at improving transaction platforms and intermediary organizations in farmland transfer, so as to facilitate Large Scale Production Units' access to farmland under long-term tenancy contracts, may therefore play an important role in improving soil quality and land productivity.

  17. Planck data versus large scale structure: Methods to quantify discordance

    NASA Astrophysics Data System (ADS)

    Charnock, Tom; Battye, Richard A.; Moss, Adam

    2017-06-01

    Discordance in the Λ cold dark matter cosmological model can be seen by comparing parameters constrained by cosmic microwave background (CMB) measurements to those inferred by probes of large scale structure. Recent improvements in observations, including final data releases from both Planck and SDSS-III BOSS, as well as improved astrophysical uncertainty analysis of CFHTLenS, allow for an update in the quantification of any tension between large and small scales. This paper is intended, primarily, as a discussion of how to quantify discordance when comparing the parameter constraints of a model obtained from two different data sets. We consider Kullback-Leibler divergence, comparison of Bayesian evidences and other statistics which are sensitive to the mean, variance and shape of the distributions. However, as a byproduct, we present an update to the similar analysis in [R. A. Battye, T. Charnock, and A. Moss, Phys. Rev. D 91, 103508 (2015), 10.1103/PhysRevD.91.103508], where we find that, considering new data and treatment of priors, the constraints from the CMB and from a combination of large scale structure (LSS) probes are in greater agreement and any tension only persists to a minor degree. In particular, we find the parameter constraints from the combination of LSS probes which are most discrepant with the Planck 2015 +Pol +BAO parameter distributions can be quantified as a ~2.55σ tension using the method introduced in [R. A. Battye, T. Charnock, and A. Moss, Phys. Rev. D 91, 103508 (2015), 10.1103/PhysRevD.91.103508]. If instead we use the distributions constrained by the combination of LSS probes which are in greatest agreement with those from Planck 2015 +Pol +BAO, this tension is only 0.76σ.
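
    One of the statistics mentioned above, the Kullback-Leibler divergence, can be illustrated by approximating each posterior as a multivariate Gaussian summarized by its mean and covariance; the sketch below does this for an invented two-parameter example and is not based on the actual Planck or LSS chains.

      # KL divergence between two Gaussian-approximated parameter posteriors.
      import numpy as np

      def gaussian_kl(mu0, cov0, mu1, cov1):
          """D_KL( N(mu0, cov0) || N(mu1, cov1) ) in nats."""
          k = len(mu0)
          cov1_inv = np.linalg.inv(cov1)
          diff = np.asarray(mu1) - np.asarray(mu0)
          return 0.5 * (np.trace(cov1_inv @ cov0)
                        + diff @ cov1_inv @ diff
                        - k
                        + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))

      # Toy 2-parameter example (Omega_m- and sigma_8-like constraints, invented numbers).
      mu_cmb, cov_cmb = [0.315, 0.830], np.array([[0.00017, 0.0], [0.0, 0.00013]])
      mu_lss, cov_lss = [0.300, 0.780], np.array([[0.00090, 0.0], [0.0, 0.00100]])
      print(gaussian_kl(mu_cmb, cov_cmb, mu_lss, cov_lss))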

  18. Large scale, urban decontamination; developments, historical examples and lessons learned

    SciTech Connect

    Demmer, R.L.

    2007-07-01

    Recent terrorist threats and actions have led to a renewed interest in the technical field of large-scale, urban environment decontamination. One of the driving forces for this interest is the prospect of the cleanup and removal of radioactive dispersal device (RDD or 'dirty bomb') residues. In response, the United States Government has spent many millions of dollars investigating RDD contamination and novel decontamination methodologies. The efficiency of RDD cleanup response will be improved with these new developments and a better understanding of the 'old reliable' methodologies. While an RDD is primarily an economic and psychological weapon, the need to clean up and return valuable or culturally significant resources to the public is nonetheless valid. Several private companies, universities and National Laboratories are currently developing novel RDD cleanup technologies. Because of their longstanding association with radioactive facilities, the U.S. Department of Energy National Laboratories are at the forefront in developing and testing new RDD decontamination methods. However, such cleanup technologies are likely to be fairly task specific, while many different contamination mechanisms, substrates and environmental conditions will make actual application more complicated. Some major efforts have also been made to model potential contamination, to evaluate both old and new decontamination techniques and to assess their readiness for use. There are a number of significant lessons that can be gained from a look at previous large scale cleanup projects. Too often we are quick to apply a costly 'package and dispose' method when sound technological cleaning approaches are available. Understanding historical perspectives, advanced planning and constant technology improvement are essential to successful decontamination. (authors)

  19. A Proteomic Workflow Using High-Throughput De Novo Sequencing Towards Complementation of Genome Information for Improved Comparative Crop Science.

    PubMed

    Turetschek, Reinhard; Lyon, David; Desalegn, Getinet; Kaul, Hans-Peter; Wienkoop, Stefanie

    2016-01-01

    The proteomic study of non-model organisms, such as many crop plants, is challenging due to the lack of comprehensive genome information. Changing environmental conditions require the study and selection of adapted cultivars. Mutations inherent to cultivars hamper protein identification and thus considerably complicate qualitative and quantitative comparison in large-scale systems biology approaches. With this workflow, cultivar-specific mutations are detected from high-throughput comparative MS analyses by extracting sequence polymorphisms with de novo sequencing. Stringent criteria are suggested to filter for high-confidence mutations. Subsequently, these polymorphisms complement the initially used database, which is then ready to use with any preferred database search algorithm. In our example, we thereby identified 26 specific mutations in two cultivars of Pisum sativum and achieved an increased number (17%) of peptide spectrum matches.

  20. Large scale scientific computing - future directions

    NASA Astrophysics Data System (ADS)

    Patterson, G. S.

    1982-06-01

    Every new generation of scientific computers has opened up new areas of science for exploration through the use of more realistic numerical models or the ability to process ever larger amounts of data. Concomitantly, scientists, because of the success of past models and the wide range of physical phenomena left unexplored, have pressed computer designers to strive for the maximum performance that current technology will permit. This encompasses not only increased processor speed, but also substantial improvements in processor memory, I/O bandwidth, secondary storage and facilities to augment the scientist's ability both to program and to understand the results of a computation. Over the past decade, performance improvements for scientific calculations have come from algorithm development and a major change in the underlying architecture of the hardware, not from significantly faster circuitry. It appears that this trend will continue for another decade. A future architectural change for improved performance will most likely be multiple processors coupled together in some fashion. Because the demand for a significantly more powerful computer system comes from users with single large applications, it is essential that an application be efficiently partitionable over a set of processors; otherwise, a multiprocessor system will not be effective. This paper explores some of the constraints on multiple processor architecture posed by these large applications. In particular, the trade-offs between large numbers of slow processors and small numbers of fast processors are examined. Strategies for partitioning range from partitioning at the language statement level (in-the-small) to partitioning at the program module level (in-the-large). Some examples of partitioning in-the-large are given and a strategy for efficiently executing a partitioned program is explored.

  1. Improved proteomic analysis following trichloroacetic acid extraction of Bacillus anthracis spore proteins.

    PubMed

    Deatherage Kaiser, Brooke L; Wunschel, David S; Sydor, Michael A; Warner, Marvin G; Wahl, Karen L; Hutchison, Janine R

    2015-11-01

    Proteomic analysis of bacterial samples provides valuable information about cellular responses and functions under different environmental pressures. Analysis of cellular proteins is dependent upon efficient extraction from bacterial samples, which can be challenging with increasing complexity and refractory characteristics. While no single method can recover 100% of the bacterial proteins, selected protocols can improve overall protein isolation, peptide recovery, or enrichment for certain classes of proteins. The method presented here is technically simple, does not require specialized equipment such as a mechanical disrupter, and is effective for protein extraction of the particularly challenging sample type of Bacillus anthracis Sterne spores. The ability of trichloroacetic acid (TCA) extraction to isolate proteins from spores and enrich for spore-specific proteins was compared to the traditional mechanical disruption method of bead beating. TCA extraction improved the total average number of proteins identified within a sample as compared to bead beating (547 vs 495, respectively). Further, TCA extraction enriched for 270 spore proteins, including those typically identified by first isolating the spore coat and exosporium layers. Bead beating enriched for 156 spore proteins more typically identified from whole spore proteome analyses. The total average number of proteins identified was equal using TCA or bead beating for easily lysed samples, such as B. anthracis vegetative cells. As with all assays, supplemental methods such as implementation of an alternative preparation method may simplify sample preparation and provide additional insight into the protein biology of the organism being studied.

  2. Modulation of mitochondrial proteome and improved mitochondrial function by biventricular pacing of dyssynchronous failing hearts.

    PubMed

    Agnetti, Giulio; Kaludercic, Nina; Kane, Lesley A; Elliott, Steven T; Guo, Yurong; Chakir, Khalid; Samantapudi, Daya; Paolocci, Nazareno; Tomaselli, Gordon F; Kass, David A; Van Eyk, Jennifer E

    2010-02-01

    Cardiac resynchronization therapy (CRT) improves chamber mechanoenergetics and morbidity and mortality of patients manifesting heart failure with ventricular dyssynchrony; however, little is known about the molecular changes underlying CRT benefits. We hypothesized that mitochondria may play an important role because of their involvement in energy production. Mitochondria isolated from the left ventricle in a canine model of dyssynchronous or resynchronized (CRT) heart failure were analyzed by a classical, gel-based, proteomic approach. Two-dimensional gel electrophoresis revealed that 31 mitochondrial proteins were changed when controlling the false discovery rate at 30%. Key enzymes in anaplerotic pathways, such as pyruvate carboxylation and branched-chain amino acid oxidation, were increased. These concerted changes, along with others, suggested that CRT may increase the pool of Krebs cycle intermediates and fuel oxidative phosphorylation. Nearly 50% of observed changes pertained to subunits of the respiratory chain. The ATP synthase-beta subunit of complex V was less degraded, and modulation of its phosphorylation by CRT was associated with increased formation (2-fold, P=0.004) and specific activity (+20%, P=0.05) of the mature complex. The importance of these modifications was supported by coordinated changes in mitochondrial chaperones and proteases. CRT increased the mitochondrial respiratory control index with tightened coupling when isolated mitochondria were reexposed to substrates for both complex I (glutamate and malate) and complex II (succinate), an effect likely related to ATP synthase subunit modifications and complex quantity and activity. CRT potently affects both the mitochondrial proteome and the performance associated with improved cardiac function.

  3. "Cosmological Parameters from Large Scale Structure"

    NASA Technical Reports Server (NTRS)

    Hamilton, A. J. S.

    2005-01-01

    This grant has provided primary support for graduate student Mark Neyrinck, and some support for the PI and for colleague Nick Gnedin, who helped co-supervise Neyrinck. This award had two major goals: first, to continue to develop and apply methods for measuring galaxy power spectra on large, linear scales, with a view to constraining cosmological parameters; and second, to begin to try to understand galaxy clustering at smaller, nonlinear scales well enough to constrain cosmology from those scales also. Under this grant, the PI and collaborators, notably Max Tegmark, continued to improve their technology for measuring power spectra from galaxy surveys at large, linear scales, and to apply the technology to surveys as the data become available. We believe that our methods are the best in the world. These measurements become the foundation from which we and other groups measure cosmological parameters.

  4. Large Scale CW ECRH Systems: Some considerations

    NASA Astrophysics Data System (ADS)

    Erckmann, V.; Kasparek, W.; Plaum, B.; Lechte, C.; Petelin, M. I.; Braune, H.; Gantenbein, G.; Laqua, H. P.; Lubiako, L.; Marushchenko, N. B.; Michel, G.; Turkin, Y.; Weissgerber, M.

    2012-09-01

    Electron Cyclotron Resonance Heating (ECRH) is a key component in the heating arsenal for next-step fusion devices like W7-X and ITER. These devices are equipped with superconducting coils and are designed to operate steady state. ECRH must thus operate in CW mode with a large flexibility to comply with various physics demands such as plasma start-up, heating and current drive, as well as configuration and MHD control. The request for many different sophisticated applications results in a growing complexity, which is in conflict with the request for high availability, reliability, and maintainability. 'Advanced' ECRH systems must, therefore, comply with both the complex physics demands and operational robustness and reliability. The W7-X ECRH system is the first CW facility of an ITER-relevant size and is used as a test bed for advanced components. Proposals for future developments are presented together with improvements of gyrotrons, transmission components and launchers.

  5. Large-scale preparation of plasmid DNA.

    PubMed

    Heilig, J S; Elbing, K L; Brent, R

    2001-05-01

    Although the need for large quantities of plasmid DNA has diminished as techniques for manipulating small quantities of DNA have improved, occasionally large amounts of high-quality plasmid DNA are desired. This unit describes the preparation of milligram quantities of highly purified plasmid DNA. The first part of the unit describes three methods for preparing crude lysates enriched in plasmid DNA from bacterial cells grown in liquid culture: alkaline lysis, boiling, and Triton lysis. The second part describes four methods for purifying plasmid DNA in such lysates away from contaminating RNA and protein: CsCl/ethidium bromide density gradient centrifugation, polyethylene glycol (PEG) precipitation, anion-exchange chromatography, and size-exclusion chromatography.

  6. Evaluation of drought propagation in an ensemble mean of large-scale hydrological models

    NASA Astrophysics Data System (ADS)

    Van Loon, A. F.; Van Huijgevoort, M. H. J.; Van Lanen, H. A. J.

    2012-07-01

    snow-related droughts. Furthermore, almost no composite droughts were simulated for slowly responding areas, while many multi-year drought events were expected in these systems. We conclude that drought propagation processes are reasonably well reproduced by the ensemble mean of large-scale models in contrasting catchments in Europe, and that some challenges remain in catchments with cold and semi-arid climates and in catchments with large storage in aquifers or lakes. Improvement of drought simulation in large-scale models should focus on a better representation of hydrological processes that are important for drought development, such as evapotranspiration, snow accumulation and melt, and especially storage. Besides the more explicit inclusion of storage (e.g. aquifers) in large-scale models, the parametrisation of storage processes also requires attention, for example through a global-scale dataset on aquifer characteristics.
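
    Drought propagation analyses of this kind typically start from a threshold-based identification of drought events in each hydrological variable. The sketch below shows one common variant (a monthly percentile threshold, with consecutive below-threshold steps grouped into events) on synthetic data; it is not the evaluation code used in the study.

      # Threshold-based drought event identification on a synthetic monthly series.
      import numpy as np

      def drought_events(series, months, percentile=20):
          """Return (start, end) index pairs of below-threshold spells."""
          series, months = np.asarray(series), np.asarray(months)
          thresholds = {m: np.percentile(series[months == m], percentile)
                        for m in range(1, 13)}
          below = series < np.array([thresholds[m] for m in months])
          events, start = [], None
          for i, flag in enumerate(below):
              if flag and start is None:
                  start = i
              elif not flag and start is not None:
                  events.append((start, i - 1))
                  start = None
          if start is not None:
              events.append((start, len(below) - 1))
          return events

      rng = np.random.default_rng(4)
      months = np.tile(np.arange(1, 13), 30)                 # 30 years of monthly data
      discharge = rng.gamma(shape=2.0, scale=50.0, size=months.size)
      print(len(drought_events(discharge, months)), "drought events identified")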

  7. Integrated and Quantitative Proteomics of Human Tumors.

    PubMed

    Yakkioui, Y; Temel, Y; Chevet, E; Negroni, L

    2017-01-01

    Quantitative proteomics represents a powerful approach for the comprehensive analysis of proteins expressed under defined conditions. These properties have been used to investigate the proteome of disease states, including cancer. Applying proteomics to biomarker and therapeutic target identification has become a major subject of study. In the last decades, technical advances in mass spectrometry have increased the capacity for protein identification and quantification. Moreover, the analysis of posttranslational modifications (PTMs), especially phosphorylation, has allowed large-scale identification of biological mechanisms. Even so, increasing evidence indicates that global protein quantification is often insufficient to explain the biology and has posed challenges in identifying new and robust biomarkers. As a consequence, to improve the accuracy of discoveries made using proteomics in human tumors, it is necessary to combine (i) robust and reproducible methods for sample preparation allowing statistical comparison, (ii) PTM analyses in addition to global proteomics for additional levels of knowledge, and (iii) the use of bioinformatics for interpreting protein lists. Herein, we present technical specifics of sample preparation involving isobaric tag labeling, TiO2-based phosphopeptide enrichment and hydrazide-based glycopeptide purification, as well as the key points for the quantitative analysis and interpretation of the protein lists. The method is based on our experience with the analysis of tumors from hepatocellular carcinoma, chondrosarcoma, human embryonic intervertebral disc, and chordoma experiments.

  8. Sheltering in buildings from large-scale outdoor releases

    SciTech Connect

    Chan, W.R.; Price, P.N.; Gadgil, A.J.

    2004-06-01

    Intentional or accidental large-scale airborne toxic releases (e.g. terrorist attacks or industrial accidents) can cause severe harm to nearby communities. Under these circumstances, taking shelter in buildings can be an effective emergency response strategy. Some examples where shelter-in-place was successful at preventing injuries and casualties have been documented [1, 2]. As public education and preparedness are vital to ensure the success of an emergency response, many agencies have prepared documents advising the public on what to do during and after sheltering [3, 4, 5]. In this document, we focus on the role buildings play in providing protection to occupants. The conclusions of this article are: (1) Under most circumstances, shelter-in-place is an effective response against large-scale outdoor releases. This is particularly true for releases of short duration (a few hours or less) and chemicals that exhibit non-linear dose-response characteristics. (2) The building envelope not only restricts the outdoor-indoor air exchange, but can also filter some biological or even chemical agents. Once indoors, the toxic materials can deposit or sorb onto indoor surfaces. All these processes contribute to the effectiveness of shelter-in-place. (3) Tightening of the building envelope and improved filtration can enhance the protection offered by buildings. Common mechanical ventilation systems present in most commercial buildings, however, should be turned off and dampers closed when sheltering from an outdoor release. (4) After the passing of the outdoor plume, some residual contamination will remain indoors. It is therefore important to terminate shelter-in-place in a timely manner to minimize exposure to the toxic materials.
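
    A minimal single-zone sketch of why sheltering works for short releases is given below: indoor concentration follows dC_in/dt = ach*(C_out - C_in) - k_dep*C_in, so a tight envelope (low air-change rate ach) and indoor deposition clip the outdoor peak. The release profile and rate constants are illustrative assumptions, not values from the report.

      # Single-zone indoor concentration model during a short outdoor release.
      import numpy as np

      ach = 0.5 / 3600.0       # air changes per second (0.5 per hour, a tight building)
      k_dep = 0.2 / 3600.0     # indoor deposition/sorption loss rate, 1/s
      dt = 10.0                # time step, s
      t = np.arange(0, 6 * 3600, dt)

      # Outdoor plume: constant for the first hour, zero afterwards.
      c_out = np.where(t < 3600, 1.0, 0.0)

      c_in = np.zeros_like(t)
      for i in range(1, len(t)):
          dc = ach * (c_out[i - 1] - c_in[i - 1]) - k_dep * c_in[i - 1]
          c_in[i] = c_in[i - 1] + dt * dc

      print("peak indoor / peak outdoor:", c_in.max() / c_out.max())
      print("time-integrated indoor / outdoor concentration:", c_in.sum() / c_out.sum())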

  9. Using Web-Based Testing for Large-Scale Assessment.

    ERIC Educational Resources Information Center

    Hamilton, Laura S.; Klein, Stephen P.; Lorie, William

    This paper describes an approach to large-scale assessment that uses tests that are delivered to students over the Internet and that are tailored (adapted) to each student's own level of proficiency. A brief background on large-scale assessment is followed by a description of this new technology and an example. Issues that need to be investigated…

  10. Large Scale Geologic Controls on Hydraulic Stimulation

    NASA Astrophysics Data System (ADS)

    McLennan, J. D.; Bhide, R.

    2014-12-01

    When simulating hydraulic fracturing, the analyst has historically prescribed a single planar fracture. Originally (in the 1950s through the 1970s) this was necessitated by computational restrictions. In the latter part of the twentieth century, hydraulic fracture simulation evolved to incorporate vertical propagation controlled by modulus, fluid loss, and the minimum principal stress. With improvements in software, computational capacity, and recognition that in-situ discontinuities are relevant, fully three-dimensional hydraulic simulation is now becoming possible. Advances in simulation capabilities enable coupling structural geologic data (three-dimensional representations of stresses, natural fractures, and stratigraphy) with decision-making processes for stimulation - volumes, rates, fluid types, completion zones. Without this interaction between simulation capabilities and geological information, low-permeability formation exploitation may linger on the fringes of real economic viability. Comparative simulations have been undertaken in varying structural environments where the stress contrast and the frequency of natural discontinuities cause varying patterns of multiple, hydraulically generated or reactivated flow paths. Stress conditions and the nature of the discontinuities are selected as variables and are used to simulate how fracturing can vary in different structural regimes. The basis of the simulations is commercial distinct element software (Itasca Corporation's 3DEC).

  11. Challenges for large scale ab initio Quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Kent, Paul

    2015-03-01

    Ab initio Quantum Monte Carlo is an electronic structure method that is highly accurate, well suited to large scale computation, and potentially systematically improvable in accuracy. Due to increases in computer power, the method has been applied to systems where established electronic structure methods have difficulty reaching the accuracies desired to inform experiment without empiricism, a necessary step in the design of materials and a helpful step in the improvement of cheaper and less accurate methods. Recent applications include accurate phase diagrams of simple materials through to phenomena in transition metal oxides. Nevertheless there remain significant challenges to achieving a methodology that is robust and systematically improvable in practice, as well as capable of exploiting the latest generation of high-performance computers. In this talk I will describe the current state of the art, recent applications, and several significant challenges for continued improvement. Supported through the Predictive Theory and Modeling for Materials and Chemical Science program by the Office of Basic Energy Sciences (BES), Department of Energy (DOE).

  12. Large-scale Stratospheric Transport Processes

    NASA Technical Reports Server (NTRS)

    Plumb, R. Alan

    2003-01-01

    The PI has undertaken a theoretical analysis of the existence and nature of compact tracer-tracer relationships of the kind observed in the stratosphere, augmented with three-dimensional model simulations of stratospheric tracers (the latter being an extension of modeling work the group did during the SOLVE experiment). This work establishes a rigorous theoretical basis for the existence and shape of these relationships, as well as a quantitative theory of their width and evolution, in terms of the joint tracer-tracer PDF distribution. A paper on this work is almost complete and will soon be submitted to Rev. Geophys. We have analyzed lower stratospheric water in simulations with an isentropic-coordinate version of the MATCH transport model which we recently helped to develop. The three-dimensional structure of lower stratospheric water, in particular, attracted our attention: dry air is, below about 400 K potential temperature, localized in the regions of the west Pacific and equatorial South America. We have been analyzing air trajectories to determine how air passes through the tropopause cold trap. This work is now being completed, and a paper will be submitted to Geophys. Res. Lett. before the end of summer. We are continuing to perform experiments with the 'MATCH' CTM, in both sigma- and entropy-coordinate forms. We earlier found (in collaboration with Dr Natalie Mahowald, and as part of an NSF-funded project) that switching to isentropic coordinates made a substantial improvement to the simulation of the age of stratospheric air. We are now running experiments with near-tropopause sources in both versions of the model, to see if and to what extent the simulation of stratosphere-troposphere transport depends on the model coordinate. Personnel: Research is supervised by the PI, Prof. Alan Plumb. Mr William Heres conducts the tracer modeling work and performs other modeling tasks. Two graduate students, Ms Irene Lee and Mr Michael Ring, have been participating.

  13. Optimal Wind Energy Integration in Large-Scale Electric Grids

    NASA Astrophysics Data System (ADS)

    Albaijat, Mohammad H.

    The major concern in electric grid operation is operating in the most economical and reliable fashion to ensure affordability and continuity of electricity supply. This dissertation investigates the effects of challenges which affect electric grid reliability and economic operation: 1. congestion of transmission lines, 2. transmission line expansion, 3. large-scale wind energy integration, and 4. optimal placement of Phasor Measurement Units (PMUs) for highest electric grid observability. Performing congestion analysis aids in evaluating the required increase of transmission line capacity in electric grids. However, it is necessary to evaluate expansion of transmission line capacity with methods that ensure optimal electric grid operation. Therefore, the expansion of transmission line capacity must enable grid operators to provide low-cost electricity while maintaining reliable operation of the electric grid. Because congestion affects the reliability of delivering power and increases its cost, congestion analysis in electric grid networks is an important subject. Consequently, next-generation electric grids require novel methodologies for studying and managing congestion in electric grids. We suggest a novel method of long-term congestion management in large-scale electric grids. Owing to the complexity and size of transmission systems and the competitive nature of current grid operation, it is important for electric grid operators to determine how much transmission line capacity to add. Traditional questions requiring answers are "where" to add, "how much" transmission line capacity to add, and at "which voltage level". Because of electric grid deregulation, transmission line expansion is more complicated, as it is now open to investors, whose main interest is to generate revenue, to build new transmission lines. Adding new transmission capacity will help the system to relieve transmission system congestion, create

  14. Effective leveraging of targeted search spaces for improving peptide identification in MS/MS based proteomics

    PubMed Central

    Shanmugam, Avinash K.; Nesvizhski, Alexey I.

    2016-01-01

    In shotgun proteomics, peptides are typically identified using database searching which involves scoring acquired tandem mass spectra against peptides derived from standard protein sequence databases such as Uniprot, Refseq, or Ensembl. In this strategy, the sensitivity of peptide identification is known to be affected by the size of the search space. Therefore, creating a targeted sequence database containing only peptides likely to be present in the analyzed sample can be a useful technique for improving the sensitivity of peptide identification. In this study we describe how targeted peptide databases can be created based on the frequency of identification in GPMDB – the largest publicly available repository of peptide and protein identification data. We demonstrate that targeted peptide databases can be easily integrated into existing proteome analysis workflows, and describe a computational strategy for minimizing any loss of peptide identifications arising from potential search space incompleteness in the targeted search spaces. We demonstrate the performance of our workflow using several datasets of varying size and sample complexity. PMID:26569054
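
    A minimal sketch of the idea, under the assumption that peptide identification frequencies have already been exported to a CSV file, is shown below: peptides observed more often than a cutoff are written to a targeted FASTA-style search space. The file names, column names, and cutoff are hypothetical, and GPMDB itself is not queried here.

      # Build a targeted peptide search space by frequency filtering (hypothetical inputs).
      import csv

      def build_targeted_database(frequency_table_csv, output_fasta, min_observations=10):
          """Write peptides observed at least `min_observations` times to a FASTA file."""
          kept = 0
          with open(frequency_table_csv, newline="") as src, open(output_fasta, "w") as out:
              for row in csv.DictReader(src):   # expects columns: peptide, observations
                  if int(row["observations"]) >= min_observations:
                      kept += 1
                      out.write(f">pep_{kept}\n{row['peptide']}\n")
          return kept

      # Hypothetical usage:
      # n = build_targeted_database("gpmdb_peptide_frequencies.csv", "targeted_peptides.fasta")
      # print(n, "peptides retained for the targeted search space")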

  15. Improving large-scale groundwater models by considering fossil gradients

    NASA Astrophysics Data System (ADS)

    Schulz, Stephan; Walther, Marc; Michelsen, Nils; Rausch, Randolf; Dirks, Heiko; Al-Saud, Mohammed; Merz, Ralf; Kolditz, Olaf; Schüth, Christoph

    2017-05-01

    Due to the limited availability of surface water, many arid to semi-arid countries rely on their groundwater resources. Despite the quasi-absence of present-day replenishment, some of these groundwater bodies contain large amounts of water, which was recharged during pluvial periods of the Late Pleistocene to Early Holocene. These mostly fossil, non-renewable resources require different management schemes from those usually applied to renewable systems. Fossil groundwater is a finite resource, and its withdrawal implies mining of aquifer storage reserves. Although they receive almost no recharge, some of these systems show notable hydraulic gradients and a flow towards their discharge areas, even without pumping. As a result, they have more discharge than recharge and hence are not in steady state, which makes their modelling, in particular the calibration, very challenging. In this study, we introduce a new calibration approach composed of four steps: (i) estimating the fossil discharge component, (ii) determining the origin of fossil discharge, (iii) fitting the hydraulic conductivity with a pseudo steady-state model, and (iv) fitting the storage capacity with a transient model by reconstructing head drawdown induced by pumping activities. Finally, we test the relevance of our approach and evaluate the effect of considering or ignoring fossil gradients on aquifer parameterization for the Upper Mega Aquifer (UMA) on the Arabian Peninsula.
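
    As a toy illustration of step (iii), the sketch below fits a single hydraulic conductivity so that a pseudo steady-state model, here reduced to one-dimensional Darcy flow with the fossil discharge held fixed, reproduces observed heads. The geometry, discharge, and head data are invented and far simpler than the UMA model.

      # Fit hydraulic conductivity K from observed heads under a fixed fossil discharge.
      import numpy as np
      from scipy.optimize import least_squares

      area = 7.0e8          # aquifer cross-sectional area, m^2 (hypothetical: 700 km x 1 km)
      discharge = 15.0      # fossil discharge estimated in step (i), m^3/s (hypothetical)
      x_obs = np.array([0.0, 50e3, 120e3, 200e3, 300e3])        # well positions, m
      h_obs = np.array([210.0, 198.0, 183.0, 166.0, 144.0])     # observed heads, m

      def residuals(log10_k):
          k = 10.0 ** log10_k[0]                                # hydraulic conductivity, m/s
          h_model = h_obs[0] - discharge / (k * area) * x_obs   # Darcy: dh/dx = -Q/(K*A)
          return h_model - h_obs

      fit = least_squares(residuals, x0=[-4.0])
      print("calibrated K =", 10.0 ** fit.x[0], "m/s")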

  16. The role of large-scale, extratropical dynamics in climate change

    SciTech Connect

    Shepherd, T.G.

    1994-02-01

    The climate modeling community has focused recently on improving our understanding of certain processes, such as cloud feedbacks and ocean circulation, that are deemed critical to climate-change prediction. Although attention to such processes is warranted, emphasis on these areas has diminished a general appreciation of the role played by the large-scale dynamics of the extratropical atmosphere. Lack of interest in extratropical dynamics may reflect the assumption that these dynamical processes are a non-problem as far as climate modeling is concerned, since general circulation models (GCMs) calculate motions on this scale from first principles. Nevertheless, serious shortcomings in our ability to understand and simulate large-scale dynamics exist. Partly due to a paucity of standard GCM diagnostic calculations of large-scale motions and their transports of heat, momentum, potential vorticity, and moisture, a comprehensive understanding of the role of large-scale dynamics in GCM climate simulations has not been developed. Uncertainties remain in our understanding and simulation of large-scale extratropical dynamics and their interaction with other climatic processes, such as cloud feedbacks, large-scale ocean circulation, moist convection, air-sea interaction and land-surface processes. To address some of these issues, the 17th Stanstead Seminar was convened at Bishop's University in Lennoxville, Quebec. The purpose of the Seminar was to promote discussion of the role of large-scale extratropical dynamics in global climate change. Abstracts of the talks are included in this volume. On the basis of these talks, several key issues emerged concerning large-scale extratropical dynamics and their climatic role. Individual records are indexed separately for the database.

  17. Proteome changes underpin improved meat quality and yield of chickens (Gallus gallus) fed the probiotic Enterococcus faecium.

    PubMed

    Zheng, Aijuan; Luo, Jianjie; Meng, Kun; Li, Jianke; Zhang, Shu; Li, Ke; Liu, Guohua; Cai, Huiyi; Bryden, Wayne L; Yao, Bin

    2014-12-23

    Supplementation of broiler chicken diets with probiotics may improve carcass characteristics and meat quality. However, the underlying molecular mechanism remains unclear. In the present study, 2D-DIGE-based proteomics was employed to investigate the proteome changes associated with improved carcass traits and meat quality of Arbor Acres broilers (Gallus gallus) fed the probiotic Enterococcus faecium. The probiotic significantly increased meat colour, water holding capacity and pH of pectoral muscle but decreased abdominal fat content. These meat quality changes were related to the altered abundance of 22 proteins in the pectoral muscle following E. faecium feeding. Of these, 17 proteins have central roles in regulating meat quality due to their biological interaction network. Altered cytoskeletal and chaperone protein expression also contributes to improved water holding capacity and colour of meat, which suggests that upregulation of chaperone proteins maintains cell integrity and prevents moisture loss by enhancing folding and recovery of the membrane and cytoskeletal proteins. The down-regulation of β-enolase and pyruvate kinase muscle isozymes suggests roles in increasing the pH of meat by decreasing the production of lactic acid. The validity of the proteomics results was further confirmed by qPCR. This study reveals that improved meat quality of broilers fed probiotics is triggered by proteome alterations (especially the glycolytic proteins), and provides a new insight into the mechanism by which probiotics improve poultry production.

  18. Distribution probability of large-scale landslides in central Nepal

    NASA Astrophysics Data System (ADS)

    Timilsina, Manita; Bhandary, Netra P.; Dahal, Ranjan Kumar; Yatabe, Ryuichi

    2014-12-01

    Large-scale landslides in the Himalaya are defined as huge, deep-seated landslide masses that occurred in the geological past. They are widely distributed in the Nepal Himalaya. The steep topography and high local relief provide high potential for such failures, whereas the dynamic geology and adverse climatic conditions play a key role in the occurrence and reactivation of such landslides. The major geoscientific problems related to such large-scale landslides are 1) difficulties in their identification and delineation, 2) sources of small-scale failures, and 3) reactivation. Only a few scientific publications have been published concerning large-scale landslides in Nepal. In this context, the identification and quantification of large-scale landslides and their potential distribution are crucial. Therefore, this study explores the distribution of large-scale landslides in the Lesser Himalaya. It provides simple guidelines to identify large-scale landslides based on their typical characteristics and using a 3D schematic diagram. Based on the spatial distribution of landslides, geomorphological/geological parameters and logistic regression, an equation of large-scale landslide distribution is also derived. The equation is validated by applying it to another area, for which the area under the receiver operating curve of the landslide distribution probability is 0.699, and the distribution probability values could explain > 65% of existing landslides. Therefore, the regression equation can be applied to areas of the Lesser Himalaya of central Nepal with similar geological and geomorphological conditions.
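    The regression step described above can be illustrated with a minimal, hypothetical sketch: a logistic model fitted to geomorphological predictors and evaluated on a held-out area via the area under the receiver operating curve. The predictor names, coefficients and synthetic data below are assumptions for illustration only, not the study's actual model.

```python
# Minimal sketch: logistic regression for landslide distribution probability,
# evaluated with the area under the ROC curve on a held-out region.
# Predictors (slope, relief, fault distance) and the synthetic data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.uniform(5, 45, n),      # slope angle (degrees)
    rng.uniform(100, 2000, n),  # local relief (m)
    rng.uniform(0, 20, n),      # distance to mapped fault (km)
])
# Synthetic "truth": steeper, higher-relief cells closer to faults fail more often.
logit = 0.08 * X[:, 0] + 0.001 * X[:, 1] - 0.15 * X[:, 2] - 3.0
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

model = LogisticRegression(max_iter=1000).fit(X[:1500], y[:1500])  # "calibration" area
prob = model.predict_proba(X[1500:])[:, 1]                         # "validation" area
print("AUC on held-out area:", round(roc_auc_score(y[1500:], prob), 3))
```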

  19. Organised convection embedded in a large-scale flow

    NASA Astrophysics Data System (ADS)

    Naumann, Ann Kristin; Stevens, Bjorn; Hohenegger, Cathy

    2017-04-01

    In idealised simulations of radiative convective equilibrium, convection aggregates spontaneously from randomly distributed convective cells into organized mesoscale convection despite homogeneous boundary conditions. Although these simulations apply very idealised setups, the process of self-aggregation is thought to be relevant for the development of tropical convective systems. One feature that idealised simulations usually neglect is the occurrence of a large-scale background flow. In the tropics, organised convection is embedded in a large-scale circulation system, which advects convection in the along-wind direction and alters near-surface convergence in the convective areas. A large-scale flow also modifies the surface fluxes, which are expected to be enhanced upwind of the convective area if a large-scale flow is applied. Convective clusters that are embedded in a large-scale flow therefore experience an asymmetric component of the surface fluxes, which influences the development and the pathway of a convective cluster. In this study, we use numerical simulations with explicit convection and add a large-scale flow to the established setup of radiative convective equilibrium. We then analyse how aggregated convection evolves when being exposed to wind forcing. The simulations suggest that convective line structures are more prevalent if a large-scale flow is present and that convective clusters move considerably slower than advection by the large-scale flow would suggest. We also study the asymmetric component of convective aggregation due to enhanced surface fluxes, and discuss the pathway and speed of convective clusters as a function of the large-scale wind speed.

  20. The expanding proteome of the molecular chaperone HSP90

    PubMed Central

    Samant, Rahul S; Clarke, Paul A

    2012-01-01

    The molecular chaperone HSP90 maintains the activity and stability of a diverse set of “client” proteins that play key roles in normal and disease biology. Around 20 HSP90 inhibitors that deplete the oncogenic clientele have entered clinical trials for cancer. However, the full extent of the HSP90-dependent proteome, which encompasses not only clients but also proteins modulated by downstream transcriptional responses, is still incompletely characterized and poorly understood. Earlier large-scale efforts to define the HSP90 proteome have been valuable but are incomplete because of limited technical sensitivity. Here, we discuss previous large-scale surveys of proteome perturbations induced by HSP90 inhibitors in light of a significant new study using state-of-the-art stable isotope labeling by amino acids in cell culture (SILAC) technology combined with more sensitive high-resolution mass spectrometry (MS) that extends the catalog of proteomic changes in inhibitor-treated cancer cells. Among wide-ranging changes, major functional responses include downregulation of protein kinase activity and the DNA damage response alongside upregulation of the protein degradation machinery. Despite this improved proteomic coverage, there was surprisingly little overlap with previous studies. This may be due in part to technical issues but is likely also due to the variability of the HSP90 proteome with the inhibitor conditions used, the cancer cell type and the genetic status of client proteins. We suggest future proteomic studies to address these factors, to help distinguish client protein components from indirect transcriptional components and to address other key questions in fundamental and translational HSP90 research. Such studies should also reveal new biomarkers for patient selection and novel targets for therapeutic intervention. PMID:22421145

  1. Food peptidomics: large scale analysis of small bioactive peptides--a pilot study.

    PubMed

    Lahrichi, Sabine L; Affolter, Michael; Zolezzi, Irma Silva; Panchaud, Alexandre

    2013-08-02

    Food peptidomics deals in part with the identification and quantification of nutritionally relevant peptides, which are called bioactive peptides. This category comprises large, medium and small peptides; however, small peptides (2-6 amino acids) represent by far the largest group. Such molecules sit at the interface of the proteomics and small-molecule worlds. The purpose of this study was to evaluate the feasibility of developing an LC-MS/MS-based method to measure such small peptides at a large scale that is representative of the hundreds of known small bioactive peptides. In order to do so, we selected a peptide set that is very complex yet homogeneous in terms of chemical and physical properties. This peptide set comprised only di-, tri- and tetrapeptides composed of the three branched-chain amino acids (valine, leucine and isoleucine). Results showed that at least 60% of these 117 peptides can be uniquely identified although many are isobaric and co-eluting. Moreover, identical results were obtained when the peptides were spiked into a complex matrix, i.e. hydrolyzed whey protein. In conclusion, these results support the feasibility of a large-scale approach and open the door to further development for all potential small bioactive peptides known so far. Bioactive peptides are a key category of molecules for functional food application. Most known bioactive peptides are small (less than 5 amino acids) and hence represent a challenge in terms of analysis when using current proteomics techniques. Therefore, developing the food peptidomics field through high-throughput, large-scale assays for these molecules will be essential for future research in this area. Copyright © 2013 Elsevier B.V. All rights reserved.
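    As a side note on the numbers above: di-, tri- and tetrapeptides over the three branched-chain amino acids give 3^2 + 3^3 + 3^4 = 117 sequences, and because leucine and isoleucine share the same residue mass, most of them collapse onto a small set of masses. The short sketch below (assuming standard monoisotopic residue masses) makes this explicit.

```python
# Minimal sketch: enumerate the 117 di-, tri- and tetrapeptides built from the
# branched-chain amino acids and compute their monoisotopic masses, illustrating
# why so many of them are isobaric (Leu and Ile share the same residue mass).
from itertools import product

residue_mass = {"V": 99.06841, "L": 113.08406, "I": 113.08406}  # monoisotopic residue masses
water = 18.01056  # mass of H2O added to the residue sum for a free peptide

peptides = [
    "".join(p)
    for length in (2, 3, 4)
    for p in product("VLI", repeat=length)
]
print(len(peptides))  # 9 + 27 + 81 = 117

masses = {}
for pep in peptides:
    m = round(sum(residue_mass[aa] for aa in pep) + water, 4)
    masses.setdefault(m, []).append(pep)

isobaric_groups = {m: g for m, g in masses.items() if len(g) > 1}
print("distinct masses:", len(masses), "- isobaric groups:", len(isobaric_groups))
```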

  2. Methods for Ranking and Selection in Large-Scale Inference

    NASA Astrophysics Data System (ADS)

    Henderson, Nicholas C.

    This thesis addresses two distinct problems: one related to ranking and selection for large-scale inference and another related to latent class modeling of longitudinal count data. The first part of the thesis focuses on the problem of identifying leading measurement units from a large collection with a focus on settings with differing levels of estimation precision across measurement units. The main approach presented is a Bayesian ranking procedure that populates the list of top units in a way that maximizes the expected overlap between the true and reported top lists for all list sizes. This procedure relates unit-specific posterior upper tail probabilities with their empirical distribution to yield a ranking variable. It discounts high-variance units less than other common methods and thus achieves improved operating characteristics in the models considered. In the second part of the thesis, we introduce and describe a finite mixture model for longitudinal count data where, conditional on the class label, the subject-specific observations are assumed to arise from a discrete autoregressive process. This approach offers notable computational advantages over related methods due to the within-class closed form of the likelihood function and, as we describe, has a within-class correlation structure which improves model identifiability. We also outline computational strategies for estimating model parameters, and we describe a novel measure of the underlying separation between latent classes and discuss its relation to posterior classification.
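    The ranking idea above can be sketched with a toy example: units are ordered by a posterior upper-tail probability rather than by their point estimates, so imprecisely estimated units are discounted. The normal posteriors and the reference threshold below are illustrative assumptions, not the thesis's actual construction.

```python
# Toy sketch: rank units by a posterior upper-tail probability instead of the raw
# estimate, so that noisy (high-variance) units are discounted. The normal
# posteriors and the reference threshold are illustrative assumptions only.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n_units = 10
post_mean = rng.normal(0.0, 1.0, n_units)          # posterior means
post_sd = rng.uniform(0.2, 2.0, n_units)           # differing estimation precision
threshold = 1.0                                    # "leading unit" reference value

tail_prob = 1.0 - norm.cdf(threshold, loc=post_mean, scale=post_sd)

by_mean = np.argsort(-post_mean)
by_tail = np.argsort(-tail_prob)
print("top 3 by posterior mean       :", by_mean[:3])
print("top 3 by upper-tail probability:", by_tail[:3])
```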

  3. Improving data quality and preserving HCD-generated reporter ions with EThcD for isobaric tag-based quantitative proteomics and proteome-wide PTM studies.

    PubMed

    Yu, Qing; Shi, Xudong; Feng, Yu; Kent, K Craig; Li, Lingjun

    2017-05-22

    Mass spectrometry (MS)-based isobaric labeling has undergone rapid development in recent years due to its capability for high throughput quantitation. Apart from its originally designed use with collision-induced dissociation (CID) and higher-energy collisional dissociation (HCD), the isobaric tagging technique can also work with electron-transfer dissociation (ETD), which provides complementarity to CID and is preferred in sequencing peptides with post-translational modifications (PTMs). However, ETD suffers from long reaction times, a reduced duty cycle and a bias against peptides with lower charge states. In addition, the common fragmentation mechanism in ETD results in altered reporter ion production, decreased multiplexing capability, and even loss of quantitation capability for some of the isobaric tags, including custom-designed dimethyl leucine (DiLeu) tags. Here, we demonstrate a novel electron-transfer/higher-energy collision dissociation (EThcD) approach that preserves original reporter ion channels, mitigates bias against lower charge states, improves sensitivity, and significantly improves data quality for quantitative proteomics and proteome-wide PTM studies. Systematic optimization was performed to achieve a balance between data quality and sensitivity. We provide direct comparison of EThcD with ETD and HCD for DiLeu- and TMT-labeled HEK cell lysate and IMAC enriched phosphopeptides. Results demonstrate improved data quality and phosphorylation localization accuracy while preserving sufficient reporter ion production. Biological studies were performed to investigate phosphorylation changes in a mouse vascular smooth muscle cell line treated with four different conditions. Overall, EThcD exhibits superior performance compared to conventional ETD and offers distinct advantages compared to HCD in isobaric labeling based quantitative proteomics and quantitative PTM studies. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Application of an improved proteomics method for abundant protein cleanup: molecular and genomic mechanisms study in plant defense.

    PubMed

    Zhang, Yixiang; Gao, Peng; Xing, Zhuo; Jin, Shumei; Chen, Zhide; Liu, Lantao; Constantino, Nasie; Wang, Xinwang; Shi, Weibing; Yuan, Joshua S; Dai, Susie Y

    2013-11-01

    High-abundance proteins such as ribulose-1,5-bisphosphate carboxylase/oxygenase (Rubisco) pose a persistent challenge for whole-proteome characterization using shotgun proteomics. To address this challenge, we developed and evaluated Polyethyleneimine Assisted Rubisco Cleanup (PARC) as a new method combining both abundant protein removal and fractionation. The new approach was applied to a plant-insect interaction study to validate the platform and investigate mechanisms for plant defense against herbivorous insects. Our results indicated that PARC can effectively remove Rubisco, improve protein identification, and discover almost three times more differentially regulated proteins. The significantly enhanced shotgun proteomics performance translated into in-depth proteomic and molecular mechanisms for plant-insect interaction, in which carbon re-distribution was shown to play an essential role. Moreover, the transcriptomic validation also confirmed the reliability of PARC analysis. Finally, functional studies were carried out for two differentially regulated genes as revealed by PARC analysis. Insect resistance was induced by over-expressing either jacalin-like or cupin-like genes in rice. The results further highlighted that PARC can serve as an effective strategy for proteomics analysis and gene discovery.

  5. Statistical control of peptide and protein error rates in large-scale targeted data-independent acquisition analyses.

    PubMed

    Rosenberger, George; Bludau, Isabell; Schmitt, Uwe; Heusel, Moritz; Hunter, Christie L; Liu, Yansheng; MacCoss, Michael J; MacLean, Brendan X; Nesvizhskii, Alexey I; Pedrioli, Patrick G A; Reiter, Lukas; Röst, Hannes L; Tate, Stephen; Ting, Ying S; Collins, Ben C; Aebersold, Ruedi

    2017-09-01

    Liquid chromatography coupled to tandem mass spectrometry (LC-MS/MS) is the main method for high-throughput identification and quantification of peptides and inferred proteins. Within this field, data-independent acquisition (DIA) combined with peptide-centric scoring, as exemplified by the technique SWATH-MS, has emerged as a scalable method to achieve deep and consistent proteome coverage across large-scale data sets. We demonstrate that statistical concepts developed for discovery proteomics based on spectrum-centric scoring can be adapted to large-scale DIA experiments that have been analyzed with peptide-centric scoring strategies, and we provide guidance on their application. We show that optimal tradeoffs between sensitivity and specificity require careful considerations of the relationship between proteins in the samples and proteins represented in the spectral library. We propose the application of a global analyte constraint to prevent the accumulation of false positives across large-scale data sets. Furthermore, to increase the quality and reproducibility of published proteomic results, well-established confidence criteria should be reported for the detected peptide queries, peptides and inferred proteins.
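    As a generic illustration of the error-rate control discussed above (and not of the specific peptide-centric pipeline), the sketch below estimates FDR by decoy counting and converts it to q-values; the score distributions are simulated assumptions.

```python
# Generic sketch of decoy-counting FDR estimation (not the specific pipeline of the
# record above): sort target and decoy scores together, estimate FDR at each score
# cutoff as (#decoys >= cutoff) / (#targets >= cutoff), then convert to q-values.
import numpy as np

rng = np.random.default_rng(2)
target_scores = np.concatenate([rng.normal(3.0, 1.0, 800),   # assumed true positives
                                rng.normal(0.0, 1.0, 200)])  # assumed false targets
decoy_scores = rng.normal(0.0, 1.0, 1000)

scores = np.concatenate([target_scores, decoy_scores])
is_decoy = np.concatenate([np.zeros_like(target_scores, dtype=bool),
                           np.ones_like(decoy_scores, dtype=bool)])

order = np.argsort(-scores)                 # best score first
decoy_cum = np.cumsum(is_decoy[order])
target_cum = np.cumsum(~is_decoy[order])
fdr = decoy_cum / np.maximum(target_cum, 1)
qvalue = np.minimum.accumulate(fdr[::-1])[::-1]   # enforce monotonicity

n_accepted = int(np.sum((qvalue <= 0.01) & ~is_decoy[order]))
print("targets accepted at 1% FDR:", n_accepted)
```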

  6. Electrospray ionization in concentrated acetonitrile vapor improves the performance of mass spectrometry for proteomic analyses.

    PubMed

    Chen, Jin; Wang, Fangjun; Liu, Zheyi; Liu, Jing; Zhu, Yixin; Zhang, Yukui; Zou, Hanfa

    2017-02-03

    Suppressing background interferences and enhancing analyte signals are long-term goals in high-performance electrospray ionization mass spectrometry (ESI-MS) analyses. We observed that performing electrospray in the presence of a concentrated acetonitrile atmosphere suppresses background interferences and enhances peptide signals. An enclosed nanoESI source was utilized to provide a stable atmosphere of concentrated acetonitrile vapor for high performance ESI-MS analyses. The median MS signal intensity increased by 5 times for a set of 23 BSA tryptic peptides in direct ESI-MS analysis. Further, the number of reproducibly and precisely quantified peptides could be improved by 67% in six replicate label-free quantitative proteome analyses by this strategy.

  7. The Status of Large-Scale Assessment in the Pacific Region. REL Technical Brief. REL 2008-No. 003

    ERIC Educational Resources Information Center

    Ryan, Jennifer; Keir, Scott

    2008-01-01

    This technical brief describes the large-scale assessment measures and practices used in the jurisdictions served by the Pacific Regional Educational Laboratory. The need for effective large-scale assessment was identified as a major priority for improving student achievement in the Pacific Region jurisdictions: American Samoa, Guam, Hawaii, the…

  8. Needs, opportunities, and options for large scale systems research

    SciTech Connect

    Thompson, G.L.

    1984-10-01

    The Office of Energy Research was recently asked to perform a study of Large Scale Systems in order to facilitate the development of a true large systems theory. It was decided to ask experts in the fields of electrical engineering, chemical engineering and manufacturing/operations research for their ideas concerning large scale systems research. The author was asked to distribute a questionnaire among these experts to find out their opinions concerning recent accomplishments and future research directions in large scale systems research. He was also requested to convene a conference which included three experts in each area as panel members to discuss the general area of large scale systems research. The conference was held on March 26-27, 1984, in Pittsburgh with nine panel members and 15 other attendees. The present report is a summary of the ideas presented and the recommendations proposed by the attendees.

  9. Modified gravity and large scale flows, a review

    NASA Astrophysics Data System (ADS)

    Mould, Jeremy

    2017-02-01

    Large scale flows have been a challenging feature of cosmography ever since galaxy scaling relations came on the scene 40 years ago. The next generation of surveys will offer a serious test of the standard cosmology.

  10. Amplification of large-scale magnetic field in nonhelical magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Kumar, Rohit; Verma, Mahendra K.

    2017-09-01

    It is typically assumed that the kinetic and magnetic helicities play a crucial role in the growth of the large-scale dynamo. In this paper, we demonstrate that helicity is not essential for the amplification of the large-scale magnetic field. For this purpose, we perform a nonhelical magnetohydrodynamic (MHD) simulation and show that the large-scale magnetic field can grow in nonhelical MHD when random external forcing is employed at a scale of 1/10 the box size. The energy fluxes and shell-to-shell transfer rates computed using the numerical data show that the large-scale magnetic energy grows due to the energy transfers from the velocity field at the forcing scales.

  11. A systematic model of the LC-MS proteomics pipeline

    PubMed Central

    2012-01-01

    Motivation: Mass spectrometry is a complex technique used for large-scale protein profiling with clinical and pharmaceutical applications. While individual components in the system have been studied extensively, little work has been done to integrate the various modules and evaluate them from a systems point of view. Results: In this work, we investigate this problem by putting together the different modules in a typical proteomics workflow, in order to capture and analyze key factors that impact the number of identified peptides and quantified proteins, protein quantification error, differential expression results, and classification performance. The proposed proteomics pipeline model can be used to optimize the workflow as well as to pinpoint critical bottlenecks worth investing time and resources into for improving performance. Using the model-based approach proposed here, one can study systematically the critical problem of proteomic biomarker discovery, by means of simulation using ground-truthed synthetic MS data. PMID:23134670
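    A toy simulation in the spirit of such a pipeline model (and not the published model itself) is sketched below: ground-truthed protein abundances, an assumed abundance-dependent peptide detection curve, and multiplicative quantification noise yield the kinds of outputs the record mentions, such as counts of identified peptides and quantified proteins and a quantification error metric.

```python
# Illustrative sketch (not the published model): simulate a simplified LC-MS pipeline
# with ground-truthed protein abundances, abundance-dependent peptide detection and
# multiplicative quantification noise, then report identification and error metrics.
import numpy as np

rng = np.random.default_rng(3)
n_proteins, peptides_per_protein = 500, 5
true_abundance = 10 ** rng.uniform(3, 7, n_proteins)        # ground truth (arbitrary units)

# Peptide detection probability rises with log-abundance (assumed response curve).
log_a = np.log10(true_abundance)
p_detect = 1.0 / (1.0 + np.exp(-(log_a - 4.5)))
detected = rng.random((n_proteins, peptides_per_protein)) < p_detect[:, None]

# Quantification: mean of detected peptide intensities with multiplicative noise.
noise = rng.lognormal(mean=0.0, sigma=0.2, size=detected.shape)
peptide_intensity = true_abundance[:, None] * noise
est = np.array([row[m].mean() if m.any() else np.nan
                for row, m in zip(peptide_intensity, detected)])

quantified = ~np.isnan(est)
rel_error = np.abs(est[quantified] - true_abundance[quantified]) / true_abundance[quantified]
print("identified peptides:", int(detected.sum()),
      "| quantified proteins:", int(quantified.sum()),
      "| median relative error:", round(float(np.median(rel_error)), 3))
```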

  12. An Adaptive Multiscale Finite Element Method for Large Scale Simulations

    DTIC Science & Technology

    2015-09-28

    Report AFRL-AFOSR-VA-TR-2015-0305 (14-07-2015), Carlos Duarte, University of Illinois, Champaign: An Adaptive Multiscale Generalized Finite Element Method for Large Scale Simulations.

  13. Large-scale studies of marked birds in North America

    USGS Publications Warehouse

    Tautin, J.; Metras, L.; Smith, G.

    1999-01-01

    The first large-scale, co-operative, studies of marked birds in North America were attempted in the 1950s. Operation Recovery, which linked numerous ringing stations along the east coast in a study of autumn migration of passerines, and the Preseason Duck Ringing Programme in prairie states and provinces, conclusively demonstrated the feasibility of large-scale projects. The subsequent development of powerful analytical models and computing capabilities expanded the quantitative potential for further large-scale projects. Monitoring Avian Productivity and Survivorship, and Adaptive Harvest Management are current examples of truly large-scale programmes. Their exemplary success and the availability of versatile analytical tools are driving changes in the North American bird ringing programme. Both the US and Canadian ringing offices are modifying operations to collect more and better data to facilitate large-scale studies and promote a more project-oriented ringing programme. New large-scale programmes such as the Cornell Nest Box Network are on the horizon.

  14. A study of MLFMA for large-scale scattering problems

    NASA Astrophysics Data System (ADS)

    Hastriter, Michael Larkin

    This research is centered on computational electromagnetics with a focus on solving large-scale problems accurately in a timely fashion using first-principle physics. Error control of the translation operator in 3-D is shown. A parallel implementation of the multilevel fast multipole algorithm (MLFMA) was studied in terms of parallel efficiency and scaling. The large-scale scattering program (LSSP), based on the ScaleME library, was used to solve ultra-large-scale problems including a 200λ sphere with 20 million unknowns. As these large-scale problems were solved, techniques were developed to accurately estimate the memory requirements. Careful memory management is needed in order to solve these massive problems. The study of MLFMA in large-scale problems revealed significant errors that stemmed from inconsistencies in constants used by different parts of the algorithm. These were fixed to produce the most accurate data possible for large-scale surface scattering problems. Data were calculated on a missile-like target using both high-frequency methods and MLFMA. These data were compared and analyzed to determine possible strategies to increase data acquisition speed and accuracy through hybridization of multiple computation methods.

  15. Large scale stochastic spatio-temporal modelling with PCRaster

    NASA Astrophysics Data System (ADS)

    Karssenberg, Derek; Drost, Niels; Schmitz, Oliver; de Jong, Kor; Bierkens, Marc F. P.

    2013-04-01

    software from the eScience Technology Platform (eSTeP), developed at the Netherlands eScience Center. This will allow us to scale up to hundreds of machines, with thousands of compute cores. A key requirement is not to change the user experience of the software. PCRaster operations and the use of the Python framework classes should work in a similar manner on machines ranging from a laptop to a supercomputer. This enables a seamless transfer of models from small machines, where model development is done, to large machines used for large-scale model runs. Domain specialists from a large range of disciplines, including hydrology, ecology, sedimentology, and land use change studies, currently use the PCRaster Python software within research projects. Applications include global scale hydrological modelling and error propagation in large-scale land use change models. The software runs on MS Windows, Linux operating systems, and OS X.

  16. Large-Scale Spray Releases: Additional Aerosol Test Results

    SciTech Connect

    Daniel, Richard C.; Gauglitz, Phillip A.; Burns, Carolyn A.; Fountain, Matthew S.; Shimskey, Rick W.; Billing, Justin M.; Bontha, Jagannadha R.; Kurath, Dean E.; Jenks, Jeromy WJ; MacFarlan, Paul J.; Mahoney, Lenna A.

    2013-08-01

    One of the events postulated in the hazard analysis for the Waste Treatment and Immobilization Plant (WTP) and other U.S. Department of Energy (DOE) nuclear facilities is a breach in process piping that produces aerosols with droplet sizes in the respirable range. The current approach for predicting the size and concentration of aerosols produced in a spray leak event involves extrapolating from correlations reported in the literature. These correlations are based on results obtained from small engineered spray nozzles using pure liquids that behave as a Newtonian fluid. The narrow ranges of physical properties on which the correlations are based do not cover the wide range of slurries and viscous materials that will be processed in the WTP and in processing facilities across the DOE complex. To expand the data set upon which the WTP accident and safety analyses were based, an aerosol spray leak testing program was conducted by Pacific Northwest National Laboratory (PNNL). PNNL’s test program addressed two key technical areas to improve the WTP methodology (Larson and Allen 2010). The first technical area was to quantify the role of slurry particles in small breaches where slurry particles may plug the hole and prevent high-pressure sprays. The results from an effort to address this first technical area can be found in Mahoney et al. (2012a). The second technical area was to determine aerosol droplet size distribution and total droplet volume from prototypic breaches and fluids, including sprays from larger breaches and sprays of slurries for which literature data are mostly absent. To address the second technical area, the testing program collected aerosol generation data at two scales, commonly referred to as small-scale and large-scale testing. The small-scale testing and resultant data are described in Mahoney et al. (2012b), and the large-scale testing and resultant data are presented in Schonewill et al. (2012). In tests at both scales, simulants were used

  17. Large-scale simulations of layered double hydroxide nanocomposite materials

    NASA Astrophysics Data System (ADS)

    Thyveetil, Mary-Ann

    Layered double hydroxides (LDHs) have the ability to intercalate a multitude of anionic species. Atomistic simulation techniques such as molecular dynamics have provided considerable insight into the behaviour of these materials. We review these techniques and recent algorithmic advances which considerably improve the performance of MD applications. In particular, we discuss how the advent of high performance computing and computational grids has allowed us to explore large scale models with considerable ease. Our simulations have been heavily reliant on computational resources on the UK's NGS (National Grid Service), the US TeraGrid and the Distributed European Infrastructure for Supercomputing Applications (DEISA). In order to utilise computational grids we rely on grid middleware to launch, computationally steer and visualise our simulations. We have integrated the RealityGrid steering library into the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS), which has enabled us to perform remote computational steering and visualisation of molecular dynamics simulations on grid infrastructures. We also use the Application Hosting Environment (AHE) in order to launch simulations on remote supercomputing resources, and we show that data transfer rates between local clusters and supercomputing resources can be considerably enhanced by using optically switched networks. We perform large scale molecular dynamics simulations of MgAl-LDHs intercalated with either chloride ions or a mixture of DNA and chloride ions. The systems exhibit undulatory modes, which are suppressed in smaller scale simulations, caused by the collective thermal motion of atoms in the LDH layers. Thermal undulations provide elastic properties of the system including the bending modulus, Young's moduli and Poisson's ratios. To explore the interaction between LDHs and DNA, we use molecular dynamics techniques to perform simulations of double stranded, linear and plasmid DNA up

  18. A large-scale proteogenomics study of apicomplexan pathogens—Toxoplasma gondii and Neospora caninum

    PubMed Central

    Krishna, Ritesh; Xia, Dong; Sanderson, Sanya; Shanmugasundram, Achchuthan; Vermont, Sarah; Bernal, Axel; Daniel-Naguib, Gianluca; Ghali, Fawaz; Brunk, Brian P; Roos, David S; Wastling, Jonathan M; Jones, Andrew R

    2015-01-01

    Proteomics data can supplement genome annotation efforts, for example being used to confirm gene models or correct gene annotation errors. Here, we present a large-scale proteogenomics study of two important apicomplexan pathogens: Toxoplasma gondii and Neospora caninum. We queried proteomics data against a panel of official and alternate gene models generated directly from RNA-Seq data, using several newly generated and some previously published MS datasets for this meta-analysis. We identified a total of 201 996 and 39 953 peptide-spectrum matches for T. gondii and N. caninum, respectively, at a 1% peptide FDR threshold. This equated to the identification of 30 494 distinct peptide sequences and 2921 proteins (matches to official gene models) for T. gondii, and 8911 peptides/1273 proteins for N. caninum following stringent protein-level thresholding. We have also identified 289 and 140 loci for T. gondii and N. caninum, respectively, which mapped to RNA-Seq-derived gene models used in our analysis and apparently absent from the official annotation (release 10 from EuPathDB) of these species. We present several examples in our study where the RNA-Seq evidence can help in correction of the current gene model and can help in discovery of potential new genes. The findings of this study have been integrated into the EuPathDB. The data have been deposited to the ProteomeXchange with identifiers PXD000297 and PXD000298. PMID:25867681

  19. Evaluation of drought propagation in an ensemble mean of large-scale hydrological models

    NASA Astrophysics Data System (ADS)

    Van Loon, A. F.; Van Huijgevoort, M. H. J.; Van Lanen, H. A. J.

    2012-11-01

    snow-related droughts. Furthermore, almost no composite droughts were simulated for slowly responding areas, while many multi-year drought events were expected in these systems. We conclude that most drought propagation processes are reasonably well reproduced by the ensemble mean of large-scale models in contrasting catchments in Europe. Challenges, however, remain in catchments with cold and semi-arid climates and catchments with large storage in aquifers or lakes. This leads to a high uncertainty in hydrological drought simulation at large scales. Improvement of drought simulation in large-scale models should focus on a better representation of hydrological processes that are important for drought development, such as evapotranspiration, snow accumulation and melt, and especially storage. Besides the more explicit inclusion of storage in large-scale models, also parametrisation of storage processes requires attention, for example through a global-scale dataset on aquifer characteristics, improved large-scale datasets on other land characteristics (e.g. soils, land cover), and calibration/evaluation of the models against observations of storage (e.g. in snow, groundwater).

  20. A new large-scale manufacturing platform for complex biopharmaceuticals.

    PubMed

    Vogel, Jens H; Nguyen, Huong; Giovannini, Roberto; Ignowski, Jolene; Garger, Steve; Salgotra, Anil; Tom, Jennifer

    2012-12-01

    Complex biopharmaceuticals, such as recombinant blood coagulation factors, are addressing critical medical needs and represent a growing multibillion-dollar market. For commercial manufacturing of such molecules, which are sometimes inherently unstable, it is important to minimize product residence time in non-ideal milieu in order to obtain acceptable yields and consistently high product quality. Continuous perfusion cell culture allows minimization of residence time in the bioreactor, but also brings unique challenges in product recovery, which requires innovative solutions. In order to maximize yield, process efficiency, facility and equipment utilization, we have developed, scaled up and successfully implemented a new integrated manufacturing platform at commercial scale. This platform consists of a (semi-)continuous cell separation process based on a disposable flow path and integrated with the upstream perfusion operation, followed by membrane chromatography on large-scale adsorber capsules in rapid cycling mode. Implementation of the platform at commercial scale for a new product candidate led to a yield improvement of 40% compared to the conventional process technology, while product quality has been shown to be more consistently high. Over 1,000,000 L of cell culture harvest have been processed with 100% success rate to date, demonstrating the robustness of the new platform process in GMP manufacturing. While membrane chromatography is well established for polishing in flow-through mode, this is its first commercial-scale application for bind/elute chromatography in the biopharmaceutical industry and demonstrates its potential in particular for manufacturing of potent, low-dose biopharmaceuticals.

  1. Assessing large-scale wildlife responses to human infrastructure development

    PubMed Central

    Torres, Aurora; Jaeger, Jochen A. G.; Alonso, Juan Carlos

    2016-01-01

    Habitat loss and deterioration represent the main threats to wildlife species, and are closely linked to the expansion of roads and human settlements. Unfortunately, large-scale effects of these structures remain generally overlooked. Here, we analyzed the European transportation infrastructure network and found that 50% of the continent is within 1.5 km of transportation infrastructure. We present a method for assessing the impacts from infrastructure on wildlife, based on functional response curves describing density reductions in birds and mammals (e.g., road-effect zones), and apply it to Spain as a case study. The imprint of infrastructure extends over most of the country (55.5% in the case of birds and 97.9% for mammals), with moderate declines predicted for birds (22.6% of individuals) and severe declines predicted for mammals (46.6%). Despite certain limitations, we suggest the approach proposed is widely applicable to the evaluation of effects of planned infrastructure developments under multiple scenarios, and propose an internationally coordinated strategy to update and improve it in the future. PMID:27402749

  2. Large-scale parallel genome assembler over cloud computing environment.

    PubMed

    Das, Arghya Kusum; Koppa, Praveen Kumar; Goswami, Sayan; Platania, Richard; Park, Seung-Jong

    2017-06-01

    The size of high throughput DNA sequencing data has already reached the terabyte scale. To manage this huge volume of data, many downstream sequencing applications started using locality-based computing over different cloud infrastructures to take advantage of elastic (pay as you go) resources at a lower cost. However, the locality-based programming model (e.g. MapReduce) is relatively new. Consequently, developing scalable data-intensive bioinformatics applications using this model and understanding the hardware environment that these applications require for good performance, both require further research. In this paper, we present a de Bruijn graph oriented Parallel Giraph-based Genome Assembler (GiGA), as well as the hardware platform required for its optimal performance. GiGA uses the power of Hadoop (MapReduce) and Giraph (large-scale graph analysis) to achieve high scalability over hundreds of compute nodes by collocating the computation and data. GiGA achieves significantly higher scalability with competitive assembly quality compared to contemporary parallel assemblers (e.g. ABySS and Contrail) over a traditional HPC cluster. Moreover, we show that the performance of GiGA is significantly improved by using an SSD-based private cloud infrastructure over a traditional HPC cluster. We observe that the performance of GiGA on 256 cores of this SSD-based cloud infrastructure closely matches that of 512 cores of a traditional HPC cluster.

  3. Scalable NIC-based reduction on large-scale clusters

    SciTech Connect

    Moody, A.; Fernández, J. C.; Petrini, F.; Panda, Dhabaleswar K.

    2003-01-01

    Many parallel algorithms require efficient support for reduction collectives. Over the years, researchers have developed optimal reduction algorithms by taking into account system size, data size, and the complexities of reduction operations. However, all of these algorithms have assumed that the reduction processing takes place on the host CPU. Modern Network Interface Cards (NICs) sport programmable processors with substantial memory and thus introduce a fresh variable into the equation. This raises the following interesting challenge: can we take advantage of modern NICs to implement fast reduction operations? In this paper, we take on this challenge in the context of large-scale clusters. Through experiments on the 960-node, 1920-processor ASCI Linux Cluster (ALC) located at the Lawrence Livermore National Laboratory, we show that NIC-based reductions indeed perform with reduced latency and improved consistency over host-based algorithms for the common case, and that these benefits scale as the system grows. In the largest configuration tested--1812 processors--our NIC-based algorithm can sum a single-element vector in 73 microseconds with 32-bit integers and in 118 microseconds with 64-bit floating-point numbers. These results represent an improvement of 121% and 39%, respectively, with respect to the production-level MPI library.
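    For readers unfamiliar with reduction collectives, the sketch below shows what such a collective computes and why it completes in about log2(P) communication rounds on P processors; it illustrates only the binomial-tree reduction pattern, not the NIC-offloaded implementation evaluated in the paper.

```python
# Illustration of what a reduction collective computes: a binomial-tree sum over P
# "processors" completes in ceil(log2(P)) communication rounds. This sketches the
# collective itself, not the NIC-offloaded implementation described above.
import math

def tree_reduce(values):
    """Sum one value per rank using a binomial tree; returns (result, rounds)."""
    vals = list(values)
    p = len(vals)
    rounds = 0
    step = 1
    while step < p:
        for rank in range(0, p, 2 * step):        # receiving ranks this round
            partner = rank + step
            if partner < p:
                vals[rank] += vals[partner]       # partner "sends" to rank
        step *= 2
        rounds += 1
    return vals[0], rounds

result, rounds = tree_reduce(range(1812))          # one element per processor
print(result, rounds, math.ceil(math.log2(1812)))  # 1640766 11 11
```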

  4. Open TG-GATEs: a large-scale toxicogenomics database.

    PubMed

    Igarashi, Yoshinobu; Nakatsu, Noriyuki; Yamashita, Tomoya; Ono, Atsushi; Ohno, Yasuo; Urushidani, Tetsuro; Yamada, Hiroshi

    2015-01-01

    Toxicogenomics focuses on assessing the safety of compounds using gene expression profiles. Gene expression signatures from large toxicogenomics databases are expected to perform better than small databases in identifying biomarkers for the prediction and evaluation of drug safety based on a compound's toxicological mechanisms in animal target organs. Over the past 10 years, the Japanese Toxicogenomics Project consortium (TGP) has been developing a large-scale toxicogenomics database consisting of data from 170 compounds (mostly drugs) with the aim of improving and enhancing drug safety assessment. Most of the data generated by the project (e.g. gene expression, pathology, lot number) are freely available to the public via Open TG-GATEs (Toxicogenomics Project-Genomics Assisted Toxicity Evaluation System). Here, we provide a comprehensive overview of the database, including both gene expression data and metadata, with a description of experimental conditions and procedures used to generate the database. Open TG-GATEs is available from http://toxico.nibio.go.jp/english/index.html.

  5. Planck intermediate results. XLII. Large-scale Galactic magnetic fields

    NASA Astrophysics Data System (ADS)

    Planck Collaboration; Adam, R.; Ade, P. A. R.; Alves, M. I. R.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Bartolo, N.; Battaner, E.; Benabed, K.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bonavera, L.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Boulanger, F.; Bucher, M.; Burigana, C.; Butler, R. C.; Calabrese, E.; Cardoso, J.-F.; Catalano, A.; Chiang, H. C.; Christensen, P. R.; Colombo, L. P. L.; Combet, C.; Couchot, F.; Crill, B. P.; Curto, A.; Cuttaia, F.; Danese, L.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Dickinson, C.; Diego, J. M.; Dolag, K.; Doré, O.; Ducout, A.; Dupac, X.; Elsner, F.; Enßlin, T. A.; Eriksen, H. K.; Ferrière, K.; Finelli, F.; Forni, O.; Frailis, M.; Fraisse, A. A.; Franceschi, E.; Galeotta, S.; Ganga, K.; Ghosh, T.; Giard, M.; Gjerløw, E.; González-Nuevo, J.; Górski, K. M.; Gregorio, A.; Gruppuso, A.; Gudmundsson, J. E.; Hansen, F. K.; Harrison, D. L.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hobson, M.; Hornstrup, A.; Hurier, G.; Jaffe, A. H.; Jaffe, T. R.; Jones, W. C.; Juvela, M.; Keihänen, E.; Keskitalo, R.; Kisner, T. S.; Knoche, J.; Kunz, M.; Kurki-Suonio, H.; Lamarre, J.-M.; Lasenby, A.; Lattanzi, M.; Lawrence, C. R.; Leahy, J. P.; Leonardi, R.; Levrier, F.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; López-Caniego, M.; Lubin, P. M.; Macías-Pérez, J. F.; Maggio, G.; Maino, D.; Mandolesi, N.; Mangilli, A.; Maris, M.; Martin, P. G.; Martínez-González, E.; Masi, S.; Matarrese, S.; Melchiorri, A.; Mennella, A.; Migliaccio, M.; Miville-Deschênes, M.-A.; Moneti, A.; Montier, L.; Morgante, G.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Nørgaard-Nielsen, H. U.; Oppermann, N.; Orlando, E.; Pagano, L.; Pajot, F.; Paladini, R.; Paoletti, D.; Pasian, F.; Perotto, L.; Pettorino, V.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Ponthieu, N.; Pratt, G. W.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Reinecke, M.; Remazeilles, M.; Renault, C.; Renzi, A.; Ristorcelli, I.; Rocha, G.; Rossetti, M.; Roudier, G.; Rubiño-Martín, J. A.; Rusholme, B.; Sandri, M.; Santos, D.; Savelainen, M.; Scott, D.; Spencer, L. D.; Stolyarov, V.; Stompor, R.; Strong, A. W.; Sudiwala, R.; Sunyaev, R.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Valenziano, L.; Valiviita, J.; Van Tent, F.; Vielva, P.; Villa, F.; Wade, L. A.; Wandelt, B. D.; Wehus, I. K.; Yvon, D.; Zacchei, A.; Zonca, A.

    2016-12-01

    Recent models for the large-scale Galactic magnetic fields in the literature have been largely constrained by synchrotron emission and Faraday rotation measures. We use three different but representative models to compare their predicted polarized synchrotron and dust emission with that measured by the Planck satellite. We first update these models to match the Planck synchrotron products using a common model for the cosmic-ray leptons. We discuss the impact on this analysis of the ongoing problems of component separation in the Planck microwave bands and of the uncertain cosmic-ray spectrum. In particular, the inferred degree of ordering in the magnetic fields is sensitive to these systematic uncertainties, and we further show the importance of considering the expected variations in the observables in addition to their mean morphology. We then compare the resulting simulated emission to the observed dust polarization and find that the dust predictions do not match the morphology in the Planck data but underpredict the dust polarization away from the plane. We modify one of the models to roughly match both observables at high latitudes by increasing the field ordering in the thin disc near the observer. Though this specific analysis is dependent on the component separation issues, we present the improved model as a proof of concept for how these studies can be advanced in future using complementary information from ongoing and planned observational projects.

  6. Scalable pattern recognition for large-scale scientific data mining

    SciTech Connect

    Kamath, C.; Musick, R.

    1998-03-23

    Our ability to generate data far outstrips our ability to explore and understand it. The true value of this data lies not in its final size or complexity, but rather in our ability to exploit the data to achieve scientific goals. The data generated by programs such as ASCI have such a large scale that it is impractical to manually analyze, explore, and understand it. As a result, useful information is overlooked, and the potential benefits of increased computational and data gathering capabilities are only partially realized. The difficulties that will be faced by ASCI applications in the near future are foreshadowed by the challenges currently facing astrophysicists in making full use of the data they have collected over the years. For example, among other difficulties, astrophysicists have expressed concern that the sheer size of their data restricts them to looking at very small, narrow portions at any one time. This narrow focus has resulted in the loss of "serendipitous" discoveries which have been so vital to progress in the area in the past. To solve this problem, a new generation of computational tools and techniques is needed to help automate the exploration and management of large scientific data. This whitepaper proposes applying and extending ideas from the area of data mining, in particular pattern recognition, to improve the way in which scientists interact with large, multi-dimensional, time-varying data.

  7. Large-Scale Advanced Prop-Fan (LAP)

    NASA Technical Reports Server (NTRS)

    Degeorge, C. L.

    1988-01-01

    In recent years, considerable attention has been directed toward improving aircraft fuel efficiency. Analytical studies and research with wind tunnel models have demonstrated that the high inherent efficiency of low speed turboprop propulsion systems may now be extended to the Mach 0.8 flight regime of today's commercial airliners. This can be accomplished with a propeller employing a large number of thin, highly swept blades. The term Prop-Fan has been coined to describe such a propulsion system. In 1983 the NASA-Lewis Research Center contracted with Hamilton Standard to design, build and test a near full scale Prop-Fan, designated the Large Scale Advanced Prop-Fan (LAP). This report provides a detailed description of the LAP program. The assumptions and analytical procedures used in the design of Prop-Fan system components are discussed in detail. The manufacturing techniques used in the fabrication of the Prop-Fan are presented. Each of the tests run during the course of the program is also discussed, and the major conclusions derived from them are stated.

  8. Large-scale feature searches of collections of medical imagery

    NASA Astrophysics Data System (ADS)

    Hedgcock, Marcus W.; Karshat, Walter B.; Levitt, Tod S.; Vosky, D. N.

    1993-09-01

    Large scale feature searches of accumulated collections of medical imagery are required for multiple purposes, including clinical studies, administrative planning, epidemiology, teaching, quality improvement, and research. To perform a feature search of large collections of medical imagery, one can either search text descriptors of the imagery in the collection (usually the interpretation), or (if the imagery is in digital format) the imagery itself. At our institution, text interpretations of medical imagery are all available in our VA Hospital Information System. These are downloaded daily into an off-line computer. The text descriptors of most medical imagery are usually formatted as free text, and so require a user friendly database search tool to make searches quick and easy for any user to design and execute. We are tailoring such a database search tool (Liveview), developed by one of the authors (Karshat). To further facilitate search construction, we are constructing (from our accumulated interpretation data) a dictionary of medical and radiological terms and synonyms. If the imagery database is digital, the imagery which the search discovers is easily retrieved from the computer archive. We describe our database search user interface, with examples, and compare the efficacy of computer assisted imagery searches from a clinical text database with manual searches. Our initial work on direct feature searches of digital medical imagery is outlined.

  9. MULTIPLE TESTING VIA FDRL FOR LARGE SCALE IMAGING DATA

    PubMed Central

    Zhang, Chunming; Fan, Jianqing; Yu, Tao

    2010-01-01

    The multiple testing procedure plays an important role in detecting the presence of spatial signals for large scale imaging data. Typically, the spatial signals are sparse but clustered. This paper provides empirical evidence that for a range of commonly used control levels, the conventional FDR procedure can lack the ability to detect statistical significance, even if the p-values under the true null hypotheses are independent and uniformly distributed; more generally, ignoring the neighboring information of spatially structured data will tend to diminish the detection effectiveness of the FDR procedure. This paper first introduces a scalar quantity to characterize the extent to which the “lack of identification phenomenon” (LIP) of the FDR procedure occurs. Second, we propose a new multiple comparison procedure, called FDRL, to accommodate the spatial information of neighboring p-values, via a local aggregation of p-values. Theoretical properties of the FDRL procedure are investigated under weak dependence of p-values. It is shown that the FDRL procedure alleviates the LIP of the FDR procedure, thus substantially facilitating the selection of more stringent control levels. Simulation evaluations indicate that the FDRL procedure improves the detection sensitivity of the FDR procedure with little loss in detection specificity. The computational simplicity and detection effectiveness of the FDRL procedure are illustrated through a real brain fMRI dataset. PMID:21643445
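    The idea of locally aggregating neighbouring p-values before thresholding can be sketched as follows; the window size, the median aggregation and the Benjamini-Hochberg step are illustrative choices and not the exact published FDRL construction.

```python
# Sketch of the idea behind FDRL: replace each p-value by a local aggregate of its
# neighbours (here a median over a 1-D window) before Benjamini-Hochberg-type
# thresholding. Window size and the BH step are illustrative choices, not the
# published procedure's exact construction.
import numpy as np

def bh_reject(pvals, alpha=0.05):
    """Return a boolean mask of rejections under the Benjamini-Hochberg procedure."""
    m = len(pvals)
    order = np.argsort(pvals)
    ranked = pvals[order]
    passed = ranked <= alpha * (np.arange(1, m + 1) / m)
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    mask = np.zeros(m, dtype=bool)
    mask[order[:k]] = True
    return mask

rng = np.random.default_rng(4)
p = rng.uniform(size=1000)                    # nulls
p[400:420] = rng.uniform(0, 0.02, 20)         # a clustered spatial signal
p_local = np.array([np.median(p[max(i - 1, 0): i + 2]) for i in range(len(p))])

print("rejections, raw p-values   :", int(bh_reject(p).sum()))
print("rejections, locally pooled :", int(bh_reject(p_local).sum()))
```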

  10. Determining Environmental Impacts of Large Scale Irrigation in Turkey

    NASA Astrophysics Data System (ADS)

    Simpson, K.; Douglas, E. M.; Limbrunner, J. F.; Ozertan, G.

    2010-12-01

    In 1989, the Turkish government launched their most comprehensive regional development plan in history entitled the Southeastern Anatolia Project (SAP) which focuses on improving the quality of life and income level within the most underdeveloped region in Turkey. This project aims to integrate sustainable human development through agriculture, industry, transportation, education, health and rural and urban infrastructure building. In May 2008, a new action plan was announced for the region which includes the designation of almost 800,000 hectares of previously unirrigated land to be open for irrigation within the next five years. If not done in a sustainable manner, such a large-scale irrigation project could cause severe environmental impacts. The first objective of our research is to use computer simulations to reproduce the observed environmental impacts of irrigated agriculture in this arid region, primarily by simulating the effects of soil salinization. The second objective of our research is to estimate soil salinization that could result from expanded irrigation and suggest sustainable strategies for the newly irrigated land in Turkey in order to minimize these environmental impacts.

  11. Locating inefficient links in a large-scale transportation network

    NASA Astrophysics Data System (ADS)

    Sun, Li; Liu, Like; Xu, Zhongzhi; Jie, Yang; Wei, Dong; Wang, Pu

    2015-02-01

    Based on data from geographical information system (GIS) and daily commuting origin-destination (OD) matrices, we estimated the distribution of traffic flow in the San Francisco road network and studied Braess's paradox in a large-scale transportation network with realistic travel demand. We measured the variation of total travel time ΔT when a road segment is closed, and found that |ΔT| follows a power-law distribution if ΔT < 0 or ΔT > 0. This implies that most roads have a negligible effect on the efficiency of the road network, while the failure of a few crucial links would result in severe travel delays, and closure of a few inefficient links would counter-intuitively reduce travel costs considerably. Generating three theoretical networks, we discovered that the heterogeneously distributed travel demand may be the origin of the observed power-law distributions of |ΔT|. Finally, a genetic algorithm was used to pinpoint inefficient link clusters in the road network. We found that closing specific road clusters would further improve the transportation efficiency.
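    The ΔT measurement above can be sketched on a toy graph: close one link at a time and recompute total travel time over sampled origin-destination pairs. With fixed edge weights, as assumed below, closures can only increase total travel time; reproducing Braess's paradox (ΔT < 0) would additionally require congestion-dependent costs and equilibrium reassignment, which this sketch omits.

```python
# Sketch of the DT measurement: close one link at a time and recompute total travel
# time over sampled origin-destination pairs on a toy random graph with fixed,
# flow-independent travel times (an illustrative assumption, not the paper's model).
import itertools
import networkx as nx

G = nx.gnm_random_graph(40, 100, seed=5)
for u, v in G.edges:
    G[u][v]["time"] = 1.0 + (u + v) % 5          # arbitrary fixed travel times

od_pairs = list(itertools.islice(itertools.combinations(G.nodes, 2), 150))

def total_time(graph, penalty=1e6):
    """Sum shortest travel times over OD pairs; disconnected pairs get a penalty."""
    t = 0.0
    for s, d in od_pairs:
        try:
            t += nx.shortest_path_length(graph, s, d, weight="time")
        except nx.NetworkXNoPath:
            t += penalty
    return t

baseline = total_time(G)
worst = max(G.edges, key=lambda e: total_time(nx.restricted_view(G, [], [e])) - baseline)
print("most critical link:", worst)
```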

  12. Optimal management of large scale aquifers under uncertainty

    NASA Astrophysics Data System (ADS)

    Ghorbanidehno, H.; Kokkinaki, A.; Kitanidis, P. K.; Darve, E. F.

    2016-12-01

    Water resources systems, and especially groundwater reservoirs, are a valuable resource that is often being endangered by contamination and over-exploitation. Optimal control techniques can be applied for groundwater management to ensure the long-term sustainability of this vulnerable resource. Linear Quadratic Gaussian (LQG) control is an optimal control method that combines a Kalman filter for real time estimation with a linear quadratic regulator for dynamic optimization. The LQG controller can be used to determine the optimal controls (e.g. pumping schedule) upon receiving feedback about the system from incomplete noisy measurements. However, applying LQG control for systems of large dimension is computationally expensive. This work presents the Spectral Linear Quadratic Gaussian (SpecLQG) control, a new fast LQG controller that can be used for large scale problems. SpecLQG control combines the Spectral Kalman filter, which is a fast Kalman filter algorithm, with an efficient low rank LQR, and provides a practical approach for combined monitoring, parameter estimation, uncertainty quantification and optimal control for linear and weakly non-linear systems. The computational cost of SpecLQG controller scales linearly with the number of unknowns, a great improvement compared to the quadratic cost of basic LQG. We demonstrate the accuracy and computational efficiency of SpecLQG control using two applications: first, a linear validation case for pumping schedule management in a small homogeneous confined aquifer; and second, a larger scale nonlinear case with unknown heterogeneities in aquifer properties and boundary conditions.
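    A minimal LQG sketch, under assumed system matrices for a two-state linear system, combines a steady-state Kalman filter (state estimation from noisy measurements) with an LQR gain obtained from the discrete algebraic Riccati equation; it illustrates the controller structure described above, not the paper's SpecLQG algorithm.

```python
# Minimal LQG sketch for a linear system x_{k+1} = A x_k + B u_k + w_k with noisy
# observations y_k = C x_k + v_k: a steady-state Kalman filter supplies the state
# estimate and an LQR gain (from the discrete algebraic Riccati equation) supplies
# the control. The 2-state system and all matrices are illustrative assumptions.
import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[1.0, 0.1], [0.0, 0.95]])
B = np.array([[0.0], [0.1]])
C = np.array([[1.0, 0.0]])
Q, R = np.eye(2), 0.1 * np.eye(1)          # LQR state/control weights
W, V = 0.01 * np.eye(2), 0.05 * np.eye(1)  # process/measurement noise covariances

# LQR gain: u = -K x_hat
P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# Steady-state Kalman gain from the dual Riccati equation
S = solve_discrete_are(A.T, C.T, W, V)
L = S @ C.T @ np.linalg.inv(C @ S @ C.T + V)

rng = np.random.default_rng(6)
x, x_hat = np.array([1.0, 0.5]), np.zeros(2)
for _ in range(50):
    u = -K @ x_hat
    x = A @ x + B @ u + rng.multivariate_normal(np.zeros(2), W)
    y = C @ x + rng.multivariate_normal(np.zeros(1), V)
    x_pred = A @ x_hat + B @ u
    x_hat = x_pred + L @ (y - C @ x_pred)
print("final state estimate:", np.round(x_hat, 3))
```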

  13. Assessing large-scale wildlife responses to human infrastructure development.

    PubMed

    Torres, Aurora; Jaeger, Jochen A G; Alonso, Juan Carlos

    2016-07-26

    Habitat loss and deterioration represent the main threats to wildlife species, and are closely linked to the expansion of roads and human settlements. Unfortunately, large-scale effects of these structures remain generally overlooked. Here, we analyzed the European transportation infrastructure network and found that 50% of the continent is within 1.5 km of transportation infrastructure. We present a method for assessing the impacts from infrastructure on wildlife, based on functional response curves describing density reductions in birds and mammals (e.g., road-effect zones), and apply it to Spain as a case study. The imprint of infrastructure extends over most of the country (55.5% in the case of birds and 97.9% for mammals), with moderate declines predicted for birds (22.6% of individuals) and severe declines predicted for mammals (46.6%). Despite certain limitations, we suggest the approach proposed is widely applicable to the evaluation of effects of planned infrastructure developments under multiple scenarios, and propose an internationally coordinated strategy to update and improve it in the future.

  14. Uplink channel estimation error for large scale MIMO system

    NASA Astrophysics Data System (ADS)

    Albdran, Saleh; Alshammari, Ahmad; Matin, Mohammad

    2016-09-01

    The high demand on wireless networks and the need for higher data rates are the motivation for developing new technologies. Recently, the idea of using large-scale MIMO systems has attracted great attention from researchers due to its high spectral and energy efficiency. In this paper, we analyze the UL channel estimation error when a large number of antennas is used at the base station, where the UL channel is estimated from a predefined pilot signal. By comparing the known UL pilot signal with the received UL signal, we obtain a realization of the channel. We consider a single-cell scenario in which inter-cell interference is eliminated, for the sake of studying a simple approach. While the number of antennas at the base station is very large, the user terminal has a single antenna. Two models are used to generate the channel covariance matrix: the one-ring model and the exponential correlation model. Channel estimation error figures are generated, in which the mean square error (MSE) per antenna is presented as a function of the signal-to-noise ratio (SNR). The simulation results show that the higher the SNR, the better the performance. Furthermore, the effect of the pilot length on the channel estimation error is studied, with the two covariance models used to see the impact in each case. In both cases, increasing the pilot length improves the estimation accuracy.
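
    A minimal sketch of this kind of analysis, assuming a single-cell, single-user uplink with an M-antenna base station and an exponential correlation covariance R_ij = r^|i-j|: the analytical MMSE channel estimation error per antenna is evaluated as a function of SNR and pilot length. The parameter values and the pilot-energy scaling are illustrative, not those of the paper.

    ```python
    # Analytical MMSE estimation error per antenna for an M-antenna array
    # with exponential correlation covariance; error shrinks with SNR and tau.
    import numpy as np

    def mmse_error_per_antenna(M, snr_db, tau, r=0.7):
        R = r ** np.abs(np.subtract.outer(np.arange(M), np.arange(M)))  # exp. correlation
        rho = 10 ** (snr_db / 10.0)                                     # per-symbol SNR
        # MMSE error covariance: C = R - R (R + I/(rho*tau))^{-1} R
        C = R - R @ np.linalg.solve(R + np.eye(M) / (rho * tau), R)
        return np.trace(C) / M

    for snr in (0, 10, 20):
        print(snr, "dB:", mmse_error_per_antenna(M=100, snr_db=snr, tau=10))
    # Error decreases with both SNR and pilot length tau, matching the reported trends.
    ```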

  15. Large-scale physical activity data reveal worldwide activity inequality.

    PubMed

    Althoff, Tim; Sosič, Rok; Hicks, Jennifer L; King, Abby C; Delp, Scott L; Leskovec, Jure

    2017-07-20

    To be able to curb the global pandemic of physical inactivity and the associated 5.3 million deaths per year, we need to understand the basic principles that govern physical activity. However, there is a lack of large-scale measurements of physical activity patterns across free-living populations worldwide. Here we leverage the wide usage of smartphones with built-in accelerometry to measure physical activity at the global scale. We study a dataset consisting of 68 million days of physical activity for 717,527 people, giving us a window into activity in 111 countries across the globe. We find inequality in how activity is distributed within countries and that this inequality is a better predictor of obesity prevalence in the population than average activity volume. Reduced activity in females contributes to a large portion of the observed activity inequality. Aspects of the built environment, such as the walkability of a city, are associated with a smaller gender gap in activity and lower activity inequality. In more walkable cities, activity is greater throughout the day and throughout the week, across age, gender, and body mass index (BMI) groups, with the greatest increases in activity found for females. Our findings have implications for global public health policy and urban planning and highlight the role of activity inequality and the built environment in improving physical activity and health.
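
    As one concrete way to quantify activity inequality, the sketch below computes a Gini coefficient over simulated daily step counts. The simulated data and the choice of the Gini coefficient as the metric are illustrative assumptions, not a reproduction of the study's analysis.

    ```python
    # Gini coefficient of a simulated daily step-count distribution.
    import numpy as np

    def gini(x):
        x = np.sort(np.asarray(x, dtype=float))        # ascending order
        n = x.size
        cum = np.cumsum(x)
        return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

    steps = np.random.default_rng(2).lognormal(mean=8.5, sigma=0.5, size=10_000)
    print("Gini of simulated daily steps:", round(gini(steps), 3))
    ```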

  16. Improving protein and proteome coverage through data-independent multiplexed peptide fragmentation.

    PubMed

    Blackburn, Kevin; Mbeunkui, Flaubert; Mitra, Srijeet K; Mentzel, Tobias; Goshe, Michael B

    2010-07-02

    Performance differences in protein and proteome characterization achieved by data-independent acquisition (DIA) LC/MS(E) and data-dependent acquisition (DDA) LC/MS/MS approaches were investigated. LC/MS(E) is a novel mode of generating product ion data for all coeluting precursors in parallel as opposed to LC/MS/MS where coeluting precursors must be serially fragmented one at a time. During LC/MS(E) analysis, alternating MS scans of "normal" and "elevated" collision energy are collected at regular intervals, providing nearly a 100% duty cycle for precursor detection and fragmentation because all precursors are fragmented across their full chromatographic elution profile. This is in contrast to DDA-based MS/MS where serial selection of precursor ions is biased toward interrogation and detection of the highest abundance sample components by virtue of the intensity-driven interrogation scheme employed. Both modes of acquisition were applied to a simple four-protein standard mixture with a 16-fold dynamic range in concentration, an in-gel digest of the Arabidopsis thaliana protein FLS2 purified by immunoprecipitation, and a solution-digested tomato leaf proteome sample. Dramatic improvement for individual protein sequence coverage was obtained for all three samples analyzed by the DIA approach, particularly for the lowest abundance sample components. In many instances, precursors readily detected and identified during DIA were either interrogated by MS/MS during DDA at inopportune points in their chromatographic elution profiles resulting in poor quality product ion spectra or not interrogated at all. Detailed evaluation of both DDA and DIA raw data and timing of the MS-to-MS/MS switching events clearly revealed the fundamental limitations of serial MS/MS interrogation and the advantages of parallel fragmentation by DIA for more comprehensive protein identification and characterization which holds promise for enhanced isoform and post-translational modification

  17. Cytology of DNA Replication Reveals Dynamic Plasticity of Large-Scale Chromatin Fibers.

    PubMed

    Deng, Xiang; Zhironkina, Oxana A; Cherepanynets, Varvara D; Strelkova, Olga S; Kireev, Igor I; Belmont, Andrew S

    2016-09-26

    In higher eukaryotic interphase nuclei, the 100- to >1,000-fold linear compaction of chromatin is difficult to reconcile with its function as a template for transcription, replication, and repair. It is challenging to imagine how DNA and RNA polymerases with their associated molecular machinery would move along the DNA template without transient decondensation of observed large-scale chromatin "chromonema" fibers [1]. Transcription or "replication factory" models [2], in which polymerases remain fixed while DNA is reeled through, are similarly difficult to conceptualize without transient decondensation of these chromonema fibers. Here, we show how a dynamic plasticity of chromatin folding within large-scale chromatin fibers allows DNA replication to take place without significant changes in the global large-scale chromatin compaction or shape of these large-scale chromatin fibers. Time-lapse imaging of lac-operator-tagged chromosome regions shows no major change in the overall compaction of these chromosome regions during their DNA replication. Improved pulse-chase labeling of endogenous interphase chromosomes yields a model in which the global compaction and shape of large-Mbp chromatin domains remains largely invariant during DNA replication, with DNA within these domains undergoing significant movements and redistribution as they move into and then out of adjacent replication foci. In contrast to hierarchical folding models, this dynamic plasticity of large-scale chromatin organization explains how localized changes in DNA topology allow DNA replication to take place without an accompanying global unfolding of large-scale chromatin fibers while suggesting a possible mechanism for maintaining epigenetic programming of large-scale chromatin domains throughout DNA replication.

  18. State of the Art in Large-Scale Soil Moisture Monitoring

    NASA Technical Reports Server (NTRS)

    Ochsner, Tyson E.; Cosh, Michael Harold; Cuenca, Richard H.; Dorigo, Wouter; Draper, Clara S.; Hagimoto, Yutaka; Kerr, Yan H.; Larson, Kristine M.; Njoku, Eni Gerald; Small, Eric E.; Zreda, Marek G.

    2013-01-01

    Soil moisture is an essential climate variable influencing land-atmosphere interactions, an essential hydrologic variable impacting rainfall-runoff processes, an essential ecological variable regulating net ecosystem exchange, and an essential agricultural variable constraining food security. Large-scale soil moisture monitoring has advanced in recent years, creating opportunities to transform scientific understanding of soil moisture and related processes. These advances are being driven by researchers from a broad range of disciplines, but this complicates collaboration and communication. For some applications, the science required to utilize large-scale soil moisture data is poorly developed. In this review, we describe the state of the art in large-scale soil moisture monitoring and identify some critical needs for research to optimize the use of increasingly available soil moisture data. We review representative examples of 1) emerging in situ and proximal sensing techniques, 2) dedicated soil moisture remote sensing missions, 3) soil moisture monitoring networks, and 4) applications of large-scale soil moisture measurements. Significant near-term progress seems possible in the use of large-scale soil moisture data for drought monitoring. Assimilation of soil moisture data for meteorological or hydrologic forecasting also shows promise, but significant challenges related to model structures and model errors remain. Little progress has been made yet in the use of large-scale soil moisture observations within the context of ecological or agricultural modeling. Opportunities abound to advance the science and practice of large-scale soil moisture monitoring for the sake of improved Earth system monitoring, modeling, and forecasting.

  19. Applications of Proteomic Technologies to Toxicology

    EPA Science Inventory

    Proteomics is the large-scale study of gene expression at the protein level. This cutting edge technology has been extensively applied to toxicology research recently. The up-to-date development of proteomics has presented the toxicology community with an unprecedented opportunit...

  1. A large-scale electrophoresis- and chromatography-based determination of gene expression profiles in bovine brain capillary endothelial cells after the re-induction of blood-brain barrier properties

    PubMed Central

    2010-01-01

    Background Brain capillary endothelial cells (BCECs) form the physiological basis of the blood-brain barrier (BBB). The barrier function is (at least in part) due to well-known proteins such as transporters, tight junctions and metabolic barrier proteins (e.g. monoamine oxidase, gamma glutamyltranspeptidase and P-glycoprotein). Our previous 2-dimensional gel proteome analysis had identified a large number of proteins and revealed the major role of dynamic cytoskeletal remodelling in the differentiation of bovine BCECs. The aim of the present study was to elaborate a reference proteome of Triton X-100-soluble species from bovine BCECs cultured in the well-established in vitro BBB model developed in our laboratory. Results A total of 215 protein spots (corresponding to 130 distinct proteins) were identified by 2-dimensional gel electrophoresis, whereas over 350 proteins were identified by a shotgun approach. We classified around 430 distinct proteins expressed by bovine BCECs. Our large-scale gene expression analysis enabled the correction of errors referenced in protein databases (e.g. bovine vinculin) and constitutes valuable evidence for predictions based on genome annotation. Conclusions Elaboration of a reference proteome constitutes the first step in creating a gene expression database dedicated to capillary endothelial cells displaying BBB characteristics. It improves our knowledge of the BBB and of the key proteins in cell structures, cytoskeleton organization, metabolism, detoxification and drug resistance. Moreover, our results emphasize the need for both appropriate experimental design and correct interpretation of proteome datasets. PMID:21078152

  2. A Topology Visualization Early Warning Distribution Algorithm for Large-Scale Network Security Incidents

    PubMed Central

    He, Hui; Fan, Guotao; Ye, Jianwei; Zhang, Weizhe

    2013-01-01

    Research on early warning systems for large-scale network security incidents is of great significance. Such a system can improve the network system's emergency response capabilities, mitigate the damage caused by cyber attacks, and strengthen the system's counterattack ability. A comprehensive early warning system, combining active measurement and anomaly detection, is presented in this paper. The key visualization algorithm and technology of the system are the main focus. Plane visualization of the large-scale network system is realized using a divide-and-conquer approach. First, the topology of the large-scale network is divided into several small-scale networks by the MLkP/CR algorithm. Second, the subgraph plane visualization algorithm is applied to each small-scale network. Finally, the small-scale networks' topologies are combined into a single topology using an automatic distribution algorithm based on force analysis. Because the algorithm transforms the large-scale network topology plane visualization problem into a series of small-scale network topology plane visualization and distribution problems, it offers higher parallelism and is able to handle the display of ultra-large-scale network topologies. PMID:24191145

  3. A topology visualization early warning distribution algorithm for large-scale network security incidents.

    PubMed

    He, Hui; Fan, Guotao; Ye, Jianwei; Zhang, Weizhe

    2013-01-01

    Research on early warning systems for large-scale network security incidents is of great significance. Such a system can improve the network system's emergency response capabilities, mitigate the damage caused by cyber attacks, and strengthen the system's counterattack ability. A comprehensive early warning system, combining active measurement and anomaly detection, is presented in this paper. The key visualization algorithm and technology of the system are the main focus. Plane visualization of the large-scale network system is realized using a divide-and-conquer approach. First, the topology of the large-scale network is divided into several small-scale networks by the MLkP/CR algorithm. Second, the subgraph plane visualization algorithm is applied to each small-scale network. Finally, the small-scale networks' topologies are combined into a single topology using an automatic distribution algorithm based on force analysis. Because the algorithm transforms the large-scale network topology plane visualization problem into a series of small-scale network topology plane visualization and distribution problems, it offers higher parallelism and is able to handle the display of ultra-large-scale network topologies.
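
    The sketch below illustrates the divide-and-conquer idea on a synthetic topology: community detection stands in for the MLkP/CR partitioning step, a per-subgraph spring layout stands in for the subgraph plane visualization, and a simple grid placement stands in for the force-analysis distribution step. All of these substitutions are assumptions for illustration only.

    ```python
    # Partition a synthetic topology, lay out each partition separately,
    # then place the partitions on a coarse grid to form one global layout.
    import math
    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    G = nx.barabasi_albert_graph(1000, 2)          # stand-in for a measured topology
    parts = list(greedy_modularity_communities(G)) # sub-networks (MLkP/CR stand-in)

    positions = {}
    cols = math.ceil(math.sqrt(len(parts)))
    for i, nodes in enumerate(parts):
        sub = G.subgraph(nodes)
        local = nx.spring_layout(sub, seed=42)     # per-subgraph plane layout
        ox, oy = (i % cols) * 3.0, (i // cols) * 3.0
        for n, (x, y) in local.items():
            positions[n] = (x + ox, y + oy)        # combine into one global layout

    print(len(parts), "partitions laid out,", len(positions), "nodes positioned")
    ```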

  4. Planning and executing complex large-scale exercises.

    PubMed

    McCormick, Lisa C; Hites, Lisle; Wakelee, Jessica F; Rucks, Andrew C; Ginter, Peter M

    2014-01-01

    Increasingly, public health departments are designing and engaging in complex operations-based full-scale exercises to test multiple public health preparedness response functions. The Department of Homeland Security's Homeland Security Exercise and Evaluation Program (HSEEP) supplies benchmark guidelines that provide a framework for both the design and the evaluation of drills and exercises; however, the HSEEP framework does not seem to have been designed to manage the development and evaluation of multiple, operations-based, parallel exercises combined into 1 complex large-scale event. Lessons learned from the planning of the Mississippi State Department of Health Emergency Support Function-8 involvement in National Level Exercise 2011 were used to develop an expanded exercise planning model that is HSEEP compliant but accounts for increased exercise complexity and is more functional for public health. The Expanded HSEEP (E-HSEEP) model was developed through changes in the HSEEP exercise planning process in areas of Exercise Plan, Controller/Evaluator Handbook, Evaluation Plan, and After Action Report and Improvement Plan development. The E-HSEEP model was tested and refined during the planning and evaluation of Mississippi's State-level Emergency Support Function-8 exercises in 2012 and 2013. As a result of using the E-HSEEP model, Mississippi State Department of Health was able to capture strengths, lessons learned, and areas for improvement, and identify microlevel issues that may have been missed using the traditional HSEEP framework. The South Central Preparedness and Emergency Response Learning Center is working to create an Excel-based E-HSEEP tool that will allow practice partners to build a database to track corrective actions and conduct many different types of analyses and comparisons.

  5. EINSTEIN'S SIGNATURE IN COSMOLOGICAL LARGE-SCALE STRUCTURE

    SciTech Connect

    Bruni, Marco; Hidalgo, Juan Carlos; Wands, David

    2014-10-10

    We show how the nonlinearity of general relativity generates a characteristic non-Gaussian signal in cosmological large-scale structure that we calculate at all perturbative orders in a large-scale limit. Newtonian gravity and general relativity provide complementary theoretical frameworks for modeling large-scale structure in ΛCDM cosmology; a relativistic approach is essential to determine initial conditions, which can then be used in Newtonian simulations studying the nonlinear evolution of the matter density. Most inflationary models in the very early universe predict an almost Gaussian distribution for the primordial metric perturbation, ζ. However, we argue that it is the Ricci curvature of comoving-orthogonal spatial hypersurfaces, R, that drives structure formation at large scales. We show how the nonlinear relation between the spatial curvature, R, and the metric perturbation, ζ, translates into a specific non-Gaussian contribution to the initial comoving matter density that we calculate for the simple case of an initially Gaussian ζ. Our analysis shows the nonlinear signature of Einstein's gravity in large-scale structure.
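
    For reference, the nonlinear relation alluded to here can be illustrated with the standard conformal-transformation identity for the intrinsic curvature of a comoving-orthogonal spatial metric of the form g_ij = a² e^{2ζ} δ_ij; conventions and signs may differ from those used in the paper, so treat this as a sketch rather than the paper's exact expression.

    ```latex
    % Conformally flat spatial metric (assumed convention):
    %   g_{ij} = a^{2}(t)\, e^{2\zeta}\, \delta_{ij}
    % Its intrinsic Ricci scalar follows from the three-dimensional
    % conformal-transformation identity:
    \[
      R \;=\; -\,\frac{2}{a^{2}}\, e^{-2\zeta}
              \left( 2\,\nabla^{2}\zeta + \left(\nabla\zeta\right)^{2} \right)
      \;\approx\; -\,\frac{4}{a^{2}}\,\nabla^{2}\zeta \quad \text{(linear order)} ,
    \]
    % so even an exactly Gaussian \zeta yields a non-Gaussian curvature R, and hence
    % a non-Gaussian contribution to the initial comoving matter density.
    ```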

  6. Recursive architecture for large-scale adaptive system

    NASA Astrophysics Data System (ADS)

    Hanahara, Kazuyuki; Sugiyama, Yoshihiko

    1994-09-01

    'Large scale' is one of the major trends in the research and development of recent engineering, especially in the field of aerospace structural systems. In general, this term expresses the large physical scale of an artifact, but it usually also implies the large number of components that make up the artifact. A large-scale system used in remote space or the deep sea should be adaptive as well as robust by itself, because its control and maintenance by human operators are not easy due to the remoteness. An approach to realizing such a large-scale, adaptive and robust system is to build it as an assemblage of components that are each adaptive by themselves. In this case, the robustness of the system can be achieved by using a large number of such components together with suitable adaptation and maintenance strategies. Such systems have attracted considerable research interest, and studies on topics such as decentralized motion control, configuration algorithms, and the characteristics of structural elements have been reported. In this article, a recursive architecture concept is developed and discussed towards the realization of a large-scale system that consists of a number of uniform adaptive components. We propose an adaptation strategy based on the architecture and its implementation by means of hierarchically connected processing units. The robustness of the architecture and the restoration of degenerated processing units are also discussed. Two- and three-dimensional adaptive truss structures are conceptually designed based on the recursive architecture.

  7. The Influence of Large-scale Environments on Galaxy Properties

    NASA Astrophysics Data System (ADS)

    Wei, Yu-qing; Wang, Lei; Dai, Cai-ping

    2017-07-01

    The star formation properties of galaxies and their dependence on environment play an important role in understanding the formation and evolution of galaxies. Using the galaxy sample of the Sloan Digital Sky Survey (SDSS), different research groups have studied the physical properties of galaxies and their large-scale environments. Here, using the filament catalog from Tempel et al. and the galaxy catalog of large-scale structure classification from Wang et al., and taking into consideration the influence of galaxy morphology, high/low local density environment, and central (satellite) galaxy status, we find that the properties of galaxies are correlated with the large-scale environments in which they reside: the SSFR (specific star formation rate) and SFR (star formation rate) strongly depend on the large-scale environment for spiral galaxies and satellite galaxies, but this dependence is very weak for elliptical galaxies and central galaxies, and galaxies in low-density regions are more sensitive to their large-scale environments than those in high-density regions. The above conclusions remain valid even for galaxies with the same mass. In addition, the SSFR distributions derived from the catalogs of Tempel et al. and Wang et al. are not entirely consistent.

  8. Using the high-level based program interface to facilitate the large scale scientific computing.

    PubMed

    Shang, Yizi; Shang, Ling; Gao, Chuanchang; Lu, Guiming; Ye, Yuntao; Jia, Dongdong

    2014-01-01

    This paper presents further research on facilitating large-scale scientific computing on grid and desktop grid platforms. The related issues include the programming method, the overhead of middleware based on a high-level program interface, and data anticipation migration. The block-based Gauss-Jordan algorithm, a real example of large-scale scientific computing, is used to evaluate the issues presented above. The results show that the high-level program interface makes complex scientific applications on large-scale platforms easier to develop, although a little overhead is unavoidable. Also, the data anticipation migration mechanism can improve the efficiency of a platform that needs to process big-data-based scientific applications.

  9. Using the High-Level Based Program Interface to Facilitate the Large Scale Scientific Computing

    PubMed Central

    Shang, Yizi; Shang, Ling; Gao, Chuanchang; Lu, Guiming; Ye, Yuntao; Jia, Dongdong

    2014-01-01

    This paper presents further research on facilitating large-scale scientific computing on grid and desktop grid platforms. The related issues include the programming method, the overhead of middleware based on a high-level program interface, and data anticipation migration. The block-based Gauss-Jordan algorithm, a real example of large-scale scientific computing, is used to evaluate the issues presented above. The results show that the high-level program interface makes complex scientific applications on large-scale platforms easier to develop, although a little overhead is unavoidable. Also, the data anticipation migration mechanism can improve the efficiency of a platform that needs to process big-data-based scientific applications. PMID:24574931
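
    For concreteness, the sketch below is a serial toy version of block-based Gauss-Jordan matrix inversion of the kind used as the benchmark application; in the setting of the paper, each block operation would instead be dispatched through the high-level program interface to the grid or desktop grid middleware. The block size and the test matrix are arbitrary assumptions.

    ```python
    # Serial block Gauss-Jordan inversion: each block operation is the unit of
    # work that a grid platform could distribute.
    import numpy as np

    def block_gauss_jordan_inverse(A, b):
        """Invert A via Gauss-Jordan elimination on b x b blocks (no pivoting)."""
        n = A.shape[0]
        assert n % b == 0
        k = n // b
        M = np.hstack([A.astype(float), np.eye(n)])    # augmented [A | I]
        for p in range(k):
            rows = slice(p * b, (p + 1) * b)
            piv_inv = np.linalg.inv(M[rows, rows])     # invert the pivot block
            M[rows, :] = piv_inv @ M[rows, :]          # normalize pivot block-row
            for q in range(k):
                if q == p:
                    continue
                rows_q = slice(q * b, (q + 1) * b)
                M[rows_q, :] -= M[rows_q, rows] @ M[rows, :]   # eliminate block
        return M[:, n:]

    A = np.random.default_rng(3).standard_normal((8, 8)) + 8 * np.eye(8)
    X = block_gauss_jordan_inverse(A, b=2)
    print("max |A X - I| =", np.abs(A @ X - np.eye(8)).max())
    ```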

  10. The use of time-resolved fluorescence in gel-based proteomics for improved biomarker discovery

    NASA Astrophysics Data System (ADS)

    Sandberg, AnnSofi; Buschmann, Volker; Kapusta, Peter; Erdmann, Rainer; Wheelock, Åsa M.

    2010-02-01

    This paper describes a new platform for quantitative intact proteomics, entitled Cumulative Time-resolved Emission 2-Dimensional Gel Electrophoresis (CuTEDGE). The CuTEDGE technology utilizes differences in fluorescent lifetimes to subtract the confounding background fluorescence during in-gel detection and quantification of proteins, resulting in a drastic improvement in both sensitivity and dynamic range compared to existing technology. The platform is primarily designed for image acquisition in 2-dimensional gel electrophoresis (2-DE), but is also applicable to 1-dimensional gel electrophoresis (1-DE), and proteins electroblotted to membranes. In a set of proof-of-principle measurements, we have evaluated the performance of the novel technology using the MicroTime 100 instrument (PicoQuant GmbH) in conjunction with the CyDye minimal labeling fluorochromes (GE Healthcare, Uppsala, Sweden) to perform differential gel electrophoresis (DIGE) analyses. The results indicate that the CuTEDGE technology provides an improvement in the dynamic range and sensitivity of detection of 3 orders of magnitude as compared to current state-of-the-art image acquisition instrumentation available for 2-DE (Typhoon 9410, GE Healthcare). Given the potential dynamic range of 7-8 orders of magnitude and sensitivities in the attomol range, the described invention represents a technological leap in detection of low abundance cellular proteins, which is desperately needed in the field of biomarker discovery.
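
    To illustrate the underlying principle of lifetime-based background rejection (not the CuTEDGE implementation itself), the sketch below fits a simple bi-exponential model to a simulated photon arrival-time histogram and recovers the dye amplitude separately from a short-lived background component. The lifetimes, amplitudes, and model are hypothetical.

    ```python
    # Separate a long-lifetime dye signal from short-lifetime background by
    # fitting a bi-exponential decay to a simulated arrival-time histogram.
    import numpy as np
    from scipy.optimize import curve_fit

    t = np.linspace(0, 25, 250)                       # ns after the excitation pulse
    tau_dye, tau_bg = 1.5, 0.3                        # hypothetical lifetimes (ns)
    counts = 800 * np.exp(-t / tau_dye) + 5000 * np.exp(-t / tau_bg)
    counts = np.random.default_rng(4).poisson(counts) # shot noise

    def model(t, a_dye, a_bg):
        return a_dye * np.exp(-t / tau_dye) + a_bg * np.exp(-t / tau_bg)

    (a_dye, a_bg), _ = curve_fit(model, t, counts, p0=(100.0, 100.0))
    print(f"recovered dye amplitude ~ {a_dye:.0f} (background ~ {a_bg:.0f})")
    ```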

  11. Measurement repeatability of a large-scale inventory of forest fuels

    Treesearch

    J.A. Westfall; C.W. Woodall

    2007-01-01

    An efficient and accurate inventory of forest fuels at large scales is critical for assessment of forest fire hazards across landscapes. The Forest Inventory and Analysis (FIA) program of the USDA Forest Service conducts a national inventory of fuels along with blind remeasurement of a portion of inventory plots to monitor and improve data quality. The goal of this...

  12. Inquiry-Based Educational Design for Large-Scale High School Astronomy Projects Using Real Telescopes

    ERIC Educational Resources Information Center

    Fitzgerald, Michael; McKinnon, David H.; Danaia, Lena

    2015-01-01

    In this paper, we outline the theory behind the educational design used to implement a large-scale high school astronomy education project. This design was created in response to the realization of ineffective educational design in the initial early stages of the project. The new design follows an iterative improvement model where the materials…

  14. Jacobsen protocols for large-scale epoxidation of cyclic dienyl sulfones: application to the (+)-pretazettine core.

    PubMed

    Ebrahimian, G Reza; du Jourdin, Xavier Mollat; Fuchs, Philip L

    2012-05-18

    A Jacobsen epoxidation protocol using H2O2 as oxidant was designed for the large-scale preparation of various epoxy vinyl sulfones. A number of cocatalysts were screened, and pH control led to increased reaction rate, higher turnover number, and improved reliability.

  15. Research And Education in Management of Large-Scale Technical Programs. Final Report.

    ERIC Educational Resources Information Center

    Hagerty, W. W.; And Others

    The National Aeronautics and Space Administration, in conjunction with Drexel University, engaged in a research effort directed toward an improved understanding of large-scale systems technology and management. This research program has as its major objectives: (1) the demonstration of the applicability of the NASA organization and management…

  16. Planning for Large Scale Habitat Restoration in the Socorro Valley, New Mexico

    Treesearch

    Gina Dello Russo; Yasmeen Najmi

    2006-01-01

    One initiative for large scale habitat restoration on the Rio Grande in central New Mexico is being led by a nonprofit organization, the Save Our Bosque Task Force. The Task Force has just completed a conceptual restoration plan for a 72-kilometer reach of river. The goals of the plan were to determine the potential for enhanced biological diversity through improved...

  17. Reflections on the Increasing Relevance of Large-Scale Professional Development

    ERIC Educational Resources Information Center

    Krainer, Konrad

    2015-01-01

    This paper focuses on commonalities and differences of three approaches to large-scale professional development (PD) in mathematics education, based on two studies from Germany and one from the United States of America. All three initiatives break new ground in improving PD targeted at educating "multipliers", and in all three cases…

  18. Seismic safety in conducting large-scale blasts

    NASA Astrophysics Data System (ADS)

    Mashukov, I. V.; Chaplygin, V. V.; Domanov, V. P.; Semin, A. A.; Klimkin, M. A.

    2017-09-01

    In mining enterprises, a drilling and blasting method is used to prepare hard rock for excavation. As mining operations approach settlements, the negative effects of large-scale blasts increase. To assess the level of seismic impact of large-scale blasts, the scientific staff of Siberian State Industrial University carried out expert assessments for coal mines and iron ore enterprises. The magnitude of surface seismic vibrations caused by mass explosions was determined using seismic receivers and an analog-to-digital converter, with recording on a laptop. The results of recording surface seismic vibrations during more than 280 large-scale blasts at 17 mining enterprises in 22 settlements are presented. The maximum velocity values of the Earth's surface vibrations are determined. The seismic safety evaluation was carried out according to the permissible value of vibration velocity. For cases where permissible values were exceeded, recommendations were developed to reduce the level of seismic impact.
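
    As a minimal illustration of the kind of check described, the sketch below computes the peak particle velocity from a synthetic ground-velocity record and compares it against a permissible threshold; the waveform and the limit value are hypothetical, not the values from the expert assessments above.

    ```python
    # Peak particle velocity (PPV) check on a synthetic ground-velocity trace.
    import numpy as np

    fs = 1000.0                                     # sampling rate, Hz
    t = np.arange(0, 2.0, 1.0 / fs)
    # Hypothetical damped-oscillation velocity record (cm/s) from one blast.
    velocity = 3.0 * np.exp(-2.0 * t) * np.sin(2 * np.pi * 12 * t)

    ppv = np.max(np.abs(velocity))                  # peak particle velocity, cm/s
    PERMISSIBLE_PPV = 2.0                           # cm/s, hypothetical limit
    print(f"PPV = {ppv:.2f} cm/s ->",
          "exceeds limit" if ppv > PERMISSIBLE_PPV else "within limit")
    ```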

  19. PKI security in large-scale healthcare networks.

    PubMed

    Mantas, Georgios; Lymberopoulos, Dimitrios; Komninos, Nikos

    2012-06-01

    During the past few years, many PKIs (Public Key Infrastructures) have been proposed for healthcare networks in order to ensure secure communication services and exchange of data among healthcare professionals. However, these healthcare PKIs face a plethora of challenges, especially when deployed over large-scale healthcare networks. In this paper, we propose a PKI infrastructure to ensure security in a large-scale Internet-based healthcare network connecting a wide spectrum of healthcare units geographically distributed within a wide region. Furthermore, the proposed PKI infrastructure addresses the trust issues that arise in a large-scale healthcare network involving multi-domain PKI infrastructures.

  20. Acoustic Studies of the Large Scale Ocean Circulation

    NASA Technical Reports Server (NTRS)

    Menemenlis, Dimitris

    1999-01-01

    Detailed knowledge of ocean circulation and its transport properties is a prerequisite for an understanding of the earth's climate and of important biological and chemical cycles. Results from two recent experiments, THETIS-2 in the Western Mediterranean and ATOC in the North Pacific, illustrate the use of ocean acoustic tomography for studies of the large-scale circulation. The attraction of acoustic tomography is its ability to sample and average the large-scale oceanic thermal structure, synoptically, along several sections, and at regular int