Controlled vocabularies and ontologies in proteomics: Overview, principles and practice☆
Mayer, Gerhard; Jones, Andrew R.; Binz, Pierre-Alain; Deutsch, Eric W.; Orchard, Sandra; Montecchi-Palazzi, Luisa; Vizcaíno, Juan Antonio; Hermjakob, Henning; Ovelleiro, David; Julian, Randall; Stephan, Christian; Meyer, Helmut E.; Eisenacher, Martin
2014-01-01
This paper focuses on the use of controlled vocabularies (CVs) and ontologies, especially in the area of proteomics, primarily related to the work of the Proteomics Standards Initiative (PSI). It describes the relevant proteomics standard formats and the ontologies used within them. Software and tools for working with these ontology files are also discussed. The article also examines the “mapping files” used to ensure that correct controlled vocabulary terms are placed within PSI standards and that the MIAPE (Minimum Information About a Proteomics Experiment) requirements are fulfilled. This article is part of a Special Issue entitled: Computational Proteomics in the Post-Identification Era. Guest Editors: Martin Eisenacher and Christian Stephan. PMID:23429179
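A toy illustration of what such a mapping check does (the rule structure, element name, and CV accessions below are invented placeholders, not the actual PSI mapping-file syntax or semantic-validator rules):

```python
# Sketch of a CV mapping check: each element of a standards file may only
# carry CV terms from an allowed set. All names/accessions are hypothetical.
MAPPING_RULES = {
    "AnalysisSoftware": {"MS:1001456", "MS:1002251"},  # hypothetical allowed terms
}

def cv_violations(element_name, used_accessions, rules=MAPPING_RULES):
    """Return the accessions that the mapping rules do not permit on this element."""
    allowed = rules.get(element_name, set())
    return {acc for acc in used_accessions if acc not in allowed}
```

A validator built this way flags any accession outside the allowed set for the enclosing element, which is conceptually how mapping files keep CV usage consistent with MIAPE requirements.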
Choi, Hyungwon; Kim, Sinae; Fermin, Damian; Tsou, Chih-Chiang; Nesvizhskii, Alexey I
2015-11-03
We introduce QPROT, a statistical framework and computational tool for differential protein expression analysis using protein intensity data. QPROT is an extension of the QSPEC suite, originally developed for spectral count data, adapted for the analysis of continuously measured protein-level intensity data. QPROT offers a new intensity normalization procedure and model-based differential expression analysis, both of which account for missing data. Differential expression of each protein is determined from the standardized Z-statistic computed from the posterior distribution of the log fold change parameter, guided by the false discovery rate estimated by a well-known empirical Bayes method. We evaluated the classification performance of QPROT using the quantification calibration data from the clinical proteomic technology assessment for cancer (CPTAC) study and a recently published Escherichia coli benchmark dataset, with evaluation of FDR accuracy in the latter. QPROT is a statistical framework and computational software tool for comparative quantitative proteomics analysis. It features various extensions of the QSPEC method, originally built for spectral count data analysis, including probabilistic treatment of missing values in protein intensity data. With the increasing popularity of label-free quantitative proteomics data, the proposed method and accompanying software suite will be immediately useful for many proteomics laboratories. This article is part of a Special Issue entitled: Computational Proteomics. Copyright © 2015 Elsevier B.V. All rights reserved.
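The statistic-plus-FDR decision rule described above can be sketched as follows. This is a simplified frequentist stand-in: QPROT's actual Z-statistic comes from the posterior of the log fold change, and its FDR from an empirical Bayes method, not the Benjamini-Hochberg procedure used here.

```python
import math

def z_statistic(group1, group2):
    """Standardized difference of mean log-intensities between two groups
    (simplified; QPROT derives its Z from a posterior distribution)."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = sum(group1) / n1, sum(group2) / n2
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    return (m2 - m1) / math.sqrt(v1 / n1 + v2 / n2)

def benjamini_hochberg(pvals):
    """BH-adjusted p-values, standing in for an empirical Bayes FDR estimate."""
    n = len(pvals)
    order = sorted(range(n), key=lambda i: pvals[i])
    adjusted = [0.0] * n
    running_min = 1.0
    for rank in range(n, 0, -1):
        i = order[rank - 1]
        running_min = min(running_min, pvals[i] * n / rank)
        adjusted[i] = running_min
    return adjusted
```

Proteins whose adjusted value falls below a chosen FDR threshold would be called differentially expressed.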
A tutorial for software development in quantitative proteomics using PSI standard formats☆
Gonzalez-Galarza, Faviel F.; Qi, Da; Fan, Jun; Bessant, Conrad; Jones, Andrew R.
2014-01-01
The Human Proteome Organisation — Proteomics Standards Initiative (HUPO-PSI) has been working for ten years on the development of standardised formats that facilitate data sharing and public database deposition. In this article, we review three HUPO-PSI data standards — mzML, mzIdentML and mzQuantML, which can be used to design a complete quantitative analysis pipeline in mass spectrometry (MS)-based proteomics. In this tutorial, we briefly describe the content of each data model, sufficient for bioinformaticians to devise proteomics software. We also provide guidance on the use of recently released application programming interfaces (APIs) developed in Java for each of these standards, which makes it straightforward to read and write files of any size. We have produced a set of example Java classes and a basic graphical user interface to demonstrate how to use the most important parts of the PSI standards, available from http://code.google.com/p/psi-standard-formats-tutorial. This article is part of a Special Issue entitled: Computational Proteomics in the Post-Identification Era. Guest Editors: Martin Eisenacher and Christian Stephan. PMID:23584085
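As a minimal flavor of reading such XML-based identification data (shown here in Python rather than the Java APIs the tutorial covers; the fragment is heavily trimmed, and its element and attribute names only approximate the real mzIdentML schema):

```python
import xml.etree.ElementTree as ET

# Toy mzIdentML-like fragment: one spectrum with two candidate peptides.
XML = """<SpectrumIdentificationResult spectrumID="scan=100">
  <SpectrumIdentificationItem peptide_ref="PEP_1" rank="1" experimentalMassToCharge="445.12"/>
  <SpectrumIdentificationItem peptide_ref="PEP_2" rank="2" experimentalMassToCharge="445.12"/>
</SpectrumIdentificationResult>"""

def top_ranked_peptides(xml_text):
    """Collect the peptide references of rank-1 identifications."""
    root = ET.fromstring(xml_text)
    return [item.get("peptide_ref")
            for item in root.iter("SpectrumIdentificationItem")
            if item.get("rank") == "1"]
```

The Java APIs described in the tutorial wrap exactly this kind of traversal behind typed object models, with streaming so that files of any size can be handled.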
ProCon - PROteomics CONversion tool.
Mayer, Gerhard; Stephan, Christian; Meyer, Helmut E; Kohl, Michael; Marcus, Katrin; Eisenacher, Martin
2015-11-03
With the growing amount of experimental data produced in proteomics experiments, and with journals in the proteomics field requiring or recommending that data described in papers be made publicly available, a need arises for long-term storage of proteomics data in public repositories. Such an upload requires proteomics data in a standardized format. It is therefore desirable that vendors' proprietary software will in the future integrate export functionality for the standard formats for proteomics results defined by the HUPO-PSI group. Currently, not all search engines and analysis tools support these standard formats. In the meantime, there is a need for user-friendly, free-to-use conversion tools that can convert data into such standard formats, in order to support wet-lab scientists in creating proteomics data files ready for upload into the public repositories. ProCon is such a conversion tool, written in Java, for the conversion of proteomics identification data into the standard formats mzIdentML and PRIDE XML. It allows the conversion of Sequest™/Comet .out files, of search results from the popular and often used ProteomeDiscoverer® 1.x (versions 1.1 to 1.4) software, and of search results stored in the LIMS systems ProteinScape® 1.3 and 2.1 into mzIdentML and PRIDE XML. This article is part of a Special Issue entitled: Computational Proteomics. Copyright © 2015. Published by Elsevier B.V.
Perez-Riverol, Yasset; Wang, Rui; Hermjakob, Henning; Müller, Markus; Vesada, Vladimir; Vizcaíno, Juan Antonio
2014-01-01
Data processing, management and visualization are central and critical components of a state of the art high-throughput mass spectrometry (MS)-based proteomics experiment, and are often some of the most time-consuming steps, especially for labs without much bioinformatics support. The growing interest in the field of proteomics has triggered an increase in the development of new software libraries, including freely available and open-source software. From database search analysis to post-processing of the identification results, even though the objectives of these libraries and packages can vary significantly, they usually share a number of features. Common use cases include the handling of protein and peptide sequences, the parsing of results from various proteomics search engines output files, and the visualization of MS-related information (including mass spectra and chromatograms). In this review, we provide an overview of the existing software libraries, open-source frameworks and also, we give information on some of the freely available applications which make use of them. This article is part of a Special Issue entitled: Computational Proteomics in the Post-Identification Era. Guest Editors: Martin Eisenacher and Christian Stephan. PMID:23467006
The amino acid's backup bone - storage solutions for proteomics facilities.
Meckel, Hagen; Stephan, Christian; Bunse, Christian; Krafzik, Michael; Reher, Christopher; Kohl, Michael; Meyer, Helmut Erich; Eisenacher, Martin
2014-01-01
Proteomics methods, especially high-throughput mass spectrometry analysis have been continually developed and improved over the years. The analysis of complex biological samples produces large volumes of raw data. Data storage and recovery management pose substantial challenges to biomedical or proteomic facilities regarding backup and archiving concepts as well as hardware requirements. In this article we describe differences between the terms backup and archive with regard to manual and automatic approaches. We also introduce different storage concepts and technologies from transportable media to professional solutions such as redundant array of independent disks (RAID) systems, network attached storages (NAS) and storage area network (SAN). Moreover, we present a software solution, which we developed for the purpose of long-term preservation of large mass spectrometry raw data files on an object storage device (OSD) archiving system. Finally, advantages, disadvantages, and experiences from routine operations of the presented concepts and technologies are evaluated and discussed. This article is part of a Special Issue entitled: Computational Proteomics in the Post-Identification Era. Guest Editors: Martin Eisenacher and Christian Stephan. Copyright © 2013. Published by Elsevier B.V.
Shevchenko, Anna; Yang, Yimin; Knaust, Andrea; Thomas, Henrik; Jiang, Hongen; Lu, Enguo; Wang, Changsui; Shevchenko, Andrej
2014-06-13
We report on the geLC-MS/MS proteomics analysis of cereals and cereal food excavated in the Subeixi cemetery (500-300 BC) in Xinjiang, China. Proteomics provided direct evidence that at Subeixi sourdough bread was made from barley and broomcorn millet by leavening with a renewable starter comprising baker's yeast and lactic acid bacteria. The baking recipe and flour composition indicated that barley and millet bread already belonged to the staple food in the first millennium BC and suggested the role of the Turpan basin as a major route for cultural communication between Western and Eastern Eurasia in antiquity. This article is part of a Special Issue entitled: Proteomics of non-model organisms. We demonstrate that organic residues of thousand-year-old foods unearthed by archeological excavations can be analyzed by geLC-MS/MS proteomics with good representation of protein source organisms and coverage of sequences of identified proteins. An in-depth look into the food proteome identifies the food type and its individual ingredients, reveals ancient food processing technologies, projects their social and economic impact and provides evidence of intercultural communication between ancient populations. Proteomics analysis of ancient organic residues is direct, quantitative and informative and therefore has the potential to develop into a valuable, generally applicable tool in archaeometry. Copyright © 2013. Published by Elsevier B.V.
Zhang, Yaoyang; Xu, Tao; Shan, Bing; Hart, Jonathan; Aslanian, Aaron; Han, Xuemei; Zong, Nobel; Li, Haomin; Choi, Howard; Wang, Dong; Acharya, Lipi; Du, Lisa; Vogt, Peter K; Ping, Peipei; Yates, John R
2015-11-03
Shotgun proteomics generates valuable information from large-scale and target protein characterizations, including protein expression, protein quantification, protein post-translational modifications (PTMs), protein localization, and protein-protein interactions. Typically, peptides derived from proteolytic digestion, rather than intact proteins, are analyzed by mass spectrometers because peptides are more readily separated, ionized and fragmented. The amino acid sequences of peptides can be interpreted by matching the observed tandem mass spectra to theoretical spectra derived from a protein sequence database. Identified peptides serve as surrogates for their proteins and are often used to establish what proteins were present in the original mixture and to quantify protein abundance. Two major issues exist for assigning peptides to their originating protein. The first issue is maintaining a desired false discovery rate (FDR) when comparing or combining multiple large datasets generated by shotgun analysis and the second issue is properly assigning peptides to proteins when homologous proteins are present in the database. Herein we demonstrate a new computational tool, ProteinInferencer, which can be used for protein inference with both small- or large-scale data sets to produce a well-controlled protein FDR. In addition, ProteinInferencer introduces confidence scoring for individual proteins, which makes protein identifications evaluable. This article is part of a Special Issue entitled: Computational Proteomics. Copyright © 2015. Published by Elsevier B.V.
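The parsimony aspect of assigning shared peptides to proteins can be sketched as a greedy set cover. This is a deliberately simplified stand-in; ProteinInferencer's actual algorithm additionally controls the protein-level FDR and assigns confidence scores.

```python
def infer_proteins(peptide_to_proteins):
    """Greedy parsimonious inference: pick a small set of proteins that
    together explain every identified peptide (Occam's razor)."""
    protein_to_peptides = {}
    for pep, prots in peptide_to_proteins.items():
        for prot in prots:
            protein_to_peptides.setdefault(prot, set()).add(pep)
    unexplained = set(peptide_to_proteins)
    selected = []
    while unexplained:
        # Protein covering the most still-unexplained peptides;
        # ties are broken alphabetically for determinism.
        best = min(protein_to_peptides,
                   key=lambda p: (-len(protein_to_peptides[p] & unexplained), p))
        selected.append(best)
        unexplained -= protein_to_peptides[best]
    return selected
```

With homologous proteins sharing peptides, this keeps the protein list minimal: a protein whose peptides are all explained by already-selected proteins is never added.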
Elucidating the fungal stress response by proteomics.
Kroll, Kristin; Pähtz, Vera; Kniemeyer, Olaf
2014-01-31
Fungal species need to cope with stress, both in the natural environment and during the interaction of human- or plant-pathogenic fungi with their host. Many regulatory circuits governing the fungal stress response have already been discovered. However, there are still large gaps in the knowledge concerning the changes of the proteome during adaptation to environmental stress conditions. With the application of proteomic methods, particularly 2D-gel-based and gel-free LC/MS-based methods, first insights into the composition and dynamic changes of the fungal stress proteome could be obtained. Here, we review the recent proteome data generated for filamentous fungi and yeasts. This article is part of a Special Issue entitled: Trends in Microbial Proteomics. Copyright © 2013 Elsevier B.V. All rights reserved.
Translational plant proteomics: a perspective.
Agrawal, Ganesh Kumar; Pedreschi, Romina; Barkla, Bronwyn J; Bindschedler, Laurence Veronique; Cramer, Rainer; Sarkar, Abhijit; Renaut, Jenny; Job, Dominique; Rakwal, Randeep
2012-08-03
Translational proteomics is an emerging sub-discipline of the proteomics field in the biological sciences. Translational plant proteomics aims to integrate knowledge from basic sciences to translate it into field applications to solve issues related but not limited to the recreational and economic values of plants, food security and safety, and energy sustainability. In this review, we highlight the substantial progress reached in plant proteomics during the past decade which has paved the way for translational plant proteomics. Increasing proteomics knowledge in plants is not limited to model and non-model plants, proteogenomics, crop improvement, and food analysis, safety, and nutrition but to many more potential applications. Given the wealth of information generated and to some extent applied, there is the need for more efficient and broader channels to freely disseminate the information to the scientific community. This article is part of a Special Issue entitled: Translational Proteomics. Copyright © 2012 Elsevier B.V. All rights reserved.
Proteomics research in India: an update.
Reddy, Panga Jaipal; Atak, Apurva; Ghantasala, Saicharan; Kumar, Saurabh; Gupta, Shabarni; Prasad, T S Keshava; Zingde, Surekha M; Srivastava, Sanjeeva
2015-09-08
After the successful completion of the Human Genome Project, deciphering the mystery surrounding the human proteome posed a major challenge. Despite not being largely involved in the Human Genome Project, the Indian scientific community contributed towards proteomic research along with the global community. Currently, more than 76 research/academic institutes and nearly 145 research labs are involved in core proteomic research across India. Indian researchers have been major contributors in drafting the "human proteome map" along with international efforts. In addition, virtual proteomics labs, proteomics courses and remote-triggered proteomics labs have helped to overcome the limitations of proteomics education posed by expensive lab infrastructure. The establishment of the Proteomics Society, India (PSI) has created a platform for Indian proteomic researchers to share ideas, forge research collaborations and conduct annual conferences and workshops. Indian proteomic research is moving forward with the global proteomics community in a quest to solve the mysteries of proteomics. A draft map of the human proteome enhances the enthusiasm among intellectuals to promote proteomic research in India to the world. This article is part of a Special Issue entitled: Proteomics in India. Copyright © 2015 Elsevier B.V. All rights reserved.
An extensive library of surrogate peptides for all human proteins.
Mohammed, Yassene; Borchers, Christoph H
2015-11-03
Selecting the most appropriate surrogate peptides to represent a target protein is a major component of experimental design in Multiple Reaction Monitoring (MRM). Our software PeptidePicker, with its v-score, remains distinctive in its approach of integrating information about the proteins, their tryptic peptides, and the suitability of these peptides for MRM that is available online in UniProtKB, NCBI's dbSNP, ExPASy, PeptideAtlas, PRIDE, and GPMDB. The scoring algorithm reflects our "best knowledge" for selecting candidate peptides for MRM, based on the uniqueness of the peptide in the targeted proteome, its physicochemical properties, and whether it has previously been observed. Here we present an updated approach in which we have already compiled a list of all possible surrogate peptides of the human proteome. Using our stringent selection criteria, the list includes 165k suitable MRM peptides, covering 17k of the reviewed human proteins in UniProtKB. Compared with an average of 2-4 min per protein for retrieving and integrating the information, the precompiled list makes all peptides available instantly. This allows a more cohesive and faster design of a multiplexed MRM experiment and provides insights into the evidence for a protein's existence. We will keep this list up to date as proteomics data repositories continue to grow. This article is part of a Special Issue entitled: Computational Proteomics. Copyright © 2015 Elsevier B.V. All rights reserved.
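The first step of such a pipeline, enumerating candidate tryptic peptides and discarding obviously unsuitable ones, can be sketched as follows. The cleavage rule and the filter are simplified: the real v-score combines uniqueness, physicochemical properties, and prior observation, none of which are modeled here.

```python
import re

def tryptic_peptides(sequence, min_len=7, max_len=25):
    """In-silico tryptic digest: cleave after K or R unless followed by P,
    with no missed cleavages (a common simplification)."""
    peptides = re.split(r'(?<=[KR])(?!P)', sequence)
    return [p for p in peptides if min_len <= len(p) <= max_len]

def looks_mrm_suitable(peptide):
    """Toy suitability filter: avoid Met (oxidation-prone) and Cys
    (modification-prone) residues."""
    return not any(res in peptide for res in "MC")
```

A real selection would then score the surviving peptides against proteome-wide uniqueness and repository evidence, which is exactly the information the precompiled list bakes in.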
A novel spectral library workflow to enhance protein identifications.
Li, Haomin; Zong, Nobel C; Liang, Xiangbo; Kim, Allen K; Choi, Jeong Ho; Deng, Ning; Zelaya, Ivette; Lam, Maggie; Duan, Huilong; Ping, Peipei
2013-04-09
The innovations in mass spectrometry-based investigations in proteome biology enable systematic characterization of molecular details in pathophysiological phenotypes. However, the process of delineating large-scale raw proteomic datasets into a biological context requires high-throughput data acquisition and processing. A spectral library search engine makes use of previously annotated experimental spectra as references for subsequent spectral analyses. This workflow delivers many advantages, including elevated analytical efficiency and specificity as well as reduced demands in computational capacity. In this study, we created a spectral matching engine to address challenges commonly associated with a library search workflow. Particularly, an improved sliding dot product algorithm, that is robust to systematic drifts of mass measurement in spectra, is introduced. Furthermore, a noise management protocol distinguishes spectra correlation attributed from noise and peptide fragments. It enables elevated separation between target spectral matches and false matches, thereby suppressing the possibility of propagating inaccurate peptide annotations from library spectra to query spectra. Moreover, preservation of original spectra also accommodates user contributions to further enhance the quality of the library. Collectively, this search engine supports reproducible data analyses using curated references, thereby broadening the accessibility of proteomics resources to biomedical investigators. This article is part of a Special Issue entitled: From protein structures to clinical applications. Copyright © 2013 Elsevier B.V. All rights reserved.
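The drift-tolerant spectrum comparison described above can be illustrated on binned intensity vectors. This is a simplified version of the idea: the published engine additionally handles noise management and works on real m/z-binned, intensity-weighted spectra.

```python
import math

def _normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v] if n else list(v)

def sliding_dot_product(query, library, max_shift=2):
    """Best normalized dot product over small integer bin shifts, so a
    systematic mass-measurement drift between query and library spectra
    does not destroy the match."""
    q, lib = _normalize(query), _normalize(library)
    best = 0.0
    for shift in range(-max_shift, max_shift + 1):
        score = sum(q[i] * lib[i + shift]
                    for i in range(len(q)) if 0 <= i + shift < len(lib))
        best = max(best, score)
    return best
```

A perfectly matching spectrum scores 1.0 even when shifted by a bin or two, while an unrelated spectrum stays near 0.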
A practical data processing workflow for multi-OMICS projects.
Kohl, Michael; Megger, Dominik A; Trippler, Martin; Meckel, Hagen; Ahrens, Maike; Bracht, Thilo; Weber, Frank; Hoffmann, Andreas-Claudius; Baba, Hideo A; Sitek, Barbara; Schlaak, Jörg F; Meyer, Helmut E; Stephan, Christian; Eisenacher, Martin
2014-01-01
Multi-OMICS approaches aim at integrating quantitative data obtained for different biological molecules in order to understand their interrelation and the functioning of larger systems. This paper deals with several data integration and data processing issues that frequently occur in this context. To this end, the data processing workflow within the PROFILE project is presented, a multi-OMICS project that aims at the identification of novel biomarkers and the development of new therapeutic targets for seven important liver diseases. Furthermore, a software tool called CrossPlatformCommander is sketched, which facilitates several steps of the proposed workflow in a semi-automatic manner. Application of the software is presented for the detection of novel biomarkers, their ranking and annotation with existing knowledge, using the example of corresponding Transcriptomics and Proteomics data sets obtained from patients suffering from hepatocellular carcinoma. Additionally, a linear regression analysis of Transcriptomics vs. Proteomics data is presented and its performance assessed. It was shown that, for capturing profound relations between Transcriptomics and Proteomics data, a simple linear regression analysis is not sufficient, and implementation and evaluation of alternative statistical approaches are needed. Additionally, the integration of multivariate variable selection and classification approaches is intended for further development of the software. Although this paper focuses only on the combination of data obtained from quantitative Proteomics and Transcriptomics experiments, several approaches and data integration steps are also applicable to other OMICS technologies. Keeping specific restrictions in mind, the suggested workflow (or at least parts of it) may be used as a template for similar projects that make use of different high-throughput techniques.
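A minimal ordinary-least-squares fit of the kind used for such a Transcriptomics vs. Proteomics comparison (generic OLS with an R² goodness-of-fit value, not the project's actual implementation):

```python
def linear_regression(x, y):
    """Fit y = a*x + b by ordinary least squares and return (a, b, R^2),
    e.g. with x = transcript abundances and y = protein abundances."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    a = sxy / sxx
    b = my - a * mx
    ss_res = sum((yi - (a * xi + b)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    r2 = 1 - ss_res / ss_tot if ss_tot else 1.0
    return a, b, r2
```

A low R² across many genes is exactly the outcome the authors report: transcript and protein levels are not related by a simple linear model, motivating the alternative statistical approaches they propose.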
This article is part of a Special Issue entitled: Computational Proteomics in the Post-Identification Era. Guest Editors: Martin Eisenacher and Christian Stephan. Copyright © 2013 Elsevier B.V. All rights reserved.
Ndimba, Bongani Kaiser; Ndimba, Roya Janeen; Johnson, T Sudhakar; Waditee-Sirisattha, Rungaroon; Baba, Masato; Sirisattha, Sophon; Shiraiwa, Yoshihiro; Agrawal, Ganesh Kumar; Rakwal, Randeep
2013-11-20
Sustainable energy is the need of the 21st century, not only because of numerous environmental and political reasons but because it is necessary to human civilization's energy future. Sustainable energy is loosely grouped into the renewable energy, energy conservation, and sustainable transport disciplines. In this review, we deal with the renewable energy aspect, focusing on biomass, from bioenergy crops to microalgae, for producing biofuels, and on the utilization of high-throughput omics technologies, in particular proteomics, in advancing our understanding and increasing biofuel production. We look at biofuel production from plant- and algal-based sources, and the role proteomics has played therein. This article is part of a Special Issue entitled: Translational Plant Proteomics. Copyright © 2013 Elsevier B.V. All rights reserved.
Current advances in esophageal cancer proteomics.
Uemura, Norihisa; Kondo, Tadashi
2015-06-01
We review the current status of proteomics for esophageal cancer (EC) from a clinician's viewpoint. The ultimate goal of cancer proteomics is the improvement of clinical outcome. The proteome as a functional translation of the genome is a straightforward representation of genomic mechanisms that trigger carcinogenesis. Cancer proteomics has identified the mechanisms of carcinogenesis and tumor progression, detected biomarker candidates for early diagnosis, and provided novel therapeutic targets for personalized treatments. Our review focuses on three major topics in EC proteomics: diagnostics, treatment, and molecular mechanisms. We discuss the major histological differences between EC types, i.e., esophageal squamous cell carcinoma and adenocarcinoma, and evaluate the clinical significance of published proteomics studies, including promising diagnostic biomarkers and novel therapeutic targets, which should be further validated prior to launching clinical trials. Multi-disciplinary collaborations between basic scientists, clinicians, and pathologists should be established for inter-institutional validation. In conclusion, EC proteomics has provided significant results, which after thorough validation, should lead to the development of novel clinical tools and improvement of the clinical outcome for esophageal cancer patients. This article is part of a Special Issue entitled: Medical Proteomics. Copyright © 2014 Elsevier B.V. All rights reserved.
Processing Shotgun Proteomics Data on the Amazon Cloud with the Trans-Proteomic Pipeline*
Slagel, Joseph; Mendoza, Luis; Shteynberg, David; Deutsch, Eric W.; Moritz, Robert L.
2015-01-01
Cloud computing, where scalable, on-demand compute cycles and storage are available as a service, has the potential to accelerate mass spectrometry-based proteomics research by providing simple, expandable, and affordable large-scale computing to all laboratories regardless of location or information technology expertise. We present new cloud computing functionality for the Trans-Proteomic Pipeline, a free and open-source suite of tools for the processing and analysis of tandem mass spectrometry datasets. Enabled with Amazon Web Services cloud computing, the Trans-Proteomic Pipeline now accesses large-scale computing resources, limited only by the available Amazon Web Services infrastructure, for all users. The Trans-Proteomic Pipeline runs in an environment fully hosted on Amazon Web Services, where all software and data reside on cloud resources to tackle large search studies. In addition, it can also be run on a local computer with computationally intensive tasks launched onto the Amazon Elastic Compute Cloud service to greatly decrease analysis times. We describe the new Trans-Proteomic Pipeline cloud service components, compare the relative performance and costs of various Elastic Compute Cloud service instance types, and present on-line tutorials that enable users to learn how to deploy cloud computing technology rapidly with the Trans-Proteomic Pipeline. We provide tools for estimating the necessary computing resources and costs given the scale of a job and demonstrate the use of the cloud-enabled Trans-Proteomic Pipeline by processing over 1100 tandem mass spectrometry files through four proteomic search engines in 9 h and at a very low cost. PMID:25418363
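The kind of back-of-envelope resource estimate such tools support can be sketched as follows (the file counts, per-file runtime, and hourly price below are placeholders for illustration, not actual AWS figures or the article's measured costs):

```python
def estimate_cloud_run(n_files, minutes_per_file, n_instances, price_per_hour):
    """Estimate wall-clock hours and total cost for a search run spread
    evenly over n_instances identical cloud instances, each billed for
    the wall-clock duration of the run."""
    total_minutes = n_files * minutes_per_file
    wall_hours = total_minutes / 60.0 / n_instances
    cost = wall_hours * n_instances * price_per_hour
    return wall_hours, cost
```

Because the fleet is billed per instance-hour, doubling the instance count roughly halves the wall-clock time at constant total cost, which is why large searches parallelize so cheaply.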
Top-down proteomics for the analysis of proteolytic events - Methods, applications and perspectives.
Tholey, Andreas; Becker, Alexander
2017-11-01
Mass spectrometry-based proteomics is an indispensable tool for almost all research areas relevant to the understanding of proteolytic processing, ranging from the identification of substrates, products and cleavage sites up to the analysis of structural features influencing protease activity. The majority of methods for these studies are based on bottom-up proteomics, performing analysis at the peptide level. As this approach is characterized by a number of pitfalls, e.g. loss of molecular information, there is an ongoing effort to establish top-down proteomics, performing both separation and MS analysis at the intact protein level. We briefly introduce the major approaches of bottom-up proteomics used in the field of protease research and highlight the shortcomings of these methods. We then discuss the present state of the art of top-down proteomics. Together with a discussion of known challenges, we show the potential of this approach and present a number of successful applications of top-down proteomics in protease research. This article is part of a Special Issue entitled: Proteolysis as a Regulatory Event in Pathophysiology edited by Stefan Rose-John. Copyright © 2017 Elsevier B.V. All rights reserved.
Bowler, Russell P; Wendt, Chris H; Fessler, Michael B; Foster, Matthew W; Kelly, Rachel S; Lasky-Su, Jessica; Rogers, Angela J; Stringer, Kathleen A; Winston, Brent W
2017-12-01
This document presents the proceedings from the workshop entitled "New Strategies and Challenges in Lung Proteomics and Metabolomics" held February 4th-5th, 2016, in Denver, Colorado. It was sponsored by the National Heart, Lung, and Blood Institute, the American Thoracic Society, the Colorado Biological Mass Spectrometry Society, and National Jewish Health. The goal of this workshop was to convene, for the first time, relevant experts in lung proteomics and metabolomics to discuss and overcome specific challenges in these fields that are unique to the lung. The main objectives of this workshop were to identify, review, and/or understand: (1) emerging technologies in metabolomics and proteomics as applied to the study of the lung; (2) the unique composition and challenges of lung-specific biological specimens for metabolomic and proteomic analysis; (3) the diverse informatics approaches and databases unique to metabolomics and proteomics, with special emphasis on the lung; (4) integrative platforms across genetic and genomic databases that can be applied to lung-related metabolomic and proteomic studies; and (5) the clinical applications of proteomics and metabolomics. The major findings and conclusions of this workshop are summarized at the end of the report, and outline the progress and challenges that face these rapidly advancing fields.
Birth of plant proteomics in India: a new horizon.
Narula, Kanika; Pandey, Aarti; Gayali, Saurabh; Chakraborty, Niranjan; Chakraborty, Subhra
2015-09-08
In the post-genomic era, proteomics is acknowledged as the next frontier for biological research. Although India has a long and distinguished tradition in protein research, the initiation of proteomics studies was a new horizon. Protein research witnessed enormous progress in protein separation, high-resolution refinements, biochemical identification of the proteins, protein-protein interaction, and structure-function analysis. Plant proteomics research in India began its journey with investigation of proteome profiling, complexity analysis, protein trafficking, and biochemical modeling. The research article by Bhushan et al. in 2006 marked the birth of plant proteomics research in India. Since then, plant proteomics studies have expanded progressively and are now carried out in various institutions spread across the country. The compilation presented here seeks to trace the history of development in the area during the past decade based on publications to date. In this review, we emphasize outcomes of the field, providing prospects on proteomic pathway analyses. Finally, we discuss the connotation of strategies and the potential that would provide the framework of plant proteome research. The past decades have seen a rapidly growing number of sequenced plant genomes and associated genomic resources. To keep pace with this increasing body of data, India is in the provisional phase of proteomics research to develop a comparative hub for plant proteomes and protein families, but it requires a strong impetus from intellectuals, entrepreneurs, and government agencies. Here, we aim to provide an overview of the past, present and future of Indian plant proteomics, which would serve as an evaluation platform for those seeking to incorporate proteomics into their research programs. This article is part of a Special Issue entitled: Proteomics in India. Copyright © 2015 Elsevier B.V. All rights reserved.
Halligan, Brian D.; Geiger, Joey F.; Vallejos, Andrew K.; Greene, Andrew S.; Twigger, Simon N.
2009-01-01
One of the major difficulties for many laboratories setting up proteomics programs has been obtaining and maintaining the computational infrastructure required for the analysis of the large flow of proteomics data. We describe a system that combines distributed cloud computing and open source software to allow laboratories to set up scalable virtual proteomics analysis clusters without the investment in computational hardware or software licensing fees. Additionally, the pricing structure of distributed computing providers, such as Amazon Web Services, allows laboratories or even individuals to have large-scale computational resources at their disposal at a very low cost per run. We provide detailed step-by-step instructions on how to implement the virtual proteomics analysis clusters as well as a list of currently available preconfigured Amazon machine images containing the OMSSA and X!Tandem search algorithms and sequence databases on the Medical College of Wisconsin Proteomics Center website (http://proteomics.mcw.edu/vipdac). PMID:19358578
Rodrigues, Marcio L; Nakayasu, Ernesto S; Almeida, Igor C; Nimrichter, Leonardo
2014-01-31
Several microbial molecules are released to the extracellular space in vesicle-like structures. In pathogenic fungi, these molecules include pigments, polysaccharides, lipids, and proteins, which traverse the cell wall in vesicles that accumulate in the extracellular space. The diverse composition of fungal extracellular vesicles (EV) is indicative of multiple mechanisms of cellular biogenesis, a hypothesis that was supported by EV proteomic studies in a set of Saccharomyces cerevisiae strains with defects in both conventional and unconventional secretory pathways. In the human pathogens Cryptococcus neoformans, Histoplasma capsulatum, and Paracoccidioides brasiliensis, extracellular vesicle proteomics revealed the presence of proteins with both immunological and pathogenic activities. In fact, fungal EV have been demonstrated to interfere with the activity of immune effector cells and to increase fungal pathogenesis. In this review, we discuss the impact of proteomics on the understanding of functions and biogenesis of fungal EV, as well as the potential role of these structures in fungal pathogenesis. This article is part of a Special Issue entitled: Trends in Microbial Proteomics. Copyright © 2013 Elsevier B.V. All rights reserved.
Proteomics boosts translational and clinical microbiology.
Del Chierico, F; Petrucca, A; Vernocchi, P; Bracaglia, G; Fiscarelli, E; Bernaschi, P; Muraca, M; Urbani, A; Putignani, L
2014-01-31
The application of proteomics to translational and clinical microbiology is one of the most advanced frontiers in the management and control of infectious diseases and in the understanding of complex microbial systems within human fluids and districts. This new approach aims at providing, by dedicated bioinformatic pipelines, a thorough description of pathogen proteomes and their interactions within the context of human host ecosystems, revolutionizing the vision of infectious diseases in biomedicine and approaching new viewpoints in both diagnostic and clinical management of the patient. Indeed, in the last few years, many laboratories have developed a series of advanced proteomic applications, aiming at providing individual proteome charts of pathogens, with respect to their morph and/or cell life stages, antimicrobial or antimycotic resistance profiling, and epidemiological dispersion. Herein, we aim at reviewing the current state-of-the-art on proteomic protocols designed and set up for translational and diagnostic microbiological purposes, from axenic pathogens' characterization to microbiota ecosystems' full description. The final goal is to describe applications of the most common MALDI-TOF MS platforms to advanced diagnostic issues related to emerging infections, increasing of fastidious bacteria, and generation of patient-tailored phylotypes. This article is part of a Special Issue entitled: Trends in Microbial Proteomics. © 2013. Published by Elsevier B.V. All rights reserved.
Diagonal chromatography to study plant protein modifications.
Walton, Alan; Tsiatsiani, Liana; Jacques, Silke; Stes, Elisabeth; Messens, Joris; Van Breusegem, Frank; Goormachtig, Sofie; Gevaert, Kris
2016-08-01
An interesting asset of diagonal chromatography, which we have introduced for contemporary proteome research, is its high versatility concerning proteomic applications. Indeed, the peptide modification or sorting step that is required between consecutive peptide separations can easily be altered and thereby allows for the enrichment of specific, though different types of peptides. Here, we focus on the application of diagonal chromatography for the study of modifications of plant proteins. In particular, we show how diagonal chromatography allows for studying proteins processed by proteases, protein ubiquitination, and the oxidation of protein-bound methionines. We discuss the actual sorting steps needed for each of these applications and the obtained results. This article is part of a Special Issue entitled: Plant Proteomics--a bridge between fundamental processes and crop production, edited by Dr. Hans-Peter Mock. Copyright © 2016 Elsevier B.V. All rights reserved.
Redox proteomics for the assessment of redox-related posttranslational regulation in plants.
Mock, Hans-Peter; Dietz, Karl-Josef
2016-08-01
The methodological developments of in vivo and in vitro protein labeling and subsequent detection enable sensitive and specific detection of redox modifications. Such methods are presently applied to diverse cells and tissues, subproteomes, and developmental as well as environmental conditions. The chloroplast proteome is particularly suitable for such studies, because redox regulation of chloroplast proteins is well established, many plastid proteins are abundant, redox network components have been inventoried in great depth, and functional consequences explored. Thus the repertoire of redox-related posttranslational modifications on the one hand, and their abundance on the other, pose a near-term challenge for understanding their contribution to physiological regulation. The various posttranslational redox modifications are introduced, followed by a description of the available proteomics methods. The significance of redox-related posttranslational modifications is illustrated using established examples from photosynthesis. This article is part of a Special Issue entitled: Plant Proteomics--a bridge between fundamental processes and crop production, edited by Dr. Hans-Peter Mock. Copyright © 2016. Published by Elsevier B.V.
Computational clustering for viral reference proteomes
Chen, Chuming; Huang, Hongzhan; Mazumder, Raja; Natale, Darren A.; McGarvey, Peter B.; Zhang, Jian; Polson, Shawn W.; Wang, Yuqi; Wu, Cathy H.
2016-01-01
Motivation: The enormous number of redundant sequenced genomes has hindered efforts to analyze and functionally annotate proteins. As the taxonomy of viruses is not uniformly defined, viral proteomes pose special challenges in this regard. Grouping viruses based on the similarity of their proteins at proteome scale can normalize against potential taxonomic nomenclature anomalies. Results: We present Viral Reference Proteomes (Viral RPs), which are computed from complete virus proteomes within UniProtKB. Viral RPs based on 95, 75, 55, 35 and 15% co-membership in proteome-similarity-based clusters are provided. Comparison of our computational Viral RPs with UniProt’s curator-selected Reference Proteomes indicates that the two sets are consistent and complementary. Furthermore, each Viral RP represents a cluster of virus proteomes that was consistent with virus or host taxonomy. We provide BLASTP search and FTP download of Viral RP protein sequences, and a browser to facilitate the visualization of Viral RPs. Availability and implementation: http://proteininformationresource.org/rps/viruses/ Contact: chenc@udel.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27153712
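Threshold-based co-membership clustering of the kind described can be illustrated with a minimal single-linkage sketch. This is illustrative only, not the actual Viral RP pipeline, which computes proteome-scale protein similarity within UniProtKB; the similarity values and proteome names are invented:

```python
# Illustrative single-linkage clustering via union-find: proteomes joined
# whenever their pairwise similarity meets the threshold. Not the Viral RP
# implementation; similarity scores and names are invented.
def cluster(similarity, names, threshold):
    parent = {n: n for n in names}

    def find(x):
        # Path-halving find for the union-find structure.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for (a, b), s in similarity.items():
        if s >= threshold:
            parent[find(a)] = find(b)  # union the two clusters

    groups = {}
    for n in names:
        groups.setdefault(find(n), set()).add(n)
    return sorted(map(sorted, groups.values()))

sim = {("v1", "v2"): 0.97, ("v2", "v3"): 0.60, ("v3", "v4"): 0.20}
print(cluster(sim, ["v1", "v2", "v3", "v4"], 0.95))
print(cluster(sim, ["v1", "v2", "v3", "v4"], 0.55))
```

Lowering the threshold (as in the 95% down to 15% tiers above) merges clusters into progressively coarser groupings.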
Plant fluid proteomics: Delving into the xylem sap, phloem sap and apoplastic fluid proteomes.
Rodríguez-Celma, Jorge; Ceballos-Laita, Laura; Grusak, Michael A; Abadía, Javier; López-Millán, Ana-Flor
2016-08-01
The phloem sap, xylem sap and apoplastic fluid play key roles in long and short distance transport of signals and nutrients, and act as a barrier against local and systemic pathogen infection. Among other components, these plant fluids contain proteins which are likely to be important players in their functionalities. However, detailed information about their proteomes is only starting to arise due to the difficulties inherent to the collection methods. This review compiles the proteomic information available to date in these three plant fluids, and compares the proteomes obtained in different plant species in order to shed light into conserved functions in each plant fluid. Inter-species comparisons indicate that all these fluids contain the protein machinery for self-maintenance and defense, including proteins related to cell wall metabolism, pathogen defense, proteolysis, and redox response. These analyses also revealed that proteins may play more relevant roles in signaling in the phloem sap and apoplastic fluid than in the xylem sap. A comparison of the proteomes of the three fluids indicates that although functional categories are somewhat similar, proteins involved are likely to be fluid-specific, except for a small group of proteins present in the three fluids, which may have a universal role, especially in cell wall maintenance and defense. This article is part of a Special Issue entitled: Plant Proteomics--a bridge between fundamental processes and crop production, edited by Dr. Hans-Peter Mock. Copyright © 2016 Elsevier B.V. All rights reserved.
Proteomic insights into floral biology.
Li, Xiaobai; Jackson, Aaron; Xie, Ming; Wu, Dianxing; Tsai, Wen-Chieh; Zhang, Sheng
2016-08-01
The flower is the most important biological structure for ensuring angiosperms' reproductive success. Not only does the flower contain critical reproductive organs, but the wide variation in morphology, color, and scent has evolved to entice specialized pollinators, and arguably mankind in many cases, to ensure the successful propagation of its species. Recent proteomic approaches have identified protein candidates related to these flower traits, which has shed light on a number of previously unknown mechanisms underlying these traits. This review article provides a comprehensive overview of the latest advances in proteomic research in floral biology according to the order of flower structure, from corolla to male and female reproductive organs. It summarizes mainstream proteomic methods for plant research and recent improvements to two-dimensional gel electrophoresis and gel-free workflows for both peptide-level and protein-level analysis. The recent advances in sequencing technologies provide a new paradigm for the ever-increasing genome and transcriptome information on many organisms. It is now possible to integrate genomic and transcriptomic data with proteomic results for large-scale protein characterization, so that a global understanding of the complex molecular networks in flower biology can be readily achieved. This article is part of a Special Issue entitled: Plant Proteomics--a bridge between fundamental processes and crop production, edited by Dr. Hans-Peter Mock. Copyright © 2016 Elsevier B.V. All rights reserved.
Proteomic-based comparison between populations of the Great Scallop, Pecten maximus.
Artigaud, Sébastien; Lavaud, Romain; Thébault, Julien; Jean, Fred; Strand, Oivind; Strohmeier, Tore; Milan, Massimo; Pichereau, Vianney
2014-06-13
Comparing populations residing in contrasting environments is an efficient way to decipher how organisms modulate their physiology. Here we present the proteomic signatures of two populations in a non-model marine species, the great scallop Pecten maximus, living in the northern part (Hordaland, Norway) and in the center (Brest, France) of this species' latitudinal distribution range. The results showed 38 protein spots significantly differentially accumulated in mantle tissues between the two populations. We could unambiguously identify 11 of the protein spots by MALDI-TOF/TOF mass spectrometry. Eight proteins corresponded to different isoforms of actin, two were identified as filamin, another protein related to the cytoskeleton structure, and one was the protease elastase. Our results suggest that scallops from the two populations assayed may modulate their cytoskeleton structures through regulation of intracellular pools of actin and filamin isoforms to better adapt to their environment. Marine mollusks are non-model organisms that have been poorly studied at the proteomic level, and this article is the first studying the great scallop (P. maximus) at this level. Furthermore, it addresses population proteomics, a promising new field, especially in environmental sciences. This article is part of a Special Issue entitled: Proteomics of non-model organisms. Copyright © 2014 Elsevier B.V. All rights reserved.
A community proposal to integrate proteomics activities in ELIXIR.
Vizcaíno, Juan Antonio; Walzer, Mathias; Jiménez, Rafael C; Bittremieux, Wout; Bouyssié, David; Carapito, Christine; Corrales, Fernando; Ferro, Myriam; Heck, Albert J R; Horvatovich, Peter; Hubalek, Martin; Lane, Lydie; Laukens, Kris; Levander, Fredrik; Lisacek, Frederique; Novak, Petr; Palmblad, Magnus; Piovesan, Damiano; Pühler, Alfred; Schwämmle, Veit; Valkenborg, Dirk; van Rijswijk, Merlijn; Vondrasek, Jiri; Eisenacher, Martin; Martens, Lennart; Kohlbacher, Oliver
2017-01-01
Computational approaches have been major drivers behind the progress of proteomics in recent years. The aim of this white paper is to provide a framework for integrating computational proteomics into ELIXIR in the near future, and thus to broaden the portfolio of omics technologies supported by this European distributed infrastructure. This white paper is the direct result of a strategy meeting on 'The Future of Proteomics in ELIXIR' that took place in March 2017 in Tübingen (Germany), and involved representatives of eleven ELIXIR nodes. These discussions led to a list of priority areas in computational proteomics that would complement existing activities and close gaps in the portfolio of tools and services offered by ELIXIR so far. We provide some suggestions on how these activities could be integrated into ELIXIR's existing platforms, and how it could lead to a new ELIXIR use case in proteomics. We also highlight connections to the related field of metabolomics, where similar activities are ongoing. This white paper could thus serve as a starting point for the integration of computational proteomics into ELIXIR. Over the next few months we will be working closely with all stakeholders involved, and in particular with other representatives of the proteomics community, to further refine this paper.
Chi, Hao; He, Kun; Yang, Bing; Chen, Zhen; Sun, Rui-Xiang; Fan, Sheng-Bo; Zhang, Kun; Liu, Chao; Yuan, Zuo-Fei; Wang, Quan-Hui; Liu, Si-Qi; Dong, Meng-Qiu; He, Si-Min
2015-11-03
Database search is the dominant approach in high-throughput proteomic analysis. However, the interpretation rate of MS/MS spectra is very low in such a restricted mode, which is mainly due to unexpected modifications and irregular digestion types. In this study, we developed a new algorithm called Alioth, to be integrated into the search engine of pFind, for fast and accurate unrestricted database search on high-resolution MS/MS data. An ion index is constructed for both peptide precursors and fragment ions, by which arbitrary digestions and any single-site modification or mutation can be searched efficiently. A new re-ranking algorithm is used to distinguish the correct peptide-spectrum matches from random ones. The algorithm is tested on several HCD datasets, and the interpretation rate of MS/MS spectra using Alioth is as high as 60%-80%. Peptides from semi- and non-specific digestions, as well as those with unexpected modifications or mutations, can be effectively identified using Alioth and confidently validated using other search engines. The average processing speed of Alioth is 5-10 times faster than some other unrestricted search engines and is comparable to or even faster than the restricted search algorithms tested. This article is part of a Special Issue entitled: Computational Proteomics. Copyright © 2015 Elsevier B.V. All rights reserved.
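The fragment-ion index idea can be sketched as an inverted index from binned fragment m/z values to the peptides producing them, so that a spectrum's peaks retrieve candidate peptides quickly. This is a hedged illustration, not pFind's or Alioth's implementation; the bin width, masses, and `min_hits` cutoff are invented:

```python
# Sketch of a fragment-ion index: theoretical fragment m/z values are binned,
# and each bin maps to the peptides that produce a fragment in it. Bin width,
# masses, and the min_hits cutoff are illustrative assumptions only.
from collections import defaultdict

BIN_WIDTH = 0.02  # Da; an invented bin size for high-resolution data

def build_index(peptide_fragments):
    index = defaultdict(set)
    for pep, frags in peptide_fragments.items():
        for mz in frags:
            index[round(mz / BIN_WIDTH)].add(pep)
    return index

def candidates(index, peaks, min_hits=2):
    # Count, per peptide, how many observed peaks land in an indexed bin.
    hits = defaultdict(int)
    for mz in peaks:
        for pep in index.get(round(mz / BIN_WIDTH), ()):
            hits[pep] += 1
    return {p for p, n in hits.items() if n >= min_hits}

idx = build_index({"PEPTIDE": [175.119, 322.187, 435.271],
                   "PROTEIN": [175.119, 263.088]})
print(candidates(idx, [175.118, 322.186, 435.272]))
```

A real engine additionally indexes precursor masses and re-ranks candidates by a scoring function, as the abstract describes; this sketch only shows why the lookup itself is fast.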
Ceballos-Laita, Laura; Gutierrez-Carbonell, Elain; Takahashi, Daisuke; Abadía, Anunciación; Uemura, Matsuo; Abadía, Javier; López-Millán, Ana Flor
2018-04-01
This article contains consolidated proteomic data obtained from xylem sap collected from tomato plants grown under Fe- and Mn-sufficient (control), Fe-deficient, and Mn-deficient conditions. Data presented here cover proteins identified and quantified by shotgun proteomics and Progenesis LC-MS analyses: proteins identified with at least two peptides and showing statistically significant changes (ANOVA; p ≤ 0.05) above a biologically relevant threshold (fold ≥ 2) between treatments are listed. The comparison between Fe-deficient, Mn-deficient and control xylem sap samples using a multivariate statistical data analysis (Principal Component Analysis, PCA) is also included. Data included in this article are discussed in depth in the research article entitled "Effects of Fe and Mn deficiencies on the protein profiles of tomato (Solanum lycopersicum) xylem sap as revealed by shotgun analyses" [1]. This dataset is made available to support the cited study as well as to extend analyses at a later stage.
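The selection rule stated above (at least two peptides, ANOVA p ≤ 0.05, fold change ≥ 2) can be expressed as a short filter. This is a sketch with invented field names and example values, not the authors' Progenesis workflow:

```python
# Sketch of the stated selection rule: >= 2 peptides, ANOVA p <= 0.05,
# and a fold change of at least 2 in either direction. Field names and
# example intensities are invented for illustration.
def select_differential(proteins, p_cut=0.05, fold_cut=2.0, min_peptides=2):
    kept = []
    for prot in proteins:
        fold = prot["treated"] / prot["control"]
        # Consider both directions of change: increase or decrease >= fold_cut.
        ratio = max(fold, 1.0 / fold)
        if (prot["n_peptides"] >= min_peptides
                and prot["p_anova"] <= p_cut
                and ratio >= fold_cut):
            kept.append(prot["id"])
    return kept

data = [
    {"id": "P1", "n_peptides": 3, "p_anova": 0.01, "control": 1.0, "treated": 2.5},
    {"id": "P2", "n_peptides": 1, "p_anova": 0.01, "control": 1.0, "treated": 4.0},
    {"id": "P3", "n_peptides": 4, "p_anova": 0.20, "control": 1.0, "treated": 3.0},
    {"id": "P4", "n_peptides": 2, "p_anova": 0.04, "control": 2.0, "treated": 0.5},
]
print(select_differential(data))
```

Here P2 fails the peptide-count criterion and P3 the significance criterion, while P4 passes as a 4-fold decrease, mirroring how the thresholds combine.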
Tiberti, Natalia; Sanchez, Jean-Charles
2015-09-01
The quantitative proteomics data here reported are part of a research article entitled "Increased acute immune response during the meningo-encephalitic stage of Trypanosoma brucei rhodesiense sleeping sickness compared to Trypanosoma brucei gambiense", published by Tiberti et al., 2015. Transl. Proteomics 6, 1-9. Sleeping sickness (human African trypanosomiasis - HAT) is a deadly neglected tropical disease affecting mainly rural communities in sub-Saharan Africa. This parasitic disease is caused by the Trypanosoma brucei (T. b.) parasite, which is transmitted to the human host through the bite of the tsetse fly. Two parasite sub-species, T. b. rhodesiense and T. b. gambiense, are responsible for two clinically different and geographically separated forms of sleeping sickness. The objective of the present study was to characterise and compare the cerebrospinal fluid (CSF) proteome of stage 2 (meningo-encephalitic stage) HAT patients suffering from T. b. gambiense or T. b. rhodesiense disease using high-throughput quantitative proteomics and Tandem Mass Tag (TMT) isobaric labelling. In order to evaluate the CSF proteome in the context of HAT pathophysiology, the protein dataset was then submitted to gene ontology and pathway analysis. Two significantly differentially expressed proteins (C-reactive protein and orosomucoid 1) were further verified on a larger population of patients (n=185) by ELISA, confirming the mass spectrometry results. By showing a predominant involvement of the acute immune response in rhodesiense HAT, the proteomics results obtained in this work will contribute to further understanding the mechanisms of pathology occurring in HAT and to proposing new biomarkers of potential clinical utility. The mass spectrometry raw data are available in the PRIDE Archive via ProteomeXchange through the identifier PXD001082.
Lim, Sanghyun; Borza, Tudor; Peters, Rick D; Coffin, Robert H; Al-Mughrabi, Khalil I; Pinto, Devanand M; Wang-Pruski, Gefu
2013-11-20
Phosphite (salts of phosphorous acid; Phi)-based fungicides are increasingly used in controlling oomycete pathogens, such as the late blight agent Phytophthora infestans. In plants, low amounts of Phi induce pathogen resistance through an indirect mode of action. We used iTRAQ-based quantitative proteomics to investigate the effects of phosphite on potato plants before and after infection with P. infestans. Ninety-three differentially regulated proteins (62 up-regulated and 31 down-regulated), from a total of 1172 reproducibly identified proteins, were identified in the leaf proteome of Phi-treated potato plants. Four days post-inoculation with P. infestans, 16 of the 31 down-regulated proteins remained down-regulated and 42 of the 62 up-regulated proteins remained up-regulated, including 90% of the defense proteins. This group includes pathogenesis-related, stress-responsive, and detoxification-related proteins. Callose deposition and ultrastructural analyses of leaf tissues after infection were used to complement the proteomics approach. This study represents the first comprehensive proteomics analysis of the indirect mode of action of Phi, demonstrating broad effects on plant defense and plant metabolism. The proteomics data and the microscopy study suggest that Phi triggers a hypersensitive response that is responsible for induced resistance of potato leaves against P. infestans. Phosphite triggers complex functional changes in potato leaves that are responsible for the induced resistance against Phytophthora infestans. This article is part of a Special Issue entitled: Translational Plant Proteomics. Copyright © 2013 Elsevier B.V. All rights reserved.
Harden, Charlotte J; Perez-Carrion, Kristine; Babakordi, Zara; Plummer, Sue F; Hepburn, Natalie; Barker, Margo E; Wright, Phillip C; Evans, Caroline A; Corfe, Bernard M
2012-06-06
Current measurement of appetite depends upon tools that are either subjective (visual analogue scales) or invasive (blood). Saliva is increasingly recognised as a valuable resource for biomarker analysis, and proteomics workflows may provide alternative means for the assessment of appetitive response. The study aimed to assess the potential value of the salivary proteome for detecting novel biomarkers of appetite using an iTRAQ-based workflow. Diurnal variation of salivary protein concentrations was assessed. A randomised, controlled, crossover study examined the effects on the salivary proteome of isocaloric doses of various long-chain fatty acid (LCFA) oil emulsions compared to no treatment (NT). Fasted males provided saliva samples before and following NT or dosing with LCFA emulsions. The oil component of the DHA emulsion contained predominantly docosahexaenoic acid and the oil component of OA contained predominantly oleic acid. Several proteins were present in significantly (p<0.05) different quantities in saliva samples taken following treatments compared to fasting samples. DHA caused alterations in thioredoxin and serpin B4 relative to OA and NT. A further study evaluated energy intake (EI) in response to LCFA in conjunction with subjective appetite scoring. DHA was associated with significantly lower EI relative to NT and OA (p=0.039). The collective data suggest that investigation of the salivary proteome may be of value in assessing appetitive response. This article is part of a Special Issue entitled: Proteomics: The clinical link. Copyright © 2011 Elsevier B.V. All rights reserved.
Pasculescu, Adrian; Schoof, Erwin M; Creixell, Pau; Zheng, Yong; Olhovsky, Marina; Tian, Ruijun; So, Jonathan; Vanderlaan, Rachel D; Pawson, Tony; Linding, Rune; Colwill, Karen
2014-04-04
A major challenge in mass spectrometry and other large-scale applications is how to handle, integrate, and model the data that is produced. Given the speed at which technology advances and the need to keep pace with biological experiments, we designed a computational platform, CoreFlow, which provides programmers with a framework to manage data in real-time. It allows users to upload data into a relational database (MySQL), and to create custom scripts in high-level languages such as R, Python, or Perl for processing, correcting and modeling this data. CoreFlow organizes these scripts into project-specific pipelines, tracks interdependencies between related tasks, and enables the generation of summary reports as well as publication-quality images. As a result, the gap between experimental and computational components of a typical large-scale biology project is reduced, decreasing the time between data generation, analysis and manuscript writing. CoreFlow is being released to the scientific community as an open-sourced software package complete with proteomics-specific examples, which include corrections for incomplete isotopic labeling of peptides (SILAC) or arginine-to-proline conversion, and modeling of multiple/selected reaction monitoring (MRM/SRM) results. CoreFlow was purposely designed as an environment for programmers to rapidly perform data analysis. These analyses are assembled into project-specific workflows that are readily shared with biologists to guide the next stages of experimentation. Its simple yet powerful interface provides a structure where scripts can be written and tested virtually simultaneously to shorten the life cycle of code development for a particular task. The scripts are exposed at every step so that a user can quickly see the relationships between the data, the assumptions that have been made, and the manipulations that have been performed. 
Since the scripts use commonly available programming languages, they can easily be transferred to and from other computational environments for debugging or faster processing. This focus on 'on the fly' analysis sets CoreFlow apart from other workflow applications that require wrapping of scripts into particular formats and development of specific user interfaces. Importantly, current and future releases of data analysis scripts in CoreFlow format will be of widespread benefit to the proteomics community, not only for uptake and use in individual labs, but to enable full scrutiny of all analysis steps, thus increasing experimental reproducibility and decreasing errors. This article is part of a Special Issue entitled: Can Proteomics Fill the Gap Between Genomics and Phenotypes? Copyright © 2014 Elsevier B.V. All rights reserved.
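One of the proteomics-specific examples named above, the arginine-to-proline correction, can be sketched in a few lines. This is an illustrative sketch, not CoreFlow's actual code: the function names and the simple additive correction are assumptions. When heavy arginine is metabolically converted to heavy proline, part of a proline-containing peptide's heavy signal appears as a satellite peak, which must be added back before computing the SILAC ratio.

```python
def corrected_heavy(heavy, pro_satellite):
    """Return the heavy-channel intensity with the heavy-proline
    satellite signal added back (the satellite arises when heavy Arg
    is metabolically converted to heavy Pro)."""
    return heavy + pro_satellite

def silac_ratio(light, heavy, pro_satellite=0.0):
    """Heavy/light SILAC ratio after arginine-to-proline correction."""
    if light <= 0:
        raise ValueError("light-channel intensity must be positive")
    return corrected_heavy(heavy, pro_satellite) / light
```

Without the correction, a peptide whose heavy signal was split between the main peak (90) and the satellite (10) would report a ratio of 0.9 instead of the true 1.0.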
This week, we are excited to announce the launch of the National Cancer Institute’s Clinical Proteomic Tumor Analysis Consortium (CPTAC) Proteogenomics Computational DREAM Challenge. The aim of this Challenge is to encourage the generation of computational methods for extracting information from the cancer proteome and for linking those data to genomic and transcriptomic information. The specific goals are to predict proteomic and phosphoproteomic data from multiple other data types, including transcriptomics and genetics.
Code of Federal Regulations, 2010 CFR
2010-04-01
... simple annual interest, computed from the date on which the benefits were due. The interest shall be... payment of retroactive benefits, the beneficiary shall also be entitled to simple annual interest on such... entitled to simple annual interest computed from the date upon which the beneficiary's right to additional...
Transcriptome and proteomic analysis of mango (Mangifera indica Linn) fruits.
Wu, Hong-xia; Jia, Hui-min; Ma, Xiao-wei; Wang, Song-biao; Yao, Quan-sheng; Xu, Wen-tian; Zhou, Yi-gang; Gao, Zhong-shan; Zhan, Ru-lin
2014-06-13
Here we used Illumina RNA-seq technology for transcriptome sequencing of a mixed fruit sample from 'Zill' mango (Mangifera indica Linn) fruit pericarp and pulp during the development and ripening stages. RNA-seq generated 68,419,722 sequence reads that were assembled into 54,207 transcripts with a mean length of 858 bp, including 26,413 clusters and 27,794 singletons. A total of 42,515 (78.43%) transcripts were annotated using public protein databases with a cut-off E-value of 10⁻⁵, of which 35,198 and 14,619 transcripts were assigned to gene ontology terms and clusters of orthologous groups, respectively. Functional annotation against the Kyoto Encyclopedia of Genes and Genomes database identified 23,741 (43.79%) transcripts, which were mapped to 128 pathways. These pathways revealed many previously unknown transcripts. We also used the transcriptome data to support mass spectrometry-based characterization of the ripe fruit proteome. LC-MS/MS analysis of the mango fruit proteome was performed by tandem mass spectrometry (MS/MS) on an LTQ Orbitrap Velos (Thermo) coupled online to HPLC. This approach enabled the identification of 7536 peptides that matched 2754 proteins. Our study provides a comprehensive sequence resource for a systemic view of the transcriptome during mango fruit development and the most comprehensive fruit proteome to date, as well as a valuable reference for further research on gene expression and protein identification. This article is part of a Special Issue entitled: Proteomics of non-model organisms. Copyright © 2014 Elsevier B.V. All rights reserved.
Guidelines for reporting quantitative mass spectrometry based experiments in proteomics.
Martínez-Bartolomé, Salvador; Deutsch, Eric W; Binz, Pierre-Alain; Jones, Andrew R; Eisenacher, Martin; Mayer, Gerhard; Campos, Alex; Canals, Francesc; Bech-Serra, Joan-Josep; Carrascal, Montserrat; Gay, Marina; Paradela, Alberto; Navajas, Rosana; Marcilla, Miguel; Hernáez, María Luisa; Gutiérrez-Blázquez, María Dolores; Velarde, Luis Felipe Clemente; Aloria, Kerman; Beaskoetxea, Jabier; Medina-Aunon, J Alberto; Albar, Juan P
2013-12-16
Mass spectrometry is already a well-established protein identification tool and recent methodological and technological developments have also made possible the extraction of quantitative data of protein abundance in large-scale studies. Several strategies for absolute and relative quantitative proteomics and the statistical assessment of quantifications are possible, each having specific measurements and therefore, different data analysis workflows. The guidelines for Mass Spectrometry Quantification allow the description of a wide range of quantitative approaches, including labeled and label-free techniques and also targeted approaches such as Selected Reaction Monitoring (SRM). The HUPO Proteomics Standards Initiative (HUPO-PSI) has invested considerable efforts to improve the standardization of proteomics data handling, representation and sharing through the development of data standards, reporting guidelines, controlled vocabularies and tooling. In this manuscript, we describe a key output from the HUPO-PSI, namely the MIAPE Quant guidelines, which have been developed in parallel with the corresponding data exchange format mzQuantML [1]. The MIAPE Quant guidelines describe the HUPO-PSI proposal concerning the minimum information to be reported when a quantitative data set, derived from mass spectrometry (MS), is submitted to a database or as supplementary information to a journal. The guidelines have been developed with input from a broad spectrum of stakeholders in the proteomics field to represent a true consensus view of the most important data types and metadata, required for a quantitative experiment to be analyzed critically or a data analysis pipeline to be reproduced. It is anticipated that they will influence or be directly adopted as part of journal guidelines for publication and by public proteomics databases and thus may have an impact on proteomics laboratories across the world. 
This article is part of a Special Issue entitled: Standardization and Quality Control. Copyright © 2013 Elsevier B.V. All rights reserved.
Computer applications making rapid advances in high throughput microbial proteomics (HTMP).
Anandkumar, Balakrishna; Haga, Steve W; Wu, Hui-Fen
2014-02-01
The last few decades have seen the rise of widely available proteomics tools. From new data acquisition and separation technologies, such as MALDI-MS and 2DE, to new database search software, these new products have paved the way for high throughput microbial proteomics (HTMP). These tools are enabling researchers to gain new insights into microbial metabolism, and are opening up new areas of study, such as protein-protein interaction (interactomics) discovery. Computer software is a key part of these emerging fields. This review considers: 1) software tools for identifying the proteome, such as MASCOT or PDQuest, 2) online databases of proteomes, such as SWISS-PROT, Proteome Web, or the Proteomics Facility of the Pathogen Functional Genomics Resource Center, and 3) software tools for applying proteomic data, such as PSI-BLAST or VESPA. These tools allow for research in network biology, protein identification, functional annotation, target identification/validation, protein expression, protein structural analysis, metabolic pathway engineering and drug discovery.
Ruiz-Romero, Cristina; Calamia, Valentina; Albar, Juan Pablo; Casal, José Ignacio; Corrales, Fernando J; Fernández-Puente, Patricia; Gil, Concha; Mateos, Jesús; Vivanco, Fernando; Blanco, Francisco J
2015-09-08
The Spanish Chromosome 16 consortium is integrated in the global Human Proteome Project initiative, which aims to develop an entire map of the encoded proteins following a gene-centric strategy (C-HPP) in order to make progress in the understanding of human biology in health and disease (B/D-HPP). Chromosome 16 contains many genes encoding proteins involved in the development of a broad range of diseases, which have a significant impact on the health care system. In this manuscript we describe how the Spanish HPP-16 consortium has developed a B/D platform with five programs focused on selected medical areas: cancer, obesity, cardiovascular, infectious and rheumatic diseases. Each of these areas has a clinical leader associated with a proteomics investigator, with the responsibility of gaining a comprehensive understanding of the proteins encoded by Chromosome 16 genes. We show how proteomic strategies have enabled great advances in the area of rheumatic diseases, particularly in osteoarthritis, with studies performed on joint cells, tissues and fluids. This article is part of a Special Issue entitled: HUPO 2014. Copyright © 2015 Elsevier B.V. All rights reserved.
Marzano, Valeria; Santini, Simonetta; Rossi, Claudia; Zucchelli, Mirco; D'Alessandro, Annamaria; Marchetti, Carlo; Mingardi, Michele; Stagni, Venturina; Barilà, Daniela; Urbani, Andrea
2012-01-01
Ataxia Telangiectasia Mutated (ATM) protein kinase is a key effector in the modulation of several important stress responses, including the DNA damage and oxidative stress responses, and its deficiency is the hallmark of Ataxia Telangiectasia (A-T), a rare genetic disorder. ATM modulates the activity of hundreds of target proteins, essential for the correct balance between proliferation and cell death. The aim of this study is to evaluate the phenotypic adaptation at the protein level, both under basal conditions and in the presence of proteasome blockade, in order to identify the molecules whose level and stability are modulated through ATM expression. We pursued a comparative analysis of ATM-deficient and -proficient lymphoblastoid cells by label-free shotgun proteomic experiments, comparing the panels of differentially expressed proteins. Through a non-supervised comparative bioinformatic analysis, these data provided insight into the functional role of ATM deficiency in the regulation of cellular carbohydrate metabolism. This hypothesis has been demonstrated by targeted metabolic fingerprint analysis using SRM (Selected Reaction Monitoring) on specific thermodynamic checkpoints of glycolysis. This article is part of a Special Issue entitled: Translational Proteomics. PMID:22641158
Nomura, Fumio
2015-06-01
Rapid and accurate identification of microorganisms, a prerequisite for appropriate patient care and infection control, is a critical function of any clinical microbiology laboratory. Matrix-assisted laser desorption ionization-time of flight mass spectrometry (MALDI-TOF MS) is a quick and reliable method for identification of microorganisms, including bacteria, yeast, molds, and mycobacteria. Indeed, there has been a revolutionary shift in clinical diagnostic microbiology. In the present review, the state of the art and advantages of MALDI-TOF MS-based bacterial identification are described. The potential of this innovative technology for use in strain typing and detection of antibiotic resistance is also discussed. This article is part of a Special Issue entitled: Medical Proteomics. Copyright © 2014 Elsevier B.V. All rights reserved.
Brusniak, Mi-Youn; Bodenmiller, Bernd; Campbell, David; Cooke, Kelly; Eddes, James; Garbutt, Andrew; Lau, Hollis; Letarte, Simon; Mueller, Lukas N; Sharma, Vagisha; Vitek, Olga; Zhang, Ning; Aebersold, Ruedi; Watts, Julian D
2008-01-01
Background Quantitative proteomics holds great promise for identifying proteins that are differentially abundant between populations representing different physiological or disease states. A range of computational tools is now available for both isotopically labeled and label-free liquid chromatography mass spectrometry (LC-MS) based quantitative proteomics. However, they are generally not comparable to each other in terms of functionality, user interfaces, information input/output, and do not readily facilitate appropriate statistical data analysis. These limitations, along with the array of choices, present a daunting prospect for biologists, and other researchers not trained in bioinformatics, who wish to use LC-MS-based quantitative proteomics. Results We have developed Corra, a computational framework and tools for discovery-based LC-MS proteomics. Corra extends and adapts existing algorithms used for LC-MS-based proteomics, as well as statistical algorithms originally developed for microarray data analysis, for application to LC-MS data. Corra also adapts software engineering technologies (e.g. Google Web Toolkit, distributed processing) so that computationally intense data processing and statistical analyses can run on a remote server, while the user controls and manages the process from their own computer via a simple web interface. Corra also allows the user to output significantly differentially abundant LC-MS-detected peptide features in a form compatible with subsequent sequence identification via tandem mass spectrometry (MS/MS). We present two case studies to illustrate the application of Corra to commonly performed LC-MS-based biological workflows: a pilot biomarker discovery study of glycoproteins isolated from human plasma samples relevant to type 2 diabetes, and a study in yeast to identify in vivo targets of the protein kinase Ark1 via phosphopeptide profiling. 
Conclusion The Corra computational framework leverages computational innovation to enable biologists or other researchers to process, analyze and visualize LC-MS data with what would otherwise be a complex and not user-friendly suite of tools. Corra enables appropriate statistical analyses, with controlled false-discovery rates, ultimately to inform subsequent targeted identification of differentially abundant peptides by MS/MS. For the user not trained in bioinformatics, Corra represents a complete, customizable, free and open source computational platform enabling LC-MS-based proteomic workflows, and as such, addresses an unmet need in the LC-MS proteomics field. PMID:19087345
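The controlled false-discovery rates mentioned in the conclusion are commonly obtained with the Benjamini-Hochberg procedure. The following minimal sketch is a generic illustration of that procedure, not Corra's actual implementation:

```python
def benjamini_hochberg(pvals, alpha=0.05):
    """Return a boolean list marking which p-values are significant
    at false-discovery rate alpha (Benjamini-Hochberg step-up)."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    # find the largest rank whose p-value passes the BH threshold
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank * alpha / m:
            k = rank
    # everything at or below that rank is declared significant
    significant = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k:
            significant[i] = True
    return significant
```

For example, with per-feature p-values [0.01, 0.02, 0.5, 0.6] and alpha = 0.05, the first two features are reported as differentially abundant and the rest are not.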
Agrawal, Ganesh Kumar; Sarkar, Abhijit; Agrawal, Raj; Ndimba, Bongani Kaiser; Tanou, Georgia; Dunn, Michael J; Kieselbach, Thomas; Cramer, Rainer; Wienkoop, Stefanie; Chen, Sixue; Rafudeen, Mohammed Suhail; Deswal, Renu; Barkla, Bronwyn J; Weckwerth, Wolfram; Heazlewood, Joshua L; Renaut, Jenny; Job, Dominique; Chakraborty, Niranjan; Rakwal, Randeep
2012-02-01
The International Plant Proteomics Organization (INPPO) is a non-profit organization consisting of people who are involved or interested in plant proteomics. INPPO is constantly growing in volume and activity, which is mostly due to the realization among plant proteomics researchers worldwide of the need for such a global platform. Their active participation resulted in the rapid growth within the first year of INPPO's official launch in 2011 via its website (www.inppo.com) and publication of the 'Viewpoint paper' in a special issue of PROTEOMICS (May 2011). Here, we will be highlighting the progress achieved in the year 2011 and the future targets for the year 2012 and onwards. INPPO has achieved a successful administrative structure, the Core Committee (CC; composed of President, Vice-President, and General Secretaries), Executive Council (EC), and General Body (GB) to achieve INPPO objectives. Various committees and subcommittees are in the process of being functionalized via discussion amongst scientists around the globe. INPPO's primary aim to popularize plant proteomics research in the biological sciences has also been recognized by PROTEOMICS, where a section dedicated to plant proteomics was introduced starting January 2012, following the very first issue of this journal devoted to plant proteomics in May 2011. To disseminate organizational activities to the scientific community, INPPO has launched a biannual (in January and July) newsletter entitled 'INPPO Express: News & Views' with the first issue published in January 2012. INPPO is also planning to have several activities in 2012, including programs within the Education Outreach committee in different countries, and the development of research ideas and proposals with priority on crop and horticultural plants, while keeping tight interactions with proteomics programs on model plants such as Arabidopsis thaliana, rice, and Medicago truncatula. 
Altogether, the INPPO progress and upcoming activities are because of immense support, dedication, and hard work of all members of the INPPO community, and also due to the wide encouragement and support from the communities (scientific and non-scientific). Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Targeted proteomics identifies liquid-biopsy signatures for extracapsular prostate cancer
Kim, Yunee; Jeon, Jouhyun; Mejia, Salvador; Yao, Cindy Q; Ignatchenko, Vladimir; Nyalwidhe, Julius O; Gramolini, Anthony O; Lance, Raymond S; Troyer, Dean A; Drake, Richard R; Boutros, Paul C; Semmes, O. John; Kislinger, Thomas
2016-01-01
Biomarkers are rapidly gaining importance in personalized medicine. Although numerous molecular signatures have been developed over the past decade, there is a lack of overlap and many biomarkers fail to validate in independent patient cohorts and hence are not useful for clinical application. For these reasons, identification of novel and robust biomarkers remains a formidable challenge. We combine targeted proteomics with computational biology to discover robust proteomic signatures for prostate cancer. Quantitative proteomics conducted in expressed prostatic secretions from men with extraprostatic and organ-confined prostate cancers identified 133 differentially expressed proteins. Using synthetic peptides, we evaluate them by targeted proteomics in a 74-patient cohort of expressed prostatic secretions in urine. We quantify a panel of 34 candidates in an independent 207-patient cohort. We apply machine-learning approaches to develop clinical predictive models for prostate cancer diagnosis and prognosis. Our results demonstrate that computationally guided proteomics can discover highly accurate non-invasive biomarkers. PMID:27350604
Computational Omics Pre-Awardees | Office of Cancer Clinical Proteomics Research
The National Cancer Institute's Clinical Proteomic Tumor Analysis Consortium (CPTAC) is pleased to announce the pre-awardees of the Computational Omics solicitation. Working with NVIDIA Foundation's Compute the Cure initiative and Leidos Biomedical Research Inc., the NCI, through this solicitation, seeks to leverage computational efforts to provide tools for the mining and interpretation of large-scale publicly available ‘omics’ datasets.
38 CFR 3.812 - Special allowance payable under section 156 of Pub. L. 97-377.
Code of Federal Regulations, 2014 CFR
2014-07-01
... consideration, and death on active duty subsequent to August 12, 1981, is qualifying provided that the death... payment rate—(1) Basic entitlement rate. A basic entitlement rate will be computed for each eligible...-377 using data to be provided by the Social Security Administration. This basic entitlement rate will...
The wheat chloroplastic proteome.
Kamal, Abu Hena Mostafa; Cho, Kun; Choi, Jong-Soon; Bae, Kwang-Hee; Komatsu, Setsuko; Uozumi, Nobuyuki; Woo, Sun Hee
2013-11-20
With the availability of plant genome sequences, the analysis of plant proteins by mass spectrometry has become both feasible and attractive. Determining the proteome of a cell remains a challenging task, complicated by proteome dynamics and complexity. The chloroplast is of particular interest to plant biologists because of its intricate biochemical pathways for indispensable metabolic functions. In this review, we present an overview of proteomic studies conducted in wheat, with a special focus on subcellular proteomics of the chloroplast under salt and water stress. In recent years, we and other groups have attempted to understand photosynthesis in wheat and its response to abiotic stress imposed by salt and water deficit during the vegetative and seedling stages. Those studies provide interesting results leading to a better understanding of photosynthesis and to the identification of stress-responsive proteins. We focus on proteomic analyses combining complementary separation approaches (2-DE, Tricine SDS-PAGE and shotgun methods) coupled to high-throughput mass spectrometry (LTQ-FTICR and MALDI-TOF/TOF) to better characterize the proteins involved in photosynthesis and in the salt and water stress responses of the wheat chloroplast. We discuss the identification of the most abundant proteins in the wheat chloroplast and of salt- and water-stress-responsive proteins in chloroplasts of wheat seedlings, thus providing a proteomic view of the events during seedling development under stress conditions. This article is part of a Special Issue entitled: Translational Plant Proteomics. Crown Copyright © 2013. Published by Elsevier B.V. All rights reserved.
The November 1, 2017 issue of Cancer Research is dedicated to a collection of computational resource papers in genomics, proteomics, animal models, imaging, and clinical subjects for non-bioinformaticists looking to incorporate computing tools into their work. Scientists at Pacific Northwest National Laboratory have developed P-MartCancer, an open, web-based interactive software tool that enables statistical analyses of peptide or protein data generated from mass-spectrometry (MS)-based global proteomics experiments.
Principles of proteome allocation are revealed using proteomic data and genome-scale models
Yang, Laurence; Yurkovich, James T.; Lloyd, Colton J.; Ebrahim, Ali; Saunders, Michael A.; Palsson, Bernhard O.
2016-01-01
Integrating omics data to refine or make context-specific models is an active field of constraint-based modeling. Proteomics now cover over 95% of the Escherichia coli proteome by mass. Genome-scale models of Metabolism and macromolecular Expression (ME) compute proteome allocation linked to metabolism and fitness. Using proteomics data, we formulated allocation constraints for key proteome sectors in the ME model. The resulting calibrated model effectively computed the “generalist” (wild-type) E. coli proteome and phenotype across diverse growth environments. Across 15 growth conditions, prediction errors for growth rate and metabolic fluxes were 69% and 14% lower, respectively. The sector-constrained ME model thus represents a generalist ME model reflecting both growth rate maximization and “hedging” against uncertain environments and stresses, as indicated by significant enrichment of these sectors for the general stress response sigma factor σS. Finally, the sector constraints represent a general formalism for integrating omics data from any experimental condition into constraint-based ME models. The constraints can be fine-grained (individual proteins) or coarse-grained (functionally-related protein groups) as demonstrated here. This flexible formalism provides an accessible approach for narrowing the gap between the complexity captured by omics data and governing principles of proteome allocation described by systems-level models. PMID:27857205
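The coupling of flux to proteome allocation that underlies these sector constraints can be illustrated with a toy calculation. The function names and numbers below are hypothetical, not taken from the actual ME model: a flux v is bounded by kcat times the enzyme amount E, and E is in turn capped by the proteome mass fraction allocated to its sector.

```python
def sector_flux_bound(kcat, sector_fraction, proteome_mass):
    """Upper bound on a flux when the whole sector budget is spent on
    its catalysing enzyme: v <= kcat * E, with E <= fraction * P."""
    return kcat * sector_fraction * proteome_mass

def max_growth(sector_bounds, yield_per_flux):
    """Growth is limited by the most restrictive proteome sector."""
    return min(sector_bounds) * yield_per_flux
```

In a full ME model these inequalities appear as linear constraints in a genome-scale optimization problem rather than a direct minimum over sectors; the toy version only conveys why reallocating proteome mass between sectors shifts the computed phenotype.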
Simulation of two dimensional electrophoresis and tandem mass spectrometry for teaching proteomics.
Fisher, Amanda; Sekera, Emily; Payne, Jill; Craig, Paul
2012-01-01
In proteomics, complex mixtures of proteins are separated (usually by chromatography or electrophoresis) and identified by mass spectrometry. We have created 2DE Tandem MS, a computer program designed for use in the biochemistry, proteomics, or bioinformatics classroom. It contains two simulations: 2D electrophoresis and tandem mass spectrometry. The two simulations are integrated together and are designed to teach the concept of proteome analysis of prokaryotic and eukaryotic organisms. 2DE Tandem MS can be used as a freestanding simulation, or in conjunction with a wet lab, to introduce proteomics in the undergraduate classroom. 2DE Tandem MS is a free program available on Sourceforge at https://sourceforge.net/projects/jbf/. It was developed using Java Swing and functions in Mac OSX, Windows, and Linux, ensuring that every student sees a consistent and informative graphical user interface no matter the computer platform they choose. Java must be installed on the host computer to run 2DE Tandem MS. Example classroom exercises are provided in the Supporting Information. Copyright © 2012 Wiley Periodicals, Inc.
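The first-dimension separation such a simulation must model, isoelectric focusing, reduces to finding the pH at which a peptide's net charge is zero. The sketch below is a generic textbook calculation with a deliberately simplified pKa set, not code from 2DE Tandem MS itself:

```python
# Simplified, textbook-style pKa values; real tools use refined sets.
PKA_POS = {'K': 10.5, 'R': 12.5, 'H': 6.0}            # positively ionizable
PKA_NEG = {'D': 3.9, 'E': 4.1, 'C': 8.3, 'Y': 10.1}   # negatively ionizable
PKA_NTERM, PKA_CTERM = 9.0, 2.0

def net_charge(seq, ph):
    """Net charge of a peptide at a given pH (Henderson-Hasselbalch)."""
    pos = [PKA_NTERM] + [PKA_POS[a] for a in seq if a in PKA_POS]
    neg = [PKA_CTERM] + [PKA_NEG[a] for a in seq if a in PKA_NEG]
    charge = sum(1.0 / (1.0 + 10 ** (ph - pka)) for pka in pos)
    charge -= sum(1.0 / (1.0 + 10 ** (pka - ph)) for pka in neg)
    return charge

def isoelectric_point(seq, lo=0.0, hi=14.0, tol=1e-4):
    """Bisect for the pH at which the net charge crosses zero."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if net_charge(seq, mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

With only the termini ionizable (e.g. the dipeptide "GG"), the pI falls exactly halfway between the two terminal pKa values, 5.5; adding acidic residues pulls it down, which is the behavior a student should see reproduced on the simulated gel.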
An introduction to statistical process control in research proteomics.
Bramwell, David
2013-12-16
Statistical process control is a well-established and respected method which provides a general-purpose, consistent framework for monitoring and improving the quality of a process. It is routinely used in many industries where the quality of final products is critical and is often required in clinical diagnostic laboratories [1,2]. To date, the methodology has been little utilised in research proteomics. It has been shown to be capable of delivering quantitative QC procedures for qualitative clinical assays [3], making it an ideal methodology to apply to this area of biological research. This article introduces statistical process control as an objective strategy for quality control and shows how it can be used to benefit proteomics researchers and enhance the quality of the results they generate. We demonstrate that rules which provide basic quality control are easy to derive and implement and could have a major impact on data quality for many studies. Statistical process control is a powerful tool for investigating and improving proteomics research workflows. The process of characterising measurement systems and defining control rules forces the exploration of key questions that can lead to significant improvements in performance. This work asserts that QC is essential to proteomics discovery experiments. Every experimenter must know the current capabilities of their measurement system and have an objective means of tracking and ensuring that performance. Proteomic analysis workflows are complicated and multivariate. QC is critical for clinical chemistry measurements, and huge strides have been made in ensuring the quality and validity of results in clinical biochemistry labs. This work introduces some of these QC concepts and works to bridge their use from single-analyte QC to applications in multi-analyte systems. This article is part of a Special Issue entitled: Standardization and Quality Control in Proteomics. Copyright © 2013 The Author. 
Published by Elsevier B.V. All rights reserved.
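A minimal version of the control rules advocated here can be sketched as follows (an illustrative example, not taken from the article): derive limits from baseline QC runs of a reference standard, then flag any subsequent run falling outside mean ± 3σ.

```python
import statistics

def control_limits(baseline):
    """Lower and upper 3-sigma control limits from baseline QC runs."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return mu - 3 * sigma, mu + 3 * sigma

def out_of_control(values, baseline):
    """Indices of measurements that violate the 3-sigma rule."""
    lo, hi = control_limits(baseline)
    return [i for i, v in enumerate(values) if v < lo or v > hi]
```

In practice the monitored value would be a QC metric of the workflow, for example the peak area of a spiked standard peptide in each run; more elaborate run rules (e.g. runs of points on one side of the mean) extend the same idea.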
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-28
... discretion. MSHA is required to perform mathematical computations based on published cost-of-living data and... altering the budgetary impact of entitlements or the rights of entitlement recipients, or raising novel...
The Monkey King: a personal view of the long journey towards a proteomic Nirvana.
Righetti, Pier Giorgio
2014-07-31
The review covers about fifty years of progress in "proteome" analysis, starting from primitive two-dimensional (2D) map attempts in the early 1960s. The polar star in 2D mapping arose in 1975 with the classic paper by O'Farrell in J. Biol. Chem.; it became the compass for all proteome navigators. Perfection came, though, only with the introduction of immobilized pH gradients, which fixed the polypeptide spots in the 2D plane. Great impetus in proteome analysis came with the introduction of informatics tools and the creation of databases, among which Swiss-Prot remains the site of excellence. Towards the end of the nineties, 2D chromatography, epitomized by coupling strong cation exchangers with C18 resins, began to pose a serious challenge to electrophoretic 2D mapping, although both techniques are still much in vogue today and appear to give complementary results. Yet the migration of "proteomics" into the third millennium was made possible only by mass spectrometry (MS), which today represents the standard analytical tool in any lab dealing with proteomic analysis. Another major improvement has been the introduction of combinatorial peptide ligand libraries (CPLLs), which, when properly used, enhance the visibility of low-abundance species by 3 to 4 orders of magnitude. Coupling MS to CPLLs permits the exploration of at least 8 orders of magnitude in dynamic range on any proteome. The present review is a personal recollection highlighting the developments that led to present-day proteomics over a long march that lasted about 50 years. It is meant to give young scientists an overview of how science grows, what the quantum jumps in science are, and which research is of particular significance in general and in the field of proteomics in particular. It also gives some real-life episodes of greater-than-life figures. 
As such, it can be viewed as a tutorial to stimulate the young generation to be creative (and use their imagination too!). This article is part of a Special Issue entitled: 20 Years of Proteomics in Memory of Vitaliano Pallini. Guest Editors: Luca Bini, Juan J. Calvete, Natacha Turck, Denis Hochstrasser and Jean-Charles Sanchez. Copyright © 2013 Elsevier B.V. All rights reserved.
Beyond the proteome: Mass Spectrometry Special Interest Group (MS-SIG) at ISMB/ECCB 2013
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ryu, Soyoung; Payne, Samuel H.; Schaab, Christoph
2014-07-02
The Mass Spectrometry Special Interest Group (MS-SIG) aims to bring together experts from the global research community to discuss highlights and challenges in the field of mass spectrometry (MS)-based proteomics and computational biology. The rapid technological developments in MS-based proteomics have enabled the generation of a large amount of meaningful information on hundreds to thousands of proteins simultaneously from a biological sample; however, the complexity of the MS data requires sophisticated computational algorithms and software for data analysis and interpretation. This year's MS-SIG meeting theme was 'Beyond the Proteome', with major focuses on improving protein identification/quantification and using proteomics data to solve interesting problems in systems biology and clinical research.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Laurence; Yurkovich, James T.; Lloyd, Colton J.
Integrating omics data to refine or make context-specific models is an active field of constraint-based modeling. Proteomics data now cover over 95% of the Escherichia coli proteome by mass. Genome-scale models of Metabolism and macromolecular Expression (ME models) compute proteome allocation linked to metabolism and fitness. Using proteomics data, we formulated allocation constraints for key proteome sectors in the ME model. The resulting calibrated model effectively computed the "generalist" (wild-type) E. coli proteome and phenotype across diverse growth environments. Across 15 growth conditions, prediction errors for growth rate and metabolic fluxes were 69% and 14% lower, respectively. The sector-constrained ME model thus represents a generalist ME model reflecting both growth rate maximization and "hedging" against uncertain environments and stresses, as indicated by significant enrichment of these sectors for the general stress response sigma factor σS. Finally, the sector constraints represent a general formalism for integrating omics data from any experimental condition into constraint-based ME models. The constraints can be fine-grained (individual proteins) or coarse-grained (functionally related protein groups), as demonstrated here. Furthermore, this flexible formalism provides an accessible approach for narrowing the gap between the complexity captured by omics data and the governing principles of proteome allocation described by systems-level models.
The unique peptidome: Taxon-specific tryptic peptides as biomarkers for targeted metaproteomics.
Mesuere, Bart; Van der Jeugt, Felix; Devreese, Bart; Vandamme, Peter; Dawyndt, Peter
2016-09-01
The Unique Peptide Finder (http://unipept.ugent.be/peptidefinder) is an interactive web application to quickly hunt for tryptic peptides that are unique to a particular species, genus, or any other taxon. Biodiversity within the target taxon is represented by a set of proteomes selected from a monthly updated list of complete and nonredundant UniProt proteomes, supplemented with proprietary proteomes loaded into persistent local browser storage. The software computes and visualizes pan and core peptidomes as unions and intersections of tryptic peptides occurring in the selected proteomes. In addition, it also computes and displays unique peptidomes as the set of all tryptic peptides that occur in all selected proteomes but not in any UniProt record not assigned to the target taxon. As a result, the unique peptides can serve as robust biomarkers for the target taxon, for example, in targeted metaproteomics studies. Computations are extremely fast since they are underpinned by the Unipept database, the lowest common ancestor algorithm implemented in Unipept and modern web technologies that facilitate in-browser data storage and parallel processing. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
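The pan/core/unique peptidome computation described above reduces to plain set algebra over tryptic peptide sets. A minimal sketch of that logic (toy peptide strings chosen for illustration, not real Unipept data):

```python
# Pan/core/unique peptidome as set algebra (toy example, hypothetical peptides):
# pan  = union of tryptic peptides across the selected target proteomes,
# core = intersection across all target proteomes,
# unique = core minus any peptide also seen in a non-target record.
target_proteomes = [
    {"LLGNVLVCVLAHHFGK", "VNVDEVGGEALGR", "FFESFGDLSTPDAVMGNPK"},
    {"LLGNVLVCVLAHHFGK", "VNVDEVGGEALGR", "SAVTALWGK"},
]
non_target_peptides = {"VNVDEVGGEALGR", "TTGIVMDSGDGVTHTVPIYEGYALPHAILR"}

pan = set().union(*target_proteomes)
core = set.intersection(*target_proteomes)
unique = core - non_target_peptides

print(sorted(unique))  # only peptides that pinpoint the target taxon remain
```

The real application performs these operations over millions of peptides, which is why the in-browser implementation leans on the precomputed Unipept database and parallel processing rather than naive set scans.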
The National Cancer Institute’s Clinical Proteomic Tumor Analysis Consortium (CPTAC) is pleased to announce the opening of the leaderboard for its Proteogenomics Computational DREAM Challenge. The leaderboard remains open for submissions from September 25 through October 8, 2017, with the Challenge expected to run until November 17, 2017.
Simulation of Two Dimensional Electrophoresis and Tandem Mass Spectrometry for Teaching Proteomics
ERIC Educational Resources Information Center
Fisher, Amanda; Sekera, Emily; Payne, Jill; Craig, Paul
2012-01-01
In proteomics, complex mixtures of proteins are separated (usually by chromatography or electrophoresis) and identified by mass spectrometry. We have created 2DE Tandem MS, a computer program designed for use in the biochemistry, proteomics, or bioinformatics classroom. It contains two simulations--2D electrophoresis and tandem mass spectrometry.…
Bernal, Dolores; Trelis, Maria; Montaner, Sergio; Cantalapiedra, Fernando; Galiano, Alicia; Hackenberg, Michael; Marcilla, Antonio
2014-06-13
With the aim of characterizing the molecules involved in the interaction of Dicrocoelium dendriticum adults and the host, we have performed proteomic analyses of the external surface of the parasite using the currently available datasets, including the transcriptome of the related species Echinostoma caproni. We have identified 182 parasite proteins on the outermost surface of D. dendriticum. The presence of exosome-like vesicles in the excretory/secretory products (ESP) of D. dendriticum and their components has also been characterized. Using proteomic approaches, we have characterized 84 proteins in these vesicles. Interestingly, we have detected miRNA in D. dendriticum exosomes, representing the first report of miRNA in helminth exosomes. In order to identify potential targets for intervention against parasitic helminths, we have analyzed the surface of the parasitic helminth Dicrocoelium dendriticum. Along with the proteomic analyses of the outermost layer of the parasite, our work describes the molecular characterization of the exosomes of D. dendriticum. Our proteomic data confirm the improvement of protein identification from "non-model organisms" such as helminths when using different search engines against a combination of available databases. In addition, this work represents the first report of miRNAs in parasitic helminth exosomes. These vesicles can pack specific proteins and RNAs, providing stability and resistance to RNase digestion in body fluids, and provide a way to regulate the host-parasite interplay. The present data should provide a solid foundation for the development of novel methods to control this non-model organism and related parasites. This article is part of a Special Issue entitled: Proteomics of non-model organisms. Copyright © 2014 Elsevier B.V. All rights reserved.
Wan, Cuihong; Liu, Jian; Fong, Vincent; Lugowski, Andrew; Stoilova, Snejana; Bethune-Waddell, Dylan; Borgeson, Blake; Havugimana, Pierre C; Marcotte, Edward M; Emili, Andrew
2013-04-09
The experimental isolation and characterization of stable multi-protein complexes are essential to understanding the molecular systems biology of a cell. To this end, we have developed a high-throughput proteomic platform for the systematic identification of native protein complexes based on extensive fractionation of soluble protein extracts by multi-bed ion exchange high performance liquid chromatography (IEX-HPLC) combined with exhaustive label-free LC/MS/MS shotgun profiling. To support these studies, we have built a companion data analysis software pipeline, termed ComplexQuant. Proteins present in the hundreds of fractions typically collected per experiment are first identified by exhaustively interrogating MS/MS spectra using multiple database search engines within an integrative probabilistic framework, while accounting for possible post-translational modifications. Protein abundance is then measured across the fractions based on normalized total spectral counts and precursor ion intensities using a dedicated tool, PepQuant. This analysis allows co-complex membership to be inferred based on the similarity of extracted protein co-elution profiles. Each computational step has been optimized for processing large-scale biochemical fractionation datasets, and the reliability of the integrated pipeline has been benchmarked extensively. This article is part of a Special Issue entitled: From protein structures to clinical applications. Copyright © 2012 Elsevier B.V. All rights reserved.
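The co-complex inference step rests on comparing protein co-elution profiles across chromatographic fractions. A hedged sketch of that idea follows, with hypothetical spectral-count profiles and plain Pearson correlation standing in for whatever profile-similarity measure ComplexQuant actually uses:

```python
# Illustrative co-elution comparison (toy data, not the ComplexQuant code):
# proteins whose abundance profiles across fractions correlate strongly
# are candidate members of the same complex.
from math import sqrt

def pearson(x, y):
    """Pearson correlation between two equal-length abundance profiles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical spectral counts per fraction for three proteins.
profiles = {
    "A": [0, 2, 10, 25, 12, 3, 0],
    "B": [0, 3, 11, 24, 10, 2, 0],   # co-elutes with A
    "C": [15, 8, 2, 0, 1, 7, 14],    # elutes elsewhere
}
print(round(pearson(profiles["A"], profiles["B"]), 3))
print(round(pearson(profiles["A"], profiles["C"]), 3))
```

Here A and B produce a correlation near 1 while A and C are anti-correlated, so only the A-B pair would be proposed for co-complex membership.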
compomics-utilities: an open-source Java library for computational proteomics.
Barsnes, Harald; Vaudel, Marc; Colaert, Niklaas; Helsens, Kenny; Sickmann, Albert; Berven, Frode S; Martens, Lennart
2011-03-08
The growing interest in the field of proteomics has increased the demand for software tools and applications that process and analyze the resulting data. Even though the purpose of these tools can vary significantly, they usually share a basic set of features, including the handling of protein and peptide sequences, the visualization of (and interaction with) spectra and chromatograms, and the parsing of results from various proteomics search engines. Developers typically spend considerable time and effort implementing these support structures, which detracts from working on the novel aspects of their tool. In order to simplify the development of proteomics tools, we have implemented an open-source support library for computational proteomics, called compomics-utilities. The library contains a broad set of features required for reading, parsing, and analyzing proteomics data. compomics-utilities is already used by a long list of existing software tools, ensuring library stability and continued support and development. As a user-friendly, well-documented and open-source library, compomics-utilities greatly simplifies the implementation of the basic features needed in most proteomics tools. Implemented in 100% Java, compomics-utilities is fully portable across platforms and architectures. Our library thus allows developers to focus on the novel aspects of their tools, rather than on the basic functions, which can contribute substantially to faster development, and better tools for proteomics.
Kumarathasan, P; Vincent, R; Das, D; Mohottalage, S; Blais, E; Blank, K; Karthikeyan, S; Vuong, N Q; Arbuckle, T E; Fraser, W D
2014-04-04
There are reports linking maternal nutritional status, smoking and environmental chemical exposures to adverse pregnancy outcomes. However, biological bases for the association between some of these factors and birth outcomes are yet to be established. The objective of this preliminary work is to test the capability of a new high-throughput shotgun plasma proteomic screening approach for identifying maternal changes relevant to pregnancy outcome. A subset of third trimester plasma samples (N=12) associated with normal and low-birth weight infants were fractionated, tryptic-digested and analyzed for global proteomic changes using a MALDI-TOF-TOF-MS methodology. Mass spectral data were mined for candidate biomarkers using bioinformatic and statistical tools. Maternal plasma profiles of cytokines (e.g. IL8, TNF-α), chemokines (e.g. MCP-1) and cardiovascular endpoints (e.g. ET-1, MMP-9) were analyzed by a targeted approach using multiplex protein array and HPLC-Fluorescence methods. Target and global plasma proteomic markers were used to identify protein interaction networks and maternal biological pathways relevant to low infant birth weight. Our results demonstrated the potential to discriminate specific maternal physiologies relevant to risk of adverse birth outcomes. This proteomic approach can be valuable in understanding the impacts of maternal factors such as environmental contaminant exposures and nutrition on birth outcomes in future work. We demonstrate here the fitness of mass spectrometry-based shotgun proteomics for surveillance of biological changes in mothers, and for adverse pathway analysis in combination with target biomarker information. This approach has potential for enabling early detection of mothers at risk for low infant birth weight and preterm birth, and thus early intervention for mitigation and prevention of adverse pregnancy outcomes. This article is part of a Special Issue entitled: Can Proteomics Fill the Gap Between Genomics and Phenotypes? 
Copyright © 2013 The Authors. Published by Elsevier B.V. All rights reserved.
20 CFR 625.13 - Restrictions on entitlement; disqualification.
Code of Federal Regulations, 2010 CFR
2010-04-01
... DISASTER UNEMPLOYMENT ASSISTANCE § 625.13 Restrictions on entitlement; disqualification. (a) Income reductions. The amount of DUA payable to an individual for a week of unemployment, as computed pursuant to... wages due to illness or disability; (2) A supplemental unemployment benefit pursuant to a collective...
'Omics' techniques for identifying flooding-response mechanisms in soybean.
Komatsu, Setsuko; Shirasaka, Naoki; Sakata, Katsumi
2013-11-20
Plant growth and productivity are adversely influenced by various environmental stresses, which often lead to reduced seedling growth and decreased crop yields. Plants respond to stressful conditions through changes in 'omics' profiles, including transcriptomics, proteomics, and metabolomics. Linking plant phenotype to gene expression patterns, protein abundance, and metabolite accumulation is one of the main challenges for improving agricultural production. 'Omics' approaches may provide insight into the mechanisms that function in soybean in response to environmental stresses, particularly flooding by frequent rain, which occurs worldwide due to changes in global climate. Flooding causes significant reductions in the growth and yield of several crops, especially soybean. The application of 'omics' techniques may facilitate the development of flood-tolerant cultivars of soybean. In this review, the use of 'omics' techniques towards understanding the flooding-responsive mechanisms of soybean is discussed, as the findings from these studies are expected to have applications in both breeding and agronomy. This article is part of a Special Issue entitled: Translational Plant Proteomics. Copyright © 2012 Elsevier B.V. All rights reserved.
The National Cancer Institute (NCI) Clinical Proteomic Tumor Analysis Consortium (CPTAC) is pleased to announce the teams led by Jaewoo Kang (Korea University) and by Yuanfang Guan with Hongyang Li (University of Michigan) as the best performers of the NCI-CPTAC DREAM Proteogenomics Computational Challenge. Over 500 participants from 20 countries registered for the Challenge, which offered $25,000 in cash awards contributed by the NVIDIA Foundation through its Compute the Cure initiative.
Proteomic profiling in MPTP monkey model for early Parkinson disease biomarker discovery
Lin, Xiangmin; Shi, Min; Gunasingh Masilamoni, Jeyaraj; Dator, Romel; Movius, James; Aro, Patrick; Smith, Yoland; Zhang, Jing
2015-01-01
Identification of reliable and robust biomarkers is crucial to enable early diagnosis of Parkinson disease (PD) and monitoring of disease progression. While imperfect, the slow, chronic 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP)-induced non-human primate model of parkinsonism is an abundant source for pre-motor or early-stage PD biomarker discovery. Here, we present a study of an MPTP rhesus monkey model of PD that utilizes complementary quantitative iTRAQ-based proteomic, glycoproteomic and phosphoproteomic approaches. We compared the glycoprotein, non-glycoprotein, and phosphoprotein profiles in the putamen of asymptomatic and symptomatic MPTP-treated monkeys as well as saline-injected controls. We identified 86 glycoproteins, 163 non-glycoproteins, and 71 phosphoproteins differentially expressed in the MPTP-treated groups. Functional analysis of the data sets inferred the biological processes and pathways that link to neurodegeneration in PD and related disorders. Several potential biomarkers identified in this study have already been tested for their usefulness in PD diagnosis in human subjects, and further validation investigations are currently under way. In addition to providing potential early PD biomarkers, this comprehensive quantitative proteomic study may also shed light on the mechanisms underlying early PD development. This article is part of a Special Issue entitled: Neuroproteomics: Applications in neuroscience and neurology. PMID:25617661
Effect of posttranslational modifications on enzyme function and assembly.
Ryšlavá, Helena; Doubnerová, Veronika; Kavan, Daniel; Vaněk, Ondřej
2013-10-30
The detailed examination of enzyme molecules by mass spectrometry and other techniques continues to identify hundreds of distinct PTMs. Recently, global analyses of enzymes using methods of contemporary proteomics revealed widespread distribution of PTMs on many key enzymes distributed in all cellular compartments. Critically, patterns of multiple enzymatic and nonenzymatic PTMs within a single enzyme are now functionally evaluated providing a holistic picture of a macromolecule interacting with low molecular mass compounds, some of them being substrates, enzyme regulators, or activated precursors for enzymatic and nonenzymatic PTMs. Multiple PTMs within a single enzyme molecule and their mutual interplays are critical for the regulation of catalytic activity. Full understanding of this regulation will require detailed structural investigation of enzymes, their structural analogs, and their complexes. Further, proteomics is now integrated with molecular genetics, transcriptomics, and other areas leading to systems biology strategies. These allow the functional interrogation of complex enzymatic networks in their natural environment. In the future, one might envisage the use of robust high throughput analytical techniques that will be able to detect multiple PTMs on a global scale of individual proteomes from a number of carefully selected cells and cellular compartments. This article is part of a Special Issue entitled: Posttranslational Protein modifications in biology and Medicine. Copyright © 2013 Elsevier B.V. All rights reserved.
Alexandre, Bruno M; Charro, Nuno; Blonder, Josip; Lopes, Carlos; Azevedo, Pilar; Bugalho de Almeida, António; Chan, King C; Prieto, DaRue A; Issaq, Haleem; Veenstra, Timothy D; Penque, Deborah
2012-12-05
Structural and metabolic alterations in erythrocytes play an important role in the pathophysiology of Chronic Obstructive Pulmonary Disease (COPD). Whether these dysfunctions are related to the modulation of erythrocyte membrane proteins in patients diagnosed with COPD remains to be determined. Herein, a comparative proteomic profiling of the erythrocyte membrane fraction isolated from peripheral blood of smokers diagnosed with COPD and smokers with no COPD was performed using differential (16)O/(18)O stable isotope labeling. A total of 219 proteins were quantified as being significantly differentially expressed between the erythrocyte membrane proteomes of smokers with COPD and healthy smokers. Functional pathway analysis showed that the most enriched biofunctions were related to cell-to-cell signaling and interaction, hematological system development, immune response, oxidative stress and cytoskeleton. Chorein (VPS13A), a cytoskeleton-related protein whose defects had been associated with the presence of cell membrane deformation of circulating erythrocytes, was found to be down-regulated in the membrane fraction of erythrocytes obtained from COPD patients. Methemoglobin reductase (CYB5R3) was also found to be underexpressed in these cells, suggesting that COPD patients may be at higher risk for developing methemoglobinemia. This article is part of a Special Issue entitled: Integrated omics. Copyright © 2012 Elsevier B.V. All rights reserved.
Data for a proteomic analysis of p53-independent induction of apoptosis by bortezomib
Yerlikaya, Azmi; Okur, Emrah; Tarık Baykal, Ahmet; Acılan, Ceyda; Boyacı, İhsan; Ulukaya, Engin
2014-01-01
This data article contains data related to the research article entitled “A proteomic analysis of p53-independent induction of apoptosis by bortezomib in 4T1 breast cancer cell line” by Yerlikaya et al. [1]. The research article presented a 2-DE and nLC-MS/MS based proteomic analysis of proteasome inhibitor bortezomib-induced changes in the expression of cellular proteins. The report showed that GRP78 and TCEB2 were over-expressed in response to treatment with bortezomib for 24 h. In addition, the report demonstrated that Hsp70, the 26S proteasome non-ATPase regulatory subunit 14 and sequestosome 1 were increased at least 2-fold in p53-deficient 4T1 cells. The data here show for the first time the increased expression of Card10, Dffb, Traf3 and Trp53bp2 in response to inhibition of the 26S proteasome. The information presented here also shows that both Traf1 and Xiap (a member of the IAP family) are downregulated simultaneously upon proteasomal inhibition. The increases in the levels of Card10 and Trp53bp2 proteins were verified by Western blot analysis in response to varying concentrations of bortezomib for 24 h. PMID:26217687
Teo, Guoshou; Kim, Sinae; Tsou, Chih-Chiang; Collins, Ben; Gingras, Anne-Claude; Nesvizhskii, Alexey I; Choi, Hyungwon
2015-11-03
Data independent acquisition (DIA) mass spectrometry is an emerging technique that offers more complete detection and quantification of peptides and proteins across multiple samples. DIA allows fragment-level quantification, which can be considered as repeated measurements of the abundance of the corresponding peptides and proteins in the downstream statistical analysis. However, few statistical approaches are available for aggregating these complex fragment-level data into peptide- or protein-level statistical summaries. In this work, we describe a software package, mapDIA, for statistical analysis of differential protein expression using DIA fragment-level intensities. The workflow consists of three major steps: intensity normalization, peptide/fragment selection, and statistical analysis. First, mapDIA offers normalization of fragment-level intensities by total intensity sums as well as a novel alternative normalization by local intensity sums in retention time space. Second, mapDIA removes outlier observations and selects peptides/fragments that preserve the major quantitative patterns across all samples for each protein. Last, using the selected fragments and peptides, mapDIA performs model-based statistical significance analysis of protein-level differential expression between specified groups of samples. Using a comprehensive set of simulation datasets, we show that mapDIA detects differentially expressed proteins with accurate control of the false discovery rates. We also describe the analysis procedure in detail using two recently published DIA datasets generated for the 14-3-3β dynamic interaction network and the prostate cancer glycoproteome. The software was written in C++ and the source code is available for free through the SourceForge website http://sourceforge.net/projects/mapdia/. This article is part of a Special Issue entitled: Computational Proteomics. Copyright © 2015 Elsevier B.V. All rights reserved.
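The first workflow step, normalization by total intensity sums, can be sketched with toy numbers. This is only an illustration of the general idea under simple assumptions (hypothetical fragment intensities, two runs), not the actual mapDIA implementation:

```python
# Total-intensity-sum normalization sketch (toy data): rescale each run's
# fragment intensities so that every run sums to a common reference total,
# removing global differences in sample loading or instrument response.
samples = {
    "run1": [1000.0, 5000.0, 250.0, 8000.0],
    "run2": [2200.0, 11000.0, 560.0, 17500.0],  # roughly 2.2x more total signal
}

totals = {name: sum(vals) for name, vals in samples.items()}
target = sum(totals.values()) / len(totals)  # common reference total

normalized = {
    name: [v * target / totals[name] for v in vals]
    for name, vals in samples.items()
}

# After normalization, every run sums to the same target total.
print({name: round(sum(vals), 1) for name, vals in normalized.items()})
```

mapDIA's alternative normalization by local intensity sums applies the same rescaling idea within windows of retention time rather than over whole runs.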
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Chin-Rang
Astronauts and workers in nuclear plants who are repeatedly exposed to low doses of ionizing radiation (IR, <10 cGy) are likely to incur specific changes in signal transduction and gene expression in various tissues of their bodies. Remarkable advances in high-throughput genomics and proteomics technologies enable researchers to broaden their focus from examining single gene/protein kinetics to better understanding global gene/protein expression profiling and biological pathway analyses, namely Systems Biology. An ultimate goal of systems biology is to develop dynamic mathematical models of interacting biological systems capable of simulating living systems in a computer. This Glue Grant is to complement Dr. Boothman’s existing DOE grant (No. DE-FG02-06ER64186), entitled “The IGF1/IGF-1R-MAPK-Secretory Clusterin (sCLU) Pathway: Mediator of a Low Dose IR-Inducible Bystander Effect”, by developing sensitive and quantitative proteomic technology suitable for low-dose radiobiology research. An improved version of a quantitative protein array platform utilizing linear Quantum dot signaling for systematically measuring protein levels and phosphorylation states for systems biology modeling is presented. The signals are amplified by a confocal laser Quantum dot scanner, resulting in ~1000-fold greater sensitivity than traditional Western blots, with good linearity that is impossible to achieve with HRP-amplified signals. This improved protein array technology is therefore suitable for detecting the weak responses induced by low-dose radiation. Software was developed to facilitate the quantitative readout of signaling network activities. Kinetics of EGFRvIII mutant signaling was analyzed to quantify cross-talk between EGFR and other signaling pathways.
Jackson, David; Bramwell, David
2013-12-16
Proteomics technologies can be effective for the discovery and assay of protein forms altered with disease. However, few examples of successful biomarker discovery yet exist. Critical to addressing this is the widespread implementation of appropriate QC (quality control) methodology. Such QC should combine the rigour of clinical laboratory assays with a suitable treatment of the complexity of the proteome by targeting separate assignable causes of variation. We demonstrate an approach, metric and example workflow for users to develop such targeted QC rules systematically and objectively, using a publicly available plasma DIGE data set. Hierarchical clustering analysis of standard channels is first used to discover correlated groups of features corresponding to specific assignable sources of technical variation. These effects are then quantified using a statistical distance metric and followed on control charts. This allows measurement of process drift and the detection of runs that are outliers for any given effect. A known technical issue on originally rejected gels was detected, validating this approach, and relevant novel effects were also detected and classified effectively. Our approach was effective for 2-DE QC. Whilst we demonstrated this in a retrospective DIGE experiment, the principles would apply to ongoing QC and other proteomic technologies. This work asserts that properly carried out QC is essential to proteomics discovery experiments. Its significance is that it provides one possible novel framework for applying such methods, with a particular consideration of how to handle the complexity of the proteome. It not only focuses on 2DE-based methodology but also demonstrates general principles. A combination of results and discussion based upon a publicly available data set is used to illustrate the approach and allows a structured discussion of factors that experimenters may wish to bear in mind in other situations. 
The demonstration is on retrospective data only for reasons of scope, but the principles applied are also important for ongoing QC, and this work serves as a step towards a later demonstration of that application. This article is part of a Special Issue entitled: Standardization and Quality Control in Proteomics. © 2013.
Computer aided manual validation of mass spectrometry-based proteomic data.
Curran, Timothy G; Bryson, Bryan D; Reigelhaupt, Michael; Johnson, Hannah; White, Forest M
2013-06-15
Advances in mass spectrometry-based proteomic technologies have increased the speed of analysis and the depth provided by a single analysis. Computational tools to evaluate the accuracy of peptide identifications from these high-throughput analyses have not kept pace with technological advances; currently the most common quality evaluation methods are based on statistical analysis of the likelihood of false positive identifications in large-scale data sets. While helpful, these calculations do not consider the accuracy of each identification, thus creating a precarious situation for biologists relying on the data to inform experimental design. Manual validation is the gold standard approach to confirm the accuracy of database identifications, but is extremely time-intensive. To reduce the time required to manually validate large proteomic datasets, we provide computer-aided manual validation software (CAMV) to expedite the process. Relevant spectra are collected, catalogued, and pre-labeled, allowing users to efficiently judge the quality of each identification and summarize applicable quantitative information. CAMV significantly reduces the burden associated with manual validation and will hopefully encourage broader adoption of manual validation in mass spectrometry-based proteomics. Copyright © 2013 Elsevier Inc. All rights reserved.
Current algorithmic solutions for peptide-based proteomics data generation and identification.
Hoopmann, Michael R; Moritz, Robert L
2013-02-01
Peptide-based proteomic data sets are ever increasing in size and complexity. These data sets provide computational challenges when attempting to quickly analyze spectra and obtain correct protein identifications. Database search and de novo algorithms must consider high-resolution MS/MS spectra and alternative fragmentation methods. Protein inference is a tricky problem when analyzing large data sets of degenerate peptide identifications. Combining multiple algorithms for improved peptide identification puts significant strain on computational systems when investigating large data sets. This review highlights some of the recent developments in peptide and protein identification algorithms for analyzing shotgun mass spectrometry data when encountering the aforementioned hurdles. Also explored are the roles that analytical pipelines, public spectral libraries, and cloud computing play in the evolution of peptide-based proteomics. Copyright © 2012 Elsevier Ltd. All rights reserved.
Cloud CPFP: a shotgun proteomics data analysis pipeline using cloud and high performance computing.
Trudgian, David C; Mirzaei, Hamid
2012-12-07
We have extended the functionality of the Central Proteomics Facilities Pipeline (CPFP) to allow use of remote cloud and high performance computing (HPC) resources for shotgun proteomics data processing. CPFP has been modified to include modular local and remote scheduling for data processing jobs. The pipeline can now be run on a single PC or server, a local cluster, a remote HPC cluster, and/or the Amazon Web Services (AWS) cloud. We provide public images that allow easy deployment of CPFP in its entirety in the AWS cloud. This significantly reduces the effort necessary to use the software, and allows proteomics laboratories to pay for compute time ad hoc, rather than obtaining and maintaining expensive local server clusters. Alternatively the Amazon cloud can be used to increase the throughput of a local installation of CPFP as necessary. We demonstrate that cloud CPFP allows users to process data at higher speed than local installations but with similar cost and lower staff requirements. In addition to the computational improvements, the web interface to CPFP is simplified, and other functionalities are enhanced. The software is under active development at two leading institutions and continues to be released under an open-source license at http://cpfp.sourceforge.net.
Hamzeiy, Hamid; Cox, Jürgen
2017-02-01
Computational workflows for mass spectrometry-based shotgun proteomics and untargeted metabolomics share many steps. Despite the similarities, untargeted metabolomics is lagging behind in terms of reliable fully automated quantitative data analysis. We argue that metabolomics will strongly benefit from the adaptation of successful automated proteomics workflows to metabolomics. MaxQuant is a popular platform for proteomics data analysis and is widely considered to be superior in achieving high precursor mass accuracies through advanced nonlinear recalibration, usually leading to five to ten-fold better accuracy in complex LC-MS/MS runs. This translates to a sharp decrease in the number of peptide candidates per measured feature, thereby strongly improving the coverage of identified peptides. We argue that similar strategies can be applied to untargeted metabolomics, leading to equivalent improvements in metabolite identification. Copyright © 2016 The Author(s). Published by Elsevier Ltd. All rights reserved.
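The nonlinear mass recalibration mentioned above can be illustrated with a minimal sketch: fit the ppm mass error of confidently identified peaks as a smooth function of m/z, then subtract the predicted systematic error from every measured mass. MaxQuant's actual recalibration model is considerably more elaborate (and conditions on more variables than m/z alone); this only conveys the idea.

```python
import numpy as np

def recalibrate(mz_measured, mz_theoretical, mz_all, degree=2):
    """Illustrative nonlinear recalibration: model the systematic ppm
    mass error of confidently identified peaks as a polynomial in m/z,
    then remove the predicted error from all measured m/z values."""
    ppm_err = (mz_measured - mz_theoretical) / mz_theoretical * 1e6
    coeffs = np.polyfit(mz_measured, ppm_err, degree)  # least-squares fit
    predicted_ppm = np.polyval(coeffs, mz_all)
    return mz_all / (1 + predicted_ppm * 1e-6)
```

After such a correction, the remaining mass error is dominated by random noise, which is what permits the much tighter precursor tolerances that shrink the candidate list per feature.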
The need for agriculture phenotyping: "moving from genotype to phenotype".
Boggess, Mark V; Lippolis, John D; Hurkman, William J; Fagerquist, Clifton K; Briggs, Steve P; Gomes, Aldrin V; Righetti, Pier Giorgio; Bala, Kumar
2013-11-20
Growth of the world population has driven increased demand for agricultural productivity. Traditional methods of augmenting crop and animal production are under mounting pressure to keep pace with population growth. This challenge has in turn led to a transformational change in the use of biotechnology tools to meet increased productivity for both plant and animal systems. Although many challenges exist, the use of proteomic techniques to understand agricultural problems is steadily increasing. This review discusses the impact of genomics, proteomics, metabolomics and phenotypes on plant, animal and bacterial systems to achieve global food security and safety, and we highlight examples of intra- and extramural research work that is currently being done to increase agricultural productivity. This review focuses on the global demand for increased agricultural productivity arising from population growth and how we can address this challenge using biotechnology. With a population well above seven billion humans, in a very unbalanced nutritional state (20% overweight, 20% risking starvation), drastic measures have to be taken at the political, infrastructure and scientific levels. While we cannot influence politics, it is our duty as scientists to see what can be done to feed humanity. Hence we highlight the transformational change in the use of biotechnology tools over traditional methods to increase agricultural productivity (plant and animal). Specifically, this review deals at length with how a three-pronged attack, namely combined genomics, proteomics and metabolomics, can help to ensure global food security and safety. This article is part of a Special Issue entitled: Translational Plant Proteomics. Copyright © 2013 Elsevier B.V. All rights reserved.
Design and initial characterization of the SC-200 proteomics standard mixture.
Bauman, Andrew; Higdon, Roger; Rapson, Sean; Louie, Brenton; Hogan, Jason; Stacy, Robin; Napuli, Alberto; Guo, Wenjin; van Voorhis, Wesley; Roach, Jared; Lu, Vincent; Landorf, Elizabeth; Stewart, Elizabeth; Kolker, Natali; Collart, Frank; Myler, Peter; van Belle, Gerald; Kolker, Eugene
2011-01-01
High-throughput (HTP) proteomics studies generate large amounts of data. Interpretation of these data requires effective approaches to distinguish noise from biological signal, particularly as instrument and computational capacity increase and studies become more complex. Resolving this issue requires validated and reproducible methods and models, which in turn requires complex experimental and computational standards. The absence of appropriate standards and data sets for validating experimental and computational workflows hinders the development of HTP proteomics methods. Most protein standards are simple mixtures of proteins or peptides, or undercharacterized reference standards in which the identity and concentration of the constituent proteins is unknown. The Seattle Children's 200 (SC-200) proposed proteomics standard mixture is the next step toward developing realistic, fully characterized HTP proteomics standards. The SC-200 exhibits a unique modular design to extend its functionality, and consists of 200 proteins of known identities and molar concentrations from 6 microbial genomes, distributed into 10 molar concentration tiers spanning a 1,000-fold range. We describe the SC-200's design, potential uses, and initial characterization. We identified 84% of SC-200 proteins with an LTQ-Orbitrap and 65% with an LTQ-Velos (false discovery rate = 1% for both). There were obvious trends in success rate, sequence coverage, and spectral counts with protein concentration; however, protein identification, sequence coverage, and spectral counts vary greatly within concentration levels. PMID:21250827
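Ten concentration tiers spanning a 1,000-fold range imply roughly a 1000^(1/9) ≈ 2.15-fold step between adjacent tiers if the spacing is geometric; geometric spacing is our assumption for illustration, as the paper itself defines the actual tier values.

```python
def tier_concentrations(top=1000.0, n_tiers=10, span=1000.0):
    """Geometrically spaced molar-concentration tiers: n_tiers values
    spanning a `span`-fold range below `top` (illustrative only)."""
    ratio = span ** (1 / (n_tiers - 1))
    return [top / ratio ** i for i in range(n_tiers)]
```

Spacing the tiers geometrically rather than linearly lets a standard probe identification performance evenly across orders of magnitude of abundance, matching the dynamic range of real samples.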
Gregori, Josep; Villarreal, Laura; Sánchez, Alex; Baselga, José; Villanueva, Josep
2013-12-16
The microarray community has shown that the low reproducibility observed in gene expression-based biomarker discovery studies is partially due to relying solely on p-values to get the lists of differentially expressed genes. Their conclusions recommended complementing the p-value cutoff with the use of effect-size criteria. The aim of this work was to evaluate the influence of such an effect-size filter on spectral counting-based comparative proteomic analysis. The results proved that the filter increased the number of true positives and decreased the number of false positives and the false discovery rate of the dataset. These results were confirmed by simulation experiments where the effect size filter was used to evaluate systematically variable fractions of differentially expressed proteins. Our results suggest that relaxing the p-value cut-off followed by a post-test filter based on effect size and signal level thresholds can increase the reproducibility of statistical results obtained in comparative proteomic analysis. Based on our work, we recommend using a filter consisting of a minimum absolute log2 fold change of 0.8 and a minimum signal of 2-4 SpC on the most abundant condition for the general practice of comparative proteomics. The implementation of feature filtering approaches could improve proteomic biomarker discovery initiatives by increasing the reproducibility of the results obtained among independent laboratories and MS platforms. Quality control analysis of microarray-based gene expression studies pointed out that the low reproducibility observed in the lists of differentially expressed genes could be partially attributed to the fact that these lists are generated relying solely on p-values. Our study has established that the implementation of an effect size post-test filter improves the statistical results of spectral count-based quantitative proteomics. 
The results proved that the filter increased the number of true positives while decreasing the number of false positives and the false discovery rate of the datasets. The results presented here prove that a post-test filter applying reasonable effect size and signal level thresholds helps to increase the reproducibility of statistical results in comparative proteomic analysis. Furthermore, the implementation of feature filtering approaches could improve proteomic biomarker discovery initiatives by increasing the reproducibility of results obtained among independent laboratories and MS platforms. This article is part of a Special Issue entitled: Standardization and Quality Control in Proteomics. Copyright © 2013 Elsevier B.V. All rights reserved.
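The recommended post-test filter (a p-value cutoff, |log2 fold change| ≥ 0.8, and a minimum of 2-4 spectral counts in the more abundant condition) can be sketched as below. The tuple layout of `results` is hypothetical, chosen only for the example.

```python
def effect_size_filter(results, p_cutoff=0.05, min_abs_log2fc=0.8,
                       min_spc=2):
    """Keep proteins that pass the p-value cutoff AND show
    |log2 fold change| >= min_abs_log2fc AND have at least min_spc
    spectral counts in the more abundant condition. Each entry of
    `results` is a (protein, p, log2fc, spc_a, spc_b) tuple."""
    kept = []
    for protein, p, log2fc, spc_a, spc_b in results:
        if (p <= p_cutoff
                and abs(log2fc) >= min_abs_log2fc
                and max(spc_a, spc_b) >= min_spc):
            kept.append(protein)
    return kept
```

The point of the abstract is that the p-value cutoff alone admits small, noisy changes; the two extra conditions discard calls whose effect size or signal level is too low to replicate across laboratories and platforms.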
2005-01-01
proteomic gel analyses. The research group has explored the use of chemodescriptors calculated using high-level ab initio quantum chemical basis sets...descriptors that characterize the entire proteomics map, local descriptors that characterize a subset of the proteins present in the gel, and spectrum...techniques for analyzing the full set of proteins present in a proteomics map. Subject terms: topological indices.
PatternLab for proteomics 4.0: A one-stop shop for analyzing shotgun proteomic data
Carvalho, Paulo C; Lima, Diogo B; Leprevost, Felipe V; Santos, Marlon D M; Fischer, Juliana S G; Aquino, Priscila F; Moresco, James J; Yates, John R; Barbosa, Valmir C
2017-01-01
PatternLab for proteomics is an integrated computational environment that unifies several previously published modules for analyzing shotgun proteomic data. PatternLab contains modules for formatting sequence databases, performing peptide spectrum matching, statistically filtering and organizing shotgun proteomic data, extracting quantitative information from label-free and chemically labeled data, performing statistics for differential proteomics, displaying results in a variety of graphical formats, performing similarity-driven studies with de novo sequencing data, analyzing time-course experiments, and helping with the understanding of the biological significance of data in the light of the Gene Ontology. Here we describe PatternLab for proteomics 4.0, which closely knits together all of these modules in a self-contained environment, covering the principal aspects of proteomic data analysis as a freely available and easily installable software package. All updates to PatternLab, as well as all new features added to it, have been tested over the years on millions of mass spectra. PMID:26658470
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-03
...] Guidances for Industry and Food and Drug Administration Staff: Computer-Assisted Detection Devices Applied... Clinical Performance Assessment: Considerations for Computer-Assisted Detection Devices Applied to... guidance, entitled ``Computer-Assisted Detection Devices Applied to Radiology Images and Radiology Device...
Protection of Computer Programs--A Dilemma.
ERIC Educational Resources Information Center
Carnahan, William H.
Computer programs, as legitimate original inventions or creative written expressions, are entitled to patent or copyright protection. Understanding the legal implications of this concept is crucial to both computer programmers and their employers in our increasingly computer-oriented way of life. Basically the copyright or patent procedure…
Toxicity of heavy metals and metal-containing nanoparticles on plants.
Mustafa, Ghazala; Komatsu, Setsuko
2016-08-01
Plants are under the continual threat of changing climatic conditions that are associated with various types of abiotic stresses. In particular, heavy metal contamination is a major environmental concern that restricts plant growth. Plants absorb heavy metals along with essential elements from the soil and have evolved different strategies to cope with the accumulation of heavy metals. The use of proteomic techniques is an effective approach to investigate and identify the biological mechanisms and pathways affected by heavy metals and metal-containing nanoparticles. The present review focuses on recent advances and summarizes the results from proteomic studies aimed at understanding the response mechanisms of plants under heavy metal and metal-containing nanoparticle stress. Transport of heavy metal ions is regulated through the cell wall and plasma membrane and then sequestered in the vacuole. In addition, the role of different metal chelators involved in the detoxification and sequestration of heavy metals is critically reviewed, and changes in protein profiles of plants exposed to metal-containing nanoparticles are discussed in detail. Finally, strategies for gaining new insights into plant tolerance mechanisms to heavy metal and metal-containing nanoparticle stress are presented. This article is part of a Special Issue entitled: Plant Proteomics--a bridge between fundamental processes and crop production, edited by Dr. Hans-Peter Mock. Copyright © 2016 Elsevier B.V. All rights reserved.
Maternal micronutrient deficiency leads to alteration in the kidney proteome in rat pups.
Ahmad, Shadab; Basak, Trayambak; Anand Kumar, K; Bhardwaj, Gourav; Lalitha, A; Yadav, Dilip K; Chandak, Giriraj Ratan; Raghunath, Manchala; Sengupta, Shantanu
2015-09-08
Maternal nutritional deficiency significantly perturbs the offspring's physiology predisposing them to metabolic diseases during adulthood. Vitamin B12 and folate are two such micronutrients, whose deficiency leads to elevated homocysteine levels. We earlier generated B12 and/or folate deficient rat models and, using a high-throughput proteomic approach, showed that maternal vitamin B12 deficiency modulates carbohydrate and lipid metabolism in the liver of pups through regulation of the PPAR signaling pathway. In this study, using a similar approach, we identified 26 differentially expressed proteins in the kidney of pups born to mothers fed a vitamin B12 deficient diet, while only four proteins were identified in the folate deficient group. Importantly, proteins like calreticulin, cofilin 1 and nucleoside diphosphate kinase B that are involved in the functioning of the kidney were upregulated in the B12 deficient group. Our results hint towards a larger effect of vitamin B12 deficiency compared to that of folate, presumably due to greater elevation of homocysteine in the vitamin B12 deficient group. In view of widespread vitamin B12 and folate deficiency and its association with several diseases like anemia, cardiovascular and renal diseases, our results may have large implications for kidney diseases in populations deficient in vitamin B12, especially in vegetarians and the elderly. This article is part of a Special Issue entitled: Proteomics in India. Copyright © 2015 Elsevier B.V. All rights reserved.
20 CFR 228.20 - Reduction for an employee annuity.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Reduction for an employee annuity. 228.20... COMPUTATION OF SURVIVOR ANNUITIES The Tier I Annuity Component § 228.20 Reduction for an employee annuity. (a) General. If an individual is entitled to an annuity as a survivor, and is also entitled to an employee...
77 FR 67381 - Government-Owned Inventions; Availability for Licensing
Federal Register 2010, 2011, 2012, 2013, 2014
2012-11-09
.... ``Computational and Experimental RNA Nanoparticle Design,'' in Automation in Genomics and Proteomics: An... and Experimental RNA Nanoparticle Design,'' in Automation in Genomics and Proteomics: An Engineering... Development Stage: Prototype Pre-clinical In vitro data available Inventors: Robert J. Crouch and Yutaka...
Shteynberg, David; Deutsch, Eric W.; Lam, Henry; Eng, Jimmy K.; Sun, Zhi; Tasman, Natalie; Mendoza, Luis; Moritz, Robert L.; Aebersold, Ruedi; Nesvizhskii, Alexey I.
2011-01-01
The combination of tandem mass spectrometry and sequence database searching is the method of choice for the identification of peptides and the mapping of proteomes. Over the last several years, the volume of data generated in proteomic studies has increased dramatically, which challenges the computational approaches previously developed for these data. Furthermore, a multitude of search engines have been developed that identify different, overlapping subsets of the sample peptides from a particular set of tandem mass spectrometry spectra. We present iProphet, the new addition to the widely used open-source suite of proteomic data analysis tools Trans-Proteomics Pipeline. Applied in tandem with PeptideProphet, it provides more accurate representation of the multilevel nature of shotgun proteomic data. iProphet combines the evidence from multiple identifications of the same peptide sequences across different spectra, experiments, precursor ion charge states, and modified states. It also allows accurate and effective integration of the results from multiple database search engines applied to the same data. The use of iProphet in the Trans-Proteomics Pipeline increases the number of correctly identified peptides at a constant false discovery rate as compared with both PeptideProphet and another state-of-the-art tool Percolator. As the main outcome, iProphet permits the calculation of accurate posterior probabilities and false discovery rate estimates at the level of sequence identical peptide identifications, which in turn leads to more accurate probability estimates at the protein level. Fully integrated with the Trans-Proteomics Pipeline, it supports all commonly used MS instruments, search engines, and computer platforms. 
The performance of iProphet is demonstrated on two publicly available data sets: data from a human whole cell lysate proteome profiling experiment representative of typical proteomic data sets, and from a set of Streptococcus pyogenes experiments more representative of organism-specific composite data sets. PMID:21876204
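The idea of integrating correctness probabilities from several search engines for the same peptide-spectrum match can be illustrated with a naive odds-product combination. This is only a sketch under an independence assumption; iProphet's actual model is more sophisticated and accounts for how engines co-vary, among other evidence types.

```python
def combine_probabilities(probs, eps=1e-9):
    """Combine per-engine correctness probabilities for one
    peptide-spectrum match by multiplying odds (naive independence
    assumption, for illustration only)."""
    odds = 1.0
    for p in probs:
        p = min(max(p, eps), 1 - eps)  # clamp away from 0 and 1
        odds *= p / (1 - p)
    return odds / (1 + odds)
```

Under this toy model, two engines each reporting 0.9 reinforce each other to a combined score near 0.99, while a 0.9 and a 0.1 cancel to 0.5, which is the qualitative behavior one wants from multi-engine integration.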
Han, Mee-Jung; Yun, Hongseok; Lee, Jeong Wook; Lee, Yu Hyun; Lee, Sang Yup; Yoo, Jong-Shin; Kim, Jin Young; Kim, Jihyun F; Hur, Cheol-Goo
2011-04-01
Escherichia coli K-12 and B strains have most widely been employed for scientific studies as well as industrial applications. Recently, the complete genome sequences of two representative descendants of E. coli B strains, REL606 and BL21(DE3), have been determined. Here, we report the subproteome reference maps of E. coli B REL606 by analyzing cytoplasmic, periplasmic, inner and outer membrane, and extracellular proteomes based on the genome information using experimental and computational approaches. Among the total of 3487 spots, 651 proteins including 410 non-redundant proteins were identified and characterized by 2-DE and LC-MS/MS; they include 440 cytoplasmic, 45 periplasmic, 50 inner membrane, 61 outer membrane, and 55 extracellular proteins. In addition, subcellular localizations of all 4205 ORFs of E. coli B were predicted by combined computational prediction methods. The subcellular localizations of 1812 (43.09%) proteins of currently unknown function were newly assigned. The results of computational prediction were also compared with the experimental results, showing that overall precision and recall were 92.16 and 92.16%, respectively. This work represents the most comprehensive analyses of the subproteomes of E. coli B, and will be useful as a reference for proteome profiling studies under various conditions. The complete proteome data are available online (http://ecolib.kaist.ac.kr). Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
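The reported 92.16% precision and recall compare predicted subcellular localizations against the experimentally determined ones. When every protein in the evaluated set receives exactly one predicted label, precision and recall coincide, as a toy evaluation shows (protein and compartment names below are illustrative).

```python
def precision_recall(predicted, actual):
    """Precision and recall of predicted subcellular localizations
    (dicts mapping protein -> compartment) against experimental ones."""
    tp = sum(1 for prot, loc in predicted.items()
             if actual.get(prot) == loc)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(actual) if actual else 0.0
    return precision, recall
```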
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-24
... Music and Data Processing Devices, Computers, and Components Thereof; Notice of Receipt of Complaint... complaint entitled Wireless Communication Devices, Portable Music and Data Processing Devices, Computers..., portable music and data processing devices, computers, and components thereof. The complaint names as...
Method and platform standardization in MRM-based quantitative plasma proteomics.
Percy, Andrew J; Chambers, Andrew G; Yang, Juncong; Jackson, Angela M; Domanski, Dominik; Burkhart, Julia; Sickmann, Albert; Borchers, Christoph H
2013-12-16
There exists a growing demand in the proteomics community to standardize experimental methods and liquid chromatography-mass spectrometry (LC/MS) platforms in order to enable the acquisition of more precise and accurate quantitative data. This necessity is heightened by the evolving trend of verifying and validating candidate disease biomarkers in complex biofluids, such as blood plasma, through targeted multiple reaction monitoring (MRM)-based approaches with stable isotope-labeled standards (SIS). Considering the lack of performance standards for quantitative plasma proteomics, we previously developed two reference kits to evaluate the MRM with SIS peptide approach using undepleted and non-enriched human plasma. The first kit tests the effectiveness of the LC/MRM-MS platform (kit #1), while the second evaluates the performance of an entire analytical workflow (kit #2). Here, these kits have been refined for practical use and then evaluated through intra- and inter-laboratory testing on 6 common LC/MS platforms. For an identical panel of 22 plasma proteins, similar concentrations were determined, regardless of the kit, instrument platform, and laboratory of analysis. These results demonstrate the value of the kit and reinforce the utility of standardized methods and protocols. The proteomics community needs standardized experimental protocols and quality control methods in order to improve the reproducibility of MS-based quantitative data. This need is heightened by the evolving trend for MRM-based validation of proposed disease biomarkers in complex biofluids such as blood plasma. We have developed two kits to assist in the inter- and intra-laboratory quality control of MRM experiments: the first kit tests the effectiveness of the LC/MRM-MS platform (kit #1), while the second evaluates the performance of an entire analytical workflow (kit #2). In this paper, we report the use of these kits in intra- and inter-laboratory testing on 6 common LC/MS platforms. 
This article is part of a Special Issue entitled: Standardization and Quality Control in Proteomics. © 2013.
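At its core, quantification in an MRM-with-SIS experiment scales the light/heavy (endogenous/SIS) peak-area ratio by the known spiked-in SIS concentration. A minimal sketch follows; taking the median ratio across transitions for robustness is our illustrative choice, not a prescription from the kits described above.

```python
from statistics import median

def quantify_with_sis(pairs, sis_conc_fmol_ul):
    """Absolute quantification from MRM peak areas.
    pairs: (endogenous_area, sis_area) for each monitored transition.
    Returns the endogenous concentration, assuming the SIS peptide was
    spiked at sis_conc_fmol_ul and co-elutes with its light analogue."""
    ratios = [endo / sis for endo, sis in pairs]
    return median(ratios) * sis_conc_fmol_ul
```

Because the heavy standard experiences the same losses and ion suppression as the endogenous peptide from the point of spiking onward, the ratio cancels most platform-specific variation, which is what makes cross-laboratory comparisons like those in the kit study feasible.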
Reis, Henning; Pütter, Carolin; Megger, Dominik A; Bracht, Thilo; Weber, Frank; Hoffmann, Andreas-C; Bertram, Stefanie; Wohlschläger, Jeremias; Hagemann, Sascha; Eisenacher, Martin; Scherag, André; Schlaak, Jörg F; Canbay, Ali; Meyer, Helmut E; Sitek, Barbara; Baba, Hideo A
2015-06-01
Hepatocellular carcinoma (HCC) is a major lethal cancer worldwide. Despite sophisticated diagnostic algorithms, the differential diagnosis of small liver nodules is still difficult. While imaging techniques have advanced, adjuvant protein biomarkers such as glypican3 (GPC3), glutamine-synthetase (GS) and heat-shock protein 70 (HSP70) have enhanced diagnostic accuracy. The aim was to further detect useful protein biomarkers of HCC with a structured systematic approach using differential proteome techniques, to bring the results to practical application, and to compare the diagnostic accuracy of the candidates with the established biomarkers. After label-free and gel-based proteomics (n=18 HCC/corresponding non-tumorous liver tissue (NTLT)) biomarker candidates were tested for diagnostic accuracy in immunohistochemical analyses (n=14 HCC/NTLT). Suitable candidates were further tested for consistency in comparison to known protein biomarkers in HCC (n=78), hepatocellular adenoma (n=25; HCA), focal nodular hyperplasia (n=28; FNH) and cirrhosis (n=28). Of all protein biomarkers, 14-3-3Sigma (14-3-3S) exhibited the most pronounced up-regulation (58.8×) in proteomics and superior diagnostic accuracy (73.0%) in the differentiation of HCC from non-tumorous hepatocytes, also compared to established biomarkers such as GPC3 (64.7%) and GS (45.4%). 14-3-3S was part of the best diagnostic three-biomarker panel (GPC3, HSP70, 14-3-3S) for the differentiation of HCC from HCA, the clinically most important distinction. Exclusion of GS and inclusion of 14-3-3S in the panel (>1 marker positive) resulted in a profound increase in specificity (+44.0%) and accuracy (+11.0%) while sensitivity remained stable (96.0%). 14-3-3S is an interesting protein biomarker with the potential to further improve the accuracy of the differential diagnostic process of hepatocellular tumors. This article is part of a Special Issue entitled: Medical Proteomics. Copyright © 2014 Elsevier B.V. All rights reserved.
Explorations in Space and Time: Computer-Generated Astronomy Films
ERIC Educational Resources Information Center
Meeks, M. L.
1973-01-01
Discusses the use of the computer animation technique to travel through space and time and watch models of astronomical systems in motion. Included is a list of eight computer-generated demonstration films entitled "Explorations in Space and Time." (CC)
Lam, Maggie P Y; Scruggs, Sarah B; Kim, Tae-Young; Zong, Chenggong; Lau, Edward; Wang, Ding; Ryan, Christopher M; Faull, Kym F; Ping, Peipei
2012-08-03
The regulation of mitochondrial function is essential for cardiomyocyte adaptation to cellular stress. While it has long been understood that phosphorylation regulates flux through metabolic pathways, novel phosphorylation sites are continually being discovered in all functionally distinct areas of the mitochondrial proteome. Extracting biologically meaningful information from these phosphorylation sites requires an adaptable, sensitive, specific and robust method for their quantification. Here we report a multiple reaction monitoring-based mass spectrometric workflow for quantifying site-specific phosphorylation of mitochondrial proteins. Specifically, chromatographic and mass spectrometric conditions for 68 transitions derived from 23 murine and human phosphopeptides, and their corresponding unmodified peptides, were optimized. These methods enabled the quantification of endogenous phosphopeptides from the outer mitochondrial membrane protein VDAC, and the inner membrane proteins ANT and ETC complexes I, III and V. The development of this quantitative workflow is a pivotal step for advancing our knowledge and understanding of the regulatory effects of mitochondrial protein phosphorylation in cardiac physiology and pathophysiology. This article is part of a Special Issue entitled: Translational Proteomics. Copyright © 2012 Elsevier B.V. All rights reserved.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-26
...; (Formerly FDA-2007D-0393)] Guidance for Industry: Blood Establishment Computer System Validation in the User... Industry: Blood Establishment Computer System Validation in the User's Facility'' dated April 2013. The... document entitled ``Guidance for Industry: Blood Establishment Computer System Validation in the User's...
Effect of greenhouse conditions on the leaf apoplastic proteome of Coffea arabica plants.
Guerra-Guimarães, Leonor; Vieira, Ana; Chaves, Inês; Pinheiro, Carla; Queiroz, Vagner; Renaut, Jenny; Ricardo, Cândido P
2014-06-02
This work describes the coffee leaf apoplastic proteome and its modulation by the greenhouse conditions. The apoplastic fluid (APF) was obtained by leaf vacuum infiltration, and the recovered proteins were separated by 2-DE and subsequently identified by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry, followed by homology search in EST coffee databases. Prediction tools revealed that the majority of the 195 identified proteins are involved in cell wall metabolism and in stress/defense responses. Although most of the proteins follow the classical secretory mechanism, a low percentage of them seem to result from unconventional secretion (leaderless secreted proteins). Principal components analysis revealed that the APF samples formed two distinct groups, with the temperature amplitude mostly contributing to this separation (higher or lower than 10°C, respectively). Sixty-one polypeptide spots defined these two groups, and 28 proteins were identified, belonging to carbohydrate metabolism, cell wall modification and proteolysis. Interestingly, stress/defense proteins were more abundant in Group I, which is associated with a higher temperature amplitude. It seems that the proteins in the coffee leaf APF might be implicated in structural modifications in the extracellular space that are crucial for plant development/adaptation to the conditions of the prevailing environment. This is the first detailed proteomic study of the coffee leaf apoplastic fluid (APF) and of its modulation by the greenhouse conditions. The comprehensive overview of the most abundant proteins present in the extra-cellular compartment is particularly important for the understanding of coffee responses to abiotic/biotic stress. This article is part of a Special Issue entitled: Environmental and structural proteomics. Copyright © 2014 Elsevier B.V. All rights reserved.
Marcon, Caroline; Lamkemeyer, Tobias; Malik, Waqas Ahmed; Ungrue, Denise; Piepho, Hans-Peter; Hochholdinger, Frank
2013-11-20
Heterosis is the superior performance of heterozygous F1-hybrid plants compared to their homozygous, genetically distinct parents. Seminal roots are embryonic roots that play an important role during early maize (Zea mays L.) seedling development. In the present study the most abundant soluble proteins of 2-4 cm seminal roots of the reciprocal maize F1-hybrids B73×Mo17 and Mo17×B73 and their parental inbred lines B73 and Mo17 were quantified by label-free LC-MS/MS. In total, 1918 proteins were detected by this shotgun approach. Among those, 970 were represented by at least two peptides and were further analyzed. Eighty-five proteins displayed non-additive accumulation in at least one hybrid. The functional category protein metabolism was the most abundant class of non-additive proteins, represented by 27 proteins. Within this category, 16 of 17 non-additively accumulated ribosomal proteins showed high-parent or above-high-parent expression in seminal roots. These results imply that an increased protein synthesis rate in hybrids might be related to the early manifestation of hybrid vigor in seminal roots. In the present study a shotgun proteomics approach allowed for the identification of 1918 proteins and analysis of 970 seminal root proteins of maize that were represented by at least 2 peptides. The comparison of proteome complexity of reciprocal hybrids and their parental inbred lines indicates an increased protein synthesis rate in hybrids that may contribute to the early manifestation of heterosis in seminal roots. This article is part of a Special Issue entitled: Translational Plant Proteomics. Copyright © 2013 Elsevier B.V. All rights reserved.
Compression strength of composite primary structural components
NASA Technical Reports Server (NTRS)
Johnson, Eric R.
1993-01-01
Two projects are summarized. The first project is entitled 'Stiffener Crippling Initiated by Delaminations'; its objective is to develop a computational model of the stiffener specimens that includes the capability to predict the interlaminar stress response at the flange free edge in postbuckling. The second is entitled 'Pressure Pillowing of an Orthogonally Stiffened Cylindrical Shell'. A paper written on this project is included.
This research project combines the use of whole organism endpoints, genomic, proteomic and metabolomic approaches, and computational modeling in a systems biology approach to 1) identify molecular indicators of exposure and biomarkers of effect to EDCs representing several modes/...
Deutsch, Eric W.; Mendoza, Luis; Shteynberg, David; Slagel, Joseph; Sun, Zhi; Moritz, Robert L.
2015-01-01
Democratization of genomics technologies has enabled the rapid determination of genotypes. More recently the democratization of comprehensive proteomics technologies is enabling the determination of the cellular phenotype and the molecular events that define its dynamic state. Core proteomic technologies include mass spectrometry to define protein sequence, protein:protein interactions, and protein post-translational modifications. Key enabling technologies for proteomics are bioinformatic pipelines to identify, quantitate, and summarize these events. The Trans-Proteomics Pipeline (TPP) is a robust open-source standardized data processing pipeline for large-scale reproducible quantitative mass spectrometry proteomics. It supports all major operating systems and instrument vendors via open data formats. Here we provide a review of the overall proteomics workflow supported by the TPP, its major tools, and how it can be used in its various modes from desktop to cloud computing. We describe new features for the TPP, including data visualization functionality. We conclude by describing some common perils that affect the analysis of tandem mass spectrometry datasets, as well as some major upcoming features. PMID:25631240
Single-molecule protein sequencing through fingerprinting: computational assessment
NASA Astrophysics Data System (ADS)
Yao, Yao; Docter, Margreet; van Ginkel, Jetty; de Ridder, Dick; Joo, Chirlmin
2015-10-01
Proteins are vital in all biological systems as they constitute the main structural and functional components of cells. Recent advances in mass spectrometry have brought the promise of complete proteomics by helping draft the human proteome. Yet, this commonly used protein sequencing technique has fundamental limitations in sensitivity. Here we propose a method for single-molecule (SM) protein sequencing. A major challenge lies in the fact that proteins are composed of 20 different amino acids, which demands 20 molecular reporters. We computationally demonstrate that it suffices to measure only two types of amino acids to identify proteins and suggest an experimental scheme using SM fluorescence. When achieved, this highly sensitive approach will result in a paradigm shift in proteomics, with major impact in the biological and medical sciences.
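The core claim above, that measuring only two amino-acid types can suffice to identify proteins, can be illustrated with a small computational sketch. All sequences below are hypothetical, and cysteine/lysine are chosen as the marked residues only for illustration (they are not necessarily the pair the authors propose):

```python
# Toy illustration (not the authors' code): reduce each protein sequence to a
# "fingerprint" that records only cysteine (C) and lysine (K) positions, then
# check whether fingerprints are unique within a small hypothetical database.

def fingerprint(seq, marked=("C", "K")):
    """Keep only the two marked residue types; every other residue becomes
    'x', so positional information is retained."""
    return "".join(ch if ch in marked else "x" for ch in seq)

# Hypothetical mini-database of protein sequences.
database = {
    "protA": "MKTAYICAKQ",
    "protB": "MATAYIAAQQ",
    "protC": "MKCAYIAAKQ",
}

prints = {name: fingerprint(seq) for name, seq in database.items()}

# A protein is identifiable if no other entry shares its fingerprint.
unique = [n for n, fp in prints.items()
          if sum(fp == other for other in prints.values()) == 1]

print(prints["protA"])   # xKxxxxCxKx
print(sorted(unique))
```

In a realistic setting the database would hold a whole proteome and the match would tolerate labeling errors, but the principle is the same: sparse two-letter fingerprints are often already distinctive.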
Recent developments in structural proteomics for protein structure determination.
Liu, Hsuan-Liang; Hsu, Jyh-Ping
2005-05-01
The major challenges in structural proteomics include identifying all the proteins on a genome-wide scale, determining their structure-function relationships, and outlining the precise three-dimensional structures of the proteins. Protein structures are typically determined by experimental approaches such as X-ray crystallography or nuclear magnetic resonance (NMR) spectroscopy. However, the coverage of three-dimensional structure space achieved by these techniques is still limited. Thus, computational methods such as comparative and de novo approaches and molecular dynamics simulations are intensively used as alternative tools to predict the three-dimensional structures and dynamic behavior of proteins. This review summarizes recent developments in structural proteomics for protein structure determination, including instrumental methods such as X-ray crystallography and NMR spectroscopy, and computational methods such as comparative and de novo structure prediction and molecular dynamics simulations.
Design Process of a Goal-Based Scenario on Computing Fundamentals
ERIC Educational Resources Information Center
Beriswill, Joanne Elizabeth
2014-01-01
In this design case, an instructor developed a goal-based scenario (GBS) for undergraduate computer fundamentals students to apply their knowledge of computer equipment and software. The GBS, entitled the MegaTech Project, presented the students with descriptions of the everyday activities of four persons needing to purchase a computer system. The…
20 CFR 404.233 - Adjustment of your guaranteed alternative when you become entitled after age 62.
Code of Federal Regulations, 2010 CFR
2010-04-01
... at the time you reach age 62, we adjust the guaranteed alternative computed for you under § 404.232... amounts that go into effect in the year you reach age 62 and in years up through the year you become... since December 1978.) Example: Mr. C reaches age 62 in January 1981 and becomes entitled to old-age...
20 CFR 404.233 - Adjustment of your guaranteed alternative when you become entitled after age 62.
Code of Federal Regulations, 2011 CFR
2011-04-01
... at the time you reach age 62, we adjust the guaranteed alternative computed for you under § 404.232... amounts that go into effect in the year you reach age 62 and in years up through the year you become... since December 1978.) Example: Mr. C reaches age 62 in January 1981 and becomes entitled to old-age...
Computers in Composition Instruction.
ERIC Educational Resources Information Center
Shostak, Robert, Ed.
This volume consists of nine conference papers and journal articles concerned with microcomputer applications in the teaching of writing. After a general introduction entitled "Computer-Assisted Composition Instruction: The State of the Art," by Robert Shostak, four papers are devoted to how computers may help with the writing process. In…
Computational chemistry research
NASA Technical Reports Server (NTRS)
Levin, Eugene
1987-01-01
Task 41 is composed of two parts: (1) analysis and design studies related to the Numerical Aerodynamic Simulation (NAS) Extended Operating Configuration (EOC) and (2) computational chemistry. During the first half of 1987, Dr. Levin served as a member of an advanced system planning team to establish the requirements, goals, and principal technical characteristics of the NAS EOC. A paper entitled 'Scaling of Data Communications for an Advanced Supercomputer Network' is included. The high-temperature transport properties (such as viscosity, thermal conductivity, etc.) of the major constituents of air (oxygen and nitrogen) were accurately determined. The results of prior ab initio computer solutions of the Schroedinger equation were combined with the best available experimental data to obtain complete interaction potentials for both neutral and ion-atom collision partners. These potentials were then used in a computer program to evaluate the collision cross-sections from which the transport properties could be determined. A paper entitled 'High Temperature Transport Properties of Air' is included.
Li, Jingping; Guo, Wenbin; Li, Fei; He, Jincan; Yu, Qingfeng; Wu, Xiaoqiang; Li, Jianming; Mao, Xiangming
2012-06-06
Sertoli cell only syndrome (SCOS) is one of the main causes of abnormal spermatogenesis. However, the mechanisms underlying abnormal spermatogenesis in SCOS are still unclear. Here, we analyzed clinical testis samples of SCOS patients by two-dimensional gel electrophoresis (2-DE) and matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF/TOF MS) to find the key factors contributing to SCOS. Thirteen differential proteins were identified in clinical testis samples between the normal spermatogenesis group and the SCOS group. Interestingly, among these differential proteins, heterogeneous nuclear ribonucleoprotein L (HnRNPL) was suggested as a key regulator involved in apoptosis, death and growth of spermatogenic cells by the String and Pubgene bioinformatic programs. Down-regulation of HnRNPL in testis samples of SCOS patients was further confirmed by immunohistochemical staining and western blotting. Moreover, in vitro and in vivo experiments demonstrated that knockdown of HnRNPL led to inhibited proliferation and increased apoptosis of spermatogenic cells but decreased apoptosis of Sertoli cells. Expression of carcinoembryonic antigen-related cell adhesion molecule 1 in GC-1 cells, and expression of inducible nitric oxide synthase in TM4 Sertoli cells, was found to be regulated by HnRNPL. Our study is the first to identify HnRNPL as a key factor involved in spermatogenesis through functional proteomic study of azoospermia patients with Sertoli cell only syndrome. This article is part of a Special Issue entitled: Proteomics: The clinical link. Copyright © 2012 Elsevier B.V. All rights reserved.
The plasma membrane proteome of maize roots grown under low and high iron conditions.
Hopff, David; Wienkoop, Stefanie; Lüthje, Sabine
2013-10-08
Iron (Fe) homeostasis is essential for life and has been intensively investigated in dicots, while our knowledge of species in the Poaceae is fragmentary. This study presents the first proteome analysis (LC-MS/MS) of plasma membranes isolated from roots of 18-day-old maize (Zea mays L.). Plants were grown under low and high Fe conditions in hydroponic culture. In total, 227 proteins were identified in control plants, whereas 204 proteins were identified in Fe-deficient plants and 251 proteins in plants grown under high Fe conditions. Proteins were sorted by functional class, and most of the identified proteins were classified as signaling proteins. A significant number of PM-bound redox proteins could be identified, including quinone reductases and heme- and copper-containing proteins. Most of these components were constitutive, while changes in the abundance of others hint at an involvement of redox signaling and redox homeostasis. Energy metabolism and translation seem to be crucial in Fe homeostasis. The response to Fe deficiency includes proteins involved in development, whereas membrane remodeling and the assembly and/or repair of Fe-S clusters are discussed for Fe toxicity. The general stress response appears to involve proteins related to oxidative stress, growth regulation, increased rigidity and synthesis of cell walls, and adaptation of nutrient uptake and/or translocation. This article is part of a Special Issue entitled: Plant Proteomics in Europe. Copyright © 2013 Elsevier B.V. All rights reserved.
Computational approaches to protein inference in shotgun proteomics
2012-01-01
Shotgun proteomics has recently emerged as a powerful approach to characterizing proteomes in biological samples. Its overall objective is to identify the form and quantity of each protein in a high-throughput manner by coupling liquid chromatography with tandem mass spectrometry. As a consequence of its high-throughput nature, shotgun proteomics faces challenges with respect to the analysis and interpretation of experimental data. Among such challenges, the identification of proteins present in a sample has been recognized as an important computational task. This task generally consists of (1) assigning experimental tandem mass spectra to peptides derived from a protein database, and (2) mapping assigned peptides to proteins and quantifying the confidence of identified proteins. Protein identification is fundamentally a statistical inference problem with a number of methods proposed to address its challenges. In this review we categorize current approaches into rule-based, combinatorial optimization and probabilistic inference techniques, and present them using integer programming and Bayesian inference frameworks. We also discuss the main challenges of protein identification and propose potential solutions with the goal of spurring innovative research in this area. PMID:23176300
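The rule-based (parsimony) family of approaches this review categorizes is essentially a minimum set-cover problem, and a common heuristic is a greedy cover. The sketch below uses a hypothetical peptide-to-protein mapping and is not code from the review:

```python
# Sketch (assumed example): parsimonious protein inference as greedy set
# cover -- repeatedly pick the protein explaining the most still-unexplained
# peptides, until every identified peptide is covered.

def parsimonious_proteins(protein_to_peptides):
    uncovered = set().union(*protein_to_peptides.values())
    selected = []
    while uncovered:
        # Protein covering the most uncovered peptides (ties broken by name).
        best = max(sorted(protein_to_peptides),
                   key=lambda p: len(protein_to_peptides[p] & uncovered))
        selected.append(best)
        uncovered -= protein_to_peptides[best]
    return selected

# Hypothetical peptide-to-protein mapping from a database search.
mapping = {
    "P1": {"pepA", "pepB", "pepC"},
    "P2": {"pepB"},            # subsumed by P1, so never selected
    "P3": {"pepC", "pepD"},
}

print(parsimonious_proteins(mapping))  # ['P1', 'P3']
```

Exact formulations solve the same cover as an integer program, and probabilistic methods replace the hard cover with posterior probabilities over proteins, but the greedy version captures the parsimony intuition in a few lines.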
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-08
... Phones and Tablet Computers, and Components Thereof; Notice of Receipt of Complaint; Solicitation of... entitled Certain Electronic Devices, Including Mobile Phones and Tablet Computers, and Components Thereof... the United States after importation of certain electronic devices, including mobile phones and tablet...
The Communicative Computer Compares: A CALL Design Project for Elementary French.
ERIC Educational Resources Information Center
Kyle, Patricia J.
A computer lesson entitled "Aux Jeux Olympiques" (To the Olympic Games) simulates an ongoing situational dialog between the French student and the PLATO computer system. It offers an international setting for functional learning exercises focusing on students' understanding and use of comparative constructions, selected verbs, and other linguistic…
FunRich proteomics software analysis, let the fun begin!
Benito-Martin, Alberto; Peinado, Héctor
2015-08-01
Protein MS analysis is the preferred method for unbiased protein identification. It is normally applied to a large number of both small-scale and high-throughput studies. However, user-friendly computational tools for protein analysis are still needed. In this issue, Mathivanan and colleagues (Proteomics 2015, 15, 2597-2601) report the development of FunRich software, an open-access software that facilitates the analysis of proteomics data, providing tools for functional enrichment and interaction network analysis of genes and proteins. FunRich is a reinterpretation of proteomic software, a standalone tool combining ease of use with customizable databases, free access, and graphical representations. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Cohen Freue, Gabriela V.; Meredith, Anna; Smith, Derek; Bergman, Axel; Sasaki, Mayu; Lam, Karen K. Y.; Hollander, Zsuzsanna; Opushneva, Nina; Takhar, Mandeep; Lin, David; Wilson-McManus, Janet; Balshaw, Robert; Keown, Paul A.; Borchers, Christoph H.; McManus, Bruce; Ng, Raymond T.; McMaster, W. Robert
2013-01-01
Recent technical advances in the field of quantitative proteomics have stimulated a large number of biomarker discovery studies of various diseases, providing avenues for new treatments and diagnostics. However, inherent challenges have limited the successful translation of candidate biomarkers into clinical use, thus highlighting the need for a robust analytical methodology to transition from biomarker discovery to clinical implementation. We have developed an end-to-end computational proteomic pipeline for biomarker studies. At the discovery stage, the pipeline emphasizes different aspects of experimental design, appropriate statistical methodologies, and quality assessment of results. At the validation stage, the pipeline focuses on the migration of the results to a platform appropriate for external validation, and the development of a classifier score based on corroborated protein biomarkers. At the last stage towards clinical implementation, the main aims are to develop and validate an assay suitable for clinical deployment, and to calibrate the biomarker classifier using the developed assay. The proposed pipeline was applied to a biomarker study in cardiac transplantation aimed at developing a minimally invasive clinical test to monitor acute rejection. Starting with an untargeted screening of the human plasma proteome, five candidate biomarker proteins were identified. Rejection-regulated proteins reflect cellular and humoral immune responses, acute phase inflammatory pathways, and lipid metabolism biological processes. A multiplexed multiple reaction monitoring mass spectrometry (MRM-MS) assay was developed for the five candidate biomarkers and validated by enzyme-linked immunosorbent assays (ELISA) and immunonephelometric assays (INA). A classifier score based on corroborated proteins demonstrated that the developed MRM-MS assay provides an appropriate methodology for an external validation, which is still in progress.
Plasma proteomic biomarkers of acute cardiac rejection may offer a relevant post-transplant monitoring tool to effectively guide clinical care. The proposed computational pipeline is highly applicable to a wide range of biomarker proteomic studies. PMID:23592955
ERIC Educational Resources Information Center
Wingersky, Marilyn S.; and others
1969-01-01
One in a series of nine articles in a section entitled "Electronic Computer Program and Accounting Machine Procedures." Research supported in part by contract Nonr-2752(00) from the Office of Naval Research.
MzJava: An open source library for mass spectrometry data processing.
Horlacher, Oliver; Nikitin, Frederic; Alocci, Davide; Mariethoz, Julien; Müller, Markus; Lisacek, Frederique
2015-11-03
Mass spectrometry (MS) is a widely used and evolving technique for the high-throughput identification of molecules in biological samples. The need for sharing and reuse of code among bioinformaticians working with MS data prompted the design and implementation of MzJava, an open-source Java Application Programming Interface (API) for MS-related data processing. MzJava provides data structures and algorithms for representing and processing mass spectra and their associated biological molecules, such as metabolites, glycans and peptides. MzJava includes functionality to perform mass calculation, peak processing (e.g. centroiding, filtering, transforming), spectrum alignment and clustering, protein digestion, fragmentation of peptides and glycans, as well as scoring functions for spectrum-spectrum and peptide/glycan-spectrum matches. For data import and export, MzJava implements readers and writers for commonly used data formats. For many classes, support for the Hadoop MapReduce (hadoop.apache.org) and Apache Spark (spark.apache.org) cluster-computing frameworks was implemented. The library has been developed applying best practices of software engineering. To ensure that MzJava contains code that is correct and easy to use, the library's API was carefully designed and thoroughly tested. MzJava is an open-source project distributed under the AGPL v3.0 licence. MzJava requires Java 1.7 or higher. Binaries, source code and documentation can be downloaded from http://mzjava.expasy.org and https://bitbucket.org/sib-pig/mzjava. This article is part of a Special Issue entitled: Computational Proteomics. Copyright © 2015 Elsevier B.V. All rights reserved.
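As a language-neutral illustration of two of the peak-processing steps listed above (intensity filtering and centroiding), here is a short Python sketch; it deliberately does not reproduce MzJava's actual Java API, and the tolerance, threshold and peak data are invented:

```python
# Generic sketch of spectrum peak processing (illustrative Python, NOT the
# MzJava Java API): intensity filtering followed by a simple merging of
# adjacent profile-mode points into centroid peaks.

def filter_peaks(peaks, min_intensity):
    """Drop peaks below an intensity threshold; peaks are (mz, intensity)."""
    return [(mz, i) for mz, i in peaks if i >= min_intensity]

def centroid(peaks, tol=0.05):
    """Merge runs of profile points spaced within `tol` m/z into single
    centroid peaks at the intensity-weighted mean m/z."""
    out, group = [], []
    for mz, i in sorted(peaks):
        if group and mz - group[-1][0] > tol:
            out.append(_merge(group))
            group = []
        group.append((mz, i))
    if group:
        out.append(_merge(group))
    return out

def _merge(group):
    total = sum(i for _, i in group)
    wmz = sum(mz * i for mz, i in group) / total
    return (round(wmz, 4), total)

# Invented profile data: three points forming one peak, plus a noise spike.
profile = [(100.00, 10.0), (100.02, 30.0), (100.04, 10.0), (250.00, 5.0)]
print(centroid(filter_peaks(profile, 8.0)))
```

Production libraries such as MzJava implement these operations over richer spectrum objects and data formats, but the underlying transformations are of this shape.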
HTAPP: High-Throughput Autonomous Proteomic Pipeline
Yu, Kebing; Salomon, Arthur R.
2011-01-01
Recent advances in the speed and sensitivity of mass spectrometers and in analytical methods, the exponential acceleration of computer processing speeds, and the availability of genomic databases from an array of species and protein information databases have led to a deluge of proteomic data. The development of a lab-based automated proteomic software platform for the automated collection, processing, storage, and visualization of expansive proteomic datasets is critically important. The high-throughput autonomous proteomic pipeline (HTAPP) described here is designed from the ground up to provide critically important flexibility for diverse proteomic workflows and to streamline the total analysis of a complex proteomic sample. This tool comprises software that controls the acquisition of mass spectral data along with automation of post-acquisition tasks such as peptide quantification, clustered MS/MS spectral database searching, statistical validation, and data exploration within a user-configurable lab-based relational database. The software design of HTAPP focuses on accommodating diverse workflows and providing missing software functionality to a wide range of proteomic researchers to accelerate the extraction of biological meaning from immense proteomic datasets. Although individual software modules in our integrated technology platform may have some similarities to existing tools, the true novelty of the approach described here is in the synergistic and flexible combination of these tools to provide an integrated and efficient analysis of proteomic samples. PMID:20336676
The MaxQuant computational platform for mass spectrometry-based shotgun proteomics.
Tyanova, Stefka; Temu, Tikira; Cox, Juergen
2016-12-01
MaxQuant is one of the most frequently used platforms for mass-spectrometry (MS)-based proteomics data analysis. Since its first release in 2008, it has grown substantially in functionality and can be used in conjunction with more MS platforms. Here we present an updated protocol covering the most important basic computational workflows, including those designed for quantitative label-free proteomics, MS1-level labeling and isobaric labeling techniques. This protocol presents a complete description of the parameters used in MaxQuant, as well as of the configuration options of its integrated search engine, Andromeda. This protocol update describes an adaptation of an existing protocol that substantially modifies the technique. Important concepts of shotgun proteomics and their implementation in MaxQuant are briefly reviewed, including different quantification strategies and the control of false-discovery rates (FDRs), as well as the analysis of post-translational modifications (PTMs). The MaxQuant output tables, which contain information about quantification of proteins and PTMs, are explained in detail. Furthermore, we provide a short version of the workflow that is applicable to data sets with simple and standard experimental designs. The MaxQuant algorithms are efficiently parallelized on multiple processors and scale well from desktop computers to servers with many cores. The software is written in C# and is freely available at http://www.maxquant.org.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matzke, Melissa M.; Brown, Joseph N.; Gritsenko, Marina A.
2013-02-01
Liquid chromatography coupled with mass spectrometry (LC-MS) is widely used to identify and quantify peptides in complex biological samples. In particular, label-free shotgun proteomics is highly effective for the identification of peptides and subsequently obtaining a global protein profile of a sample. As a result, this approach is widely used for discovery studies. Typically, the objective of these discovery studies is to identify proteins that are affected by some condition of interest (e.g. disease, exposure). However, for complex biological samples, label-free LC-MS proteomics experiments measure peptides and do not directly yield protein quantities. Thus, protein quantification must be inferred from one or more measured peptides. In recent years, many computational approaches to relative protein quantification of label-free LC-MS data have been published. In this review, we examine the most commonly employed quantification approaches to relative protein abundance from peak intensity values, evaluate their individual merits, and discuss challenges in the use of the various computational approaches.
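The peptide-to-protein roll-up problem described in this review can be made concrete with a minimal sketch. Taking the median over observed peptide intensities is one common choice among the approaches surveyed (others use sums or top-N averages); the intensity values below are invented:

```python
# Minimal sketch (assumed data): inferring a relative protein abundance from
# its peptides' peak intensities, using the median to damp outlier peptides
# and skipping missing values (None), which are common in label-free data.

from statistics import median

def protein_abundance(peptide_intensities):
    observed = [x for x in peptide_intensities if x is not None]
    return median(observed) if observed else None

# Hypothetical peptide intensities for one protein in two conditions.
control = [1.2e6, 1.5e6, None, 1.4e6]
disease = [2.5e6, 2.8e6, 2.6e6, None]

ratio = protein_abundance(disease) / protein_abundance(control)
print(round(ratio, 2))  # 1.86
```

Real pipelines additionally normalize across runs and model missingness explicitly; this sketch only shows the core intensity-to-abundance step the review compares across methods.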
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-14
..., or Partially-Exclusive Licensing of an Invention Concerning a Computer Controlled System for Laser... provides a computer controlled system for laser energy delivery to the retina. Information is received from... Application Serial No. 13/130,380, entitled ``Computer Controlled System for Laser Energy Delivery to the...
77 FR 34941 - Privacy Act of 1974; Notice of a Computer Matching Program
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-12
...; Notice of a Computer Matching Program AGENCY: Defense Manpower Data Center, DoD. ACTION: Notice of a... computer matching program are the Department of Veterans Affairs (VA) and the Defense Manpower Data Center... identified as DMDC 01, entitled ``Defense Manpower Data Center Data Base,'' last published in the Federal...
Abstract
The EPA sponsored a workshop held September 29-30, 2003 at the EPA in RTP that was focused on a proposal entitled "A Framework for a Computational Toxicology Research Program in ORD" (www.epa.gov/computox). Computational toxicology is a new research ini...
Rubiano-Labrador, Carolina; Bland, Céline; Miotello, Guylaine; Guérin, Philippe; Pible, Olivier; Baena, Sandra; Armengaud, Jean
2014-01-31
Tistlia consotensis is a halotolerant member of the Rhodospirillaceae that was isolated from a saline spring located in the Colombian Andes with a salt concentration close to that of seawater (4.5% w/vol). We cultivated this microorganism at three NaCl concentrations, i.e. optimal (0.5%), absent (0.0%) and high (4.0%), and analyzed its cellular proteome. For assigning tandem mass spectrometry data, we first sequenced its genome and constructed a six-reading-frame ORF database from the draft sequence. We annotated only the genes whose products (872) were detected. We compared the quantitative proteome data sets recorded for the three different growth conditions. At low salinity, general stress proteins (chaperones, proteases and proteins associated with oxidative stress protection) were detected in higher amounts, probably linked to difficulties in proper protein folding and metabolism. Proteogenomics and comparative genomics pointed at the CrgA transcriptional regulator as a key factor for proteome remodeling upon low osmolarity. Under hyper-osmotic conditions, T. consotensis produced larger amounts of proteins involved in sensing changes in salt concentration, as well as a wide panel of transport systems for organic compatible solutes such as glutamate. We have described here a straightforward procedure for making a new environmental isolate quickly amenable to proteomics. The bacterium Tistlia consotensis was isolated from a saline spring in the Colombian Andes and represents an interesting environmental model to be compared with extremophiles or other moderate organisms. To explore the molecular mechanisms of halotolerance in T. consotensis, we developed an innovative proteogenomic strategy consisting of i) genome sequencing, ii) quick annotation of the genes whose products were detected by mass spectrometry, and iii) comparative proteomics of cells grown under three salt conditions.
We highlighted in this manuscript how efficient such an approach can be, compared with time-consuming full genome annotation, for pointing at the key proteins of a given biological question. We documented a large number of proteins produced in greater amounts when cells are cultivated under either hypo-osmotic or hyper-osmotic conditions. This article is part of a Special Issue entitled: Trends in Microbial Proteomics. Copyright © 2013 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Martin-McCormick, Lynda; And Others
An advocacy packet on educational equity in computer education consists of five separate materials. A booklet entitled "Today's Guide to the Schools of the Future" contains four sections. The first section, a computer equity assessment guide, includes interview questions about school policies and allocation of resources, student and teacher…
Unexpected features of the dark proteome.
Perdigão, Nelson; Heinrich, Julian; Stolte, Christian; Sabir, Kenneth S; Buckley, Michael J; Tabor, Bruce; Signal, Beth; Gloss, Brian S; Hammang, Christopher J; Rost, Burkhard; Schafferhans, Andrea; O'Donoghue, Seán I
2015-12-29
We surveyed the "dark" proteome, that is, regions of proteins never observed by experimental structure determination and inaccessible to homology modeling. For 546,000 Swiss-Prot proteins, we found that 44-54% of the proteome in eukaryotes and viruses was dark, compared with only ∼14% in archaea and bacteria. Surprisingly, most of the dark proteome could not be accounted for by conventional explanations, such as intrinsic disorder or transmembrane regions. Nearly half of the dark proteome comprised dark proteins, in which the entire sequence lacked similarity to any known structure. Dark proteins fulfill a wide variety of functions, but a subset showed distinct and largely unexpected features, such as association with secretion, specific tissues, the endoplasmic reticulum, disulfide bonding, and proteolytic cleavage. Dark proteins also had short sequence length, low evolutionary reuse, and few known interactions with other proteins. These results suggest new research directions in structural and computational biology.
Accurate proteome-wide protein quantification from high-resolution 15N mass spectra
2011-01-01
In quantitative mass spectrometry-based proteomics, the metabolic incorporation of a single source of 15N-labeled nitrogen has many advantages over using stable isotope-labeled amino acids. However, the lack of a robust computational framework for analyzing the resulting spectra has impeded wide use of this approach. We have addressed this challenge by introducing a new computational methodology for analyzing 15N spectra in which quantification is integrated with identification. Application of this method to an Escherichia coli growth transition reveals significant improvement in quantification accuracy over previous methods. PMID:22182234
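A small worked example of why metabolic 15N labeling is computationally tractable at all: under full incorporation, a peptide's heavy-light mass shift is simply its total nitrogen count times the 15N-14N mass difference (~0.997035 Da). The sketch below uses standard per-residue nitrogen counts and is illustrative, not the authors' code:

```python
# Sketch (standard residue nitrogen counts; not the authors' software): for
# full metabolic 15N labeling, the heavy-light mass shift of a peptide equals
# its total nitrogen count times the 15N-14N mass difference.

N15_SHIFT = 0.997035  # Da per nitrogen atom

# Nitrogen atoms per residue (backbone amide contributes 1; side chains of
# N/Q/K add 1, W adds 1, H adds 2, R adds 3).
NITROGENS = {"G": 1, "A": 1, "S": 1, "P": 1, "V": 1, "T": 1, "C": 1,
             "L": 1, "I": 1, "N": 2, "D": 1, "Q": 2, "K": 2, "E": 1,
             "M": 1, "H": 3, "F": 1, "R": 4, "Y": 1, "W": 2}

def heavy_mass_shift(peptide):
    """Expected mass shift (Da) of a fully 15N-labeled peptide vs. its
    unlabeled counterpart."""
    return sum(NITROGENS[aa] for aa in peptide) * N15_SHIFT

print(round(heavy_mass_shift("PEPTIDEK"), 3))
```

Because the shift depends on the peptide's nitrogen count rather than being a fixed label mass, the light-heavy pairing must be computed per candidate sequence, which is why the authors integrate quantification with identification.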
Yang, Laurence; Tan, Justin; O'Brien, Edward J; Monk, Jonathan M; Kim, Donghyuk; Li, Howard J; Charusanti, Pep; Ebrahim, Ali; Lloyd, Colton J; Yurkovich, James T; Du, Bin; Dräger, Andreas; Thomas, Alex; Sun, Yuekai; Saunders, Michael A; Palsson, Bernhard O
2015-08-25
Finding the minimal set of gene functions needed to sustain life is of both fundamental and practical importance. Minimal gene lists have been proposed by using comparative genomics-based core proteome definitions. A definition of a core proteome that is supported by empirical data, is understood at the systems-level, and provides a basis for computing essential cell functions is lacking. Here, we use a systems biology-based genome-scale model of metabolism and expression to define a functional core proteome consisting of 356 gene products, accounting for 44% of the Escherichia coli proteome by mass based on proteomics data. This systems biology core proteome includes 212 genes not found in previous comparative genomics-based core proteome definitions, accounts for 65% of known essential genes in E. coli, and has 78% gene function overlap with minimal genomes (Buchnera aphidicola and Mycoplasma genitalium). Based on transcriptomics data across environmental and genetic backgrounds, the systems biology core proteome is significantly enriched in nondifferentially expressed genes and depleted in differentially expressed genes. Compared with the noncore, core gene expression levels are also similar across genetic backgrounds (two times higher Spearman rank correlation) and exhibit significantly more complex transcriptional and posttranscriptional regulatory features (40% more transcription start sites per gene, 22% longer 5'UTR). Thus, genome-scale systems biology approaches rigorously identify a functional core proteome needed to support growth. This framework, validated by using high-throughput datasets, facilitates a mechanistic understanding of systems-level core proteome function through in silico models; it de facto defines a paleome.
Computational Omics Funding Opportunity | Office of Cancer Clinical Proteomics Research
The National Cancer Institute's Clinical Proteomic Tumor Analysis Consortium (CPTAC) and the NVIDIA Foundation are pleased to announce funding opportunities in the fight against cancer. Each organization has launched a request for proposals (RFP) that will collectively fund up to $2 million to help develop a new generation of data-intensive scientific tools to find new ways to treat cancer.
Systematic Proteomic Approach to Characterize the Impacts of ...
Chemical interactions have posed a big challenge in toxicity characterization and human health risk assessment of environmental mixtures. To characterize the impacts of chemical interactions on protein and cytotoxicity responses to environmental mixtures, we established a systems biology approach integrating proteomics, bioinformatics, statistics, and computational toxicology to measure expression or phosphorylation levels of 21 critical toxicity pathway regulators and 445 downstream proteins in human BEAS-2B cells treated with 4 concentrations of nickel, 2 concentrations each of cadmium and chromium, as well as 12 defined binary and 8 defined ternary mixtures of these metals in vitro. Multivariate statistical analysis and mathematical modeling of the metal-mediated proteomic response patterns showed a high correlation between changes in protein expression or phosphorylation and cellular toxic responses to both individual metals and metal mixtures. Of the identified correlated proteins, only a small set of proteins including HIF-1a is likely to be responsible for selective cytotoxic responses to different metals and metal mixtures. Furthermore, support vector machine learning was utilized to computationally predict protein responses to uncharacterized metal mixtures using experimentally generated protein response profiles corresponding to known metal mixtures. This study provides a novel proteomic approach for characterization and prediction of toxicities of
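The study's final step, predicting responses to uncharacterized mixtures from known profiles, used support vector machines. The sketch below substitutes a minimal nearest-centroid classifier on invented three-protein response vectors, purely to illustrate the idea of classifying a new profile against labeled training profiles:

```python
import math

# Hypothetical training data: protein response vectors for known mixtures,
# labeled by observed cytotoxicity class. All values are invented; the study
# itself used support vector machines on hundreds of measured proteins.
train = {
    "high_tox": [[2.1, 1.8, 0.3], [2.4, 1.6, 0.2]],
    "low_tox":  [[0.4, 0.5, 1.9], [0.3, 0.6, 2.2]],
}

def centroid(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    return [sum(col) / len(col) for col in zip(*vectors)]

def predict(profile):
    """Assign the class whose centroid is nearest in Euclidean distance."""
    return min(
        train,
        key=lambda label: math.dist(profile, centroid(train[label])),
    )
```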
Mapping the Small Molecule Interactome by Mass Spectrometry.
Flaxman, Hope A; Woo, Christina M
2018-01-16
Mapping small molecule interactions throughout the proteome provides the critical structural basis for functional analysis of their impact on biochemistry. However, translation of mass spectrometry-based proteomics methods to directly profile the interaction between a small molecule and the whole proteome is challenging because of the substoichiometric nature of many interactions, the diversity of covalent and noncovalent interactions involved, and the subsequent computational complexity associated with their spectral assignment. Recent advances in chemical proteomics have begun to fill this gap to provide a structural basis for the breadth of small molecule-protein interactions in the whole proteome. Innovations enabling direct characterization of the small molecule interactome include faster, more sensitive instrumentation coupled to chemical conjugation, enrichment, and labeling methods that facilitate detection and assignment. These methods have started to measure molecular interaction hotspots due to inherent differences in local amino acid reactivity and binding affinity throughout the proteome. Measurement of the small molecule interactome is producing structural insights and methods for probing and engineering protein biochemistry. Direct structural characterization of the small molecule interactome is a rapidly emerging area pushing new frontiers in biochemistry at the interface of small molecules and the proteome.
MASPECTRAS: a platform for management and analysis of proteomics LC-MS/MS data
Hartler, Jürgen; Thallinger, Gerhard G; Stocker, Gernot; Sturn, Alexander; Burkard, Thomas R; Körner, Erik; Rader, Robert; Schmidt, Andreas; Mechtler, Karl; Trajanoski, Zlatko
2007-01-01
Background: The advancements of proteomics technologies have led to a rapid increase in the number, size and rate at which datasets are generated. Managing and extracting valuable information from such datasets requires the use of data management platforms and computational approaches. Results: We have developed the MAss SPECTRometry Analysis System (MASPECTRAS), a platform for management and analysis of proteomics LC-MS/MS data. MASPECTRAS is based on the Proteome Experimental Data Repository (PEDRo) relational database schema and follows the guidelines of the Proteomics Standards Initiative (PSI). Analysis modules include: 1) import and parsing of the results from the search engines SEQUEST, Mascot, Spectrum Mill, X! Tandem, and OMSSA; 2) peptide validation; 3) clustering of proteins based on Markov Clustering and multiple alignments; and 4) quantification using the Automated Statistical Analysis of Protein Abundance Ratios algorithm (ASAPRatio). The system provides customizable data retrieval and visualization tools, as well as export to the PRoteomics IDEntifications public repository (PRIDE). MASPECTRAS is freely available online. Conclusion: Given the unique features and the flexibility due to the use of standard software technology, our platform represents a significant advance and could be of great interest to the proteomics community. PMID:17567892
38 CFR 3.260 - Computation of income.
Code of Federal Regulations, 2014 CFR
2014-07-01
... Pension, Compensation, and Dependency and Indemnity Compensation Dependency, Income and Estate § 3.260 Computation of income. For entitlement to pension or dependency and indemnity compensation, income will be... is doubt as to the amount of the anticipated income, pension or dependency and indemnity compensation...
Integrating Multimedia Techniques into CS Pedagogy.
ERIC Educational Resources Information Center
Adams, Sandra Honda; Jou, Richard; Nasri, Ahmad; Radimsky, Anne-Louise; Sy, Bon K.
Through its grants, the National Science Foundation sponsors workshops that inform faculty of current topics in computer science. Such a workshop, entitled, "Developing Multimedia-based Interactive Laboratory Modules for Computer Science," was given July 27-August 6, 1998, at Illinois State University at Normal. Each participant was…
Liquid Chromatography Mass Spectrometry-Based Proteomics: Biological and Technological Aspects
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karpievitch, Yuliya V.; Polpitiya, Ashoka D.; Anderson, Gordon A.
2010-12-01
Mass spectrometry-based proteomics has become the tool of choice for identifying and quantifying the proteome of an organism. Though recent years have seen a tremendous improvement in instrument performance and the computational tools used, significant challenges remain, and there are many opportunities for statisticians to make important contributions. In the most widely used "bottom-up" approach to proteomics, complex mixtures of proteins are first subjected to enzymatic cleavage, the resulting peptide products are separated based on chemical or physical properties and analyzed using a mass spectrometer. The two fundamental challenges in the analysis of bottom-up MS-based proteomics are: (1) identifying the proteins that are present in a sample, and (2) quantifying the abundance levels of the identified proteins. Both of these challenges require knowledge of the biological and technological context that gives rise to observed data, as well as the application of sound statistical principles for estimation and inference. We present an overview of bottom-up proteomics and outline the key statistical issues that arise in protein identification and quantification.
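As a toy illustration of the second challenge (quantification), peptide-level intensities can be rolled up to a protein-level abundance estimate. The median rule and the data below are illustrative choices, not the authors' specific method; the median simply makes the summary robust to a single aberrant peptide measurement:

```python
from statistics import median

def protein_abundance(peptide_intensities: dict) -> dict:
    """Map protein -> median of its observed peptide intensities.

    peptide_intensities: protein accession -> list of peptide intensities.
    """
    return {prot: median(vals) for prot, vals in peptide_intensities.items()}
```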
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baulch, Janet
2013-09-11
This is a 'glue grant' that was part of a DOE Low Dose project entitled 'Identification and Characterization of Soluble Factors Involved in Delayed Effects of Low Dose Radiation'. This collaborative program has involved Drs. David L. Springer from Pacific Northwest National Laboratory (PNNL), John H. Miller from Washington State University, Tri-cities (WSU) and William F. Morgan then from the University of Maryland, Baltimore (UMB). In July 2008, Dr. Morgan moved to PNNL and Dr. Janet E. Baulch became PI for this project at University of Maryland. In November of 2008, a one year extension with no new funds was requested to complete the proteomic analyses. The project stemmed from studies in the Morgan laboratory demonstrating that genomically unstable cells secrete a soluble factor or factors into the culture medium that cause cytogenetic aberrations and apoptosis in normal parental GM10115 cells. The purpose of this project was to identify the death inducing effect (DIE) factor or factors, estimate their relative abundance, identify the cell signaling pathways involved and finally recapitulate DIE in normal cells by exogenous manipulation of putative DIE factors in culture medium. As reported in detail in the previous progress report, analysis of culture medium from the parental cell line, and stable and unstable clones demonstrated inconsistent proteomic profiles as they relate to candidate DIE factors. While the proposed proteomic analyses did not provide information that would allow DIE factors to be identified, the analyses provided another important set of observations. Proteomic analysis suggested that proteins associated with the cellular response to oxidative stress and mitochondrial function were elevated in the medium from unstable clones in a manner consistent with mitochondrial dysfunction. These findings correlate with previous studies of these clones that demonstrated functional differences between the mitochondria of stable and unstable clones. These mitochondrial abnormalities in the unstable clones contribute to oxidative stress.
78 FR 50419 - Privacy Act of 1974; CMS Computer Match No. 2013-10; HHS Computer Match No. 1310
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-19
... (Pub. L. 111- 148), as amended by the Health Care and Education Reconciliation Act of 2010 (Pub. L. 111... Entitlements Program System of Records Notice, 77 FR 47415 (August 8, 2012). Inclusive Dates of the Match: The...
Response to "Comparison and Evaluation of Clustering Algorithms for Tandem Mass Spectra".
Griss, Johannes; Perez-Riverol, Yasset; The, Matthew; Käll, Lukas; Vizcaíno, Juan Antonio
2018-05-04
In the recent benchmarking article entitled "Comparison and Evaluation of Clustering Algorithms for Tandem Mass Spectra", Rieder et al. compared several different approaches to cluster MS/MS spectra. While we certainly recognize the value of the manuscript, here, we report some shortcomings detected in the original analyses. For most analyses, the authors clustered only single MS/MS runs. In one of the reported analyses, three MS/MS runs were processed together, which already led to computational performance issues in many of the tested approaches. This fact highlights the difficulties of using many of the tested algorithms on the average proteomics data sets produced nowadays. Second, the authors only processed identified spectra when merging MS runs. Thereby, all unidentified spectra that are of lower quality were already removed from the data set and could not influence the clustering results. Next, we found that the authors did not analyze the effect of chimeric spectra on the clustering results. In our analysis, we found that 3% of the spectra in the used data sets were chimeric, and this had marked effects on the behavior of the different clustering algorithms tested. Finally, the authors' choice to evaluate the MS-Cluster and spectra-cluster algorithms using a precursor tolerance of 5 Da for high-resolution Orbitrap data only was, in our opinion, not adequate to assess the performance of MS/MS clustering approaches.
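The precursor-tolerance point is easy to see in a toy model: group spectra whose precursor m/z values fall within a tolerance window, and observe that a wide window merges clusters a tight window keeps apart. The greedy single-pass scheme below is a deliberate simplification, not the MS-Cluster or spectra-cluster algorithm:

```python
def cluster_by_precursor(mz_values, tolerance):
    """Greedily group sorted precursor m/z values into clusters.

    A value joins the current cluster if it lies within `tolerance` of that
    cluster's lowest member; otherwise it starts a new cluster.
    """
    clusters = []  # each cluster is a list of m/z values
    for mz in sorted(mz_values):
        if clusters and mz - clusters[-1][0] <= tolerance:
            clusters[-1].append(mz)
        else:
            clusters.append([mz])
    return clusters
```

With a tight 0.1 Da tolerance the example below yields two clusters; a 5 Da tolerance collapses everything into one, illustrating why such a wide window is questionable for high-resolution data.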
ERIC Educational Resources Information Center
Hasenekoglu, Ismet; Timucin, Melih
2007-01-01
The aim of this study is to collect and evaluate opinions of CAI experts and biology teachers about a high school level Computer Assisted Biology Instruction Material presenting computer-made modelling and simulations. It is a case study. A material covering "Nucleic Acids and Protein Synthesis" topic was developed as the…
ProteoSign: an end-user online differential proteomics statistical analysis platform.
Efstathiou, Georgios; Antonakis, Andreas N; Pavlopoulos, Georgios A; Theodosiou, Theodosios; Divanach, Peter; Trudgian, David C; Thomas, Benjamin; Papanikolaou, Nikolas; Aivaliotis, Michalis; Acuto, Oreste; Iliopoulos, Ioannis
2017-07-03
Profiling of proteome dynamics is crucial for understanding cellular behavior in response to intrinsic and extrinsic stimuli and maintenance of homeostasis. Over the last 20 years, mass spectrometry (MS) has emerged as the most powerful tool for large-scale identification and characterization of proteins. Bottom-up proteomics, the most common MS-based proteomics approach, has always been challenging in terms of data management, processing, analysis and visualization, with modern instruments capable of producing several gigabytes of data out of a single experiment. Here, we present ProteoSign, a freely available web application dedicated to allowing users to perform proteomics differential expression/abundance analysis in a user-friendly and self-explanatory way. Although several non-commercial standalone tools have been developed for post-quantification statistical analysis of proteomics data, most of them are not appealing to end users, as they often require the installation of programming environments and third-party software packages, and sometimes further scripting or computer programming. To avoid this bottleneck, we have developed a user-friendly software platform accessible via a web interface in order to enable proteomics laboratories and core facilities to statistically analyse quantitative proteomics data sets in a resource-efficient manner. ProteoSign is available at http://bioinformatics.med.uoc.gr/ProteoSign and the source code at https://github.com/yorgodillo/ProteoSign. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
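As a hedged sketch of the kind of post-quantification test such platforms automate, the function below computes a per-protein log2 fold change and a Welch t-statistic between two conditions from replicate intensities. ProteoSign's actual statistical pipeline is more elaborate; this shows only the core comparison:

```python
from math import log2, sqrt
from statistics import mean, variance

def differential(intensities_a, intensities_b):
    """Compare one protein's replicate intensities across two conditions.

    Returns (log2 fold change of B over A, Welch t-statistic).
    """
    la = [log2(x) for x in intensities_a]
    lb = [log2(x) for x in intensities_b]
    fc = mean(lb) - mean(la)  # log2 fold change
    se = sqrt(variance(la) / len(la) + variance(lb) / len(lb))
    return fc, fc / se
```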
Efficient visualization of high-throughput targeted proteomics experiments: TAPIR.
Röst, Hannes L; Rosenberger, George; Aebersold, Ruedi; Malmström, Lars
2015-07-15
Targeted mass spectrometry comprises a set of powerful methods to obtain accurate and consistent protein quantification in complex samples. To fully exploit these techniques, a cross-platform and open-source software stack based on standardized data exchange formats is required. We present TAPIR, a fast and efficient Python visualization software for chromatograms and peaks identified in targeted proteomics experiments. The input formats are open, community-driven standardized data formats (mzML for raw data storage and TraML encoding the hierarchical relationships between transitions, peptides and proteins). TAPIR is scalable to proteome-wide targeted proteomics studies (as enabled by SWATH-MS), allowing researchers to visualize high-throughput datasets. The framework integrates well with existing automated analysis pipelines and can be extended beyond targeted proteomics to other types of analyses. TAPIR is available for all computing platforms under the 3-clause BSD license at https://github.com/msproteomicstools/msproteomicstools. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
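The hierarchical relationships TraML encodes (protein to peptides to transitions) can be modeled as nested records. The names and values below are invented for illustration; real TraML is an XML format with controlled-vocabulary annotations, and TAPIR itself parses it rather than using these classes:

```python
from dataclasses import dataclass, field

@dataclass
class Transition:
    precursor_mz: float  # Q1 mass of the targeted peptide ion
    product_mz: float    # Q3 mass of the monitored fragment

@dataclass
class Peptide:
    sequence: str
    transitions: list = field(default_factory=list)

@dataclass
class Protein:
    accession: str
    peptides: list = field(default_factory=list)

def transition_count(protein: Protein) -> int:
    """Total number of transitions targeted for one protein."""
    return sum(len(p.transitions) for p in protein.peptides)
```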
SAFE Software and FED Database to Uncover Protein-Protein Interactions using Gene Fusion Analysis.
Tsagrasoulis, Dimosthenis; Danos, Vasilis; Kissa, Maria; Trimpalis, Philip; Koumandou, V Lila; Karagouni, Amalia D; Tsakalidis, Athanasios; Kossida, Sophia
2012-01-01
Domain Fusion Analysis takes advantage of the fact that certain proteins in a given proteome A are found to have statistically significant similarity with two separate proteins in another proteome B. In other words, the result of a fusion event between two separate proteins in proteome B is a specific full-length protein in proteome A. In such a case, it can be safely concluded that the protein pair has a common biological function or even interacts physically. In this paper, we present the Fusion Events Database (FED), a database for the maintenance and retrieval of fusion data both in prokaryotic and eukaryotic organisms and the Software for the Analysis of Fusion Events (SAFE), a computational platform implemented for the automated detection, filtering and visualization of fusion events (both available at: http://www.bioacademy.gr/bioinformatics/projects/ProteinFusion/index.htm). Finally, we analyze the proteomes of three microorganisms using these tools in order to demonstrate their functionality. PMID:22267904
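The detection rule described above (one protein in proteome A matching two different proteins of proteome B over non-overlapping regions of its sequence) can be sketched as follows. The hit tuples are invented stand-ins for BLAST-style search results, not SAFE's actual pipeline:

```python
def find_fusions(hits):
    """Identify candidate fusion proteins in proteome A.

    hits: list of (query_in_A, subject_in_B, q_start, q_end) similarity hits.
    Returns queries matched by >= 2 distinct subjects over disjoint regions.
    """
    by_query = {}
    for query, subject, q_start, q_end in hits:
        by_query.setdefault(query, []).append((subject, q_start, q_end))
    fusions = set()
    for query, qhits in by_query.items():
        for i, (s1, a1, b1) in enumerate(qhits):
            for s2, a2, b2 in qhits[i + 1:]:
                if s1 != s2 and (b1 < a2 or b2 < a1):  # disjoint regions
                    fusions.add(query)
    return sorted(fusions)
```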
Cloud Computing for Protein-Ligand Binding Site Comparison
Hung, Che-Lun; Hua, Guan-Jie
2013-01-01
The proteome-wide analysis of protein-ligand binding sites and their interactions with ligands is important in structure-based drug design and in understanding ligand cross reactivity and toxicity. The well-known and commonly used software, SMAP, has been designed for 3D ligand binding site comparison and similarity searching of a structural proteome. SMAP can also predict drug side effects and reassign existing drugs to new indications. However, the computing scale of SMAP is limited. We have developed a high availability, high performance system that expands the comparison scale of SMAP. This cloud computing service, called Cloud-PLBS, combines the SMAP and Hadoop frameworks and is deployed on a virtual cloud computing platform. To handle the vast amount of experimental data on protein-ligand binding site pairs, Cloud-PLBS exploits the MapReduce paradigm as a management and parallelizing tool. Cloud-PLBS provides a web portal and scalability through which biologists can address a wide range of computer-intensive questions in biology and drug discovery. PMID:23762824
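The MapReduce decomposition described above can be sketched in miniature: a "map" step scores each binding-site pair independently (the embarrassingly parallel part Hadoop distributes), and a "reduce" step keeps the best-scoring pairs. The similarity function here is a placeholder, not SMAP's 3D comparison:

```python
from itertools import combinations

def map_phase(sites, similarity):
    """Score every unordered pair of binding sites independently."""
    return [((a, b), similarity(sites[a], sites[b]))
            for a, b in combinations(sorted(sites), 2)]

def reduce_phase(scored_pairs, threshold):
    """Keep only pairs whose similarity meets the threshold."""
    return {pair: s for pair, s in scored_pairs if s >= threshold}
```

A usage example with sites represented as residue-index sets and Jaccard similarity standing in for a real structural score: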
P-KIMMO: A Prolog Implementation of the Two Level Model.
ERIC Educational Resources Information Center
Lee, Kang-Hyuk
Implementation of a computer-based model for morphological analysis and synthesis of language, entitled P-KIMMO, is discussed. The model was implemented in Quintus Prolog on a Sun Workstation and exported to a Macintosh computer. This model has two levels of morphophonological representation, lexical and surface levels, associated by…
Web Based Parallel Programming Workshop for Undergraduate Education.
ERIC Educational Resources Information Center
Marcus, Robert L.; Robertson, Douglass
Central State University (Ohio), under a contract with Nichols Research Corporation, has developed a World Wide Web based workshop on high performance computing entitled "IBM SP2 Parallel Programming Workshop." The research is part of the DoD (Department of Defense) High Performance Computing Modernization Program. The research…
78 FR 47830 - Privacy Act of 1974; Report of Matching Program
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-06
... of Veterans Affairs. ACTION: Notice of Computer Matching Program. SUMMARY: The Department of Veterans Affairs (VA) provides notice that it intends to conduct a recurring computer matching program matching... necessary information from RRB-26: Payment, Rate, and Entitlement History File, published at 75 FR 43729...
Marx, Harald; Lemeer, Simone; Schliep, Jan Erik; Matheron, Lucrece; Mohammed, Shabaz; Cox, Jürgen; Mann, Matthias; Heck, Albert J R; Kuster, Bernhard
2013-06-01
We present a peptide library and data resource of >100,000 synthetic, unmodified peptides and their phosphorylated counterparts with known sequences and phosphorylation sites. Analysis of the library by mass spectrometry yielded a data set that we used to evaluate the merits of different search engines (Mascot and Andromeda) and fragmentation methods (beam-type collision-induced dissociation (HCD) and electron transfer dissociation (ETD)) for peptide identification. We also compared the sensitivities and accuracies of phosphorylation-site localization tools (Mascot Delta Score, PTM score and phosphoRS), and we characterized the chromatographic behavior of peptides in the library. We found that HCD identified more peptides and phosphopeptides than did ETD, that phosphopeptides generally eluted later from reversed-phase columns and were easier to identify than unmodified peptides and that current computational tools for proteomics can still be substantially improved. These peptides and spectra will facilitate the development, evaluation and improvement of experimental and computational proteomic strategies, such as separation techniques and the prediction of retention times and fragmentation patterns.
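The localization tools compared above share one idea: score each candidate phosphosite placement for a peptide and report the gap between the best and second-best score (as in Mascot Delta Score), where a large gap means confident localization. A minimal sketch with invented candidate scores, not any engine's scoring function:

```python
def delta_score(site_scores: dict) -> tuple:
    """Rank candidate phosphosite placements by search-engine score.

    site_scores: candidate site label -> score for the peptide with the
    phosphate placed at that site.
    Returns (best_site, best_score - runner_up_score).
    """
    ranked = sorted(site_scores.items(), key=lambda kv: kv[1], reverse=True)
    best_site, best = ranked[0]
    runner_up = ranked[1][1] if len(ranked) > 1 else 0.0
    return best_site, best - runner_up
```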
Proteomics of Skeletal Muscle: Focus on Insulin Resistance and Exercise Biology
Deshmukh, Atul S.
2016-01-01
Skeletal muscle is the largest tissue in the human body and plays an important role in locomotion and whole body metabolism. It accounts for ~80% of insulin stimulated glucose disposal. Skeletal muscle insulin resistance, a primary feature of Type 2 diabetes, is caused by a decreased ability of muscle to respond to circulating insulin. Physical exercise improves insulin sensitivity and whole body metabolism and remains one of the most promising interventions for the prevention of Type 2 diabetes. Insulin resistance and exercise adaptations in skeletal muscle might be a cause, or consequence, of altered protein expressions profiles and/or their posttranslational modifications (PTMs). Mass spectrometry (MS)-based proteomics offer enormous promise for investigating the molecular mechanisms underlying skeletal muscle insulin resistance and exercise-induced adaptation; however, skeletal muscle proteomics are challenging. This review describes the technical limitations of skeletal muscle proteomics as well as emerging developments in proteomics workflow with respect to samples preparation, liquid chromatography (LC), MS and computational analysis. These technologies have not yet been fully exploited in the field of skeletal muscle proteomics. Future studies that involve state-of-the-art proteomics technology will broaden our understanding of exercise-induced adaptations as well as molecular pathogenesis of insulin resistance. This could lead to the identification of new therapeutic targets. PMID:28248217
The Pacific Northwest National Laboratory library of bacterial and archaeal proteomic biodiversity
Payne, Samuel H.; Monroe, Matthew E.; Overall, Christopher C.; ...
2015-08-18
This dataset deposition announces the submission to public repositories of the PNNL Biodiversity Library, a large collection of global proteomics data for 112 bacterial and archaeal organisms. The data comprises 35,162 tandem mass spectrometry (MS/MS) datasets from ~10 years of research. All data has been searched, annotated and organized in a consistent manner to promote reuse by the community. Protein identifications were cross-referenced with KEGG functional annotations which allows for pathway oriented investigation. We present the data as a freely available community resource. A variety of data re-use options are described for computational modeling, proteomics assay design and bioengineering. Instrument data and analysis files are available at ProteomeXchange via the MassIVE partner repository under the identifiers PXD001860 and MSV000079053.
Stadlmann, Johannes; Hoi, David M; Taubenschmid, Jasmin; Mechtler, Karl; Penninger, Josef M
2018-05-18
SugarQb (www.imba.oeaw.ac.at/sugarqb) is a freely available collection of computational tools for the automated identification of intact glycopeptides from high-resolution HCD MS/MS datasets in the Proteome Discoverer environment. We report the migration of SugarQb to the latest and free version of Proteome Discoverer 2.1, and apply it to the analysis of PNGase F-resistant N-glycopeptides from mouse embryonic stem cells. The analysis of intact glycopeptides highlights unexpected technical limitations to PNGase F-dependent glycoproteomic workflows at the proteome level, and warrants a critical re-interpretation of seminal datasets in the context of N-glycosylation-site prediction. This article is protected by copyright. All rights reserved.
MS-READ: Quantitative measurement of amino acid incorporation.
Mohler, Kyle; Aerni, Hans-Rudolf; Gassaway, Brandon; Ling, Jiqiang; Ibba, Michael; Rinehart, Jesse
2017-11-01
Ribosomal protein synthesis results in the genetically programmed incorporation of amino acids into a growing polypeptide chain. Faithful amino acid incorporation that accurately reflects the genetic code is critical to the structure and function of proteins as well as overall proteome integrity. Errors in protein synthesis are generally detrimental to cellular processes, yet emerging evidence suggests that proteome diversity generated through mistranslation may be beneficial under certain conditions. Cumulative translational error rates have been determined at the organismal level; however, codon-specific error rates and the spectrum of misincorporation errors from system to system remain largely unexplored. In particular, until recently technical challenges have limited the ability to detect and quantify comparatively rare amino acid misincorporation events, which occur orders of magnitude less frequently than canonical amino acid incorporation events. We now describe a technique for the quantitative analysis of amino acid incorporation that provides the sensitivity necessary to detect mistranslation events during translation of a single codon at frequencies as low as 1 in 10,000 for all 20 proteinogenic amino acids, as well as non-proteinogenic and modified amino acids. This article is part of a Special Issue entitled "Biochemistry of Synthetic Biology - Recent Developments" Guest Editor: Dr. Ilka Heinemann and Dr. Patrick O'Donoghue. Copyright © 2017 Elsevier B.V. All rights reserved.
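The quoted 1-in-10,000 sensitivity corresponds to a simple observed frequency: peptide-spectrum matches carrying the substituted residue over all matches covering that codon. The counts below are invented for illustration, not data from the study:

```python
def misincorporation_frequencies(counts):
    """Observed misincorporation frequency per codon.

    counts: codon -> (substituted_psm_count, total_psm_count), where PSM is a
    peptide-spectrum match covering that codon. Codons with zero coverage are
    skipped rather than reported as zero.
    """
    return {codon: sub / total
            for codon, (sub, total) in counts.items() if total}
```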
NASA Astrophysics Data System (ADS)
Ortigueira, Manuel D.; Lopes, António M.; Machado, J. A. Tenreiro
2018-02-01
In the paper entitled "On the computation of the multidimensional Mittag-Leffler function" there is an error that originates some misleading nomenclature and results. The phrase in Section 2, page 2, three lines after equation (4), should be written as follows:
Letter to the Editor: Use of Publicly Available Image Resources
Armato, Samuel G.; Drukker, Karen; Li, Feng; ...
2017-05-11
Here we write with regard to the Academic Radiology article entitled “Computer-aided Diagnosis for Lung Cancer: Usefulness of Nodule Heterogeneity” by Drs. Nishio and Nagashima (1). The authors report on a computerized method to classify lung nodules present in computed tomography (CT) scans as benign or malignant.
SELDI PROTEINCHIP-BASED LIVER BIOMARKERS IN FUNGICIDE EXPOSED ZEBRAFISH
The research presented here is part of a three-phased small fish computational toxicology project using a combination of 1) whole organism endpoints, 2) genomic, proteomic, and metabolomic approaches, and 3) computational modeling to (a) identify new molecular biomarkers of expos...
The Proteome Folding Project: Proteome-scale prediction of structure and function
Drew, Kevin; Winters, Patrick; Butterfoss, Glenn L.; Berstis, Viktors; Uplinger, Keith; Armstrong, Jonathan; Riffle, Michael; Schweighofer, Erik; Bovermann, Bill; Goodlett, David R.; Davis, Trisha N.; Shasha, Dennis; Malmström, Lars; Bonneau, Richard
2011-01-01
The incompleteness of proteome structure and function annotation is a critical problem for biologists and, in particular, severely limits interpretation of high-throughput and next-generation experiments. We have developed a proteome annotation pipeline based on structure prediction, where function and structure annotations are generated using an integration of sequence comparison, fold recognition, and grid-computing-enabled de novo structure prediction. We predict protein domain boundaries and three-dimensional (3D) structures for protein domains from 94 genomes (including human, Arabidopsis, rice, mouse, fly, yeast, Escherichia coli, and worm). De novo structure predictions were distributed on a grid of more than 1.5 million CPUs worldwide (World Community Grid). We generated significant numbers of new confident fold annotations (9% of domains that are otherwise unannotated in these genomes). We demonstrate that predicted structures can be combined with annotations from the Gene Ontology database to predict new and more specific molecular functions. PMID:21824995
Unexpected features of the dark proteome
Perdigão, Nelson; Heinrich, Julian; Stolte, Christian; Sabir, Kenneth S.; Buckley, Michael J.; Tabor, Bruce; Signal, Beth; Gloss, Brian S.; Hammang, Christopher J.; Rost, Burkhard; Schafferhans, Andrea
2015-01-01
We surveyed the “dark” proteome–that is, regions of proteins never observed by experimental structure determination and inaccessible to homology modeling. For 546,000 Swiss-Prot proteins, we found that 44–54% of the proteome in eukaryotes and viruses was dark, compared with only ∼14% in archaea and bacteria. Surprisingly, most of the dark proteome could not be accounted for by conventional explanations, such as intrinsic disorder or transmembrane regions. Nearly half of the dark proteome comprised dark proteins, in which the entire sequence lacked similarity to any known structure. Dark proteins fulfill a wide variety of functions, but a subset showed distinct and largely unexpected features, such as association with secretion, specific tissues, the endoplasmic reticulum, disulfide bonding, and proteolytic cleavage. Dark proteins also had short sequence length, low evolutionary reuse, and few known interactions with other proteins. These results suggest new research directions in structural and computational biology. PMID:26578815
Marimuthu, Arivusudar; Chavan, Sandip; Sathe, Gajanan; Sahasrabuddhe, Nandini A; Srikanth, Srinivas M; Renuse, Santosh; Ahmad, Sartaj; Radhakrishnan, Aneesha; Barbhuiya, Mustafa A; Kumar, Rekha V; Harsha, H C; Sidransky, David; Califano, Joseph; Pandey, Akhilesh; Chatterjee, Aditi
2013-11-01
Protein biomarker discovery for early detection of head and neck squamous cell carcinoma (HNSCC) is a crucial unmet need to improve patient outcomes. Mass spectrometry-based proteomics has emerged as a promising tool for identification of biomarkers in different cancer types. Proteins secreted from cancer cells can serve as potential biomarkers for early diagnosis. In the current study, we have used isobaric tag for relative and absolute quantitation (iTRAQ) labeling methodology coupled with high resolution mass spectrometry to identify and quantitate secreted proteins from a panel of head and neck carcinoma cell lines. In all, we identified 2,472 proteins, of which 225 were secreted at higher or lower abundance in HNSCC-derived cell lines. Of these, 148 were present in higher abundance and 77 were present in lower abundance in the cancer-cell derived secretome. We detected a higher abundance of some previously known markers for HNSCC, including insulin-like growth factor binding protein 3, IGFBP3 (11-fold) and opioid growth factor receptor, OGFR (10-fold), demonstrating the validity of our approach. We also identified several novel secreted proteins in HNSCC, including olfactomedin-4, OLFM4 (12-fold) and hepatocyte growth factor activator, HGFA (5-fold). IHC-based validation was conducted in HNSCC using tissue microarrays, which revealed overexpression of IGFBP3 and OLFM4 in 70% and 75% of the tested cases, respectively. Our study illustrates quantitative proteomics of the secretome as a robust approach for identification of potential HNSCC biomarkers. This article is part of a Special Issue entitled: An Updated Secretome. Copyright © 2013 Elsevier B.V. All rights reserved.
17th Chromosome-Centric Human Proteome Project Symposium in Tehran.
Meyfour, Anna; Pahlavan, Sara; Sobhanian, Hamid; Salekdeh, Ghasem Hosseini
2018-04-01
This report describes the 17th Chromosome-Centric Human Proteome Project Symposium, which was held in Tehran, Iran, April 27 and 28, 2017. A brief summary of the symposium's talks is presented, including new technical and computational approaches for the identification of novel proteins from non-coding genomic regions, physicochemical and biological causes of missing proteins, and the close interactions between the Chromosome- and Biology/Disease-driven Human Proteome Projects. Also discussed are the decisions made on prospective programs to maintain collaborative work, share resources and information, and establish a newly organized working group, the task force for missing protein analysis. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Managing expectations when publishing tools and methods for computational proteomics.
Martens, Lennart; Kohlbacher, Oliver; Weintraub, Susan T
2015-05-01
Computational tools are pivotal in proteomics because they are crucial for identification, quantification, and statistical assessment of data. The gateway to finding the best choice of a tool or approach for a particular problem is frequently journal articles, yet there is often an overwhelming variety of options that makes it hard to decide on the best solution. This is particularly difficult for nonexperts in bioinformatics. The maturity, reliability, and performance of tools can vary widely because publications may appear at different stages of development. A novel idea might merit early publication despite only offering proof-of-principle, while it may take years before a tool can be considered mature, and by that time it might be difficult for a new publication to be accepted because of a perceived lack of novelty. After discussions with members of the computational mass spectrometry community, we describe here proposed recommendations for organization of informatics manuscripts as a way to set the expectations of readers (and reviewers) through three different manuscript types that are based on existing journal designations. Brief Communications are short reports describing novel computational approaches where the implementation is not necessarily production-ready. Research Articles present both a novel idea and mature implementation that has been suitably benchmarked. Application Notes focus on a mature and tested tool or concept and need not be novel but should offer advancement from improved quality, ease of use, and/or implementation. Organizing computational proteomics contributions into these three manuscript types will facilitate the review process and will also enable readers to identify the maturity and applicability of the tool for their own workflows.
Jesupret, Clémence; Baumann, Kate; Jackson, Timothy N W; Ali, Syed Abid; Yang, Daryl C; Greisman, Laura; Kern, Larissa; Steuten, Jessica; Jouiaei, Mahdokht; Casewell, Nicholas R; Undheim, Eivind A B; Koludarov, Ivan; Debono, Jordan; Low, Dolyce H W; Rossi, Sarah; Panagides, Nadya; Winter, Kelly; Ignjatovic, Vera; Summerhayes, Robyn; Jones, Alun; Nouwens, Amanda; Dunstan, Nathan; Hodgson, Wayne C; Winkel, Kenneth D; Monagle, Paul; Fry, Bryan Grieg
2014-06-13
For over a century, venom samples from wild snakes have been collected and stored around the world. However, the quality of storage conditions for "vintage" venoms has rarely been assessed. The goal of this study was to determine whether such historical venom samples are still biochemically and pharmacologically viable for research purposes, or if new sample efforts are needed. In total, 52 samples spanning 5 genera and 13 species with regional variants of some species (e.g., 14 different populations of Notechis scutatus) were analysed by a combined proteomic and pharmacological approach to determine protein structural stability and bioactivity. When venoms were not exposed to air during storage, the proteomic results were virtually indistinguishable from that of fresh venom and bioactivity was equivalent or only slightly reduced. By contrast, a sample of Acanthophis antarcticus venom that was exposed to air (due to a loss of integrity of the rubber stopper) suffered significant degradation as evidenced by the proteomics profile. Interestingly, the neurotoxicity of this sample was nearly the same as fresh venom, indicating that degradation may have occurred in the free N- or C-terminus chains of the proteins, rather than at the tips of loops where the functional residues are located. These results suggest that these and other vintage venom collections may be of continuing value in toxin research. This is particularly important as many snake species worldwide are declining due to habitat destruction or modification. For some venoms (such as N. scutatus from Babel Island, Flinders Island, King Island and St. Francis Island) these were the first analyses ever conducted and these vintage samples may represent the only venom ever collected from these unique island forms of tiger snakes. Such vintage venoms may therefore represent the last remaining stocks of some local populations and thus are precious resources. 
These venoms also have significant historical value as the Oxyuranus venoms analysed include samples from the first coastal taipan (Oxyuranus scutellatus) collected for antivenom production (the snake that killed the collector Kevin Budden), as well as samples from the first Oxyuranus microlepidotus specimen collected after the species' rediscovery in 1976. These results demonstrate that with proper storage techniques, venom samples can retain structural and pharmacological stability. This article is part of a Special Issue entitled: Proteomics of non-model organisms. Copyright © 2014 Elsevier B.V. All rights reserved.
Sma3s: A universal tool for easy functional annotation of proteomes and transcriptomes.
Casimiro-Soriguer, Carlos S; Muñoz-Mérida, Antonio; Pérez-Pulido, Antonio J
2017-06-01
The current cheapening of next-generation sequencing has led to an enormous growth in the number of sequenced genomes and transcriptomes, allowing wet labs to obtain the sequences of their organisms of study. To make the most of these data, one of the first steps should be the functional annotation of the protein-coding genes. However, this has traditionally been a slow and tedious step that can involve the characterization of thousands of sequences. Sma3s is an accurate computational tool for annotating proteins in an unattended way. We have now developed a completely new version, which includes functionalities that will be of utility for fundamental and applied science. The results provide functional categories such as biological processes, which are useful both for characterizing particular sequence datasets and for comparing results from different projects. One of the most important innovations is that the tool now has low computational requirements, and the complete annotation of a single proteome or transcriptome usually takes around 24 hours on a personal computer. Sma3s has been tested with a large number of complete proteomes and transcriptomes, and it has demonstrated its potential in health science and other specific projects. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Medina-Aunon, J. Alberto; Martínez-Bartolomé, Salvador; López-García, Miguel A.; Salazar, Emilio; Navajas, Rosana; Jones, Andrew R.; Paradela, Alberto; Albar, Juan P.
2011-01-01
The development of the HUPO-PSI's (Proteomics Standards Initiative) standard data formats and MIAPE (Minimum Information About a Proteomics Experiment) guidelines should improve proteomics data sharing within the scientific community. Proteomics journals have encouraged the use of these standards and guidelines to improve the quality of experimental reporting and ease the evaluation and publication of manuscripts. However, there is an evident lack of bioinformatics tools specifically designed to create and edit standard file formats and reports, or embed them within proteomics workflows. In this article, we describe a new web-based software suite (The ProteoRed MIAPE web toolkit) that performs several complementary roles related to proteomic data standards. First, it can verify that the reports fulfill the minimum information requirements of the corresponding MIAPE modules, highlighting inconsistencies or missing information. Second, the toolkit can convert several XML-based data standards directly into human readable MIAPE reports stored within the ProteoRed MIAPE repository. Finally, it can also perform the reverse operation, allowing users to export from MIAPE reports into XML files for computational processing, data sharing, or public database submission. The toolkit is thus the first application capable of automatically linking the PSI's MIAPE modules with the corresponding XML data exchange standards, enabling bidirectional conversions. This toolkit is freely available at http://www.proteored.org/MIAPE/. PMID:21983993
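The toolkit's XML-to-report direction, rendering a machine-readable PSI standard file as human-readable MIAPE text, can be sketched roughly as follows. The element and attribute names below are invented for illustration and do not reproduce the real PSI schemas:

```python
import xml.etree.ElementTree as ET

# Toy, hypothetical mzIdentML-like fragment (not the actual PSI schema).
xml_doc = """<AnalysisSoftwareList>
  <AnalysisSoftware id="AS_mascot" name="Mascot" version="2.3"/>
  <AnalysisSoftware id="AS_andromeda" name="Andromeda" version="1.1"/>
</AnalysisSoftwareList>"""

root = ET.fromstring(xml_doc)

# Emit one human-readable report line per software entry, analogous in
# spirit to the toolkit's conversion of XML standards into MIAPE reports.
report = [f"{sw.get('name')} (version {sw.get('version')})"
          for sw in root.iter("AnalysisSoftware")]
print("\n".join(report))
```

The reverse operation (report to XML) would walk the same mapping in the other direction, building elements with `ET.SubElement` from the report fields.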
Code of Federal Regulations, 2012 CFR
2012-04-01
... 20 Employees' Benefits 1 2012-04-01 2012-04-01 false When a child can no longer be included in... Entitlement Under the Overall Minimum Ends § 229.42 When a child can no longer be included in computing an annuity rate under the overall minimum. A child's inclusion in the computation of the overall minimum rate...
Code of Federal Regulations, 2011 CFR
2011-04-01
... 20 Employees' Benefits 1 2011-04-01 2011-04-01 false When a child can no longer be included in... Entitlement Under the Overall Minimum Ends § 229.42 When a child can no longer be included in computing an annuity rate under the overall minimum. A child's inclusion in the computation of the overall minimum rate...
Code of Federal Regulations, 2013 CFR
2013-04-01
... 20 Employees' Benefits 1 2013-04-01 2012-04-01 true When a child can no longer be included in... Entitlement Under the Overall Minimum Ends § 229.42 When a child can no longer be included in computing an annuity rate under the overall minimum. A child's inclusion in the computation of the overall minimum rate...
Code of Federal Regulations, 2014 CFR
2014-04-01
... 20 Employees' Benefits 1 2014-04-01 2012-04-01 true When a child can no longer be included in... Entitlement Under the Overall Minimum Ends § 229.42 When a child can no longer be included in computing an annuity rate under the overall minimum. A child's inclusion in the computation of the overall minimum rate...
Requirements for company-wide management
NASA Technical Reports Server (NTRS)
Southall, J. W.
1980-01-01
Computing system requirements were developed for company-wide management of information and computer programs in an engineering data processing environment. The requirements are essential to the successful implementation of a computer-based engineering data management system; they exceed the capabilities provided by the commercially available data base management systems. These requirements were derived from a study entitled The Design Process, which was prepared by design engineers experienced in development of aerospace products.
Workshop in computational molecular biology, April 15, 1991--April 14, 1994
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tavare, S.
Funds from this award were used to support the Workshop in Computational Molecular Biology; the '91 Symposium entitled Interface: Computing Science and Statistics, Seattle, Washington, April 21, 1991; the Workshop in Statistical Issues in Molecular Biology held at Stanford, California, August 8, 1993; and the Session on Population Genetics, part of the 56th Annual Meeting of the Institute of Mathematical Statistics, San Francisco, California, August 9, 1993.
ERIC Educational Resources Information Center
Barak, Miri; Harward, Judson; Kocur, George; Lerman, Steven
2007-01-01
Within the framework of MIT's course 1.00: Introduction to Computers and Engineering Problem Solving, this paper describes an innovative project entitled: "Studio 1.00" that integrates lectures with in-class demonstrations, active learning sessions, and on-task feedback, through the use of wireless laptop computers. This paper also describes a…
ERIC Educational Resources Information Center
Challe, Odile; And Others
1985-01-01
Describes a French project entitled "Lecticiel," jointly undertaken by specialists in reading, computer programing, and second language instruction to integrate these disciplines and provide assistance for students learning to read French as a foreign language. (MSE)
A Computational Algorithm for Functional Clustering of Proteome Dynamics During Development
Wang, Yaqun; Wang, Ningtao; Hao, Han; Guo, Yunqian; Zhen, Yan; Shi, Jisen; Wu, Rongling
2014-01-01
Phenotypic traits, such as seed development, are a consequence of complex biochemical interactions among genes, proteins and metabolites, but the underlying mechanisms that operate in a coordinated and sequential manner remain elusive. Here, we address this issue by developing a computational algorithm to monitor proteome changes during the course of trait development. The algorithm is built within the mixture-model framework, in which each mixture component is modeled by a specific group of proteins that display a similar temporal pattern of expression during trait development. A nonparametric approach based on Legendre orthogonal polynomials was used to fit dynamic changes of protein expression, increasing the power and flexibility of protein clustering. By analyzing a dataset of proteomic dynamics during early embryogenesis of the Chinese fir, the algorithm successfully identified several distinct types of proteins that coordinate with each other to determine seed development in this forest tree, which is commercially and environmentally important to China. The algorithm will find immediate applications in characterizing the mechanistic underpinnings of other biological processes in which protein abundance plays a key role. PMID:24955031
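The core idea above, fitting a low-order Legendre expansion to each protein's expression time course and then clustering the fitted coefficient vectors, can be sketched as follows. The data are synthetic, and a plain k-means stands in for the paper's mixture-model/EM clustering:

```python
import numpy as np
from numpy.polynomial import legendre

# Synthetic data: 6 proteins measured at 5 developmental time points.
times = np.linspace(-1, 1, 5)  # rescaled to [-1, 1], the natural Legendre domain
expr = np.vstack([np.sin(k * times) + k for k in range(1, 7)])

# Fit a low-order Legendre expansion to each protein's time course.
order = 3
coeffs = np.array([legendre.legfit(times, y, order) for y in expr])  # (6, order+1)

def kmeans(X, k=2, iters=25, seed=0):
    """Minimal k-means on coefficient vectors (a stand-in for the
    mixture-model clustering used in the paper)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

labels = kmeans(coeffs)
print(labels)
```

Clustering in coefficient space rather than on the raw time series is what makes the nonparametric fit useful: proteins with similar temporal shapes land near each other regardless of sampling noise.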
Andromeda: a peptide search engine integrated into the MaxQuant environment.
Cox, Jürgen; Neuhauser, Nadin; Michalski, Annette; Scheltema, Richard A; Olsen, Jesper V; Mann, Matthias
2011-04-01
A key step in mass spectrometry (MS)-based proteomics is the identification of peptides in sequence databases by their fragmentation spectra. Here we describe Andromeda, a novel peptide search engine using a probabilistic scoring model. On proteome data, Andromeda performs as well as Mascot, a widely used commercial search engine, as judged by sensitivity and specificity analysis based on target decoy searches. Furthermore, it can handle data with arbitrarily high fragment mass accuracy, is able to assign and score complex patterns of post-translational modifications, such as highly phosphorylated peptides, and accommodates extremely large databases. The algorithms of Andromeda are provided. Andromeda can function independently or as an integrated search engine of the widely used MaxQuant computational proteomics platform and both are freely available at www.maxquant.org. The combination enables analysis of large data sets in a simple analysis workflow on a desktop computer. For searching individual spectra Andromeda is also accessible via a web server. We demonstrate the flexibility of the system by implementing the capability to identify cofragmented peptides, significantly improving the total number of identified peptides.
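Andromeda's probabilistic scoring rests on the binomial probability of matching fragment peaks by chance. A simplified sketch of that style of score is below; the parameter values are illustrative and are not MaxQuant defaults:

```python
import math

def binomial_score(n_theoretical, n_matched, p_match):
    """-10*log10 of the chance probability of matching at least n_matched of
    n_theoretical fragment ions, each with match probability p_match.
    A simplified version of Andromeda-style probabilistic scoring."""
    tail = sum(math.comb(n_theoretical, j)
               * p_match ** j * (1 - p_match) ** (n_theoretical - j)
               for j in range(n_matched, n_theoretical + 1))
    return -10.0 * math.log10(tail)

# More matched fragments out of the same theoretical set -> higher score.
print(binomial_score(20, 6, 0.04))
print(binomial_score(20, 12, 0.04))
```

In a real search engine the match probability depends on the number of peaks retained per mass window, and the best score over several peak-depth settings is taken; this sketch fixes a single value for clarity.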
Micro computed tomography (CT) scanned anatomical gateway to insect pest bioinformatics
USDA-ARS?s Scientific Manuscript database
An international collaboration to establish an interactive Digital Video Library for a Systems Biology Approach to study the Asian citrus Psyllid and psyllid genomics/proteomics interactions is demonstrated. Advances in micro-CT, digital computed tomography (CT) scan uses X-rays to make detailed pic...
Integration of cardiac proteome biology and medicine by a specialized knowledgebase.
Zong, Nobel C; Li, Haomin; Li, Hua; Lam, Maggie P Y; Jimenez, Rafael C; Kim, Christina S; Deng, Ning; Kim, Allen K; Choi, Jeong Ho; Zelaya, Ivette; Liem, David; Meyer, David; Odeberg, Jacob; Fang, Caiyun; Lu, Hao-Jie; Xu, Tao; Weiss, James; Duan, Huilong; Uhlen, Mathias; Yates, John R; Apweiler, Rolf; Ge, Junbo; Hermjakob, Henning; Ping, Peipei
2013-10-12
Omics sciences enable a systems-level perspective in characterizing cardiovascular biology. Integration of diverse proteomics data via a computational strategy will catalyze the assembly of contextualized knowledge, foster discoveries through multidisciplinary investigations, and minimize unnecessary redundancy in research efforts. The goal of this project is to develop a consolidated cardiac proteome knowledgebase with novel bioinformatics pipeline and Web portals, thereby serving as a new resource to advance cardiovascular biology and medicine. We created Cardiac Organellar Protein Atlas Knowledgebase (COPaKB; www.HeartProteome.org), a centralized platform of high-quality cardiac proteomic data, bioinformatics tools, and relevant cardiovascular phenotypes. Currently, COPaKB features 8 organellar modules, comprising 4203 LC-MS/MS experiments from human, mouse, drosophila, and Caenorhabditis elegans, as well as expression images of 10,924 proteins in human myocardium. In addition, the Java-coded bioinformatics tools provided by COPaKB enable cardiovascular investigators in all disciplines to retrieve and analyze pertinent organellar protein properties of interest. COPaKB provides an innovative and interactive resource that connects research interests with the new biological discoveries in protein sciences. With an array of intuitive tools in this unified Web server, nonproteomics investigators can conveniently collaborate with proteomics specialists to dissect the molecular signatures of cardiovascular phenotypes.
Science and technology review, March 1997
DOE Office of Scientific and Technical Information (OSTI.GOV)
Upadhye, R.
The articles in this month's issue are entitled Site 300's New Contained Firing Facility; Computational Electromagnetics: Codes and Capabilities; Ergonomics Research: Impact on Injuries; and The Linear Electric Motor: Instability at 1,000 g's.
Patil, Ajeetkumar; Bhat, Sujatha; Pai, Keerthilatha M; Rai, Lavanya; Kartha, V B; Chidangil, Santhosh
2015-09-08
An ultra-sensitive high performance liquid chromatography-laser induced fluorescence (HPLC-LIF) based technique has been developed by our group at Manipal for screening, early detection, and staging of various cancers, using protein profiling of clinical samples such as body fluids, cellular specimens, and biopsy tissue. More than 300 protein profiles of different clinical samples (serum, saliva, cellular samples and tissue homogenates) from volunteers (normal, and with different pre-malignant/malignant conditions) were recorded using this set-up. In recent years, proteomics techniques have advanced tremendously in the life and medical sciences for the detection and identification of proteins in body fluids, tissue homogenates and cellular samples, with the aim of understanding the biochemical mechanisms leading to different diseases. These include techniques such as high performance liquid chromatography, 2D-gel electrophoresis, MALDI-TOF-MS, SELDI-TOF-MS, CE-MS and LC-MS. 
The protein profile data were analyzed using principal component analysis (PCA) to achieve objective detection and classification of malignant, premalignant and healthy conditions with high sensitivity and specificity. The method is extremely sensitive, detecting proteins with a limit of detection of the order of femtomoles. HPLC-LIF protein profiling combined with PCA, as a routine method for screening, diagnosis, and staging of cervical cancer and oral cancer, is discussed in this paper. This article is part of a Special Issue entitled: Proteomics in India. Copyright © 2015 Elsevier B.V. All rights reserved.
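The PCA step described above, projecting high-dimensional protein profiles onto a few principal components so that disease groups separate, can be sketched with simulated data. The profiles below are synthetic stand-ins, not HPLC-LIF measurements:

```python
import numpy as np

# Hypothetical protein-profile matrix: 8 samples x 50 chromatogram bins.
rng = np.random.default_rng(1)
healthy = rng.normal(0.0, 1.0, (4, 50))
malignant = rng.normal(0.8, 1.0, (4, 50))  # shifted mean mimics altered abundance
X = np.vstack([healthy, malignant])

# PCA via SVD on mean-centered data.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T  # project each profile onto the first two PCs
print(scores.shape)     # (8, 2)
```

With a consistent abundance shift between groups, the first principal component tends to align with the group difference, which is what makes the low-dimensional scores usable for objective classification.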
Liu, Mao-Sen; Li, Hui-Chun; Lai, Ying-Mi; Lo, Hsiao-Feng; Chen, Long-Fang O
2013-11-20
Previously, we investigated transgenic broccoli harboring senescence-associated-gene (SAG) promoter-triggered isopentenyltransferase (ipt), which encodes the key enzyme for cytokinin (CK) synthesis and mimics the action of exogenously supplied CK in delaying postharvest senescence of broccoli. Here, we used proteomics and transcriptomics to compare the mechanisms of ipt-transgenic and N(6)-benzylaminopurine (BA) CK treatment of broccoli during postharvest storage. The two treatments conferred common and distinct mechanisms. BA treatment decreased the quantity of proteins involved in energy and carbohydrate metabolism and amino acid metabolism, whereas ipt-transgenic treatment increased that of stress-related proteins and molecular chaperones and slightly affected levels of carbohydrate metabolism proteins. Both treatments regulated genes involved in CK signaling, sugar transport, energy and carbohydrate metabolism, amino acid metabolism and lipid metabolism, although ipt-transgenic treatment to a lesser extent. BA treatment induced genes encoding molecular chaperones, whereas ipt-transgenic treatment induced stress-related genes for cellular protection during storage. Both BA and ipt-transgenic treatments acted antagonistically on ethylene functions. We propose a long-term acclimation of metabolism and protection systems with ipt-transgenic treatment of broccoli and short-term modulation of metabolism and establishment of a protection system with both BA and ipt-transgenic treatments in delaying senescence of broccoli florets. Transgenic broccoli harboring SAG promoter-triggered ipt and N(6)-benzylaminopurine (BA)-treated broccoli both showed retardation of postharvest senescence during storage. The mechanisms underlying the two treatments were compared.
The combination of proteomic and transcriptomic evidence revealed that the two treatments conferred common and distinct mechanisms in delaying senescence of broccoli florets. We propose a long-term acclimation of metabolism and protection systems with ipt-transgenic treatment of broccoli and short-term modulation of metabolism and establishment of a protection system with both BA and ipt-transgenic treatments in delaying senescence of broccoli florets. This article is part of a Special Issue entitled: Translational Plant Proteomics. Copyright © 2013 Elsevier B.V. All rights reserved.
Morris, Jeffrey S
2012-01-01
In recent years, developments in molecular biotechnology have led to the increased promise of detecting and validating biomarkers, or molecular markers that relate to various biological or medical outcomes. Proteomics, the direct study of proteins in biological samples, plays an important role in the biomarker discovery process. These technologies produce complex, high-dimensional functional and image data that present many analytical challenges that must be addressed properly for effective comparative proteomics studies that can yield potential biomarkers. Specific challenges include experimental design, preprocessing, feature extraction, and statistical analysis accounting for the inherent multiple testing issues. This paper reviews various computational aspects of comparative proteomic studies and summarizes contributions that I, along with numerous collaborators, have made. First, there is an overview of comparative proteomics technologies, followed by a discussion of important experimental design and preprocessing issues that must be considered before statistical analysis can be done. Next, the two key approaches to analyzing proteomics data, feature extraction and functional modeling, are described. Feature extraction involves detection and quantification of discrete features like peaks or spots that theoretically correspond to different proteins in the sample. After an overview of the feature extraction approach, specific methods for mass spectrometry (Cromwell) and 2D gel electrophoresis (Pinnacle) are described. The functional modeling approach involves modeling the proteomic data in their entirety as functions or images. A general discussion of the approach is followed by the presentation of a specific method that can be applied, wavelet-based functional mixed models, and its extensions. All methods are illustrated by application to two example proteomic data sets, one from mass spectrometry and one from 2D gel electrophoresis.
While the specific methods presented are applied to two specific proteomic technologies, MALDI-TOF and 2D gel electrophoresis, these methods and the other principles discussed in the paper apply much more broadly to other expression proteomics technologies.
Thermosensitivity of growth is determined by chaperone-mediated proteome reallocation
Chen, Ke; Gao, Ye; Mih, Nathan; O’Brien, Edward J.; Yang, Laurence; Palsson, Bernhard O.
2017-01-01
Maintenance of a properly folded proteome is critical for bacterial survival at notably different growth temperatures. Understanding the molecular basis of thermoadaptation has progressed in two main directions, the sequence and structural basis of protein thermostability and the mechanistic principles of protein quality control assisted by chaperones. Yet we do not fully understand how structural integrity of the entire proteome is maintained under stress and how it affects cellular fitness. To address this challenge, we reconstruct a genome-scale protein-folding network for Escherichia coli and formulate a computational model, FoldME, that provides statistical descriptions of multiscale cellular response consistent with many datasets. FoldME simulations show (i) that the chaperones act as a system when they respond to unfolding stress rather than achieving efficient folding of any single component of the proteome, (ii) how the proteome is globally balanced between chaperones for folding and the complex machinery synthesizing the proteins in response to perturbation, (iii) how this balancing determines growth rate dependence on temperature and is achieved through nonspecific regulation, and (iv) how thermal instability of the individual protein affects the overall functional state of the proteome. Overall, these results expand our view of cellular regulation, from targeted specific control mechanisms to global regulation through a web of nonspecific competing interactions that modulate the optimal reallocation of cellular resources. The methodology developed in this study enables genome-scale integration of environment-dependent protein properties and a proteome-wide study of cellular stress responses. PMID:29073085
Günther, Oliver P; Chen, Virginia; Freue, Gabriela Cohen; Balshaw, Robert F; Tebbutt, Scott J; Hollander, Zsuzsanna; Takhar, Mandeep; McMaster, W Robert; McManus, Bruce M; Keown, Paul A; Ng, Raymond T
2012-12-08
Biomarker panels derived separately from genomic and proteomic data and with a variety of computational methods have demonstrated promising classification performance in various diseases. An open question is how to create effective proteo-genomic panels. The framework of ensemble classifiers has been applied successfully in various analytical domains to combine classifiers so that the performance of the ensemble exceeds the performance of individual classifiers. Using blood-based diagnosis of acute renal allograft rejection as a case study, we address the following question in this paper: Can acute rejection classification performance be improved by combining individual genomic and proteomic classifiers in an ensemble? The first part of the paper presents a computational biomarker development pipeline for genomic and proteomic data. The pipeline begins with data acquisition (e.g., from bio-samples to microarray data), quality control, statistical analysis and mining of the data, and finally various forms of validation. The pipeline ensures that the various classifiers to be combined later in an ensemble are diverse and adequate for clinical use. Five mRNA genomic and five proteomic classifiers were developed independently using single time-point blood samples from 11 acute-rejection and 22 non-rejection renal transplant patients. The second part of the paper examines five ensembles ranging in size from two to 10 individual classifiers. Performance of ensembles is characterized by area under the curve (AUC), sensitivity, and specificity, as derived from the probability of acute rejection for individual classifiers in the ensemble in combination with one of two aggregation methods: (1) Average Probability or (2) Vote Threshold. One ensemble demonstrated superior performance and was able to improve sensitivity and AUC beyond the best values observed for any of the individual classifiers in the ensemble, while staying within the range of observed specificity. 
The Vote Threshold aggregation method achieved improved sensitivity for all 5 ensembles, but typically at the cost of decreased specificity. Proteo-genomic biomarker ensemble classifiers show promise in the diagnosis of acute renal allograft rejection and can improve classification performance beyond that of individual genomic or proteomic classifiers alone. Validation of our results in an international multicenter study is currently underway.
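The two aggregation rules described above can be sketched in a few lines. The probabilities, decision threshold and vote count below are invented for illustration and are not taken from the study.

```python
import numpy as np

# Hypothetical per-classifier probabilities of acute rejection for one sample.
probs = np.array([0.81, 0.64, 0.58, 0.72, 0.49])

# (1) Average Probability: call "acute rejection" if the mean probability
#     across classifiers exceeds a decision threshold (0.5 here, an assumption).
avg_call = probs.mean() > 0.5

# (2) Vote Threshold: each classifier votes with its own 0.5 cutoff; call
#     "acute rejection" if at least k classifiers vote positive (k is assumed).
k = 3
vote_call = (probs > 0.5).sum() >= k

print(avg_call, vote_call)
```

Averaging smooths over a single overconfident classifier, while vote counting trades specificity for sensitivity, consistent with the behavior reported for the Vote Threshold ensembles.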
DOT National Transportation Integrated Search
1975-01-01
It was found that the coordinates of the highways required for Noise 1 could be supplied on punched cards by the Photogrammetry Section of the Department. In preparing data for contour plotting, it was found advisable to divide the area into sectors,...
Mass spectrometry-based proteomics: basic principles and emerging technologies and directions.
Van Riper, Susan K; de Jong, Ebbing P; Carlis, John V; Griffin, Timothy J
2013-01-01
As the main catalytic and structural molecules within living systems, proteins are the most likely biomolecules to be affected by radiation exposure. Proteomics, the comprehensive characterization of proteins within complex biological samples, is therefore a research approach ideally suited to assess the effects of radiation exposure on cells and tissues. For comprehensive characterization of proteomes, an analytical platform capable of quantifying protein abundance, identifying post-translation modifications and revealing members of protein complexes on a system-wide level is necessary. Mass spectrometry (MS), coupled with technologies for sample fractionation and automated data analysis, provides such a versatile and powerful platform. In this chapter we offer a view on the current state of MS-proteomics, and focus on emerging technologies within three areas: (1) New instrumental methods; (2) New computational methods for peptide identification; and (3) Label-free quantification. These emerging technologies should be valuable for researchers seeking to better understand biological effects of radiation on living systems.
Hiller, Karsten; Grote, Andreas; Maneck, Matthias; Münch, Richard; Jahn, Dieter
2006-10-01
After the publication of JVirGel 1.0 in 2003, we received many requests and suggestions from the proteomics community to further improve the performance of the software and to add useful new features. The integration of the PrediSi algorithm for the prediction of signal peptides for the Sec-dependent protein export into JVirGel 2.0 allows the exclusion of most exported preproteins from calculated proteomic maps and provides the basis for the calculation of Sec-based secretomes. A tool for the identification of proteins carrying transmembrane helices (JCaMelix) and the prediction of the corresponding membrane proteome was added. Finally, in order to directly compare experimental and calculated proteome data, a function to overlay and evaluate predicted and experimental two-dimensional gels was included. JVirGel 2.0 is freely available as a precompiled package for installation on Windows or Linux operating systems. Furthermore, a completely platform-independent Java version is available for download. Additionally, we provide a Java Server Pages-based version of JVirGel 2.0 which can be operated in nearly all web browsers. All versions are accessible at http://www.jvirgel.de
Webb-Robertson, Bobbie-Jo M.; Wiberg, Holli K.; Matzke, Melissa M.; ...
2015-04-09
In this review, we apply selected imputation strategies to label-free liquid chromatography–mass spectrometry (LC–MS) proteomics datasets to evaluate their accuracy with respect to metrics of variance and classification. We evaluate several commonly used imputation approaches on their individual merits and discuss the caveats of each approach with respect to the example LC–MS proteomics data. In general, local similarity-based approaches, such as the regularized expectation maximization and least-squares adaptive algorithms, yield the best overall performance with respect to metrics of accuracy and robustness. However, no single algorithm consistently outperforms the remaining approaches, and in some cases performing classification without imputation yielded the most accurate classification. Thus, because of the complex mechanisms of missing data in proteomics, which also vary from peptide to protein, no individual method is a single solution for imputation. In summary, on the basis of the observations in this review, the goal for imputation in the field of computational proteomics should be to develop new approaches that work generically for this data type and new strategies to guide users in the selection of the best imputation for their dataset and analysis objectives.
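As a concrete illustration of the simplest strategies such a comparison covers, here is a minimal numpy sketch of two common baselines, row-mean imputation and global-minimum substitution, on an invented log-intensity matrix. The review's local similarity-based algorithms (e.g. regularized EM) are considerably more involved than this.

```python
import numpy as np

# Toy log-intensity matrix (peptides x samples) with missing values (NaN);
# the numbers are invented, not from the datasets in the review.
X = np.array([[20.1, 19.8, np.nan, 20.4],
              [15.2, np.nan, 15.0, 14.8],
              [np.nan, 22.3, 22.1, 22.6]])

# Baseline 1: row-mean imputation -- replace each peptide's missing values
# with the mean of its observed intensities.
row_means = np.nanmean(X, axis=1)
X_mean = np.where(np.isnan(X), row_means[:, None], X)

# Baseline 2: minimum substitution -- replace all missing values with the
# global observed minimum, a common choice when missingness is left-censored
# (low-abundance peptides falling below the detection limit).
X_min = np.where(np.isnan(X), np.nanmin(X), X)

assert not np.isnan(X_mean).any() and not np.isnan(X_min).any()
```

The choice between such strategies depends on the missingness mechanism: mean-style imputation assumes values are missing at random, whereas minimum substitution assumes censoring at the detection limit, which is exactly why the review finds no single method optimal for all datasets.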
Reales-Calderón, Jose Antonio; Corona, Fernando; Monteoliva, Lucía; Gil, Concha; Martínez, Jose Luis
2015-09-08
Recent research indicates that the post-transcriptional regulator Crc modulates susceptibility to antibiotics and virulence in Pseudomonas aeruginosa. Several P. aeruginosa virulence factors are secreted or engulfed in vesicles. To decipher the Crc modulation of P. aeruginosa virulence, we constructed a crc-deficient mutant and measured the proteome associated with extracellular vesicles and the vesicle-free secretome using iTRAQ. Fifty vesicle-associated proteins were more abundant and 14 less abundant in the crc-defective strain, whereas 37 were more abundant and 17 less abundant in the vesicle-free secretome. Among them, virulence determinants, such as ToxA, protease IV, azurin, chitin-binding protein, PlcB and Hcp1, were less abundant in the crc-defective mutant. Transcriptomic analysis revealed that some of the observed changes were post-transcriptional and, thus, could be attributed to a direct Crc regulatory role; whereas, for other differentially secreted proteins, the regulatory role was likely indirect. We also observed that the crc mutant presented impaired vesicle-associated secretion of quorum sensing signal molecules and less cytotoxicity than the wild-type strain. Our results offer new insights into the mechanisms by which Crc regulates P. aeruginosa virulence, through the modulation of vesicle formation and secretion of both virulence determinants and quorum sensing signals. This article is part of a Special Issue entitled: HUPO 2014. Published by Elsevier B.V.
Protein biomarker validation via proximity ligation assays.
Blokzijl, A; Nong, R; Darmanis, S; Hertz, E; Landegren, U; Kamali-Moghaddam, M
2014-05-01
The ability to detect minute amounts of specific proteins or protein modifications in blood as biomarkers for a plethora of human pathological conditions holds great promise for future medicine. Despite a large number of plausible candidate protein biomarkers published annually, the translation to clinical use is impeded by factors such as the required size of the initial studies, and limitations of the technologies used. The proximity ligation assay (PLA) is a versatile molecular tool that has the potential to address some obstacles, both in validation of biomarkers previously discovered using other techniques, and for future routine clinical diagnostic needs. The enhanced specificity of PLA extends the opportunities for large-scale, high-performance analyses of proteins. Besides advantages in the form of minimal sample consumption and an extended dynamic range, the PLA technique allows flexible assay reconfiguration. The technology can be adapted for detecting protein complexes, proximity between proteins in extracellular vesicles or in circulating tumor cells, and to address multiple post-translational modifications in the same protein molecule. We discuss herein requirements for biomarker validation, and how PLA may play an increasing role in this regard. We describe some recent developments of the technology, including proximity extension assays, the use of recombinant affinity reagents suitable for use in proximity assays, and the potential for single cell proteomics. This article is part of a Special Issue entitled: Biomarkers: A Proteomic Challenge. © 2013.
Rewiring protein synthesis: From natural to synthetic amino acids.
Fan, Yongqiang; Evans, Christopher R; Ling, Jiqiang
2017-11-01
The protein synthesis machinery uses 22 natural amino acids as building blocks that faithfully decode the genetic information. Such fidelity is controlled at multiple steps and can be compromised in nature and in the laboratory to rewire protein synthesis with natural and synthetic amino acids. This review summarizes the major quality control mechanisms during protein synthesis, including aminoacyl-tRNA synthetases, elongation factors, and the ribosome. We will discuss evolution and engineering of such components that allow incorporation of natural and synthetic amino acids at positions that deviate from the standard genetic code. The protein synthesis machinery is highly selective, yet not fixed, for the correct amino acids that match the mRNA codons. Ambiguous translation of a codon with multiple amino acids or complete reassignment of a codon with a synthetic amino acid diversifies the proteome. Expanding the genetic code with synthetic amino acids through rewiring protein synthesis has broad applications in synthetic biology and chemical biology. Biochemical, structural, and genetic studies of the translational quality control mechanisms are not only crucial to understand the physiological role of translational fidelity and evolution of the genetic code, but also enable us to better design biological parts to expand the proteomes of synthetic organisms. This article is part of a Special Issue entitled "Biochemistry of Synthetic Biology - Recent Developments" Guest Editor: Dr. Ilka Heinemann and Dr. Patrick O'Donoghue. Copyright © 2017 Elsevier B.V. All rights reserved.
Ethics and the 7 P's of computer use policies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scott, T.J.; Voss, R.B.
1994-12-31
A Computer Use Policy (CUP) defines who can use the computer facilities for what. The CUP is the institution's official position on the ethical use of computer facilities. The authors believe that writing a CUP provides an ideal platform to develop a group ethic for computer users. In prior research, the authors have developed a seven-phase model for writing CUPs, entitled the 7 P's of Computer Use Policies. The purpose of this paper is to present the model and discuss how the 7 P's can be used to identify and communicate a group ethic for the institution's computer users.
An automated method for detecting alternatively spliced protein domains.
Coelho, Vitor; Sammeth, Michael
2018-06-01
Alternative splicing (AS) has been demonstrated to play a role in shaping eukaryotic gene diversity at the transcriptional level. However, the impact of AS on the proteome is still controversial. Studies that seek to explore the effect of AS at the proteomic level are hampered by technical difficulties in the cumbersome process of converting back and forth between genome, transcriptome and proteome coordinates, and the naïve prediction of protein domains in the presence of AS suffers from many redundant sequence scans that emerge from constitutively spliced regions shared between alternative products of a gene. We developed the AstaFunk pipeline, which computes for any given transcriptome all domains that are altered by AS events in a systematic and efficient manner. In a nutshell, our method employs Viterbi dynamic programming, which is guaranteed to find all score-optimal hits of the domains under consideration, while complementary optimisations at different levels avoid redundant and otherwise irrelevant computations. We evaluate AstaFunk qualitatively and quantitatively using RNA-seq in well-studied genes with AS, and on a large scale employing entire transcriptomes. Our study confirms complementary reports that the effect of most AS events on the proteome seems to be rather limited, but our results also pinpoint several cases where AS could have a major impact on the function of a protein domain. The JAVA implementation of AstaFunk is available as an open source project at http://astafunk.sammeth.net. micha@sammeth.net. Supplementary data are available at Bioinformatics online.
2011-01-01
Background Since its inception, proteomics has essentially operated in a discovery mode with the goal of identifying and quantifying the maximal number of proteins in a sample. Increasingly, proteomic measurements are also supporting hypothesis-driven studies, in which a predetermined set of proteins is consistently detected and quantified in multiple samples. Selected reaction monitoring (SRM) is a targeted mass spectrometric technique that supports the detection and quantification of specific proteins in complex samples at high sensitivity and reproducibility. Here, we describe ATAQS, an integrated software platform that supports all stages of targeted, SRM-based proteomics experiments including target selection, transition optimization and post-acquisition data analysis. This software will significantly facilitate the use of targeted proteomic techniques and contribute to the generation of highly sensitive, reproducible and complete datasets that are particularly critical for the discovery and validation of targets in hypothesis-driven studies in systems biology. Result We introduce a new open source software pipeline, ATAQS (Automated and Targeted Analysis with Quantitative SRM), which consists of a number of modules that collectively support the SRM assay development workflow for targeted proteomic experiments (project management, generation of protein, peptide and transition lists, and validation of peptide detection by SRM). ATAQS provides a flexible pipeline for end-users by allowing the workflow to start or end at any point of the pipeline, and for computational biologists, by enabling the easy extension of Java algorithm classes for their own algorithm plug-ins or connection via an external web site. This integrated system supports all steps in an SRM-based experiment and provides a user-friendly GUI that can be run on any operating system that allows the installation of the Mozilla Firefox web browser.
Conclusions Targeted proteomics via SRM is a powerful new technique that enables the reproducible and accurate identification and quantification of sets of proteins of interest. ATAQS is the first open-source software that supports all steps of the targeted proteomics workflow. ATAQS also provides software API (Application Program Interface) documentation that enables the addition of new algorithms to each of the workflow steps. The software, installation guide and sample dataset can be found in http://tools.proteomecenter.org/ATAQS/ATAQS.html PMID:21414234
Cloud parallel processing of tandem mass spectrometry based proteomics data.
Mohammed, Yassene; Mostovenko, Ekaterina; Henneman, Alex A; Marissen, Rob J; Deelder, André M; Palmblad, Magnus
2012-10-05
Data analysis in mass spectrometry based proteomics struggles to keep pace with the advances in instrumentation and the increasing rate of data acquisition. Analyzing this data involves multiple steps requiring diverse software, using different algorithms and data formats. Speed and performance of the mass spectral search engines are continuously improving, although not necessarily fast enough to meet the challenges posed by the volume of acquired data. Improving and parallelizing the search algorithms is one possibility; data decomposition presents another, simpler strategy for introducing parallelism. We describe a general method for parallelizing identification of tandem mass spectra using data decomposition that keeps the search engine intact and wraps the parallelization around it. We introduce two algorithms for decomposing mzXML files and recomposing resulting pepXML files. This makes the approach applicable to different search engines, including those relying on sequence databases and those searching spectral libraries. We use cloud computing to deliver the computational power and scientific workflow engines to interface and automate the different processing steps. We show how to leverage these technologies to achieve faster data analysis in proteomics and present three scientific workflows for parallel database as well as spectral library search, using our data decomposition programs, X!Tandem and SpectraST.
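The decompose/recompose idea can be sketched generically: split the spectra of an input file into balanced chunks, search each chunk independently, and merge the per-chunk results. The functions and names below are illustrative stand-ins, not the authors' actual mzXML/pepXML tools.

```python
def decompose(spectra, n_chunks):
    """Round-robin split: keeps chunk sizes balanced even if spectra
    are ordered by retention time or complexity."""
    chunks = [[] for _ in range(n_chunks)]
    for i, spectrum in enumerate(spectra):
        chunks[i % n_chunks].append(spectrum)
    return chunks

def recompose(results_per_chunk):
    """Merge per-chunk result lists back into one result set."""
    merged = []
    for chunk_results in results_per_chunk:
        merged.extend(chunk_results)
    return merged

spectra = list(range(10))                            # stand-ins for MS/MS spectra
chunks = decompose(spectra, 3)
results = [[f"hit:{s}" for s in c] for c in chunks]  # stand-in for a search engine run
merged = recompose(results)
print(len(merged))
```

Because each chunk is searched by an unmodified engine, this wrap-around parallelism works for any search tool whose inputs and outputs can be split and merged, which is the key design point of the paper.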
Hydra: a scalable proteomic search engine which utilizes the Hadoop distributed computing framework.
Lewis, Steven; Csordas, Attila; Killcoyne, Sarah; Hermjakob, Henning; Hoopmann, Michael R; Moritz, Robert L; Deutsch, Eric W; Boyle, John
2012-12-05
For shotgun mass spectrometry based proteomics the most computationally expensive step is in matching the spectra against an increasingly large database of sequences and their post-translational modifications with known masses. Each mass spectrometer can generate data at an astonishingly high rate, and the scope of what is searched for is continually increasing. Therefore solutions for improving our ability to perform these searches are needed. We present a sequence database search engine that is specifically designed to run efficiently on the Hadoop MapReduce distributed computing framework. The search engine implements the K-score algorithm, generating comparable output for the same input files as the original implementation. The scalability of the system is shown, and the architecture required for the development of such distributed processing is discussed. The software is scalable in its ability to handle a large peptide database, numerous modifications and large numbers of spectra. Performance scales with the number of processors in the cluster, allowing throughput to expand with the available resources.
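The MapReduce pattern such an engine builds on can be sketched in plain Python: a map phase emits a score for each (spectrum, candidate peptide) pair, and a reduce phase keeps the best-scoring candidate per spectrum. The pairs and scores below are invented, and the K-score algorithm itself is not reproduced here.

```python
from collections import defaultdict

def map_phase(pairs):
    """Map: emit (key, value) = (spectrum_id, (score, candidate))."""
    for spectrum_id, candidate, score in pairs:
        yield spectrum_id, (score, candidate)

def reduce_phase(mapped):
    """Reduce: keep the highest-scoring candidate for each spectrum."""
    best = defaultdict(lambda: (float("-inf"), None))
    for spectrum_id, scored in mapped:
        best[spectrum_id] = max(best[spectrum_id], scored)
    return dict(best)

# Invented (spectrum_id, candidate_peptide, score) triples.
pairs = [(1, "PEPTIDE", 0.7), (1, "PEPTIDK", 0.9), (2, "ELVISK", 0.4)]
print(reduce_phase(map_phase(pairs)))
```

On Hadoop, the framework shards the map work across the cluster and groups values by key before the reduce, which is what lets throughput scale with the number of processors, as reported in the abstract.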
Case Study Evaluation of the Boston Area Carpooling Program
DOT National Transportation Integrated Search
1976-05-01
The report evaluates a carpooling program in operation in the Boston, Massachusetts area from August, 1973 through August, 1974. The program, entitled the WBZ/ALA Commuter Computer Campaign, was the first program in the nation to promote and organize...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-25
... pension payment data from its system of records (SOR) entitled the ``Compensation, Pension, Education, and... monthly. The actual match will take place approximately during the first week of every month. E. Inclusive...
Sidhu, Manrita K; Goske, Marilyn J; Coley, Brian J; Connolly, Bairbre; Racadio, John; Yoshizumi, Terry T; Utley, Tara; Strauss, Keith J
2009-09-01
In the past several decades, advances in imaging and interventional techniques have been accompanied by an increase in medical radiation dose to the public. Radiation exposure is even more important in children, who are more sensitive to radiation and have a longer lifespan during which effects may manifest. To address radiation safety in pediatric computed tomography, in 2008 the Alliance for Radiation Safety in Pediatric Imaging launched an international social marketing campaign entitled Image Gently. This article describes the next phase of the Image Gently campaign, entitled Step Lightly, which focuses on radiation safety in pediatric interventional radiology.
Dataset of the Botrytis cinerea phosphoproteome induced by different plant-based elicitors.
Liñeiro, Eva; Chiva, Cristina; Cantoral, Jesús M; Sabido, Eduard; Fernández-Acero, Francisco Javier
2016-06-01
Phosphorylation is one of the main post-translational modifications (PTMs) involved in the signaling network of the ascomycete Botrytis cinerea, one of the most relevant phytopathogenic fungi. The data presented in this article provide a differential mass spectrometry-based analysis of the phosphoproteome of B. cinerea under two different phenotypic conditions induced by the use of two different elicitors: glucose and deproteinized Tomato Cell Walls (TCW). A total of 1138 and 733 phosphoproteins were identified for the glucose and TCW culture conditions, respectively. Raw data are deposited at the ProteomeXchange Consortium via the PRIDE partner repository with the dataset identifier PXD003099 (http://www.ebi.ac.uk/pride/archive/projects/PXD003099). Further interpretation and discussion of these data are provided in our research article entitled "Phosphoproteome analysis of B. cinerea in response to different plant-based elicitors" (Liñeiro et al., 2016) [1].
Anticoagulants and the propagation phase of thrombin generation.
Orfeo, Thomas; Gissel, Matthew; Butenas, Saulius; Undas, Anetta; Brummel-Ziedins, Kathleen E; Mann, Kenneth G
2011-01-01
The view that clot time-based assays do not provide a sufficient assessment of an individual's hemostatic competence, especially in the context of anticoagulant therapy, has provoked a search for new metrics, with significant focus directed at techniques that define the propagation phase of thrombin generation. Here we use our deterministic mathematical model of tissue factor-initiated thrombin generation in combination with reconstructions using purified protein components to characterize how the interplay between anticoagulant mechanisms and the variable composition of the coagulation proteome results in differential regulation of the propagation phase of thrombin generation. Thrombin parameters were extracted from computationally derived thrombin generation profiles generated using coagulation proteome factor data from warfarin-treated individuals (N = 54) and matching groups of control individuals (N = 37). A computational clot time prolongation value (cINR) was devised that correlated with their actual International Normalized Ratio (INR) values, with differences between individual INR and cINR values shown to derive from the insensitivity of the INR to tissue factor pathway inhibitor (TFPI). The analysis suggests that normal-range variation in TFPI levels could be an important contributor to the failure of the INR to adequately reflect the anticoagulated state in some individuals. Warfarin-induced changes in thrombin propagation phase parameters were then compared to those induced by unfractionated heparin, fondaparinux, rivaroxaban, and a reversible thrombin inhibitor. Anticoagulants were assessed at concentrations yielding equivalent cINR values, with each anticoagulant evaluated using 32 unique coagulation proteome compositions.
The analyses showed that no anticoagulant recapitulated all features of warfarin propagation phase dynamics; differences in propagation phase effects suggest that anticoagulants that selectively target fXa or thrombin may provoke fewer bleeding episodes. More generally, the study shows that computational modeling of the response of core elements of the coagulation proteome to a physiologically relevant tissue factor stimulus may improve the monitoring of a broad range of anticoagulants.
Lomonte, Bruno; Fernández, Julián; Sanz, Libia; Angulo, Yamileth; Sasa, Mahmood; Gutiérrez, José María; Calvete, Juan J
2014-06-13
In spite of its small territory of ~50,000km(2), Costa Rica harbors a remarkably rich biodiversity. Its herpetofauna includes 138 species of snakes, of which sixteen pit vipers (family Viperidae, subfamily Crotalinae), five coral snakes (family Elapidae, subfamily Elapinae), and one sea snake (Family Elapidae, subfamily Hydrophiinae) pose potential hazards to human and animal health. In recent years, knowledge on the composition of snake venoms has expanded dramatically thanks to the development of increasingly fast and sensitive analytical techniques in mass spectrometry and separation science applied to protein characterization. Among several analytical strategies to determine the overall protein/peptide composition of snake venoms, the methodology known as 'snake venomics' has proven particularly well suited and informative, by providing not only a catalog of protein types/families present in a venom, but also a semi-quantitative estimation of their relative abundances. Through a collaborative research initiative between Instituto de Biomedicina de Valencia (IBV) and Instituto Clodomiro Picado (ICP), this strategy has been applied to the study of venoms of Costa Rican snakes, aiming to obtain a deeper knowledge on their composition, geographic and ontogenic variations, relationships to taxonomy, correlation with toxic activities, and discovery of novel components. The proteomic profiles of venoms from sixteen out of the 22 species within the Viperidae and Elapidae families found in Costa Rica have been reported so far, and an integrative view of these studies is hereby presented. In line with other venomic projects by research groups focusing on a wide variety of snakes around the world, these studies contribute to a deeper understanding of the biochemical basis for the diverse toxic profiles evolved by venomous snakes. In addition, these studies provide opportunities to identify novel molecules of potential pharmacological interest. 
Furthermore, the establishment of venom proteomic profiles offers a fundamental platform to assess the detailed immunorecognition of individual proteins/peptides by therapeutic or experimental antivenoms, an evolving methodology for which the term 'antivenomics' was coined (as described in an accompanying paper in this special issue). Venoms represent an adaptive trait and an example of both divergent and convergent evolution. A deep understanding of the composition of venoms and of the principles governing the evolution of venomous systems is of applied importance for exploring the enormous potential of venoms as sources of chemical and pharmacological novelty, but also for fighting the consequences of snakebite envenomings. Key to this is the identification of evolutionary and ecological trends at different taxonomic levels. However, the evolution of venomous species and their venoms does not always follow the same course, and the identification of structural and functional convergences and divergences among venoms is often unpredictable from a phylogenetic hypothesis. Snake venomics is a proteomic-centered strategy to deconstruct the complex molecular phenotypes that are the venom proteomes. The proteomic profiles of venoms from sixteen out of the 22 venomous species within the Viperidae and Elapidae families found in Costa Rica have been completed so far. An integrative view of their venom composition, including the identification of geographic and ontogenic variations, is hereby presented. Venom proteomic profiles offer a fundamental platform to assess the detailed immunorecognition of individual venom components by therapeutic or experimental antivenoms. This aspect is reviewed in the companion paper. This article is part of a Special Issue entitled: Proteomics of non-model organisms. Copyright © 2014 Elsevier B.V. All rights reserved.
An on-line system for hand-printed input
NASA Technical Reports Server (NTRS)
Williams, T. G.; Bebb, J.
1971-01-01
The capability of graphic input/output systems is described. Topics considered are a character recognizer and dictionary building program, an initial flow chart element input program, and a system entitled The Assistant Mathematician, which uses ordinary mathematics to specify numeric computation. All three parts are necessary to allow a user to carry on a mathematical dialogue with the computer in the language and notation of his discipline or problem domain.
PrePhyloPro: phylogenetic profile-based prediction of whole proteome linkages
Niu, Yulong; Liu, Chengcheng; Moghimyfiroozabad, Shayan; Yang, Yi
2017-01-01
Direct and indirect functional links between proteins as well as their interactions as part of larger protein complexes or common signaling pathways may be predicted by analyzing the correlation of their evolutionary patterns. Based on phylogenetic profiling, here we present a highly scalable and time-efficient computational framework for predicting linkages within the whole human proteome. We have validated this method through analysis of 3,697 human pathways and molecular complexes and a comparison of our results with the prediction outcomes of previously published co-occurrence model-based and normalization methods. Here we also introduce PrePhyloPro, a web-based software that uses our method for accurately predicting proteome-wide linkages. We present data on interactions of human mitochondrial proteins, verifying the performance of this software. PrePhyloPro is freely available at http://prephylopro.org/phyloprofile/. PMID:28875072
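As an illustration of the phylogenetic-profiling principle this record builds on — scoring protein pairs by the correlation of their presence/absence patterns across genomes — here is a minimal sketch. The profiles, protein names, and the 0.6 threshold are invented for demonstration; this is not PrePhyloPro's implementation.

```python
from math import sqrt

def pearson(a, b):
    """Pearson correlation between two equal-length numeric vectors."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sqrt(sum((x - ma) ** 2 for x in a))
    vb = sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb) if va and vb else 0.0

# Toy phylogenetic profiles: 1 = ortholog present in that genome, 0 = absent.
profiles = {
    "protA": [1, 1, 0, 1, 0, 1, 1, 0],
    "protB": [1, 1, 0, 1, 0, 1, 0, 0],
    "protC": [0, 0, 1, 0, 1, 0, 0, 1],
}

def predict_linkages(profiles, threshold=0.6):
    """Return protein pairs whose profiles correlate above the threshold."""
    names = sorted(profiles)
    links = []
    for i, p in enumerate(names):
        for q in names[i + 1:]:
            r = pearson(profiles[p], profiles[q])
            if r >= threshold:
                links.append((p, q, round(r, 3)))
    return links

print(predict_linkages(profiles))
```

Here protA and protB co-occur across genomes and are linked, while protC's complementary pattern keeps it unlinked.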
Cambiaghi, Alice; Díaz, Ramón; Martinez, Julia Bauzá; Odena, Antonia; Brunelli, Laura; Caironi, Pietro; Masson, Serge; Baselli, Giuseppe; Ristagno, Giuseppe; Gattinoni, Luciano; de Oliveira, Eliandre; Pastorelli, Roberta; Ferrario, Manuela
2018-04-27
In this work, we examined the plasma metabolome, proteome and clinical features of patients with severe septic shock enrolled in the multicenter ALBIOS study. The objective was to identify changes in the levels of metabolites involved in septic shock progression and to integrate this information with the variation occurring in proteins and clinical data. Mass spectrometry-based targeted metabolomics and untargeted proteomics allowed us to quantify absolute metabolite concentrations and relative protein abundances. We computed the ratio D7/D1 to take into account their variation from day 1 (D1) to day 7 (D7) after shock diagnosis. Patients were divided into two groups according to 28-day mortality. Three different elastic net logistic regression models were built: one on metabolites only, one on metabolites and proteins, and one integrating metabolomics and proteomics data with clinical parameters. Linear Discriminant Analysis and Partial Least Squares Discriminant Analysis were also implemented. All the obtained models correctly classified the observations in the testing set. By looking at the variable importance (VIP) and the selected features, the integration of metabolomics with proteomics data showed the importance of circulating lipids and the coagulation cascade in septic shock progression, thus capturing a further layer of biological information complementary to metabolomics information.
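The D7/D1 ratio features described above can be sketched as follows. The analyte names and values are invented, and the log2 scaling is an added assumption for symmetry around zero (the study reports the plain ratio):

```python
from math import log2

def fold_change_features(day1, day7):
    """log2(D7/D1) per analyte; None when either measurement is missing or D1 is zero."""
    feats = {}
    for analyte, v1 in day1.items():
        v7 = day7.get(analyte)
        feats[analyte] = None if v7 is None or v1 in (None, 0) else log2(v7 / v1)
    return feats

# Hypothetical day-1 and day-7 measurements for three analytes.
day1 = {"lactate": 2.0, "LDL": 100.0, "fibrinogen": 400.0}
day7 = {"lactate": 1.0, "LDL": 150.0, "fibrinogen": None}
print(fold_change_features(day1, day7))
```

Such per-analyte change features would then feed the regularized classifiers, with the missing fibrinogen value left for imputation or exclusion.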
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stekhoven, Daniel J.; Omasits, Ulrich; Quebatte, Maxime
2014-03-01
Proteomics data provide unique insights into biological systems, including the predominant subcellular localization (SCL) of proteins, which can reveal important clues about their functions. Here we analyzed data of a complete prokaryotic proteome expressed under two conditions mimicking interaction of the emerging pathogen Bartonella henselae with its mammalian host. Normalized spectral count data from cytoplasmic, total membrane, inner and outer membrane fractions allowed us to identify the predominant SCL for 82% of the identified proteins. The spectral count proportion of total membrane versus cytoplasmic fractions indicated the propensity of cytoplasmic proteins to co-fractionate with the inner membrane, and enabled us to distinguish cytoplasmic, peripheral inner membrane and bona fide inner membrane proteins. Principal component analysis and k-nearest neighbor classification trained on selected marker proteins or predominantly localized proteins allowed us to determine an extensive catalog of at least 74 expressed outer membrane proteins, and to extend the SCL assignment to 94% of the identified proteins, including 18% where in silico methods gave no prediction. Suitable experimental proteomics data combined with straightforward computational approaches can thus identify the predominant SCL on a proteome-wide scale. Finally, we present a conceptual approach to identify proteins potentially changing their SCL in a condition-dependent fashion.
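The fraction-proportion reasoning described above can be illustrated with a toy sketch. The counts, cutoff, and the simple decision rule are invented stand-ins for the study's normalized spectral counts and PCA/k-nearest-neighbor classification:

```python
# Toy normalized spectral counts per subcellular fraction for three proteins.
# Fractions: cytoplasmic (cyt), total membrane (tm), inner (im) and outer (om) membrane.
counts = {
    "p1": {"cyt": 90, "tm": 10, "im": 5,  "om": 2},
    "p2": {"cyt": 15, "tm": 85, "im": 70, "om": 5},
    "p3": {"cyt": 5,  "tm": 95, "im": 10, "om": 80},
}

def predominant_scl(c, membrane_cutoff=0.5):
    """Assign a crude predominant localization from fraction proportions.

    The total-membrane vs. cytoplasmic proportion separates membrane from
    cytoplasmic proteins; inner vs. outer counts then split the membrane class.
    """
    mem_frac = c["tm"] / (c["tm"] + c["cyt"])
    if mem_frac < membrane_cutoff:
        return "cytoplasmic"
    return "inner membrane" if c["im"] >= c["om"] else "outer membrane"

for name, c in counts.items():
    print(name, predominant_scl(c))
```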
ERIC Educational Resources Information Center
McGrath, Diane, Ed.
1989-01-01
Reviewed is a computer software package entitled "Audubon Wildlife Adventures: Grizzly Bears" for Apple II and IBM microcomputers. Included are availability, hardware requirements, cost, and a description of the program. The program's murder-mystery flavor is stressed; it focuses on illegal hunting and game…
20 CFR 404.282 - Effective date of recomputations.
Code of Federal Regulations, 2010 CFR
2010-04-01
... Section 404.282 Employees' Benefits SOCIAL SECURITY ADMINISTRATION FEDERAL OLD-AGE, SURVIVORS AND DISABILITY INSURANCE (1950- ) Computing Primary Insurance Amounts Recomputing Your Primary Insurance Amount... Social Security benefit amount is effective for the first month you are entitled to the pension. Finally...
Code of Federal Regulations, 2014 CFR
2014-10-01
... system means the hardware, operational software, applications software and electronic linkages in an... further defined in the OCSE guideline entitled “Automated Systems for Child Support Enforcement: A Guide... training, testing, and conversion plans to install the computer system. (j) The following terms are defined...
ERIC Educational Resources Information Center
Agaoglu, Onur
2014-01-01
It is crucial that gifted and talented students be supported by different educational methods suited to their interests and skills. The science and arts centres (gifted centres) provide the Supportive Education Program for these students with an interdisciplinary perspective. In line with the program, an ICT lesson entitled "Computer…
Proteomics and Systems Biology: Current and Future Applications in the Nutritional Sciences1
Moore, J. Bernadette; Weeks, Mark E.
2011-01-01
In the last decade, advances in genomics, proteomics, and metabolomics have yielded large-scale datasets that have driven an interest in global analyses, with the objective of understanding biological systems as a whole. Systems biology integrates computational modeling and experimental biology to predict and characterize the dynamic properties of biological systems, which are viewed as complex signaling networks. Whereas the systems analysis of disease-perturbed networks holds promise for identification of drug targets for therapy, equally the identified critical network nodes may be targeted through nutritional intervention in either a preventative or therapeutic fashion. As such, in the context of the nutritional sciences, it is envisioned that systems analysis of normal and nutrient-perturbed signaling networks in combination with knowledge of underlying genetic polymorphisms will lead to a future in which the health of individuals will be improved through predictive and preventative nutrition. Although high-throughput transcriptomic microarray data were initially most readily available and amenable to systems analysis, recent technological and methodological advances in MS have contributed to a linear increase in proteomic investigations. It is now commonplace for combined proteomic technologies to generate complex, multi-faceted datasets, and these will be the keystone of future systems biology research. This review will define systems biology, outline current proteomic methodologies, highlight successful applications of proteomics in nutrition research, and discuss the challenges for future applications of systems biology approaches in the nutritional sciences. PMID:22332076
GO Explorer: A gene-ontology tool to aid in the interpretation of shotgun proteomics data.
Carvalho, Paulo C; Fischer, Juliana Sg; Chen, Emily I; Domont, Gilberto B; Carvalho, Maria Gc; Degrave, Wim M; Yates, John R; Barbosa, Valmir C
2009-02-24
Spectral counting is a shotgun proteomics approach comprising the identification and relative quantitation of thousands of proteins in complex mixtures. However, this strategy generates bewildering amounts of data whose biological interpretation is a challenge. Here we present a new algorithm, termed GO Explorer (GOEx), that leverages the gene ontology (GO) to aid in the interpretation of proteomic data. GOEx stands out because it combines data from protein fold changes with GO over-representation statistics to help draw conclusions. Moreover, it is tightly integrated within the PatternLab for Proteomics project and, thus, lies within a complete computational environment that provides parsers and pattern recognition tools designed for spectral counting. GOEx offers three independent methods to query data: an interactive directed acyclic graph, a specialist mode where key words can be searched, and an automatic search. Its usefulness is demonstrated by applying it to help interpret the effects of perillyl alcohol, a natural chemotherapeutic agent, on glioblastoma multiforme cell lines (A172). We used a new multi-surfactant shotgun proteomic strategy and identified more than 2600 proteins; GOEx pinpointed key sets of differentially expressed proteins related to cell cycle, alcohol catabolism, the Ras pathway, apoptosis, and stress response, to name a few. GOEx facilitates organism-specific studies by leveraging GO and providing a rich graphical user interface. It is a simple-to-use tool, specialized for biologists who wish to analyze spectral counting data from shotgun proteomics. GOEx is available at http://pcarvalho.com/patternlab.
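GO over-representation statistics of the kind GOEx combines with fold changes are typically hypergeometric tests. A minimal stdlib sketch with invented counts (not GOEx's actual code):

```python
from math import comb

def hypergeom_pvalue(k, n, K, N):
    """P(X >= k) when drawing n proteins from N, of which K carry the GO term."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / comb(N, n)

# Illustrative numbers: 2600 identified proteins, 100 annotated "apoptosis",
# 50 differentially expressed, 12 of those carrying the term.
p = hypergeom_pvalue(k=12, n=50, K=100, N=2600)
print(f"apoptosis enrichment p = {p:.3g}")
```

The expected overlap under the null is only about 50 × 100 / 2600 ≈ 1.9 proteins, so observing 12 yields a vanishingly small p-value and the term would be flagged as over-represented.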
DOE Office of Scientific and Technical Information (OSTI.GOV)
Webb-Robertson, Bobbie-Jo M.; Wiberg, Holli K.; Matzke, Melissa M.
In this review, we apply selected imputation strategies to label-free liquid chromatography–mass spectrometry (LC–MS) proteomics datasets to evaluate the accuracy with respect to metrics of variance and classification. We evaluate several commonly used imputation approaches for individual merits and discuss the caveats of each approach with respect to the example LC–MS proteomics data. In general, local similarity-based approaches, such as the regularized expectation maximization and least-squares adaptive algorithms, yield the best overall performances with respect to metrics of accuracy and robustness. However, no single algorithm consistently outperforms the remaining approaches, and in some cases, performing classification without imputation yielded the most accurate classification. Thus, because of the complex mechanisms of missing data in proteomics, which also vary from peptide to protein, no individual method is a single solution for imputation. In summary, on the basis of the observations in this review, the goal for imputation in the field of computational proteomics should be to develop new approaches that work generically for this data type and new strategies to guide users in the selection of the best imputation for their dataset and analysis objectives.
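The contrast between a global approach (column-mean imputation) and a local similarity-based approach (nearest-neighbor imputation) can be sketched on a toy abundance matrix; the values and the k=1 choice are illustrative only:

```python
def mean_impute(rows):
    """Replace each missing value (None) with its column mean."""
    ncol = len(rows[0])
    means = []
    for j in range(ncol):
        vals = [r[j] for r in rows if r[j] is not None]
        means.append(sum(vals) / len(vals))
    return [[means[j] if r[j] is None else r[j] for j in range(ncol)] for r in rows]

def knn_impute(rows, k=1):
    """Fill missing values from the k most similar complete rows (local similarity)."""
    complete = [r for r in rows if None not in r]
    out = []
    for r in rows:
        if None not in r:
            out.append(list(r))
            continue
        obs = [j for j, v in enumerate(r) if v is not None]
        # Rank complete rows by squared distance over the observed columns only.
        nbrs = sorted(complete, key=lambda c: sum((c[j] - r[j]) ** 2 for j in obs))[:k]
        out.append([sum(c[j] for c in nbrs) / len(nbrs) if v is None else v
                    for j, v in enumerate(r)])
    return out

# Toy peptide-abundance matrix (rows = peptides, cols = LC-MS runs); None = missing.
data = [[10.0, 11.0, 10.5],
        [20.0, 21.0, 20.5],
        [10.2, 11.1, None]]
print(mean_impute(data))
print(knn_impute(data))
```

On this toy matrix the column mean (15.5) badly overshoots the missing value, while the similar-row fill (10.5) stays close to the peptide's own abundance level, mirroring why local similarity-based methods tend to perform well.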
Automated image alignment for 2D gel electrophoresis in a high-throughput proteomics pipeline.
Dowsey, Andrew W; Dunn, Michael J; Yang, Guang-Zhong
2008-04-01
The quest for high-throughput proteomics has revealed a number of challenges in recent years. Whilst substantial improvements in automated protein separation with liquid chromatography and mass spectrometry (LC/MS), aka 'shotgun' proteomics, have been achieved, large-scale open initiatives such as the Human Proteome Organization (HUPO) Brain Proteome Project have shown that maximal proteome coverage is only possible when LC/MS is complemented by 2D gel electrophoresis (2-DE) studies. Moreover, both separation methods require automated alignment and differential analysis to relieve the bioinformatics bottleneck and so make high-throughput protein biomarker discovery a reality. The purpose of this article is to describe a fully automatic image alignment framework for the integration of 2-DE into a high-throughput differential expression proteomics pipeline. The proposed method is based on robust automated image normalization (RAIN) to circumvent the drawbacks of traditional approaches. These use symbolic representation at the very early stages of the analysis, which introduces persistent errors due to inaccuracies in modelling and alignment. In RAIN, a third-order volume-invariant B-spline model is incorporated into a multi-resolution schema to correct for geometric and expression inhomogeneity at multiple scales. The normalized images can then be compared directly in the image domain for quantitative differential analysis. Through evaluation against an existing state-of-the-art method on real and synthetically warped 2D gels, the proposed analysis framework demonstrates substantial improvements in matching accuracy and differential sensitivity. High-throughput analysis is established through an accelerated GPGPU (general purpose computation on graphics cards) implementation. Supplementary material, software and images used in the validation are available at http://www.proteomegrid.org/rain/.
MASH Suite Pro: A Comprehensive Software Tool for Top-Down Proteomics*
Cai, Wenxuan; Guner, Huseyin; Gregorich, Zachery R.; Chen, Albert J.; Ayaz-Guner, Serife; Peng, Ying; Valeja, Santosh G.; Liu, Xiaowen; Ge, Ying
2016-01-01
Top-down mass spectrometry (MS)-based proteomics is arguably a disruptive technology for the comprehensive analysis of all proteoforms arising from genetic variation, alternative splicing, and posttranslational modifications (PTMs). However, the complexity of top-down high-resolution mass spectra presents a significant challenge for data analysis. In contrast to the well-developed software packages available for data analysis in bottom-up proteomics, the data analysis tools in top-down proteomics remain underdeveloped. Moreover, despite recent efforts to develop algorithms and tools for the deconvolution of top-down high-resolution mass spectra and the identification of proteins from complex mixtures, a multifunctional software platform, which allows for the identification, quantitation, and characterization of proteoforms with visual validation, is still lacking. Herein, we have developed MASH Suite Pro, a comprehensive software tool for top-down proteomics with multifaceted functionality. MASH Suite Pro is capable of processing high-resolution MS and tandem MS (MS/MS) data using two deconvolution algorithms to optimize protein identification results. In addition, MASH Suite Pro allows for the characterization of PTMs and sequence variations, as well as the relative quantitation of multiple proteoforms in different experimental conditions. The program also provides visualization components for validation and correction of the computational outputs. Furthermore, MASH Suite Pro facilitates data reporting and presentation via direct output of the graphics. Thus, MASH Suite Pro significantly simplifies and speeds up the interpretation of high-resolution top-down proteomics data by integrating tools for protein identification, quantitation, characterization, and visual validation into a customizable and user-friendly interface. We envision that MASH Suite Pro will play an integral role in advancing the burgeoning field of top-down proteomics. PMID:26598644
Uddin, Reaz; Jamil, Faiza
2018-06-01
Pseudomonas aeruginosa is an opportunistic gram-negative bacterium that has the capability to acquire resistance under hostile conditions and become a threat worldwide. It is involved in nosocomial infections. In the current study, potential novel drug targets against P. aeruginosa have been identified using core proteomic analysis and Protein-Protein Interaction (PPI) studies. The non-redundant reference proteomes of 68 P. aeruginosa strains with complete genomes and latest assembly versions were downloaded from the NCBI RefSeq FTP server in October 2016. The standalone CD-HIT tool was used to cluster ortholog proteins (having ≥80% amino acid identity) present in all strains. The pan-proteome was clustered into 12,380 Clusters of Orthologous Proteins (COPs). Using in-house shell scripts, 3252 common COPs were extracted and designated as clusters of the core proteome. The core proteome of the PAO1 strain was selected by fetching PAO1's proteome from the common COPs. As a result, 1212 proteins were shortlisted that are non-homologous to human proteins but essential for the survival of the pathogen. Among these 1212 proteins, 321 are conserved hypothetical proteins. Considering their potential as drug targets, those 321 hypothetical proteins were selected and their probable functions were characterized. Based on the druggability criteria, 18 proteins were shortlisted. The interacting partners were identified by investigating the PPI network using the STRING v10 database. Subsequently, 8 proteins were shortlisted as 'hub proteins' and proposed as potential novel drug targets against P. aeruginosa. The study is of interest to the scientific community working to identify novel drug targets against MDR pathogens, particularly P. aeruginosa. Copyright © 2018 Elsevier Ltd. All rights reserved.
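CD-HIT's greedy clustering scheme — the longest sequence seeds a cluster, and each subsequent sequence joins the first representative it matches above the identity cutoff — can be sketched as below. The per-position identity function and the toy sequences are simplifications; CD-HIT itself uses short-word filtering and alignment.

```python
def identity(a, b):
    """Crude per-position identity between two sequences.
    (A stand-in for CD-HIT's word-filtered alignment-based identity.)"""
    matches = sum(x == y for x, y in zip(a, b))
    return matches / max(len(a), len(b))

def greedy_cluster(seqs, cutoff=0.8):
    """Greedy clustering: longest sequence seeds a cluster; later sequences
    join the first representative they match at >= cutoff identity."""
    reps, clusters = [], {}
    for name, seq in sorted(seqs.items(), key=lambda kv: -len(kv[1])):
        for rep in reps:
            if identity(seq, seqs[rep]) >= cutoff:
                clusters[rep].append(name)
                break
        else:  # no representative matched: this sequence seeds a new cluster
            reps.append(name)
            clusters[name] = [name]
    return clusters

# Invented toy sequences: s2 is 90% identical to s1, s3 is unrelated.
seqs = {
    "s1": "MKTAYIAKQR",
    "s2": "MKTAYIAKQK",
    "s3": "MLLFVNNARQ",
}
print(greedy_cluster(seqs))
```

Applied genome-wide at the 80% cutoff, clusters containing a member from every strain would correspond to the common COPs described above.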
Teaching Clinical Neurology with the PLATO IV Computer System
ERIC Educational Resources Information Center
Parker, Alan; Trynda, Richard
1975-01-01
A "Neurox" program entitled "Canine Neurological Diagnosis" developed at the University of Illinois College of Veterinary Medicine enables a student to obtain the results of 78 possible neurological tests or associated questions on a single case. A lesson and possible adaptations are described. (LBH)
NATURAL BIOATTENUATION OF TRICHLOROETHENE AT THE ST. JOSEPH, MICHIGAN SUPERFUND SITE
Data from the St. Joseph, Michigan, Superfund Site were used in a peer-reviewed video entitled "Natural Bioattenuation of Trichloroethene at the St. Joseph, Michigan Superfund Site." Computer visualizations of the data set show how trichloroethene, or TCE, can degrade under natu...
Behrens, T; Bonberg, N; Casjens, S; Pesch, B; Brüning, T
2014-01-01
Technical advances to analyze biological markers have generated a plethora of promising new marker candidates for early detection of cancer. However, in subsequent analyses only few could be successfully validated as being predictive, clinically useful, or effective. This failure is partially due to rapid publication of results that were detected in early stages of biomarker research. Methodological considerations are a major concern when carrying out molecular epidemiological studies of diagnostic markers to avoid errors that increase the potential for bias. Although guidelines for conducting studies and reporting of results have been published to improve the quality of marker studies, their planning and execution still need to be improved. We will discuss different sources of bias in study design, handling of specimens, and statistical analysis to illustrate possible pitfalls associated with marker research, and present legal, ethical, and technical considerations associated with storage and handling of specimens. This article presents a guide to epidemiological standards in marker research using bladder cancer as an example. Because of the possibility to detect early cancer stages due to leakage of molecular markers from the target organ or exfoliation of tumor cells into the urine, bladder cancer is particularly useful to study diagnostic markers. To improve the overall quality of marker research, future developments should focus on networks of studies and tissue banks according to uniform legal, ethical, methodological, and technical standards. This article is part of a Special Issue entitled: Computational Proteomics in the Post-Identification Era. Guest Editors: Martin Eisenacher and Christian Stephan. © 2013.
A European Flagship Programme on Extreme Computing and Climate
NASA Astrophysics Data System (ADS)
Palmer, Tim
2017-04-01
In 2016, an outline proposal co-authored by a number of leading climate modelling scientists from around Europe for a (c. 1 billion euro) flagship project on exascale computing and high-resolution global climate modelling was sent to the EU via its Future and Emerging Technologies Flagship Programme. The project is formally entitled "A Flagship European Programme on Extreme Computing and Climate (EPECC)". In this talk I will outline the reasons why I believe such a project is needed and describe the current status of the project. I will leave time for some discussion.
Lavallée-Adam, Mathieu; Rauniyar, Navin; McClatchy, Daniel B; Yates, John R
2014-12-05
The majority of large-scale proteomics quantification methods yield long lists of quantified proteins that are often difficult to interpret and poorly reproduced. Computational approaches are required to analyze such intricate quantitative proteomics data sets. We propose a statistical approach to computationally identify protein sets (e.g., Gene Ontology (GO) terms) that are significantly enriched with abundant proteins with reproducible quantification measurements across a set of replicates. To this end, we developed PSEA-Quant, a protein set enrichment analysis algorithm for label-free and label-based protein quantification data sets. It offers an alternative approach to classic GO analyses, models protein annotation biases, and allows the analysis of samples originating from a single condition, unlike analogous approaches such as GSEA and PSEA. We demonstrate that PSEA-Quant produces results complementary to GO analyses. We also show that PSEA-Quant provides valuable information about the biological processes involved in cystic fibrosis using label-free protein quantification of a cell line expressing a CFTR mutant. Finally, PSEA-Quant highlights the differences in the mechanisms taking place in the human, rat, and mouse brain frontal cortices based on tandem mass tag quantification. Our approach, which is available online, will thus improve the analysis of proteomics quantification data sets by providing meaningful biological insights.
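A toy proxy for the "abundant and reproducible across replicates" criterion at the heart of PSEA-Quant might look like this. The coefficient-of-variation and abundance cutoffs and the protein names are invented; PSEA-Quant's actual approach is a statistical model, not a hard filter:

```python
from statistics import mean, stdev

def reproducible_abundant(quant, cv_max=0.2, min_abundance=100.0):
    """Proteins whose replicate measurements are abundant (mean >= min_abundance)
    and reproducible (coefficient of variation <= cv_max)."""
    keep = []
    for prot, reps in quant.items():
        m = mean(reps)
        if m >= min_abundance and stdev(reps) / m <= cv_max:
            keep.append(prot)
    return keep

# Hypothetical label-free quantification values across three replicates.
quant = {
    "CFTR":  [120.0, 130.0, 125.0],   # abundant, tight replicates -> kept
    "ACTB":  [500.0, 90.0, 900.0],    # abundant but irreproducible -> dropped
    "RARE1": [5.0, 6.0, 5.5],         # reproducible but low abundance -> dropped
}
print(reproducible_abundant(quant))
```

Protein sets (e.g., GO terms) enriched in the surviving proteins would then be the analogue of the enriched sets PSEA-Quant reports.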
Mei, Suyu; Zhu, Hao
2015-01-26
Protein-protein interaction (PPI) prediction is generally treated as a problem of binary classification wherein negative data sampling is still an open problem to be addressed. The commonly used random sampling is prone to yield less representative negative data with considerable false negatives. Meanwhile, rational constraints are seldom exerted on model selection to reduce the risk of false positive predictions for most of the existing computational methods. In this work, we propose a novel negative data sampling method based on a one-class SVM (support vector machine) to predict proteome-wide protein interactions between the HTLV retrovirus and Homo sapiens, wherein the one-class SVM is used to choose reliable and representative negative data, and a two-class SVM is used to yield proteome-wide outcomes as predictive feedback for rational model selection. Computational results suggest that the one-class SVM is better suited to negative data sampling than a two-class PPI predictor, and that the predictive-feedback-constrained model selection helps to yield a rational predictive model that reduces the risk of false positive predictions. Some predictions have been validated by the recent literature. Lastly, gene ontology based clustering of the predicted PPI networks is conducted to provide valuable cues for the pathogenesis of the HTLV retrovirus.
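The negative-sampling idea — score candidate pairs against a model of the known positives and keep the least positive-like as reliable negatives — can be sketched with a centroid-distance stand-in for the one-class SVM. The 2-D feature vectors and counts are invented:

```python
from math import sqrt

def centroid(points):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def dist(a, b):
    """Euclidean distance between two feature vectors."""
    return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def sample_negatives(positives, candidates, n_neg):
    """Pick the candidate pairs farthest from the positive-class centroid
    as reliable negatives (a stand-in for a one-class SVM decision boundary)."""
    c = centroid(positives)
    ranked = sorted(candidates, key=lambda f: -dist(f, c))
    return ranked[:n_neg]

# Toy 2-D feature vectors for protein pairs.
positives = [[0.9, 0.8], [0.8, 0.9], [0.85, 0.95]]
candidates = [[0.88, 0.85], [0.1, 0.2], [0.5, 0.5], [0.05, 0.1]]
print(sample_negatives(positives, candidates, n_neg=2))
```

The candidate resembling the positives ([0.88, 0.85]) is never chosen as a negative, which is exactly the false-negative risk that random sampling runs.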
Vijay, Sonam; Rawat, Manmeet; Sharma, Arun
2014-01-01
Salivary gland proteins of Anopheles mosquitoes offer attractive targets to understand interactions with sporozoites, blood feeding behavior, homeostasis, and immunological evaluation of malaria vectors and parasite interactions. To date, limited studies have been carried out to elucidate the salivary proteins of An. stephensi salivary glands. The aim of the present study was to provide detailed analytical attributes of functional salivary gland proteins of the urban malaria vector An. stephensi. A proteomic approach combining one-dimensional electrophoresis (1DE), ion trap liquid chromatography mass spectrometry (LC/MS/MS), and computational bioinformatic analysis was adopted to provide the first direct insight into the identification and functional characterization of known and novel salivary proteins of An. stephensi. Computational studies using online servers, namely the MASCOT and OMSSA algorithms, identified a total of 36 known salivary proteins and 123 novel proteins analysed by LC/MS/MS. This first report describes a baseline proteomic catalogue of 159 salivary proteins belonging to various categories of signal transduction, regulation of the blood coagulation cascade, and various immune and energy pathways of the An. stephensi sialotranscriptome by mass spectrometry. Our results may serve as a basis for a putative functional role of proteins in blood feeding, biting behavior, and other aspects of vector-parasite host interactions for parasite development in anopheline mosquitoes. PMID:25126571
Yang, Shuai; Zhang, Xinlei; Diao, Lihong; Guo, Feifei; Wang, Dan; Liu, Zhongyang; Li, Honglei; Zheng, Junjie; Pan, Jingshan; Nice, Edouard C; Li, Dong; He, Fuchu
2015-09-04
The Chromosome-centric Human Proteome Project (C-HPP) aims to catalog genome-encoded proteins using a chromosome-by-chromosome strategy. As the C-HPP proceeds, the increasing requirement for data-intensive analysis of the MS/MS data poses a challenge to the proteomic community, especially small laboratories lacking computational infrastructure. To address this challenge, we have upgraded the previous CAPER browser to CAPER 3.0, a scalable cloud-based system for data-intensive analysis of C-HPP data sets. CAPER 3.0 uses cloud computing technology to facilitate MS/MS-based peptide identification. In particular, it can use both public and private clouds, facilitating the analysis of C-HPP data sets. CAPER 3.0 provides a graphical user interface (GUI) to help users transfer data, configure jobs, track progress, and visualize the results comprehensively. These features enable users without programming expertise to easily conduct data-intensive analysis using CAPER 3.0. Here, we illustrate the usage of CAPER 3.0 with four specific mass spectral data-intensive problems: detecting novel peptides, identifying single amino acid variants (SAVs) derived from known missense mutations, identifying sample-specific SAVs, and identifying exon-skipping events. CAPER 3.0 is available at http://prodigy.bprc.ac.cn/caper3.
Khorsandi, Shirin Elizabeth; Salehi, Siamak; Cortes, Miriam; Vilca-Melendez, Hector; Menon, Krishna; Srinivasan, Parthi; Prachalias, Andreas; Jassem, Wayel; Heaton, Nigel
2018-02-15
Mitochondria have their own genomic, transcriptomic and proteomic machinery but are unable to be autonomous, needing both the nuclear and mitochondrial genomes. The aim of this work was to use computational biology to explore the involvement of mitochondrial microRNAs (MitomiRs) and their interactions with the mitochondrial proteome in a clinical model of primary non-function (PNF) of the donation after cardiac death (DCD) liver. Archival array data on the differential expression of miRNA in DCD PNF were re-analyzed using a number of publicly available computational algorithms. Ten MitomiRs of importance in DCD PNF were identified, 7 with predicted interaction of their seed sequence with the mitochondrial transcriptome, including both coding and non-coding areas of the hypervariability region 1 (HVR1) and control region. Considering miRNA regulation of the nuclear-encoded mitochondrial proteome, 7 hypothetical small proteins were identified with homolog functions ranging from a co-factor for the formation of ATP synthase, to REDOX balance, to an importin/exportin protein. In silico, unconventional seed interactions, both non-canonical and alternative seed sites, appear to be of greater importance in MitomiR regulation of the mitochondrial genome. Additionally, a number of novel small proteins of relevance in transplantation have been identified which need further characterization.
Sidoli, Simone; Cheng, Lei; Jensen, Ole N
2012-06-27
Histone proteins contribute to the maintenance and regulation of the dynamic chromatin structure, to gene activation, DNA repair and many other processes in the cell nucleus. Site-specific reversible and irreversible post-translational modifications of histone proteins mediate biological functions, including recruitment of transcription factors to specific DNA regions, assembly of epigenetic reader/writer/eraser complexes onto DNA, and modulation of DNA-protein interactions. Histones thereby regulate chromatin structure and function, propagate inheritance and provide memory functions in the cell. Dysfunctional chromatin structures and misregulation may lead to pathogenic states, including diabetes and cancer, and the mapping and quantification of multivalent post-translational modifications has therefore attracted significant interest. Mass spectrometry has quickly been accepted as a versatile tool to achieve insights into chromatin biology and epigenetics. High sensitivity and high mass accuracy and the ability to sequence post-translationally modified peptides and perform large-scale analyses make this technique very well suited for histone protein characterization. In this review we discuss a range of analytical methods and various mass spectrometry-based approaches for histone analysis, from sample preparation to data interpretation. Mass spectrometry-based proteomics is already an integrated and indispensable tool in modern chromatin biology, providing insights into the mechanisms and dynamics of nuclear and epigenetic processes. This article is part of a Special Section entitled: Understanding genome regulation and genetic diversity by mass spectrometry. Copyright © 2011 Elsevier B.V. All rights reserved.
Bioinformatics in proteomics: application, terminology, and pitfalls.
Wiemer, Jan C; Prokudin, Alexander
2004-01-01
Bioinformatics applies data mining, i.e., modern computer-based statistics, to biomedical data. It leverages machine learning approaches, such as artificial neural networks, decision trees and clustering algorithms, and is ideally suited for handling huge amounts of data. In this article, we review the analysis of mass spectrometry data in proteomics, starting with common pre-processing steps and using single decision trees and decision tree ensembles for classification. Special emphasis is put on the pitfall of overfitting, i.e., of generating overly complex single decision trees. Finally, we discuss the pros and cons of the two different decision tree usages.
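The overfitting pitfall highlighted above can be illustrated with a toy experiment. The following is a minimal Python sketch, not the article's implementation: the data, the single-split "stump" (a depth-1 tree), and the memorizing lookup-table model (the limit of a too-complex tree) are all invented for illustration.

```python
import random

random.seed(0)

# Toy data: label is 1 when x > 0.5, with 20% label noise.
def make_data(n):
    data = []
    for _ in range(n):
        x = random.random()
        y = 1 if x > 0.5 else 0
        if random.random() < 0.2:      # inject label noise
            y = 1 - y
        data.append((x, y))
    return data

train, test = make_data(200), make_data(200)

def fit_stump(data):
    # Single split ("depth-1 tree"): choose the threshold minimizing train error.
    best_t, best_err = 0.0, float("inf")
    for t in [i / 100 for i in range(101)]:
        err = sum((1 if x > t else 0) != y for x, y in data)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def accuracy(predict, data):
    return sum(predict(x) == y for x, y in data) / len(data)

t = fit_stump(train)
stump = lambda x: 1 if x > t else 0

# The limit of an overly complex tree: memorize every training point.
table = {x: y for x, y in train}
memorizer = lambda x: table.get(x, 0)

print(accuracy(stump, train), accuracy(stump, test))
print(accuracy(memorizer, train), accuracy(memorizer, test))
```

The stump scores similarly on training and test data, whereas the memorizer is perfect on training data but near chance on unseen data: the signature of overfitting.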
EPA'S TOXICOGENOMICS PARTNERSHIPS ACROSS GOVERNMENT, ACADEMIA AND INDUSTRY
Genomics, proteomics and metabonomics technologies are transforming the science of toxicology, and concurrent advances in computing and informatics are providing management and analysis solutions for this onslaught of toxicogenomic data. EPA has been actively developing an intra...
Computer Simulation of the Virulome of Bacillus anthracis Using Proteomics
2006-07-31
Identified proteins (excerpt): hypothetical protein, gi|47526566; spermidine/putrescine ABC transporter, spermidine/putrescine-binding protein, gi|47526625; oligoendopeptidase F, putative, gi...; glutamyl-tRNA(Gln) amidotransferase, A subunit; gi|50196927, aspartate aminotransferase; gi|50196970, spermidine synthase.
Proteomic analysis of albumin and globulin fractions of pea (Pisum sativum L.) seeds.
Dziuba, Jerzy; Szerszunowicz, Iwona; Nałęcz, Dorota; Dziuba, Marta
2014-01-01
Proteomic analysis is emerging as a highly useful tool in food research, including studies of food allergies. Two-dimensional gel electrophoresis involving isoelectric focusing and sodium dodecyl sulfate polyacrylamide gel electrophoresis is the most effective method of separating hundreds or even thousands of proteins. In this study, albumin and globulin fractions of pea seeds cv. Ramrod were subjected to proteomic analysis. Selected potentially allergenic proteins were identified based on their molecular weights and isoelectric points. Pea seeds (Pisum sativum L.) cv. Ramrod harvested over a period of two years (Plant Breeding Station in Piaski-Szelejewo) were used in the experiment. The isolated albumins, globulins and the legumin and vicilin fractions of globulins were separated by two-dimensional gel electrophoresis. Proteomic images were analysed in the ImageMaster 2D Platinum program with the use of algorithms from the Melanie application. The relative content, isoelectric points and molecular weights were computed for all identified proteins. Electrophoregrams were analysed by matching spot positions from three independent replications. The proteomes of albumins, globulins and the legumin and vicilin fractions of globulins produced up to several hundred spots (proteins). Spots most characteristic of a given fraction were identified by computer analysis and spot matching. The albumin proteome accumulated spots of relatively high intensity over a broad range of pI values of ~4.2-8.1 in 3 molecular weight (MW) ranges: I - high-molecular-weight albumins with MW of ~50-110 kDa, II - medium-molecular-weight albumins with MW of ~20-35 kDa, and III - low-molecular-weight albumins with MW of ~13-17 kDa. 2D gel electrophoregrams revealed the presence of 81 characteristic spots, including 24 characteristic of legumin and 14 of vicilin. Two-dimensional gel electrophoresis proved to be a useful tool for identifying pea proteins.
Patterns of spots with similar isoelectric points and different molecular weights, or spots with different isoelectric points and similar molecular weights, play an important role in proteome analysis. The regions characteristic of the albumin, globulin and legumin and vicilin fractions of globulin, with typical MW and pI values, were identified from the performed 2D electrophoretic separations of pea proteins. 2D gel electrophoresis of albumins and the vicilin fraction of globulins revealed the presence of 4 and 2 spots, respectively, representing potentially allergenic proteins. These probably corresponded to vicilin fragments synthesized during post-translational modification of the analysed protein.
Machine learning applications in proteomics research: how the past can boost the future.
Kelchtermans, Pieter; Bittremieux, Wout; De Grave, Kurt; Degroeve, Sven; Ramon, Jan; Laukens, Kris; Valkenborg, Dirk; Barsnes, Harald; Martens, Lennart
2014-03-01
Machine learning is a subdiscipline within artificial intelligence that focuses on algorithms that allow computers to learn to solve a (complex) problem from existing data. This ability can be used to generate a solution to a particularly intractable problem, given that enough data are available to train and subsequently evaluate an algorithm on. Since MS-based proteomics has no shortage of complex problems, and since public data are becoming available in ever-growing amounts, machine learning is fast becoming a very popular tool in the field. We therefore present an overview of the different applications of machine learning in proteomics that together cover nearly the entire wet- and dry-lab workflow, and that address key bottlenecks in experiment planning and design, as well as in data processing and analysis. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Time, space, and disorder in the expanding proteome universe.
Minde, David-Paul; Dunker, A Keith; Lilley, Kathryn S
2017-04-01
Proteins are highly dynamic entities. Their myriad functions require specific structures, but proteins' dynamic nature ranges all the way from the local mobility of their amino acid constituents to mobility within and well beyond single cells. A truly comprehensive view of the dynamic structural proteome includes: (i) alternative sequences, (ii) alternative conformations, (iii) alternative interactions with a range of biomolecules, (iv) cellular localizations, (v) alternative behaviors in different cell types. While these aspects have traditionally been explored one protein at a time, we highlight recently emerging global approaches that accelerate comprehensive insights into these facets of the dynamic nature of protein structure. Computational tools that integrate and expand on multiple orthogonal data types promise to enable the transition from a disjointed list of static snapshots to a structurally explicit understanding of the dynamics of cellular mechanisms. © 2017 The Authors. Proteomics Published by Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
ERIC Educational Resources Information Center
Kim, Dae Hyun; Jang, Hae Gwon; Shin, Dong Sun; Kim, Sun-Ja; Yoo, Chang Young; Chung, Min Suk
2012-01-01
Science comic strips entitled Dr. Scifun were planned to promote science jobs and studies among professionals (scientists, graduate and undergraduate students) and children. To this end, the authors collected intriguing science stories as the basis of scenarios, and drew four-cut comic strips, first on paper and subsequently as computer files.…
Computer Simulation of Developmental Processes and ...
See attached presentation slides. Dr. Knudsen has been invited to give a lecture at the XIV International Congress of Toxicology (IUTOX) in Merida, Mexico, October 2-6, 2016. He was invited to speak in a workshop on “Developmental Toxicology, Different Models, Different Endpoints” and will give a lecture entitled
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-28
... INTERNATIONAL TRADE COMMISSION [DN 2885] Certain Consumer Electronics, Including Mobile Phones and.... International Trade Commission has received a complaint entitled Certain Consumer Electronics, Including Mobile... electronics, including mobile phones and tablets. The complaint names as respondents ASUSTeK Computer, Inc. of...
Postdoctoral Fellowship Program in Educational Research. Final Technical Report.
ERIC Educational Resources Information Center
Morgan, William P.
During his postdoctoral fellowship year, Dr. Morgan took formal course work in computer programing, advanced research design, projective techniques, the physiology of aging, and hypnosis. He also attended weekly seminars in the Institute of Environmental Stress and conducted an investigation entitled "The Alteration of Perceptual and Metabolic…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Todd Arbogast; Steve Bryant; Clint N. Dawson
1998-08-31
This report describes briefly the work of the Center for Subsurface Modeling (CSM) of the University of Texas at Austin (and Rice University prior to September 1995) on the Partnership in Computational Sciences Consortium (PICS) project entitled Grand Challenge Problems in Environmental Modeling and Remediation: Groundwater Contaminant Transport.
Prospects for Educational Telecomputing: Selected Readings.
ERIC Educational Resources Information Center
Tinker, Robert F., Ed.; Kapisovsky, Peggy M., Ed.
The purpose of this collection of readings was to stimulate debate on the role of educational telecomputing in school reform and restructuring, and how efforts from the public and private sector can coordinate to bring about these changes. The 14 papers are entitled: (1) "Linking for Learning: Computer-and-Communications Network Support for…
Predicting Precipitation in Darwin: An Experiment with Markov Chains
ERIC Educational Resources Information Center
Boncek, John; Harden, Sig
2009-01-01
As teachers of first-year college mathematics and science students, the authors are constantly on the lookout for simple classroom exercises that improve their students' analytical and computational skills. In this article, the authors outline a project entitled "Predicting Precipitation in Darwin." In this project, students: (1) analyze…
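The kind of analysis such a project asks students to perform can be sketched with a two-state (dry/wet) Markov chain iterated toward its stationary distribution. The transition probabilities below are invented for illustration, not Darwin's actual climatology.

```python
def step(dist, P):
    """One day forward: new_dist[j] = sum_i dist[i] * P[i][j]."""
    return [sum(dist[i] * P[i][j] for i in range(len(dist)))
            for j in range(len(P[0]))]

# States: 0 = dry, 1 = wet. P[i][j] = Pr(tomorrow in state j | today in state i).
P = [[0.8, 0.2],
     [0.4, 0.6]]

dist = [1.0, 0.0]            # start from a dry day
for _ in range(50):          # iterate; dist converges to the stationary distribution
    dist = step(dist, P)

print([round(p, 3) for p in dist])
```

For this matrix the stationary distribution works out to (2/3, 1/3): in the long run two days in three are dry, regardless of the starting day.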
Utilizing Modern Technology in Adult and Continuing Education Programs.
ERIC Educational Resources Information Center
New York State Education Dept., Albany. Bureau of Curriculum Development.
This publication, designed as a supplement to the manual entitled "Managing Programs for Adults" (1983), provides guidelines for establishing or expanding the use of video and computers by administration and staff of adult education programs. The first section presents the use of video technology for program promotion, instruction, and staff…
ERIC Educational Resources Information Center
Selfe, Cindy, Ed.
2012-01-01
At the Computers and Writing 2011 Conference in Ann Arbor, Michigan, Gail E. Hawisher was celebrated for her many contributions to the field. At that conference, Hawisher gave a keynote address entitled "Our Work in the Profession: The Here and Now of the Future." This video publication includes contributions from scholars who wanted to share…
Code of Federal Regulations, 2013 CFR
2013-07-01
... Administration BUREAU OF PRISONS, DEPARTMENT OF JUSTICE INMATE ADMISSION, CLASSIFICATION, AND TRANSFER COMPUTATION OF SENTENCE Good Time § 523.1 Definitions. (a) Statutory good time means a credit to a sentence as authorized by 18 U.S.C. 4161. The total amount of statutory good time which an inmate is entitled to have...
A tutorial in displaying mass spectrometry-based proteomic data using heat maps.
Key, Melissa
2012-01-01
Data visualization plays a critical role in interpreting the results of proteomic experiments. Heat maps are particularly useful for this task, as they allow us to find quantitative patterns across proteins and biological samples simultaneously. The quality of a heat map can be vastly improved by understanding the options available for displaying and organizing the data within it. This tutorial illustrates how to optimize heat maps for proteomics data by incorporating known characteristics of the data into the image. First, the concepts used to guide the creation of heat maps are demonstrated. Then, these concepts are applied to two types of analysis: visualizing spectral features across biological samples, and presenting the results of tests of statistical significance. For all examples we provide details of computer code in the open-source statistical programming language R, which can be used by biologists and clinicians with little statistical background. Heat maps are a useful tool for presenting quantitative proteomic data organized in a matrix format. Understanding and optimizing the parameters used to create the heat map can vastly improve both the appearance and the interpretation of heat map data.
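One of the basic concepts behind a good proteomics heat map is scaling each row so that every protein's pattern is comparable across samples. A minimal sketch of row-wise z-scoring, in Python rather than the tutorial's R, with invented intensity values:

```python
import math

def zscore_rows(matrix):
    """Center and scale each row (protein) to mean 0 and unit sample sd."""
    scaled = []
    for row in matrix:
        mean = sum(row) / len(row)
        var = sum((v - mean) ** 2 for v in row) / (len(row) - 1)
        sd = math.sqrt(var) if var > 0 else 1.0   # guard against constant rows
        scaled.append([(v - mean) / sd for v in row])
    return scaled

# Three proteins (rows) across four samples (columns); values are intensities.
intensities = [
    [10.0, 12.0, 11.0, 13.0],
    [100.0, 90.0, 95.0, 105.0],
    [1.0, 1.0, 1.0, 1.0],      # a constant row maps to all zeros
]
for row in zscore_rows(intensities):
    print([round(v, 2) for v in row])
```

Without this step, the high-abundance protein would dominate the color scale and mask the pattern of the low-abundance ones.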
Hoehenwarter, Wolfgang; Larhlimi, Abdelhalim; Hummel, Jan; Egelhofer, Volker; Selbig, Joachim; van Dongen, Joost T; Wienkoop, Stefanie; Weckwerth, Wolfram
2011-07-01
Mass Accuracy Precursor Alignment is a fast and flexible method for comparative proteome analysis that allows the comparison of unprecedented numbers of shotgun proteomics analyses on a personal computer in a matter of hours. We compared 183 LC-MS analyses and more than 2 million MS/MS spectra and could define and separate the proteomic phenotypes of field grown tubers of 12 tetraploid cultivars of the crop plant Solanum tuberosum. Protein isoforms of patatin as well as other major gene families such as lipoxygenase and cysteine protease inhibitor that regulate tuber development were found to be the primary source of variability between the cultivars. This suggests that differentially expressed protein isoforms modulate genotype specific tuber development and the plant phenotype. We properly assigned the measured abundance of tryptic peptides to different protein isoforms that share extensive stretches of primary structure and thus inferred their abundance. Peptides unique to different protein isoforms were used to classify the remaining peptides assigned to the entire subset of isoforms based on a common abundance profile using multivariate statistical procedures. We identified nearly 4000 proteins which we used for quantitative functional annotation making this the most extensive study of the tuber proteome to date.
EPA SCIENCE FORUM - EPA'S TOXICOGENOMICS PARTNERSHIPS ACROSS GOVERNMENT, ACADEMIA AND INDUSTRY
Over the past decade genomics, proteomics and metabonomics technologies have transformed the science of toxicology, and concurrent advances in computing and informatics have provided management and analysis solutions for this onslaught of toxicogenomic data. EPA has been actively...
Nesvizhskii, Alexey I.
2010-01-01
This manuscript provides a comprehensive review of the peptide and protein identification process using tandem mass spectrometry (MS/MS) data generated in shotgun proteomic experiments. The commonly used methods for assigning peptide sequences to MS/MS spectra are critically discussed and compared, from basic strategies to advanced multi-stage approaches. Particular attention is paid to the problem of false-positive identifications. Existing statistical approaches for assessing the significance of peptide-to-spectrum matches are surveyed, ranging from single-spectrum approaches such as expectation values to global error rate estimation procedures such as false discovery rates and posterior probabilities. The importance of using auxiliary discriminant information (mass accuracy, peptide separation coordinates, digestion properties, etc.) is discussed, and advanced computational approaches for joint modeling of multiple sources of information are presented. This review also includes a detailed analysis of the issues affecting the interpretation of data at the protein level, including the amplification of error rates when going from the peptide to the protein level, and the ambiguities in inferring the identities of sample proteins in the presence of shared peptides. Commonly used methods for computing protein-level confidence scores are discussed in detail. The review concludes with a discussion of several outstanding computational issues. PMID:20816881
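One of the global error-rate procedures of the kind surveyed above, target-decoy FDR estimation, can be sketched in a few lines. The PSM scores and the simple FDR = decoys/targets estimator below are illustrative assumptions, not the review's prescribed method.

```python
def fdr_at_threshold(psms, threshold):
    """psms: list of (score, is_decoy). FDR is estimated as the number of
    decoy hits divided by the number of target hits at or above the threshold."""
    targets = sum(1 for score, is_decoy in psms if score >= threshold and not is_decoy)
    decoys = sum(1 for score, is_decoy in psms if score >= threshold and is_decoy)
    return decoys / targets if targets else 0.0

# Invented PSM scores from a concatenated target-decoy search.
psms = [(9.1, False), (8.7, False), (8.2, True), (7.9, False),
        (7.5, False), (7.1, True), (6.8, False), (6.2, True)]

print(fdr_at_threshold(psms, 8.0))  # 1 decoy / 2 targets at or above 8.0
```

Sweeping the threshold over all observed scores yields the score cutoff at which a desired FDR (e.g., 1%) is met.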
1990-12-07
Fundação Calouste Gulbenkian, Instituto Gulbenkian de Ciência, Centro de Cálculo Científico, Coimbra, 1973. 28. Dirac, P. A. M., Spinors in Hilbert Space...Office of Scientific Research grants. 1965: Mathematical Association of America Editorial Prize for the article entitled "Linear Transformations on...matrices". 1966: L.R. Ford Memorial Prize awarded by the Mathematical Association of America for the article "Permanents". 1989: Outstanding Computer
NASA Technical Reports Server (NTRS)
Makivic, Miloje S.
1996-01-01
This is the final technical report for the project entitled: "High-Performance Computing and Four-Dimensional Data Assimilation: The Impact on Future and Current Problems", funded at NPAC by the DAO at NASA/GSFC. First, the motivation for the project is given in the introductory section, followed by the executive summary of major accomplishments and the list of project-related publications. Detailed analysis and description of research results is given in subsequent chapters and in the Appendix.
Pietrowska, M; Marczak, L; Polanska, J; Nowicka, E; Behrent, K; Tarnawski, R; Stobiecki, M; Polanski, A; Widlak, P
2010-01-01
Mass spectrometry-based analysis of the serum proteome allows the identification of multi-peptide patterns/signatures specific for the blood of cancer patients, and thus has high potential value for cancer diagnostics. However, because of problems with the optimization and standardization of experimental and computational design, none of the identified proteome patterns/signatures has yet been approved for diagnostics in clinical practice. Here we compared two methods of serum sample preparation for mass spectrometry-based proteome pattern analysis, aiming to identify biomarkers that could be used in the early detection of breast cancer. Blood samples were collected from a group of 92 patients diagnosed at early (I and II) stages of the disease before the start of therapy, and from a group of age-matched healthy controls (104 women). Serum specimens were purified and analyzed using MALDI-ToF spectrometry, either directly or after membrane filtration (50 kDa cut-off) to remove albumin and other large serum proteins. Mass spectra of the low-molecular-weight fraction (2-10 kDa) of the serum proteome were resolved using Gaussian mixture decomposition, and the identified spectral components were used to build classifiers that differentiated samples from breast cancer patients and healthy persons. Mass spectra of complete serum and of membrane-filtered, albumin-depleted samples have apparently different structures, and peaks specific for both types of samples could be identified. The optimal classifier built for the complete serum specimens consisted of 8 spectral components and had 81% specificity and 72% sensitivity, while that built for the membrane-filtered samples consisted of 4 components and had 80% specificity and 81% sensitivity.
We concluded that pre-processing of samples to remove albumin might be recommended before MALDI-ToF mass spectrometric analysis of the low-molecular-weight components of human serum. Keywords: albumin removal; breast cancer; clinical proteomics; mass spectrometry; pattern analysis; serum proteome.
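The sensitivity and specificity figures quoted for the classifiers come directly from confusion-matrix counts. A minimal sketch with made-up counts chosen to be consistent with the reported group sizes (92 patients, 104 controls), not the study's actual per-sample data:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """tp/fn: cancer samples classified correctly/incorrectly;
    tn/fp: healthy samples classified correctly/incorrectly."""
    sensitivity = tp / (tp + fn)   # fraction of cancer samples detected
    specificity = tn / (tn + fp)   # fraction of healthy samples correctly cleared
    return sensitivity, specificity

# Hypothetical confusion counts: 92 patients and 104 controls in total.
sens, spec = sensitivity_specificity(tp=66, fn=26, tn=84, fp=20)
print(round(sens, 2), round(spec, 2))
```

With these counts the computation reproduces figures of the same order as those reported for the complete-serum classifier (72% sensitivity, 81% specificity).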
High throughput profile-profile based fold recognition for the entire human proteome.
McGuffin, Liam J; Smith, Richard T; Bryson, Kevin; Sørensen, Søren-Aksel; Jones, David T
2006-06-07
In order to maintain the most comprehensive structural annotation databases we must carry out regular updates for each proteome using the latest profile-profile fold recognition methods. The ability to carry out these updates on demand is necessary to keep pace with the regular updates of sequence and structure databases. Providing the highest quality structural models requires the most intensive profile-profile fold recognition methods running with the very latest available sequence databases and fold libraries. However, running these methods on such a regular basis for every sequenced proteome requires large amounts of processing power. In this paper we describe and benchmark the JYDE (Job Yield Distribution Environment) system, which is a meta-scheduler designed to work above cluster schedulers, such as Sun Grid Engine (SGE) or Condor. We demonstrate the ability of JYDE to distribute the load of genomic-scale fold recognition across multiple independent Grid domains. We use the most recent profile-profile version of our mGenTHREADER software in order to annotate the latest version of the human proteome against the latest sequence and structure databases in as short a time as possible. We show that our JYDE system is able to scale to large numbers of intensive fold recognition jobs running across several independent computer clusters. Using our JYDE system we have been able to annotate 99.9% of the protein sequences within the human proteome in less than 24 hours, by harnessing over 500 CPUs from 3 independent Grid domains. This study clearly demonstrates the feasibility of carrying out on-demand, high quality structural annotations for the proteomes of major eukaryotic organisms. Specifically, we have shown that it is now possible to provide complete regular updates of profile-profile based fold recognition models for entire eukaryotic proteomes, through the use of Grid middleware such as JYDE.
A comprehensive and scalable database search system for metaproteomics.
Chatterjee, Sandip; Stupp, Gregory S; Park, Sung Kyu Robin; Ducom, Jean-Christophe; Yates, John R; Su, Andrew I; Wolan, Dennis W
2016-08-16
Mass spectrometry-based shotgun proteomics experiments rely on accurate matching of experimental spectra against a database of protein sequences. Existing computational analysis methods are limited in the size of their sequence databases, which severely restricts the proteomic sequencing depth and functional analysis of highly complex samples. The growing amount of public high-throughput sequencing data will only exacerbate this problem. We designed a broadly applicable metaproteomic analysis method (ComPIL) that addresses protein database size limitations. Our approach to overcome this significant limitation in metaproteomics was to design a scalable set of sequence databases assembled for optimal library querying speeds. ComPIL was integrated with a modified version of the search engine ProLuCID (termed "Blazmass") to permit rapid matching of experimental spectra. Proof-of-principle analysis of human HEK293 lysate with a ComPIL database derived from high-quality genomic libraries was able to detect nearly all of the same peptides as a search with a human database (~500x fewer peptides in the database), with a small reduction in sensitivity. We were also able to detect proteins from the adenovirus used to immortalize these cells. We applied our method to a set of healthy human gut microbiome proteomic samples and showed a substantial increase in the number of identified peptides and proteins compared to previous metaproteomic analyses, while retaining a high degree of protein identification accuracy and allowing for a more in-depth characterization of the functional landscape of the samples. The combination of ComPIL with Blazmass allows proteomic searches to be performed with database sizes much larger than previously possible. These large database searches can be applied to complex meta-samples with unknown composition or proteomic samples where unexpected proteins may be identified. 
The protein database, proteomic search engine, and the proteomic data files for the 5 microbiome samples characterized and discussed herein are open source and available for use and additional analysis.
Decoding 2D-PAGE complex maps: relevance to proteomics.
Pietrogrande, Maria Chiara; Marchetti, Nicola; Dondi, Francesco; Righetti, Pier Giorgio
2006-03-20
This review describes two mathematical approaches useful for decoding the complex signal of 2D-PAGE maps of protein mixtures. These methods are helpful for interpreting the large amount of data in each 2D-PAGE map by extracting the analytical information hidden therein by spot overlapping. Here the basic theory and its application to 2D-PAGE maps are reviewed: the means for extracting information from the experimental data and their relevance to proteomics are discussed. One method is based on the quantitative theory of the statistical model of peak overlapping (SMO), using the experimental spot data (intensity and spatial coordinates). The second method is based on the study of the 2D autocovariance function (2D-ACVF) computed on the experimental digitised map. They are two independent methods that extract equal and complementary information from the 2D-PAGE map. Both methods permit fundamental information to be obtained on sample complexity and separation performance, and allow ordered patterns in spot positions to be singled out: the availability of two independent procedures to compute the same separation parameters is a powerful tool for estimating the reliability of the obtained results. The SMO procedure is a unique tool for quantitatively estimating the degree of spot overlapping present in the map, while the 2D-ACVF method is particularly powerful in singling out the presence of order in spot positions, e.g. spot trains, from the complexity of the whole 2D map. The procedures were validated by extensive numerical computation on computer-generated maps describing experimental 2D-PAGE gels of protein mixtures. Their applicability to real samples was tested on reference maps obtained from literature sources. The review describes the information most relevant to proteomics: sample complexity, separation performance, overlapping extent, and the identification of spot trains related to post-translational modifications (PTMs).
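The 2D-ACVF idea can be sketched by direct summation: an intensity grid whose spots repeat at a regular spacing produces an autocovariance peak at that lag. The toy 4x4 "intensity map" below is invented, not real gel data, and the direct-summation estimator is a simplification of the method used on digitised maps.

```python
def acvf2d(img, dx, dy):
    """Autocovariance of a 2-D intensity grid at lag (dx, dy), by direct summation."""
    n_rows, n_cols = len(img), len(img[0])
    mean = sum(sum(row) for row in img) / (n_rows * n_cols)
    total, count = 0.0, 0
    for i in range(n_rows - dy):
        for j in range(n_cols - dx):
            total += (img[i][j] - mean) * (img[i + dy][j + dx] - mean)
            count += 1
    return total / count

# Spots repeating every 2 columns: the ACVF is larger at lag (2, 0) than at (1, 0).
img = [[1, 0, 1, 0],
       [0, 0, 0, 0],
       [1, 0, 1, 0],
       [0, 0, 0, 0]]
print(acvf2d(img, 2, 0), acvf2d(img, 1, 0))
```

Scanning all lags and looking for local maxima is how a periodic "spot train" announces itself against the disordered background of the full map.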
Wang, Guanghui; Wu, Wells W; Zeng, Weihua; Chou, Chung-Lin; Shen, Rong-Fong
2006-05-01
A critical step in protein biomarker discovery is the ability to contrast proteomes, a process generally referred to as quantitative proteomics. While stable-isotope labeling (e.g., ICAT, 18O- or 15N-labeling, or AQUA) remains the core technology used in mass spectrometry-based proteomic quantification, increasing effort has been directed to the label-free approach, which relies on direct comparison of peptide peak areas between LC-MS runs. This latter approach is attractive to investigators for its simplicity as well as its cost effectiveness. In the present study, the reproducibility and linearity of applying a label-free approach to highly complex proteomes were evaluated. Various amounts of proteins from different proteomes were subjected to repeated LC-MS analyses using an ion trap or Fourier transform mass spectrometer. Highly reproducible data were obtained between replicated runs, as evidenced by nearly ideal Pearson's correlation coefficients (for ion peak areas and retention times) and average peak area ratios. In general, more than 50% and nearly 90% of the peptide ion ratios deviated less than 10% and 20%, respectively, from the average in duplicate runs. In addition, the ratios of the amounts of proteins used correlated well with the observed averaged ratios of peak areas calculated from the detected peptides. Furthermore, the removal of abundant proteins from the samples improved reproducibility and linearity. A computer program was written to automate the processing of data sets from experiments with groups of multiple samples for statistical analysis. Algorithms for outlier-resistant mean estimation and for adjusting the statistical significance threshold under multiple testing were incorporated to minimize the rate of false positives. The program was applied to quantify changes in the proteomes of parental and p53-deficient HCT-116 human cells and was found to yield reproducible results.
Overall, this study demonstrates an alternative approach that allows global quantification of differentially expressed proteins in complex proteomes. The utility of this method to biomarker discovery is likely to synergize with future improvements in the detecting sensitivity of mass spectrometers.
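The peak-area comparison and outlier-resistant averaging described in this abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' program; the pairing by shared peptide ions and the 20% trim fraction are assumptions for the example.

```python
def peak_area_ratios(run_a, run_b):
    """Pair peptide ion peak areas from two LC-MS runs and compute ratios.

    run_a, run_b: dicts mapping peptide ion -> peak area. Only ions detected
    in both runs are compared (a simplification; the program described above
    also handles replicate groups and statistical thresholds).
    """
    shared = run_a.keys() & run_b.keys()
    return {ion: run_b[ion] / run_a[ion] for ion in shared}

def robust_mean(values, trim_fraction=0.2):
    """Outlier-resistant mean estimate: trim the extremes before averaging."""
    ordered = sorted(values)
    k = int(len(ordered) * trim_fraction)
    kept = ordered[k:len(ordered) - k] or ordered
    return sum(kept) / len(kept)
```

A trimmed mean is one simple choice of outlier-resistant estimator; the paper does not specify which one its program uses.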
Enhancing the Attractiveness of Alcohol Education Via a Microcomputer Program.
ERIC Educational Resources Information Center
Meier, Scott T.
Getting students' attention is one of the most difficult problems for counselors who conduct alcohol education programs in high schools or colleges. A computer-aided instruction program using microcomputers for alcohol education was developed entitled "If You Drink: An Alcohol Education Program" (IYD). The IYD program consists of five modules: the…
The Revolution in Print Technology. Text & Readers Programme, Technical Report #1.
ERIC Educational Resources Information Center
Macdonald-Ross, Michael
The two papers presented in this document discuss aspects of the computer revolution and its effects on the production of print materials. The papers are addressed to readers who are educators rather than technologists. The first article, entitled "Print," interprets that term broadly to include text development and production, and…
Computer Assisted Instruction in Teacher Education: A Full Length Course.
ERIC Educational Resources Information Center
Cartwright, G. Phillip
Pennsylvania State University has developed, evaluated, and implemented a series of modules and an entire three-credit teacher education course which is offered completely by microcomputer. The course is entitled "Educating Special Learners." The modules use the Apple II series and the IBM PC series. Evaluation of the course, based on…
Alexander Meets Michotte: A Simulation Tool Based on Pattern Programming and Phenomenology
ERIC Educational Resources Information Center
Basawapatna, Ashok
2016-01-01
Simulation and modeling activities, a key point of computational thinking, are currently not being integrated into the science classroom. This paper describes a new visual programming tool entitled the Simulation Creation Toolkit. The Simulation Creation Toolkit is a high level pattern-based phenomenological approach to bringing rapid simulation…
75 FR 1584 - Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2010-01-12
... Food Service Program Claim for Reimbursement Form is used to collect meal and cost data from sponsors to determine the reimbursement entitlement for meals served. The form is sent to the Food and... payment system computes earnings to date and the number of meals to date and generates payments for the...
ERIC Educational Resources Information Center
Shaltz, Mark B.
An experiment was conducted that compared the teaching effectiveness of a computer assisted instructional module and a lecture-discussion. The module, Predator Functional Response (PFR), was developed as part of the SUMIT (Single-concept User-adaptable Microcomputer-based Instructional Technique) project. A class of 30 students was randomly…
Code of Federal Regulations, 2011 CFR
2011-01-01
... shall be entitled to overtime pay computed on the average rate of basic pay for all regularly scheduled... 5 Administrative Personnel 1 2011-01-01 2011-01-01 false Overtime pay. 532.503 Section 532.503... Pay and Differentials § 532.503 Overtime pay. (a)(1) Employees who are exempt from the overtime pay...
A Course Which Used Programming to Aid Learning Various Mathematical Concepts.
ERIC Educational Resources Information Center
Day, Jane M.
A three-unit mathematics course entitled Introduction to Computing evaluated the effectiveness of programming as an aid to learning math concepts and to developing student self-reliance. Sixteen students enrolled in the course at the College of Notre Dame in Belmont, California; one terminal was available, connected to the Stanford Computation…
10 CFR 13.27 - Computation of time.
Code of Federal Regulations, 2010 CFR
2010-01-01
...; and (2) By 11:59 p.m. Eastern Time for a document served by the E-Filing system. [72 FR 49153, Aug. 28... the calculation of additional days when a participant is not entitled to receive an entire filing... same filing and service method, the number of days for service will be determined by the presiding...
10 CFR 2.306 - Computation of time.
Code of Federal Regulations, 2010 CFR
2010-01-01
...:59 p.m. Eastern Time for a document served by the E-Filing system. [72 FR 49151, Aug. 28, 2007] ... the calculation of additional days when a participant is not entitled to receive an entire filing... filing and service method, the number of days for service will be determined by the presiding officer...
Benjamin, Ashlee M; Thompson, J Will; Soderblom, Erik J; Geromanos, Scott J; Henao, Ricardo; Kraus, Virginia B; Moseley, M Arthur; Lucas, Joseph E
2013-12-16
The goal of many proteomics experiments is to determine the abundance of proteins in biological samples, and the variation thereof in various physiological conditions. High-throughput quantitative proteomics, specifically label-free LC-MS/MS, allows rapid measurement of thousands of proteins, enabling large-scale studies of various biological systems. Prior to analyzing these information-rich datasets, raw data must undergo several computational processing steps. We present a method to address one of the essential steps in proteomics data processing--the matching of peptide measurements across samples. We describe a novel method for label-free proteomics data alignment with the ability to incorporate previously unused aspects of the data, particularly ion mobility drift times and product ion information. We compare the results of our alignment method to PEPPeR and OpenMS, and compare alignment accuracy achieved by different versions of our method utilizing various data characteristics. Our method results in increased match recall rates and similar or improved mismatch rates compared to PEPPeR and OpenMS feature-based alignment. We also show that the inclusion of drift time and product ion information results in higher recall rates and more confident matches, without increases in error rates. Based on the results presented here, we argue that the incorporation of ion mobility drift time and product ion information are worthy pursuits. Alignment methods should be flexible enough to utilize all available data, particularly with recent advancements in experimental separation methods.
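The matching of peptide features across runs, including the use of ion mobility drift time as an additional constraint, can be illustrated with a minimal greedy matcher. The hard tolerance windows below are invented for the example; the method described in this abstract (like PEPPeR and OpenMS) uses statistical models rather than fixed cutoffs.

```python
def match_features(ref, other, mz_tol=0.01, rt_tol=1.0, drift_tol=0.5):
    """Greedily match LC-MS features between two runs.

    Each feature is a dict with 'mz', 'rt' and optionally 'drift' keys.
    Returns (ref_index, other_index) pairs for matched features.
    """
    matches = []
    used = set()
    for i, f in enumerate(ref):
        for j, g in enumerate(other):
            if j in used:
                continue
            if abs(f["mz"] - g["mz"]) > mz_tol:
                continue
            if abs(f["rt"] - g["rt"]) > rt_tol:
                continue
            # Drift time acts as an extra veto when both runs measured it,
            # mirroring how the extra dimension can reject false matches.
            if "drift" in f and "drift" in g and abs(f["drift"] - g["drift"]) > drift_tol:
                continue
            matches.append((i, j))
            used.add(j)
            break
    return matches
```

Note how a candidate pair that agrees in m/z and retention time can still be rejected by a discordant drift time, which is the intuition behind the reported improvement in match confidence.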
2017-01-01
The changes of protein expression that are monitored in proteomic experiments are a type of biological transformation that also involves changes in chemical composition. Accompanying the myriad molecular-level interactions that underlie any proteomic transformation, there is an overall thermodynamic potential that is sensitive to microenvironmental conditions, including local oxidation and hydration potential. Here, up- and down-expressed proteins identified in 71 comparative proteomics studies were analyzed using the average oxidation state of carbon (ZC) and the water demand per residue (n̄H2O), calculated using elemental abundances and stoichiometric reactions to form proteins from basis species. Experimental lowering of oxygen availability (hypoxia) or water activity (hyperosmotic stress) generally results in decreased ZC or n̄H2O of up-expressed compared to down-expressed proteins. This correspondence of chemical composition with experimental conditions provides evidence for attraction of the proteomes to a low-energy state. An opposite compositional change, toward higher average oxidation or hydration state, is found for proteomic transformations in colorectal and pancreatic cancer, and in two experiments for adipose-derived stem cells.
Calculations of chemical affinity were used to estimate the thermodynamic potentials for proteomic transformations as a function of fugacity of O2 and activity of H2O, which serve as scales of oxidation and hydration potential. Diagrams summarizing the relative potential for formation of up- and down-expressed proteins show predicted equipotential lines that cluster around particular values of oxygen fugacity and water activity for similar datasets. The changes in chemical composition of proteomes are likely linked with reactions among other cellular molecules. A redox balance calculation indicates that an increase in the lipid to protein ratio in cancer cells by 20% over hypoxic cells would generate a large enough electron sink for oxidation of the cancer proteomes. The datasets and computer code used here are made available in a new R package, canprot. PMID:28603672
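The average oxidation state of carbon can be computed directly from a molecular formula by elemental bookkeeping. The sketch below uses the common convention that assigns H −1, N +3, O +2 and S +2 relative to carbon; this is a standard formulation in compositional biochemistry, but the paper's exact definition (e.g., treatment of charge and basis species) should be checked before reuse.

```python
def carbon_oxidation_state(c, h, n, o, s, charge=0):
    """Average oxidation state of carbon (ZC) for a molecule C_c H_h N_n O_o S_s.

    ZC = (charge - h + 3n + 2o + 2s) / c
    Sanity checks: CH4 gives -4, CO2 gives +4, as expected.
    """
    return (charge - h + 3 * n + 2 * o + 2 * s) / c
```

For example, glycine (C2H5NO2) gives ZC = (−5 + 3 + 4)/2 = +1 per carbon atom.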
Rondinone, Cristina M
2005-04-01
The 6th annual conference on diabetes, organised by the SMI group, was held on 18th-19th October 2004 in London, followed by a one-day symposium on an executive briefing entitled Type 2 diabetes and beyond: the untapped commercial potential. More than 100 delegates from both academic and industrial institutes attended the two meetings. The presentations provided insights into the understanding of mechanisms and developments of novel drugs for treatments of insulin resistance, diabetes, and metabolic syndrome, as well as new approaches for therapeutic intervention including the development of dipeptidyl peptidase IV inhibitors and glucagon-like peptide-1 analogues. This review offers a general overview of the fields in metabolic diseases and different strategies to develop new drugs. Discussions focused on several emerging therapeutic areas, including novel compound developments and target identification with the use of conventional methods and recently emerged technologies, such as siRNA, genomics and proteomics.
Yap, Karen; Makeyev, Eugene V
2013-09-01
Eukaryotic gene expression is orchestrated on a genome-wide scale through several post-transcriptional mechanisms. Of these, alternative pre-mRNA splicing expands the proteome diversity and modulates mRNA stability through downstream RNA quality control (QC) pathways including nonsense-mediated decay (NMD) of mRNAs containing premature termination codons and nuclear retention and elimination (NRE) of intron-containing transcripts. Although originally identified as mechanisms for eliminating aberrant transcripts, a growing body of evidence suggests that NMD and NRE coupled with deliberate changes in pre-mRNA splicing patterns are also used in a number of biological contexts for deterministic control of gene expression. Here we review recent studies elucidating molecular mechanisms and biological significance of these gene regulation strategies with a specific focus on their roles in nervous system development and physiology. This article is part of a Special Issue entitled 'RNA and splicing regulation in neurodegeneration'. Copyright © 2013 Elsevier Inc. All rights reserved.
The development of a specialized processor for a space-based multispectral earth imager
NASA Astrophysics Data System (ADS)
Khedr, Mostafa E.
2008-10-01
This work was done in the Department of Computer Engineering, Lvov Polytechnic National University, Lvov, Ukraine, as a thesis entitled "Space Imager Computer System for Raw Video Data Processing" [1]. This work describes the synthesis and practical implementation of a specialized computer system for raw data control and processing onboard a satellite MultiSpectral earth imager. This computer system is intended for satellites with resolution in the range of one meter with 12-bit precision. The design is based mostly on general off-the-shelf components such as FPGAs, plus custom-designed software for interfacing with PC and test equipment. The designed system was successfully manufactured and is now fully functioning in orbit.
Light scattering by planetary-regolith analog samples: computational results
NASA Astrophysics Data System (ADS)
Väisänen, Timo; Markkanen, Johannes; Hadamcik, Edith; Levasseur-Regourd, Anny-Chantal; Lasue, Jeremie; Blum, Jürgen; Penttilä, Antti; Muinonen, Karri
2017-04-01
We compute light scattering by a planetary-regolith analog surface. The corresponding experimental work is from Hadamcik et al. [1] with the PROGRA2-surf [2] device measuring the polarization of dust particles. The analog samples are low density (volume fraction 0.15 ± 0.03) agglomerates produced by random ballistic deposition of almost equisized silica spheres (refractive index n=1.5 and diameter 1.45 ± 0.06 µm). Computations are carried out with the recently developed codes entitled Radiative Transfer with Reciprocal Transactions (R2T2) and Radiative Transfer Coherent Backscattering with incoherent interactions (RT-CB-ic). Both codes incorporate the so-called incoherent treatment which enhances the applicability of the radiative transfer as shown by Muinonen et al. [3]. As a preliminary result, we have computed scattering from a large spherical medium with the RT-CB-ic using equal-sized particles with diameters of 1.45 microns. The preliminary results have shown that the qualitative characteristics are similar for the computed and measured intensity and polarization curves but that there are still deviations between the characteristics. We plan to remove the deviations by incorporating a size distribution of particles (1.45 ± 0.02 microns) and detailed information about the volume density profile within the analog surface. Acknowledgments: We acknowledge the ERC Advanced Grant no. 320773 entitled Scattering and Absorption of Electromagnetic Waves in Particulate Media (SAEMPL). Computational resources were provided by CSC - IT Centre for Science Ltd, Finland. References: [1] Hadamcik E. et al. (2007), JQSRT, 106, 74-89 [2] Levasseur-Regourd A.C. et al. (2015), Polarimetry of stars and planetary systems, CUP, 61-80 [3] Muinonen K. et al. (2016), extended abstract for EMTS.
Comparative proteome analysis reveals pathogen specific outer membrane proteins of Leptospira.
Dhandapani, Gunasekaran; Sikha, Thoduvayil; Rana, Aarti; Brahma, Rahul; Akhter, Yusuf; Gopalakrishnan Madanan, Madathiparambil
2018-04-10
Proteomes of the pathogenic Leptospira interrogans and L. borgpetersenii and the saprophytic L. biflexa were filtered through computational tools to identify Outer Membrane Proteins (OMPs) that satisfy the biophysical parameters required for their presence on the outer membrane. A total of 133, 130, and 144 OMPs were identified in L. interrogans, L. borgpetersenii, and L. biflexa, respectively, forming approximately 4% of each proteome. A holistic analysis of the transporting and pathogenic characteristics of the OMPs, together with the Clusters of Orthologous Groups (COGs) among the OMPs and their distribution across the 3 species, put forward a set of 21 candidate OMPs specific to pathogenic leptospires. Proteins homologous to the candidate OMPs were also found in other pathogenic species of leptospires. Six OMPs from L. interrogans and 2 from L. borgpetersenii were observed to have similar COGs that were not found in any intermediate or saprophytic forms. These OMPs appear to have a role in infection and pathogenesis and may be useful for anti-leptospiral strategies. © 2018 Wiley Periodicals, Inc.
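The comparative step of this analysis, keeping orthologous groups shared by all pathogenic species but absent from saprophytes, reduces to simple set operations. The sketch below uses made-up COG identifiers and omits the biophysical filtering that the study applied before and after this comparison.

```python
def pathogen_specific(pathogenic_sets, saprophytic_sets):
    """Candidate pathogen-specific markers: orthologous-group identifiers
    present in every pathogenic species but absent from all saprophytes.
    """
    shared = set.intersection(*pathogenic_sets)
    excluded = set.union(*saprophytic_sets) if saprophytic_sets else set()
    return shared - excluded
```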
Building high-quality assay libraries for targeted analysis of SWATH MS data.
Schubert, Olga T; Gillet, Ludovic C; Collins, Ben C; Navarro, Pedro; Rosenberger, George; Wolski, Witold E; Lam, Henry; Amodei, Dario; Mallick, Parag; MacLean, Brendan; Aebersold, Ruedi
2015-03-01
Targeted proteomics by selected/multiple reaction monitoring (S/MRM) or, on a larger scale, by SWATH (sequential window acquisition of all theoretical spectra) MS (mass spectrometry) typically relies on spectral reference libraries for peptide identification. Quality and coverage of these libraries are therefore of crucial importance for the performance of the methods. Here we present a detailed protocol that has been successfully used to build high-quality, extensive reference libraries supporting targeted proteomics by SWATH MS. We describe each step of the process, including data acquisition by discovery proteomics, assertion of peptide-spectrum matches (PSMs), generation of consensus spectra and compilation of MS coordinates that uniquely define each targeted peptide. Crucial steps such as false discovery rate (FDR) control, retention time normalization and handling of post-translationally modified peptides are detailed. Finally, we show how to use the library to extract SWATH data with the open-source software Skyline. The protocol takes 2-3 d to complete, depending on the extent of the library and the computational resources available.
Fischer, Martina; Jehmlich, Nico; Rose, Laura; Koch, Sophia; Laue, Michael; Renard, Bernhard Y.; Schmidt, Frank; Heuer, Dagmar
2015-01-01
Chlamydia trachomatis is an important human pathogen that replicates inside the infected host cell in a unique vacuole, the inclusion. The formation of this intracellular bacterial niche is essential for productive Chlamydia infections. Despite its importance for Chlamydia biology, a holistic view on the protein composition of the inclusion, including its membrane, is currently missing. Here we describe the host cell-derived proteome of isolated C. trachomatis inclusions by quantitative proteomics. Computational analysis indicated that the inclusion is a complex intracellular trafficking platform that interacts with host cells’ antero- and retrograde trafficking pathways. Furthermore, the inclusion is highly enriched for sorting nexins of the SNX-BAR retromer, a complex essential for retrograde trafficking. Functional studies showed that in particular, SNX5 controls the C. trachomatis infection and that retrograde trafficking is essential for infectious progeny formation. In summary, these findings suggest that C. trachomatis hijacks retrograde pathways for effective infection. PMID:26042774
A novel strategy for global analysis of the dynamic thiol redox proteome.
Martínez-Acedo, Pablo; Núñez, Estefanía; Gómez, Francisco J Sánchez; Moreno, Margoth; Ramos, Elena; Izquierdo-Álvarez, Alicia; Miró-Casas, Elisabet; Mesa, Raquel; Rodriguez, Patricia; Martínez-Ruiz, Antonio; Dorado, David Garcia; Lamas, Santiago; Vázquez, Jesús
2012-09-01
Nitroxidative stress in cells occurs mainly through the action of reactive nitrogen and oxygen species (RNOS) on protein thiol groups. Reactive nitrogen and oxygen species-mediated protein modifications are associated with pathophysiological states, but can also convey physiological signals. Identification of Cys residues that are modified by oxidative stimuli still poses technical challenges and these changes have never been statistically analyzed from a proteome-wide perspective. Here we show that GELSILOX, a method that combines a robust proteomics protocol with a new computational approach that analyzes variance at the peptide level, allows a simultaneous analysis of dynamic alterations in the redox state of Cys sites and of protein abundance. GELSILOX permits the characterization of the major redox targets of hydrogen peroxide in endothelial cells and reveals that hypoxia induces a significant increase in oxidized thiol status. GELSILOX also detected thiols that are redox-modified by ischemia-reperfusion in heart mitochondria and demonstrated that these alterations are abolished in ischemia-preconditioned animals.
Loch, Christian M; Strickler, James E
2012-11-01
Substrate ubiquitylation is a reversible process critical to cellular homeostasis that is often dysregulated in many human pathologies including cancer and neurodegeneration. Elucidating the mechanistic details of this pathway could unlock a large store of information useful to the design of diagnostic and therapeutic interventions. Proteomic approaches to the questions at hand have generally utilized mass spectrometry (MS), which has been successful in identifying both ubiquitylation substrates and profiling pan-cellular chain linkages, but is generally unable to connect the two. Interacting partners of the deubiquitylating enzymes (DUBs) have also been reported by MS, although substrates of catalytically competent DUBs generally cannot be. Where they have been used towards the study of ubiquitylation, protein microarrays have usually functioned as platforms for the identification of substrates for specific E3 ubiquitin ligases. Here, we report on the first use of protein microarrays to identify substrates of DUBs, and in so doing demonstrate the first example of microarray proteomics involving multiple (i.e., distinct, sequential and opposing) enzymatic activities. This technique demonstrates the selectivity of DUBs for both substrate and type (mono- versus poly-) of ubiquitylation. This work shows that the vast majority of DUBs are monoubiquitylated in vitro, and are incapable of removing this modification from themselves. This work also underscores the critical role of utilizing both ubiquitin chains and substrates when attempting to characterize DUBs. This article is part of a Special Issue entitled: Ubiquitin Drug Discovery and Diagnostics. Copyright © 2012 Elsevier B.V. All rights reserved.
Mistranslation: from adaptations to applications.
Hoffman, Kyle S; O'Donoghue, Patrick; Brandl, Christopher J
2017-11-01
The conservation of the genetic code indicates that there was a single origin, but like all genetic material, the cell's interpretation of the code is subject to evolutionary pressure. Single nucleotide variations in tRNA sequences can modulate codon assignments by altering codon-anticodon pairing or tRNA charging. Either can increase translation errors and even change the code. The frozen accident hypothesis argued that changes to the code would destabilize the proteome and reduce fitness. In studies of model organisms, mistranslation often acts as an adaptive response. These studies reveal evolutionarily conserved mechanisms to maintain proteostasis even during high rates of mistranslation. This review discusses the evolutionary basis of altered genetic codes, how mistranslation is identified, and how deviations to the genetic code are exploited. We revisit early discoveries of genetic code deviations and provide examples of adaptive mistranslation events in nature. Lastly, we highlight innovations in synthetic biology to expand the genetic code. The genetic code is still evolving. Mistranslation increases proteomic diversity that enables cells to survive stress conditions or suppress a deleterious allele. Genetic code variants have been identified by genome and metagenome sequence analyses, suppressor genetics, and biochemical characterization. Understanding the mechanisms of translation and genetic code deviations enables the design of new codes to produce novel proteins. Engineering the translation machinery and expanding the genetic code to incorporate non-canonical amino acids are valuable tools in synthetic biology that are impacting biomedical research. This article is part of a Special Issue entitled "Biochemistry of Synthetic Biology - Recent Developments". Guest Editors: Dr. Ilka Heinemann and Dr. Patrick O'Donoghue. Copyright © 2017 Elsevier B.V. All rights reserved.
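A sense-codon reassignment of the kind this review discusses can be modeled as a change in the codon lookup table. The toy below uses only a four-codon slice of the standard table; the CUG Leu-to-Ser reassignment is a well-documented natural deviation (in Candida species), given here as one concrete example rather than one drawn from the review itself.

```python
# Minimal slice of the standard codon table (RNA codons).
STANDARD = {"CUG": "Leu", "UCU": "Ser", "GCU": "Ala", "UAA": "Stop"}

def translate(codons, table):
    """Translate a list of codons with the given table, stopping at Stop."""
    protein = []
    for codon in codons:
        aa = table[codon]
        if aa == "Stop":
            break
        protein.append(aa)
    return protein

# In Candida albicans, CUG is read mainly as Ser rather than Leu:
# the same transcript yields a different protein under the variant code.
candida = dict(STANDARD, CUG="Ser")
```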
Linkage Of Exposure And Effects Using Genomics, Proteomics, And Metabolomics In Small Fish Models
Poster for the BOSC Computational Toxicology Research Program review. Knowledge of possible toxic mechanisms/modes of action (MOA) of chemicals can provide valuable insights as to appropriate methods for assessing exposure and effects, thereby reducing uncertainties related to e...
EMERGING MOLECULAR COMPUTATIONAL APPROACHES FOR CROSS-SPECIES EXTRAPOLATIONS: A WORKSHOP SUMMARY
Advances in molecular technology have led to the elucidation of full genomic sequences of several multicellular organisms, ranging from nematodes to man. The related molecular field of proteomics and metabolomics are now beginning to advance rapidly as well. In addition, advances...
The Perseus computational platform for comprehensive analysis of (prote)omics data.
Tyanova, Stefka; Temu, Tikira; Sinitcyn, Pavel; Carlson, Arthur; Hein, Marco Y; Geiger, Tamar; Mann, Matthias; Cox, Jürgen
2016-09-01
A main bottleneck in proteomics is the downstream biological analysis of highly multivariate quantitative protein abundance data generated using mass-spectrometry-based analysis. We developed the Perseus software platform (http://www.perseus-framework.org) to support biological and biomedical researchers in interpreting protein quantification, interaction and post-translational modification data. Perseus contains a comprehensive portfolio of statistical tools for high-dimensional omics data analysis covering normalization, pattern recognition, time-series analysis, cross-omics comparisons and multiple-hypothesis testing. A machine learning module supports the classification and validation of patient groups for diagnosis and prognosis, and it also detects predictive protein signatures. Central to Perseus is a user-friendly, interactive workflow environment that provides complete documentation of computational methods used in a publication. All activities in Perseus are realized as plugins, and users can extend the software by programming their own, which can be shared through a plugin store. We anticipate that Perseus's arsenal of algorithms and its intuitive usability will empower interdisciplinary analysis of complex large data sets.
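Multiple-hypothesis testing of the kind offered by platforms such as Perseus often uses the Benjamini-Hochberg step-up procedure. The standalone sketch below illustrates that procedure; it is not Perseus code, and Perseus itself also supports permutation-based FDR and other corrections.

```python
def benjamini_hochberg(pvalues, alpha=0.05):
    """Benjamini-Hochberg step-up procedure.

    Returns the indices of tests declared significant at FDR level alpha:
    the largest rank k with p_(k) <= k/m * alpha, and everything below it.
    """
    order = sorted(range(len(pvalues)), key=lambda i: pvalues[i])
    m = len(pvalues)
    cutoff = 0
    for rank, i in enumerate(order, start=1):
        if pvalues[i] <= rank / m * alpha:
            cutoff = rank
    return sorted(order[:cutoff])
```

Note the step-up logic: a p-value that misses its own threshold can still be declared significant if a larger p-value further down the ranking passes its threshold.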
On the importance of mathematical methods for analysis of MALDI-imaging mass spectrometry data.
Trede, Dennis; Kobarg, Jan Hendrik; Oetjen, Janina; Thiele, Herbert; Maass, Peter; Alexandrov, Theodore
2012-03-21
In the last decade, matrix-assisted laser desorption/ionization (MALDI) imaging mass spectrometry (IMS), also called MALDI-imaging, has proven its potential in proteomics and has been successfully applied to various types of biomedical problems, in particular to histopathological label-free analysis of tissue sections. In histopathology, MALDI-imaging is used as a general analytic tool revealing the functional proteomic structure of tissue sections, and as a discovery tool for detecting new biomarkers discriminating a region annotated by an experienced histologist, in particular for cancer studies. A typical MALDI-imaging data set contains 10⁸ to 10⁹ intensity values occupying more than 1 GB. Analysis and interpretation of such a huge amount of data is a mathematically, statistically and computationally challenging problem. In this paper we review some computational methods for analysis of MALDI-imaging data sets. We discuss the importance of data preprocessing, which typically includes normalization, baseline removal and peak picking, and highlight the importance of image denoising when visualizing IMS data.
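The three preprocessing steps named here can be sketched for a single spectrum as follows. The moving-minimum baseline, TIC normalization, and strict local-maximum peak picking are deliberately simplistic stand-ins; production MALDI-imaging pipelines use far more careful versions of each step.

```python
def preprocess_spectrum(intensities, baseline_window=3):
    """Minimal per-spectrum preprocessing: baseline removal, TIC
    normalization, and peak picking. Returns (normalized, peak_indices)."""
    n = len(intensities)
    # 1. Baseline removal: subtract a moving minimum.
    debased = []
    for i in range(n):
        lo, hi = max(0, i - baseline_window), min(n, i + baseline_window + 1)
        debased.append(intensities[i] - min(intensities[lo:hi]))
    # 2. Normalization to total ion current (TIC).
    tic = sum(debased) or 1.0
    normalized = [v / tic for v in debased]
    # 3. Peak picking: strict local maxima.
    peaks = [i for i in range(1, n - 1)
             if normalized[i] > normalized[i - 1] and normalized[i] > normalized[i + 1]]
    return normalized, peaks
```

In an imaging context this runs once per pixel, which is exactly why the 10⁸-10⁹ intensity values cited above make preprocessing a computational problem in its own right.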
Colzani, Mara; Aldini, Giancarlo; Carini, Marina
2013-10-30
Our current knowledge of the occurrence of proteins covalently modified by reactive carbonyl species (RCS) generated by lipid peroxidation indicates their involvement as pathogenic factors associated with several chronic degenerative diseases. Proteomics and mass spectrometry (MS) in the last decade have played a fundamental role in this context, allowing the demonstration of the formation of RCS-protein adducts in vitro and in vivo under different experimental conditions. In conjunction with functional and computational studies, MS has been widely applied in vitro to study the stoichiometry of protein-RCS adduct formation, and, by identifying the site(s) of modification, to elucidate the molecular mechanisms of protein carbonylation and the physiologic impact of such modification on protein function. This review will provide an update of the MS methods commonly used in detecting and characterizing protein modification by RCS generated by lipid peroxidation, among which 4-hydroxy-trans-2-nonenal and acrolein represent the most studied and cytotoxic compounds. Research in this field, employing state-of-the-art MS, is rapidly and continuously evolving, owing also to the development of suitable derivatization and enrichment procedures enabling improved MS detectability of RCS-protein adducts in complex biological matrices. Considering the emerging role of RCS in several human diseases, unequivocal MS-based analytical approaches are needed to measure the levels of intermediate diagnostic biomarkers for human diseases. This review focuses also on the different MS-based approaches so far developed for RCS-protein adduct quantification. This article is part of a Special Issue entitled: Posttranslational Protein modifications in biology and Medicine. Copyright © 2013 Elsevier B.V. All rights reserved.
ProteoWizard: open source software for rapid proteomics tools development.
Kessner, Darren; Chambers, Matt; Burke, Robert; Agus, David; Mallick, Parag
2008-11-01
The ProteoWizard software project provides a modular and extensible set of open-source, cross-platform tools and libraries. The tools perform proteomics data analyses; the libraries enable rapid tool creation by providing a robust, pluggable development framework that simplifies and unifies data file access, and performs standard proteomics and LCMS dataset computations. The library, written using modern C++ techniques and design principles, contains readers and writers for the mzML data format and supports a variety of platforms with native compilers. The software has been specifically released under the Apache v2 license to ensure it can be used in both academic and commercial projects. In addition to the library, we also introduce a rapidly growing set of companion tools whose implementation helps to illustrate the simplicity of developing applications on top of the ProteoWizard library. Cross-platform software that compiles using native compilers (i.e. GCC on Linux, MSVC on Windows and XCode on OSX) is available for download free of charge at http://proteowizard.sourceforge.net. This website also provides code examples and documentation. It is our hope that the ProteoWizard project will become a standard platform for proteomics development; consequently, code use, contribution and further development are strongly encouraged.
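ProteoWizard's mzML readers are C++ library code; as a library-free illustration of the kind of structure an mzML file encodes, the stdlib sketch below parses a minimal mzML-like document and lists its spectra. The snippet is an invented, non-conformant fragment (real mzML declares an XML namespace and controlled-vocabulary parameters), so treat it only as a shape-of-the-data sketch.

```python
import xml.etree.ElementTree as ET

# Invented minimal mzML-like document; real files are far richer.
doc = """<mzML>
  <run id="r1">
    <spectrumList count="2">
      <spectrum index="0" id="scan=1" defaultArrayLength="3"/>
      <spectrum index="1" id="scan=2" defaultArrayLength="5"/>
    </spectrumList>
  </run>
</mzML>"""

root = ET.fromstring(doc)
spectra = root.findall(".//spectrum")      # all spectrum elements, any depth
print(len(spectra))                        # how many spectra the run holds
print([s.get("id") for s in spectra])      # their native scan identifiers
```

A real consumer would additionally decode the base64-encoded binary m/z and intensity arrays that mzML stores per spectrum; ProteoWizard handles that transparently.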
Voit, Eberhard O
2014-01-01
Probably the most prominent expectation associated with systems biology is the computational support of personalized medicine and predictive health. At least some of this anticipated support is envisioned in the form of disease simulators that will take hundreds of personalized biomarker data as input and allow the physician to explore and optimize possible treatment regimens on a computer before the best treatment is applied to the actual patient in a custom-tailored manner. The key prerequisites for such simulators are mathematical and computational models that not only manage the input data and implement the general physiological and pathological principles of organ systems but also integrate the myriads of details that affect their functionality to a significant degree. Obviously, the construction of such models is an overwhelming task that suggests the long-term development of hierarchical or telescopic approaches representing the physiology of organs and their diseases, first coarsely and over time with increased granularity. This article illustrates the rudiments of such a strategy in the context of cystic fibrosis (CF) of the lung. The starting point is a very simplistic, generic model of inflammation, which has been shown to capture the principles of infection, trauma, and sepsis surprisingly well. The adaptation of this model to CF contains as variables healthy and damaged cells, as well as different classes of interacting cytokines and infectious microbes that are affected by mucus formation, which is the hallmark symptom of the disease (Perez-Vilar and Boucher, 2004) [1]. The simple model represents the overall dynamics of the disease progression, including so-called acute pulmonary exacerbations, quite well, but of course does not provide much detail regarding the specific processes underlying the disease. 
In order to launch the next level of modeling with finer granularity, it is desirable to determine which components of the coarse model contribute most to the disease dynamics. The article introduces for this purpose the concept of module gains or ModGains, which quantify the sensitivity of key disease variables in the higher-level system. In reality, these variables represent complex modules at the next level of granularity, and the computation of ModGains therefore allows an importance ranking of variables that should be replaced with more detailed models. The "hot-swapping" of such detailed modules for former variables is greatly facilitated by the architecture and implementation of the overarching, coarse model structure, which is here formulated with methods of biochemical systems theory (BST). This article is part of a Special Issue entitled: Computational Proteomics, Systems Biology & Clinical Implications. Guest Editor: Yudong Cai. Copyright © 2013 Elsevier B.V. All rights reserved.
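The module-gain idea, ranking variables by the logarithmic sensitivity of a key output to each input, can be illustrated numerically. The sketch below Euler-integrates an invented two-variable linear system to steady state and estimates d ln(D)/d ln(s) by symmetric finite differences; the model, parameters and function names are assumptions for illustration, not the article's CF model or its BST formulation.

```python
import math

def steady_state(s, a=0.2, b=0.5, d=1.0, r=1.0, dt=0.01, steps=20000):
    """Euler-integrate dC/dt = s + a*D - d*C, dD/dt = b*C - r*D to steady state.

    C could stand for a cytokine level and D for damaged cells; the system is
    stable because r*d > a*b. All constants are invented.
    """
    C = D = 0.0
    for _ in range(steps):
        C += dt * (s + a * D - d * C)
        D += dt * (b * C - r * D)
    return C, D

def log_gain(s, rel=0.01):
    """Approximate d ln(D)/d ln(s) by a symmetric relative perturbation of s."""
    _, d_lo = steady_state(s * (1 - rel))
    _, d_hi = steady_state(s * (1 + rel))
    return (math.log(d_hi) - math.log(d_lo)) / (math.log(1 + rel) - math.log(1 - rel))

print(round(log_gain(1.0), 3))  # for this linear toy model the gain is 1.0
```

A ranking of such gains over all inputs would then indicate which variables most deserve replacement by detailed sub-modules, which is the "hot-swapping" step described above.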
20 CFR 404.141 - How we credit quarters of coverage for calendar years before 1978.
Code of Federal Regulations, 2010 CFR
2010-04-01
... FEDERAL OLD-AGE, SURVIVORS AND DISABILITY INSURANCE (1950- ) Insured Status and Quarters of Coverage... (or might not meet as early in the year as otherwise possible) the requirements to be fully or currently insured, to be entitled to a computation or recomputation of your primary insurance amount, or to...
Accelerated Mathematics and High-Ability Students' Math Achievement in Grades Three and Four
ERIC Educational Resources Information Center
Stanley, Ashley M.
2011-01-01
The purpose of this study was to explore the relationship between the use of a computer-managed integrated learning system entitled Accelerated Math (AM) as a supplement to traditional mathematics instruction on achievement as measured by TerraNova achievement tests of third and fourth grade high-ability students. Gender, socioeconomic status, and…
ERIC Educational Resources Information Center
Murphy, Jo-Anne
For a school year, a language arts software program was used to help special needs children in Marblehead, Massachusetts who represented a range of learning disabilities and emotional, behavioral and physical disorders of varying degrees of severity. The program had three major components, entitled "Nouns," "Verbs," and "Adjectives." These…
ERIC Educational Resources Information Center
Pukkaew, Chadchadaporn
2013-01-01
This study assesses the effectiveness of internet-based distance learning (IBDL) through the VClass live e-education platform. The research examines (1) the effectiveness of IBDL for regular and distance students and (2) the distance students' experience of VClass in the IBDL course entitled Computer Programming 1. The study employed the common…
Decreasing Excessive Media Usage while Increasing Physical Activity: A Single-Subject Research Study
ERIC Educational Resources Information Center
Larwin, Karen H.; Larwin, David A.
2008-01-01
The Kaiser Family Foundation released a report entitled "Kids and Media Use" in the United States that concluded that children's use of media--including television, computers, Internet, video games, and phones--may be one of the primary contributors to the poor fitness and obesity of many of today's adolescents. The present study examines the…
Scientific American Frontiers Teaching Guides for Shows 701-705, October 1996-April 1997.
ERIC Educational Resources Information Center
Connecticut Public Television, Hartford.
These teaching guides are meant to supplement the seventh season (1996-97) of the PBS Series "Scientific American Frontiers". Episode 701 is entitled "Inventing the Future: A Tour of the MIT Media Lab" and the teaching guide contains information and activities on a virtual pet dog, computers of the future, a smart car designed…
26 CFR 1.668(b)-2A - Special rules applicable to section 668.
Code of Federal Regulations, 2010 CFR
2010-04-01
... accumulation distribution. To compute A's tax under the exact method for 1974 on the $10,000 from the 1980... (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES Treatment of Excess Distributions of Trusts Applicable to... section 141 for such calendar year, and (6) Was entitled to the personal exemption under section 151 or...
ERIC Educational Resources Information Center
Gose, Ben
1997-01-01
A Mount Holyoke College (Massachusetts) class on computer applications in history and the humanities, entitled "Frankenstein Meets Multimedia," uses topics from the 1818 novel as the basis for students to develop multimedia compact disks about it. The novel is used because its author was heavily influenced by the philosophy of the…
Computational Prediction of Protein-Protein Interactions
Ehrenberger, Tobias; Cantley, Lewis C.; Yaffe, Michael B.
2015-01-01
The prediction of protein-protein interactions and kinase-specific phosphorylation sites on individual proteins is critical for correctly placing proteins within signaling pathways and networks. The importance of this type of annotation continues to grow with the explosion of genomic and proteomic data, particularly with emerging data categorizing posttranslational modifications on a large scale. A variety of computational tools are available for this purpose. In this chapter, we review the general methodologies for these types of computational predictions and present a detailed user-focused tutorial of one such method and computational tool, Scansite, which is freely available to the entire scientific community over the Internet. PMID:25859943
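Tools like Scansite score candidate sites by scanning a sequence with a position-specific scoring matrix. The generic matrix-scanning idea can be sketched as below; the three-position motif and all score values are invented for illustration and bear no relation to Scansite's trained matrices or its scoring scale.

```python
# Toy 3-residue motif model: per-position residue scores (invented numbers).
pssm = [
    {"R": 1.5, "K": 1.2},   # position -1: prefers basic residues
    {"S": 2.0, "T": 1.5},   # position  0: the phosphoacceptor
    {"P": 1.0},             # position +1: proline-directed
]

def score_window(window):
    """Sum per-position scores; unlisted residues get a penalty."""
    return sum(col.get(res, -1.0) for col, res in zip(pssm, window))

def best_site(seq):
    """Slide the matrix along the sequence and return (offset, score) of the best window."""
    scores = [(i, score_window(seq[i:i + 3])) for i in range(len(seq) - 2)]
    return max(scores, key=lambda t: t[1])

print(best_site("AARSPGT"))  # the R-S-P window scores highest
```

Real predictors additionally convert such raw scores into percentile ranks against a reference proteome so that hits from different motifs are comparable.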
Computational biology for ageing
Wieser, Daniela; Papatheodorou, Irene; Ziehm, Matthias; Thornton, Janet M.
2011-01-01
High-throughput genomic and proteomic technologies have generated a wealth of publicly available data on ageing. Easy access to these data, and their computational analysis, is of great importance in order to pinpoint the causes and effects of ageing. Here, we provide a description of the existing databases and computational tools on ageing that are available for researchers. We also describe the computational approaches to data interpretation in the field of ageing including gene expression, comparative and pathway analyses, and highlight the challenges for future developments. We review recent biological insights gained from applying bioinformatics methods to analyse and interpret ageing data in different organisms, tissues and conditions. PMID:21115530
Towards the formal specification of the requirements and design of a processor interface unit
NASA Technical Reports Server (NTRS)
Fura, David A.; Windley, Phillip J.; Cohen, Gerald C.
1993-01-01
Work to formally specify the requirements and design of a Processor Interface Unit (PIU), a single-chip subsystem providing memory interface, bus interface, and additional support services for a commercial microprocessor within a fault-tolerant computer system, is described. This system, the Fault-Tolerant Embedded Processor (FTEP), is targeted towards applications in avionics and space requiring extremely high levels of mission reliability, extended maintenance-free operation, or both. The approaches that were developed for modeling the PIU requirements and for composition of the PIU subcomponents at high levels of abstraction are described. These approaches were used to specify and verify a nontrivial subset of the PIU behavior. The PIU specification in Higher Order Logic (HOL) is documented in a companion NASA contractor report entitled 'Towards the Formal Specification of the Requirements and Design of a Processor Interface Unit - HOL Listings.' The subsequent verification approach and HOL listings are documented in the NASA contractor reports entitled 'Towards the Formal Verification of the Requirements and Design of a Processor Interface Unit' and 'Towards the Formal Verification of the Requirements and Design of a Processor Interface Unit - HOL Listings.'
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bailey, David H.
In a previous humorous note entitled 'Twelve Ways to Fool the Masses,' I outlined twelve common ways in which performance figures for technical computer systems can be distorted. In this paper and accompanying conference talk, I give a reprise of these twelve 'methods' and give some actual examples that have appeared in peer-reviewed literature in years past. I then propose guidelines for reporting performance, the adoption of which would raise the level of professionalism and reduce the level of confusion, not only in the world of device simulation but also in the larger arena of technical computing.
Garrett, Daniel S; Gronenborn, Angela M; Clore, G Marius
2011-12-01
The Contour Approach to Peak Picking was developed to aid in the analysis and interpretation of multidimensional NMR spectra of large biomolecules. In essence, it comprises an interactive graphics software tool to computationally select resonance positions in heteronuclear 3D and 4D spectra. Copyright © 2011. Published by Elsevier Inc.
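The non-interactive core of contour-based peak picking is to report grid points that rise above a chosen contour level and dominate their neighbourhood. A minimal 2D sketch on an invented toy "spectrum" follows; real spectra are much larger, noisier, and three- or four-dimensional.

```python
# Invented 2D intensity grid with two well-separated peaks (9 and 7).
grid = [
    [0, 1, 0, 0],
    [1, 9, 1, 0],
    [0, 1, 0, 7],
    [0, 0, 1, 0],
]

def peaks_2d(grid, threshold):
    """Grid points above the contour threshold that exceed all 8 neighbours."""
    rows, cols = len(grid), len(grid[0])
    found = []
    for i in range(rows):
        for j in range(cols):
            v = grid[i][j]
            if v <= threshold:
                continue
            neighbours = [grid[x][y]
                          for x in range(max(0, i - 1), min(rows, i + 2))
                          for y in range(max(0, j - 1), min(cols, j + 2))
                          if (x, y) != (i, j)]
            if all(v > n for n in neighbours):
                found.append((i, j))
    return found

print(peaks_2d(grid, threshold=5))  # coordinates of the two maxima
```

The interactive element of the published tool lies in letting the spectroscopist adjust the contour level and accept or reject candidates, which no batch sketch can capture.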
MacGilvray, Matthew E; Shishkova, Evgenia; Chasman, Deborah; Place, Michael; Gitter, Anthony; Coon, Joshua J; Gasch, Audrey P
2018-05-01
Cells respond to stressful conditions by coordinating a complex, multi-faceted response that spans many levels of physiology. Much of the response is coordinated by changes in protein phosphorylation. Although the regulators of transcriptome changes during stress are well characterized in Saccharomyces cerevisiae, the upstream regulatory network controlling protein phosphorylation is less well dissected. Here, we developed a computational approach to infer the signaling network that regulates phosphorylation changes in response to salt stress. We developed an approach to link predicted regulators to groups of likely co-regulated phospho-peptides responding to stress, thereby creating new edges in a background protein interaction network. We then use integer linear programming (ILP) to integrate wild type and mutant phospho-proteomic data and predict the network controlling stress-activated phospho-proteomic changes. The network we inferred predicted new regulatory connections between stress-activated and growth-regulating pathways and suggested mechanisms coordinating metabolism, cell-cycle progression, and growth during stress. We confirmed several network predictions with co-immunoprecipitations coupled with mass-spectrometry protein identification and mutant phospho-proteomic analysis. Results show that the cAMP-phosphodiesterase Pde2 physically interacts with many stress-regulated transcription factors targeted by PKA, and that reduced phosphorylation of those factors during stress requires the Rck2 kinase that we show physically interacts with Pde2. Together, our work shows how a high-quality computational network model can facilitate discovery of new pathway interactions during osmotic stress.
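The article's network inference uses integer linear programming over a background interaction network; as a much simpler illustration of the underlying graph problem, the sketch below finds the shortest chain of physical interactions linking a regulator to a stress-responsive target by breadth-first search. The edges are invented for illustration, loosely inspired by the proteins named in the abstract, and "TF1" is a hypothetical transcription factor.

```python
from collections import deque

# Toy undirected interaction network (adjacency lists; edges are assumptions).
network = {
    "Pde2": ["Rck2", "PKA"],
    "Rck2": ["Pde2", "TF1"],
    "PKA":  ["Pde2", "TF1"],
    "TF1":  ["Rck2", "PKA"],
}

def shortest_chain(graph, source, target):
    """Breadth-first search; returns one shortest path or None if unreachable."""
    queue, seen = deque([[source]]), {source}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(shortest_chain(network, "Pde2", "TF1"))
```

An ILP formulation generalizes this by choosing a whole subnetwork that explains many phospho-peptide groups at once while minimizing the number of edges used, which is what makes it suited to integrating wild-type and mutant data.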
Yan, Fang; Liu, Haihong; Liu, Zengrong
2014-01-01
P53 and E2F1 are critical transcription factors involved in the choices between different cell fates including cell differentiation, cell cycle arrest or apoptosis. Recent experiments have shown that two families of microRNAs (miRNAs), p53-responsive miR34 (miRNA-34 a, b and c) and E2F1-inducible miR449 (miRNA-449 a, b and c) are potent inducers of these different fates and might have an important role in sensitizing cancer cells to drug treatment and tumor suppression. Identifying the mechanisms responsible for the combinatorial regulatory roles of these two transcription factors and two miRNAs is an important and challenging problem. Here, based in part on the model proposed in Tongli Zhang et al. (2007), we developed a mathematical model of the decision process and explored the combinatorial regulation between these two transcription factors and two miRNAs in response to DNA damage. By analyzing nonlinear dynamic behaviors of the model, we found that p53 exhibits pulsatile behavior. Moreover, a comparison is given to reveal the subtle differences of the cell fate decision process between regulation and deregulation of miR34 on E2F1. It predicts that miR34 plays a critical role in promoting cell cycle arrest. In addition, a computer simulation result also predicts that the miR449 is necessary for apoptosis in response to sustained DNA damage. In agreement with experimental observations, our model can account for the intricate regulatory relationship between these two transcription factors and two miRNAs in the cell fate decision process after DNA damage. These theoretical results indicate that miR34 and miR449 are effective tumor suppressors and play critical roles in cell fate decisions. The work provides a dynamic mechanism that shows how cell fate decisions are coordinated by two transcription factors and two miRNAs. This article is part of a Special Issue entitled: Computational Proteomics, Systems Biology and Clinical Implications. Guest Editor: Yudong Cai. 
Crown Copyright © 2013. All rights reserved.
Academic Entitlement and Academic Performance in Graduating Pharmacy Students
Barclay, Sean M.; Stolte, Scott K.
2014-01-01
Objectives. To determine a measurable definition of academic entitlement, measure academic entitlement in graduating doctor of pharmacy (PharmD) students, and compare the academic performance between students identified as more or less academically entitled. Methods. Graduating students at a private health sciences institution were asked to complete an electronic survey instrument that included demographic data, academic performance, and 2 validated academic entitlement instruments. Results. One hundred forty-one of 243 students completed the survey instrument. Fourteen (10%) students scored greater than the median total points possible on 1 or both of the academic entitlement instruments and were categorized as more academically entitled. Less academically entitled students required fewer reassessments and less remediation than more academically entitled students. The highest scoring academic entitlement items related to student perception of what professors should do for them. Conclusion. Graduating pharmacy students with lower levels of academic entitlement were more academically successful than more academically entitled students. Moving from an expert opinion approach to evidence-based decision-making in the area of academic entitlement will allow pharmacy educators to identify interventions that will decrease academic entitlement and increase academic success in pharmacy students. PMID:25147388
48 CFR 1852.213-70 - Offeror Representations and Certifications-Other Than Commercial Items.
Code of Federal Regulations, 2013 CFR
2013-10-01
...: “Limited rights data” and “Restricted computer software” are defined in the contract clause entitled...) Service-disabled veteran means a veteran, as defined in 38 U.S.C. 101(2), with a disability that is service-connected, as defined in 38 U.S.C. 101(16). “Small business concern” means a concern, including...
48 CFR 1852.213-70 - Offeror Representations and Certifications-Other Than Commercial Items.
Code of Federal Regulations, 2014 CFR
2014-10-01
...: “Limited rights data” and “Restricted computer software” are defined in the contract clause entitled...) Service-disabled veteran means a veteran, as defined in 38 U.S.C. 101(2), with a disability that is service-connected, as defined in 38 U.S.C. 101(16). “Small business concern” means a concern, including...
48 CFR 1852.213-70 - Offeror Representations and Certifications-Other Than Commercial Items.
Code of Federal Regulations, 2011 CFR
2011-10-01
...: “Limited rights data” and “Restricted computer software” are defined in the contract clause entitled...) Service-disabled veteran means a veteran, as defined in 38 U.S.C. 101(2), with a disability that is service-connected, as defined in 38 U.S.C. 101(16). “Small business concern” means a concern, including...
48 CFR 1852.213-70 - Offeror Representations and Certifications-Other Than Commercial Items.
Code of Federal Regulations, 2012 CFR
2012-10-01
...: “Limited rights data” and “Restricted computer software” are defined in the contract clause entitled...) Service-disabled veteran means a veteran, as defined in 38 U.S.C. 101(2), with a disability that is service-connected, as defined in 38 U.S.C. 101(16). “Small business concern” means a concern, including...
Code of Federal Regulations, 2010 CFR
2010-04-01
... pension based on your noncovered employment. 404.213 Section 404.213 Employees' Benefits SOCIAL SECURITY... you are eligible for a pension based on your noncovered employment. (a) When applicable. Except as... entitled to a monthly pension(s) for which you first became eligible after 1985 based in whole or part on...
Fifth SIAM conference on geometric design 97: Final program and abstracts. Final technical report
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1997-12-31
The meeting was divided into the following sessions: (1) CAD/CAM; (2) Curve/Surface Design; (3) Geometric Algorithms; (4) Multiresolution Methods; (5) Robotics; (6) Solid Modeling; and (7) Visualization. This report contains the abstracts of papers presented at the meeting. Preceding the conference there was a short course entitled "Wavelets for Geometric Modeling and Computer Graphics".
Brambila-Tapia, Aniel Jessica Leticia; Perez-Rueda, Ernesto; Barrios, Humberto; Dávalos-Rodríguez, Nory Omayra; Dávalos-Rodríguez, Ingrid Patricia; Cardona-Muñoz, Ernesto Germán; Salazar-Páramo, Mario
2017-08-01
A systematic analysis of beta-lactamases based on comparative proteomics has not been performed thus far. In this report, we searched for the presence of beta-lactam-related proteins in 591 bacterial proteomes belonging to 52 species that are pathogenic to humans. The amino acid sequences for 19 different types of beta-lactamases (ACT, CARB, CifA, CMY, CTX, FOX, GES, GOB, IMP, IND, KPC, LEN, OKP, OXA, OXY, SHV, TEM, NDM, and VIM) were obtained from the ARG-ANNOT database and were used to construct 19 HMM profiles, which were used to identify potential beta-lactamases in the completely sequenced bacterial proteomes. A total of 2877 matches that included the word "beta-lactamase" and/or "penicillin" in the functional annotation and/or in any of its regions were obtained. These enzymes were mainly described as "penicillin-binding proteins," "beta-lactamases," and "metallo-beta-lactamases" and were observed in 47 of the 52 species studied. In addition, proteins classified as "beta-lactamases" were observed in 39 of the species included. A positive correlation between the number of beta-lactam-related proteins per species and the proteome size was observed (R = 0.78, P < 0.00001). This correlation partially explains the high presence of beta-lactam-related proteins in large proteomes, such as Nocardia brasiliensis, Bacillus anthracis, and Mycobacterium tuberculosis, along with their absence in small proteomes, such as Chlamydia spp. and Mycoplasma spp. We detected only five types of beta-lactamases (TEM, SHV, CTX, IMP, and OXA) and other related proteins in particular species that corresponded with those reported in the literature. We additionally detected other potential species-specific beta-lactamases that have not yet been reported. In the future, better results will be achieved due to more accurate sequence annotations and a greater number of sequenced genomes.
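The reported link between proteome size and beta-lactam-related protein counts is a Pearson correlation (R = 0.78). A minimal sketch of that computation follows; the data points below are invented examples, not the study's 591 proteomes.

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented illustration: larger proteomes tend to yield more HMM matches.
proteome_sizes = [800, 2000, 4500, 7000, 9000]   # proteins per proteome
hits = [2, 4, 11, 14, 20]                        # beta-lactam-related matches
print(round(pearson_r(proteome_sizes, hits), 3))
```

A correlation this strong is what justifies the article's point that proteome size alone explains much, though not all, of the variation in matches across species.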
Crysalis: an integrated server for computational analysis and design of protein crystallization.
Wang, Huilin; Feng, Liubin; Zhang, Ziding; Webb, Geoffrey I; Lin, Donghai; Song, Jiangning
2016-02-24
The failure of multi-step experimental procedures to yield diffraction-quality crystals is a major bottleneck in protein structure determination. Accordingly, several bioinformatics methods have been successfully developed and employed to select crystallizable proteins. Unfortunately, the majority of existing in silico methods only allow the prediction of crystallization propensity, seldom enabling computational design of protein mutants that can be targeted for enhancing protein crystallizability. Here, we present Crysalis, an integrated crystallization analysis tool that builds on support-vector regression (SVR) models to facilitate computational protein crystallization prediction, analysis, and design. More specifically, the functionality of this new tool includes: (1) rapid selection of target crystallizable proteins at the proteome level, (2) identification of site non-optimality for protein crystallization and systematic analysis of all potential single-point mutations that might enhance protein crystallization propensity, and (3) annotation of target protein based on predicted structural properties. We applied the design mode of Crysalis to identify site non-optimality for protein crystallization on a proteome-scale, focusing on proteins currently classified as non-crystallizable. Our results revealed that site non-optimality is based on biases related to residues, predicted structures, physicochemical properties, and sequence loci, which provides in-depth understanding of the features influencing protein crystallization. Crysalis is freely available at http://nmrcen.xmu.edu.cn/crysalis/.
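Crysalis builds on trained support-vector regression models; as a stand-in that needs no ML library, the sketch below ranks candidate sequences by a toy linear propensity score over simple composition features. The feature set, weights and example sequences are all invented for illustration and have no relation to Crysalis's actual models or feature engineering.

```python
HYDROPHOBIC = set("AVLIMFWC")  # a common hydrophobic residue grouping

def features(seq):
    """A few simple composition features of a protein sequence."""
    n = len(seq)
    return {
        "hydrophobic_frac": sum(r in HYDROPHOBIC for r in seq) / n,
        "gly_frac": seq.count("G") / n,
        "length": n,
    }

def toy_score(seq):
    """Invented linear score: moderately hydrophobic, shorter sequences rank higher."""
    f = features(seq)
    return 1.0 - abs(f["hydrophobic_frac"] - 0.4) - 0.001 * f["length"]

# Hypothetical candidates ranked for crystallization screening priority.
candidates = {"p1": "MKTAVLIGGS", "p2": "GGGGGGGGGG", "p3": "AVLIAVLIAV"}
ranked = sorted(candidates, key=lambda k: toy_score(candidates[k]), reverse=True)
print(ranked)
```

Crysalis's design mode goes further by rescoring every single-point mutant of a sequence, which in this sketch would amount to recomputing the score for each substituted variant.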
Comparative Proteome Analysis of Brucella melitensis Vaccine Strain Rev 1 and a Virulent Strain, 16M
Eschenbrenner, Michel; Wagner, Mary Ann; Horn, Troy A.; Kraycer, Jo Ann; Mujer, Cesar V.; Hagius, Sue; Elzer, Philip; DelVecchio, Vito G.
2002-01-01
The genus Brucella consists of bacterial pathogens that cause brucellosis, a major zoonotic disease characterized by undulant fever and neurological disorders in humans. Among the different Brucella species, Brucella melitensis is considered the most virulent. Despite successful use in animals, the vaccine strains remain infectious for humans. To understand the mechanism of virulence in B. melitensis, the proteome of vaccine strain Rev 1 was analyzed by two-dimensional gel electrophoresis and compared to that of virulent strain 16M. The two strains were grown under identical laboratory conditions. Computer-assisted analysis of the two B. melitensis proteomes revealed proteins expressed in either 16M or Rev 1, as well as up- or down-regulation of proteins specific for each of these strains. These proteins were identified by peptide mass fingerprinting. It was found that certain metabolic pathways may be deregulated in Rev 1. Expression of an immunogenic 31-kDa outer membrane protein, proteins utilized for iron acquisition, and those that play a role in sugar binding, lipid degradation, and amino acid binding was altered in Rev 1. PMID:12193611
Havugimana, Pierre C; Hu, Pingzhao; Emili, Andrew
2017-10-01
Elucidation of the networks of physical (functional) interactions present in cells and tissues is fundamental for understanding the molecular organization of biological systems, the mechanistic basis of essential and disease-related processes, and for functional annotation of previously uncharacterized proteins (via guilt-by-association or -correlation). After a decade in the field, we felt it timely to document our own experiences in the systematic analysis of protein interaction networks. Areas covered: Researchers worldwide have contributed innovative experimental and computational approaches that have driven the rapidly evolving field of 'functional proteomics'. These include mass spectrometry-based methods to characterize macromolecular complexes on a global-scale and sophisticated data analysis tools - most notably machine learning - that allow for the generation of high-quality protein association maps. Expert commentary: Here, we recount some key lessons learned, with an emphasis on successful workflows, and challenges, arising from our own and other groups' ongoing efforts to generate, interpret and report proteome-scale interaction networks in increasingly diverse biological contexts.
NASA Astrophysics Data System (ADS)
The, Matthew; MacCoss, Michael J.; Noble, William S.; Käll, Lukas
2016-11-01
Percolator is a widely used software tool that increases yield in shotgun proteomics experiments and assigns reliable statistical confidence measures, such as q values and posterior error probabilities, to peptides and peptide-spectrum matches (PSMs) from such experiments. Percolator's processing speed has been sufficient for typical data sets consisting of hundreds of thousands of PSMs. With our new scalable approach, we can now also analyze millions of PSMs in a matter of minutes on a commodity computer. Furthermore, with the increasing awareness of the need for reliable statistics on the protein level, we compared several easy-to-understand protein inference methods and implemented the best-performing method—grouping proteins by their corresponding sets of theoretical peptides and then considering only the best-scoring peptide for each protein—in the Percolator package. We used Percolator 3.0 to analyze the data from a recent study of the draft human proteome containing 25 million spectra (PM:24870542). The source code and Ubuntu, Windows, MacOS, and Fedora binary packages are available from http://percolator.ms/ under an Apache 2.0 license.
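The best-performing inference method described above, grouping proteins that share an identical set of theoretical peptides and scoring each group by its single best-scoring peptide, is simple enough to sketch directly. The protein-to-peptide map and peptide scores below are invented examples, not Percolator's data structures.

```python
# Invented example: protA and protB are indistinguishable by their peptides.
protein_peptides = {
    "protA": {"PEPTIDEK", "SEQUENCER"},
    "protB": {"PEPTIDEK", "SEQUENCER"},
    "protC": {"UNIQUEPEPK"},
}
peptide_scores = {"PEPTIDEK": 0.95, "SEQUENCER": 0.80, "UNIQUEPEPK": 0.60}

def infer_groups(protein_peptides, peptide_scores):
    """Group proteins with identical peptide sets; score each group by its best peptide."""
    groups = {}
    for prot, peps in protein_peptides.items():
        groups.setdefault(frozenset(peps), []).append(prot)
    return sorted(
        ((sorted(prots), max(peptide_scores[p] for p in peps))
         for peps, prots in groups.items()),
        key=lambda t: -t[1],
    )

print(infer_groups(protein_peptides, peptide_scores))
```

Reporting indistinguishable proteins as one group avoids double-counting shared evidence, which is what makes the protein-level statistics well calibrated.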
Proteomic and Biochemical Analyses of the Cotyledon and Root of Flooding-Stressed Soybean Plants
Komatsu, Setsuko; Makino, Takahiro; Yasue, Hiroshi
2013-01-01
Background Flooding significantly reduces the growth and grain yield of soybean plants. Proteomic and biochemical techniques were used to determine whether the function of the cotyledon and root is altered in soybean under flooding stress. Results Two-day-old soybean plants were flooded for 2 days, after which proteins from the root and cotyledon were extracted for proteomic analysis. In response to flooding stress, the abundance of 73 and 28 proteins was significantly altered in the root and cotyledon, respectively. The accumulation of only one protein, the 70 kDa heat shock protein (HSP70) (Glyma17g08020.1), increased in both organs following flooding. The protein abundance of HSP70 and the biophoton emission in the cotyledon were higher than those detected in the root under flooding stress. Computed tomography and elemental analyses revealed that flooding stress decreases the number of calcium oxalate crystals in the cotyledon, indicating that calcium ion levels were elevated in the cotyledon under flooding stress. Conclusion These results suggest that calcium might play a role through HSP70 in the cotyledon under flooding stress. PMID:23799004
HiQuant: Rapid Postquantification Analysis of Large-Scale MS-Generated Proteomics Data.
Bryan, Kenneth; Jarboui, Mohamed-Ali; Raso, Cinzia; Bernal-Llinares, Manuel; McCann, Brendan; Rauch, Jens; Boldt, Karsten; Lynn, David J
2016-06-03
Recent advances in mass-spectrometry-based proteomics are now facilitating ambitious large-scale investigations of the spatial and temporal dynamics of the proteome; however, the increasing size and complexity of these data sets is overwhelming current downstream computational methods, specifically those that support the postquantification analysis pipeline. Here we present HiQuant, a novel application that enables the design and execution of a postquantification workflow, including common data-processing steps, such as assay normalization and grouping, and experimental replicate quality control and statistical analysis. HiQuant also enables the interpretation of results generated from large-scale data sets by supporting interactive heatmap analysis and also the direct export to Cytoscape and Gephi, two leading network analysis platforms. HiQuant may be run via a user-friendly graphical interface and also supports complete one-touch automation via a command-line mode. We evaluate HiQuant's performance by analyzing a large-scale, complex interactome mapping data set and demonstrate a 200-fold improvement in the execution time over current methods. We also demonstrate HiQuant's general utility by analyzing proteome-wide quantification data generated from both a large-scale public tyrosine kinase siRNA knock-down study and an in-house investigation into the temporal dynamics of the KSR1 and KSR2 interactomes. Download HiQuant, sample data sets, and supporting documentation at http://hiquant.primesdb.eu .
Image analysis tools and emerging algorithms for expression proteomics
English, Jane A.; Lisacek, Frederique; Morris, Jeffrey S.; Yang, Guang-Zhong; Dunn, Michael J.
2012-01-01
Since their origins in academic endeavours in the 1970s, computational analysis tools have matured into a number of established commercial packages that underpin research in expression proteomics. In this paper we describe the image analysis pipeline for the established 2-D Gel Electrophoresis (2-DE) technique of protein separation, and by first covering signal analysis for Mass Spectrometry (MS), we also explain the current image analysis workflow for the emerging high-throughput ‘shotgun’ proteomics platform of Liquid Chromatography coupled to MS (LC/MS). The bioinformatics challenges for both methods are illustrated and compared, whilst existing commercial and academic packages and their workflows are described from both a user’s and a technical perspective. Attention is given to the importance of sound statistical treatment of the resultant quantifications in the search for differential expression. Despite wide availability of proteomics software, a number of challenges have yet to be overcome regarding algorithm accuracy, objectivity and automation, generally due to deterministic spot-centric approaches that discard information early in the pipeline, propagating errors. We review recent advances in signal and image analysis algorithms in 2-DE, MS, LC/MS and Imaging MS. Particular attention is given to wavelet techniques, automated image-based alignment and differential analysis in 2-DE, Bayesian peak mixture models and functional mixed modelling in MS, and group-wise consensus alignment methods for LC/MS. PMID:21046614
ERIC Educational Resources Information Center
Magana, Alejandra J.; Taleyarkhan, Manaz; Alvarado, Daniela Rivera; Kane, Michael; Springer, John; Clase, Kari
2014-01-01
Bioinformatics education can be broadly defined as the teaching and learning of the use of computer and information technology, along with mathematical and statistical analysis for gathering, storing, analyzing, interpreting, and integrating data to solve biological problems. The recent surge of genomics, proteomics, and structural biology in the…
Joyce, Brendan; Lee, Danny; Rubio, Alex; Ogurtsov, Aleksey; Alves, Gelio; Yu, Yi-Kuo
2018-03-15
RAId is a software package that has been actively developed for the past 10 years for computationally and visually analyzing MS/MS data. Founded on rigorous statistical methods, RAId's core program computes accurate E-values for peptides and proteins identified during database searches. Making this robust tool readily accessible to the proteomics community by developing a graphical user interface (GUI) is our main goal here. We have constructed a graphical user interface to facilitate the use of RAId on users' local machines. Written in Java, RAId_GUI not only makes it easy to execute RAId but also provides tools for data/spectra visualization, MS-product analysis, molecular isotopic distribution analysis, and graphing the retrieval versus the proportion of false discoveries. The results viewer displays the analysis results and allows users to download them. Both the knowledge-integrated organismal databases and the code package (containing source code, the graphical user interface, and a user manual) are available for download at https://www.ncbi.nlm.nih.gov/CBBresearch/Yu/downloads/raid.html .
Biochemical systems approaches for the analysis of histone modification readout.
Soldi, Monica; Bremang, Michael; Bonaldi, Tiziana
2014-08-01
Chromatin is the macromolecular nucleoprotein complex that governs the organization of genetic material in the nucleus of eukaryotic cells. In chromatin, DNA is packed with histone proteins into nucleosomes. Core histones are prototypes of hyper-modified proteins, being decorated by a large number of site-specific reversible and irreversible post-translational modifications (PTMs), which contribute to the maintenance and modulation of chromatin plasticity, gene activation, and a variety of other biological processes and disease states. The observations of the variety, frequency and co-occurrence of histone modifications in distinct patterns at specific genomic loci have led to the idea that histone PTMs (hPTMs) can create a molecular barcode, read by effector proteins that translate it into a specific transcriptional state, or process, on the underlying DNA. However, despite the fact that this histone-code hypothesis was proposed more than 10 years ago, the molecular details of its working mechanisms are only partially characterized. In particular, two questions deserve specific investigation: how the different modifications associate and synergize into patterns, and how these PTM configurations are read and translated by multi-protein complexes into a specific functional outcome on the genome. Mass spectrometry (MS) has emerged as a versatile tool to investigate chromatin biology, useful both for identifying and validating hPTMs and for dissecting the molecular determinants of histone modification readout systems. We review here the MS techniques and the proteomics methods that have been developed to address these fundamental questions in epigenetics research, emphasizing approaches based on the proteomic dissection of distinct native chromatin regions, with a critical evaluation of their present challenges and future potential. This article is part of a Special Issue entitled: Molecular mechanisms of histone modification function. Copyright © 2014 Elsevier B.V. All rights reserved.
Standardization approaches in absolute quantitative proteomics with mass spectrometry.
Calderón-Celis, Francisco; Encinar, Jorge Ruiz; Sanz-Medel, Alfredo
2017-07-31
Mass spectrometry-based approaches have enabled important breakthroughs in quantitative proteomics in the last decades. This development is reflected in the better quantitative assessment of protein levels as well as in the understanding of post-translational modifications and of protein complexes and networks. Nowadays, the focus of quantitative proteomics has shifted from the relative determination of proteins (i.e., differential expression between two or more cellular states) to absolute quantity determination, required for a more thorough characterization of biological models and comprehension of proteome dynamism, as well as for the search for and validation of novel protein biomarkers. However, the physico-chemical environment of the analyte species strongly affects the ionization efficiency in most mass spectrometry (MS) types, which thereby requires the use of specially designed standardization approaches to provide absolute quantifications. The most common such approaches nowadays include (i) the use of stable isotope-labeled peptide standards, isotopologues of the target proteotypic peptides expected after tryptic digestion of the target protein; (ii) the use of stable isotope-labeled protein standards to compensate for sample preparation, sample loss, and proteolysis steps; (iii) isobaric reagents, which after fragmentation in the MS/MS analysis provide a final detectable mass shift and can be used to tag both analyte and standard samples; (iv) label-free approaches in which the absolute quantitative data are obtained not through any kind of labeling but from computational normalization of the raw data and adequate standards; (v) elemental mass spectrometry-based workflows able to provide direct absolute quantification of peptides/proteins that contain an ICP-detectable element.
A critical insight from the Analytical Chemistry perspective of the different standardization approaches and their combinations used so far for absolute quantitative MS-based (molecular and elemental) proteomics is provided in this review. © 2017 Wiley Periodicals, Inc.
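The arithmetic behind approach (i) above, isotope dilution with a stable isotope-labeled peptide standard, is a simple ratio: analyte and isotopologue co-elute and ionize essentially identically, so the light/heavy peak-area ratio scales the known spiked amount into an absolute amount. The numbers below are invented for illustration, not from the review.

```python
# Minimal sketch of absolute quantification by isotope dilution:
# a known amount of heavy-labeled peptide standard is spiked in, and the
# endogenous (light) amount is read off the light/heavy area ratio.
def absolute_amount(light_area, heavy_area, spiked_fmol):
    """Endogenous peptide amount = (light/heavy area ratio) x spiked standard."""
    return (light_area / heavy_area) * spiked_fmol

# 2.5e6 counts for the endogenous peptide, 1.0e6 for the labeled standard,
# with 50 fmol of standard spiked into the digest:
print(absolute_amount(2.5e6, 1.0e6, 50.0))  # 125.0 fmol
```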
ERIC Educational Resources Information Center
Illinois State Board of Higher Education, Springfield.
This proposal calls on the state of Illinois to initiate a statewide computing and telecommunications network that would give its residents access to higher education, advanced training, and electronic information resources. The proposed network, entitled Illinois Century Network, would link all higher education institutions in the state to…
New techniques for positron emission tomography in the study of human neurological disorders
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuhl, D.E.
1993-01-01
This progress report describes accomplishments of four programs. The four programs are entitled (1) Faster, simpler processing of positron-computing precursors: New physicochemical approaches, (2) Novel solid phase reagents and methods to improve radiosynthesis and isotope production, (3) Quantitative evaluation of the extraction of information from PET images, and (4) Optimization of tracer kinetic methods for radioligand studies in PET.
ERIC Educational Resources Information Center
Stewart, Phillip Michael, Jr.
2013-01-01
Games in science education is emerging as a popular topic of scholarly inquiry. The National Research Council recently published a report detailing a research agenda for games and science education entitled "Learning Science Through Computer Games and Simulations" (2011). The report recommends moving beyond typical proof-of-concept…
20 CFR 217.8 - When one application satisfies the filing requirement for other benefits.
Code of Federal Regulations, 2012 CFR
2012-04-01
... employee died. (g) A child's annuity or child's full-time student annuity if the child of the employee was.... (d) A widow(er)'s annuity if the widow(er) was entitled to a spouse annuity in the month before the month the employee died. (e) A widow(er)'s annuity if the widow(er) was included in the computation of...
New intracellular activities of matrix metalloproteinases shine in the moonlight.
Jobin, Parker G; Butler, Georgina S; Overall, Christopher M
2017-11-01
Adaptation of a single protein to perform multiple independent functions facilitates functional plasticity of the proteome, allowing a limited number of protein-coding genes to perform a multitude of cellular processes. Multifunctionality is achievable by post-translational modifications and by modulating subcellular localization. Matrix metalloproteinases (MMPs), classically viewed as degraders of the extracellular matrix (ECM) responsible for matrix protein turnover, are more recently recognized as regulators of a range of extracellular bioactive molecules including chemokines, cytokines, and their binders. However, growing evidence has convincingly identified select MMPs in intracellular compartments with unexpected physiological and pathological roles. Intracellular MMPs have both proteolytic and non-proteolytic functions, including signal transduction and transcription factor activity, thereby challenging their traditional designation as extracellular proteases. This review highlights current knowledge of the subcellular location and activity of these "moonlighting" MMPs. Intracellular roles herald a new era of MMP research, rejuvenating interest in targeting these proteases in therapeutic strategies. This article is part of a Special Issue entitled: Matrix Metalloproteinases edited by Rafael Fridman. Copyright © 2017 Elsevier B.V. All rights reserved.
The New Microbiology: a conference at the Institut de France.
Radoshevich, Lilliana; Bierne, Hélène; Ribet, David; Cossart, Pascale
2012-08-01
In May 2012, three European Academies held a conference on the present and future of microbiology. The conference, entitled "The New Microbiology", was a joint effort of the French Académie des sciences, the German National Academy of Sciences Leopoldina and the British Royal Society. The organizers - Pascale Cossart and Philippe Sansonetti from the "Académie des sciences", David Holden and Richard Moxon from the "Royal Society", and Jörg Hacker and Jürgen Hesseman from the "Leopoldina Nationale Akademie der Wissenschaften" - wanted to highlight the current renaissance in the field of microbiology, mostly due to the advent of technological developments allowing for single-cell analysis, rapid and inexpensive genome-wide comparisons, sophisticated microscopy and quantitative large-scale studies of RNA regulation and proteomics. The conference took place in the historical Palais de l'Institut de France in Paris with the strong support of Jean-François Bach, Secrétaire Perpétuel of the Académie des sciences. Copyright © 2012 Académie des sciences. Published by Elsevier SAS. All rights reserved.
Genome and proteome annotation: organization, interpretation and integration
Reeves, Gabrielle A.; Talavera, David; Thornton, Janet M.
2008-01-01
Recent years have seen a huge increase in the generation of genomic and proteomic data. This has been due to improvements in current biological methodologies, the development of new experimental techniques and the use of computers as support tools. All these raw data are useless if they cannot be properly analysed, annotated, stored and displayed. Consequently, a vast number of resources have been created to present the data to the wider community. Annotation tools and databases provide the means to disseminate these data and to comprehend their biological importance. This review examines the various aspects of annotation: type, methodology and availability. Moreover, it places special emphasis on novel annotation fields, such as that of phenotypes, and highlights recent efforts focused on integrating annotations. PMID:19019817
Cloud Computing and Its Applications in GIS
NASA Astrophysics Data System (ADS)
Kang, Cao
2011-12-01
Cloud computing is a novel computing paradigm that offers highly scalable and highly available distributed computing services. The objectives of this research are to: 1. analyze and understand cloud computing and its potential for GIS; 2. discover the feasibilities of migrating truly spatial GIS algorithms to distributed computing infrastructures; 3. explore a solution to host and serve large volumes of raster GIS data efficiently and speedily. These objectives thus form the basis for three professional articles. The first article is entitled "Cloud Computing and Its Applications in GIS". This paper introduces the concept, structure, and features of cloud computing. Features of cloud computing such as scalability, parallelization, and high availability make it a very capable computing paradigm. Unlike High Performance Computing (HPC), cloud computing uses inexpensive commodity computers. The uniform administration systems in cloud computing make it easier to use than GRID computing. Potential advantages of cloud-based GIS systems such as lower barrier to entry are consequently presented. Three cloud-based GIS system architectures are proposed: public cloud-based GIS systems, private cloud-based GIS systems and hybrid cloud-based GIS systems. Public cloud-based GIS systems provide the lowest entry barriers for users among these three architectures, but their advantages are offset by data security and privacy related issues. Private cloud-based GIS systems provide the best data protection, though they have the highest entry barriers. Hybrid cloud-based GIS systems provide a compromise between these extremes. The second article is entitled "A cloud computing algorithm for the calculation of Euclidian distance for raster GIS". Euclidean distance is a truly spatial GIS algorithm. 
Classical algorithms such as the pushbroom and growth ring techniques require computational propagation through the entire raster image, which makes it incompatible with the distributed nature of cloud computing. This paper presents a parallel Euclidean distance algorithm that works seamlessly with the distributed nature of cloud computing infrastructures. The mechanism of this algorithm is to subdivide a raster image into sub-images and wrap them with a one pixel deep edge layer of individually computed distance information. Each sub-image is then processed by a separate node, after which the resulting sub-images are reassembled into the final output. It is shown that while any rectangular sub-image shape can be used, those approximating squares are computationally optimal. This study also serves as a demonstration of this subdivide and layer-wrap strategy, which would enable the migration of many truly spatial GIS algorithms to cloud computing infrastructures. However, this research also indicates that certain spatial GIS algorithms such as cost distance cannot be migrated by adopting this mechanism, which presents significant challenges for the development of cloud-based GIS systems. The third article is entitled "A Distributed Storage Schema for Cloud Computing based Raster GIS Systems". This paper proposes a NoSQL Database Management System (NDDBMS) based raster GIS data storage schema. NDDBMS has good scalability and is able to use distributed commodity computers, which make it superior to Relational Database Management Systems (RDBMS) in a cloud computing environment. In order to provide optimized data service performance, the proposed storage schema analyzes the nature of commonly used raster GIS data sets. It discriminates two categories of commonly used data sets, and then designs corresponding data storage models for both categories. 
As a result, the proposed storage schema is capable of hosting and serving enormous volumes of raster GIS data speedily and efficiently on cloud computing infrastructures. In addition, the schema also takes advantage of the data compression characteristics of quadtrees, thus promoting efficient data storage. Through this assessment of cloud computing technology, the exploration of the challenges and solutions to the migration of GIS algorithms to cloud computing infrastructures, and the examination of strategies for serving large amounts of GIS data in a cloud computing infrastructure, this dissertation lends support to the feasibility of building a cloud-based GIS system. However, there are still challenges that need to be addressed before a full-scale functional cloud-based GIS system can be successfully implemented. (Abstract shortened by UMI.)
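The subdivide-and-reassemble idea in the second article can be illustrated on a toy raster. For simplicity this sketch gives every tile the full global list of source cells rather than the one-pixel computed edge layer the dissertation proposes; the point is only that independently processed tiles reassemble into the same distance map a single node would produce. All grids and coordinates are invented.

```python
# Toy sketch of distributing a raster Euclidean distance computation over
# tiles. Each tile computes the brute-force distance to the nearest source
# cell for its own pixels; tiles are then merged and checked against a
# whole-image computation. Simplified relative to the dissertation's
# one-pixel edge-wrapping scheme.
import math

def distance_tile(rows, cols, sources):
    """Brute-force Euclidean distance to the nearest source, per cell."""
    return {(r, c): min(math.hypot(r - sr, c - sc) for sr, sc in sources)
            for r in rows for c in cols}

sources = [(0, 0), (3, 3)]  # "source" cells in a 4x4 raster
n = 4
# Split the raster into two horizontal tiles and process independently...
tile_a = distance_tile(range(0, 2), range(n), sources)
tile_b = distance_tile(range(2, 4), range(n), sources)
# ...then reassemble and compare with a single-node computation.
reassembled = {**tile_a, **tile_b}
whole = distance_tile(range(n), range(n), sources)
print(reassembled == whole)  # True
```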
38 CFR 21.9570 - Transfer of entitlement.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 38 Pensions, Bonuses, and Veterans' Relief 2 2014-07-01 2014-07-01 false Transfer of entitlement... (CONTINUED) VOCATIONAL REHABILITATION AND EDUCATION Post-9/11 GI Bill Transfer of Entitlement to Basic Educational Assistance to Dependents § 21.9570 Transfer of entitlement. An individual entitled to educational...
A Typology of Students Based on Academic Entitlement
ERIC Educational Resources Information Center
Luckett, Michael; Trocchia, Philip J.; Noel, Noel Mark; Marlin, Dan
2017-01-01
Two hundred ninety-three university business students were surveyed using an academic entitlement (AE) scale updated to include new technologies. Using factor analysis, three components of AE were identified: grade entitlement, behavioral entitlement, and service entitlement. A k-means clustering procedure was then applied to identify four groups…
Efficient Multicriteria Protein Structure Comparison on Modern Processor Architectures
Manolakos, Elias S.
2015-01-01
Fast increasing computational demand for all-to-all protein structures comparison (PSC) is a result of three confounding factors: rapidly expanding structural proteomics databases, high computational complexity of pairwise protein comparison algorithms, and the trend in the domain towards using multiple criteria for protein structures comparison (MCPSC) and combining results. We have developed a software framework that exploits many-core and multicore CPUs to implement efficient parallel MCPSC in modern processors based on three popular PSC methods, namely, TMalign, CE, and USM. We evaluate and compare the performance and efficiency of the two parallel MCPSC implementations using Intel's experimental many-core Single-Chip Cloud Computer (SCC) as well as Intel's Core i7 multicore processor. We show that the 48-core SCC is more efficient than the latest generation Core i7, achieving a speedup factor of 42 (efficiency of 0.9), making many-core processors an exciting emerging technology for large-scale structural proteomics. We compare and contrast the performance of the two processors on several datasets and also show that MCPSC outperforms its component methods in grouping related domains, achieving a high F-measure of 0.91 on the benchmark CK34 dataset. The software implementation for protein structure comparison using the three methods and combined MCPSC, along with the developed underlying rckskel algorithmic skeletons library, is available via GitHub. PMID:26605332
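The abstract above reports MCPSC reaching an F-measure of 0.91 on the CK34 benchmark. As a reminder of what that metric means, here is the standard F1 computation from precision and recall; the counts below are illustrative, not taken from the paper.

```python
# F-measure (F1): the harmonic mean of precision and recall.
def f_measure(tp, fp, fn):
    precision = tp / (tp + fp)  # fraction of predicted pairs that are correct
    recall = tp / (tp + fn)     # fraction of true pairs that are recovered
    return 2 * precision * recall / (precision + recall)

# Illustrative counts chosen so precision = recall = 0.91:
print(round(f_measure(tp=91, fp=9, fn=9), 2))  # 0.91
```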
Meher, Prabina K.; Sahu, Tanmaya K.; Gahoi, Shachi; Rao, Atmakuri R.
2018-01-01
Heat shock proteins (HSPs) play a pivotal role in cell growth and variability. Since conventional approaches are expensive and voluminous protein sequence information is available in the post-genomic era, development of an automated and accurate computational tool is highly desirable for prediction of HSPs, their families and sub-types. Thus, we propose a computational approach for reliable prediction of all these components in a single framework and with higher accuracy as well. The proposed approach achieved an overall accuracy of ~84% in predicting HSPs, ~97% in predicting six different families of HSPs, and ~94% in predicting four types of DnaJ proteins, with benchmark datasets. The developed approach also achieved higher accuracy as compared to most of the existing approaches. For easy prediction of HSPs by experimental scientists, a user-friendly web server ir-HSP is made freely accessible at http://cabgrid.res.in:8080/ir-hsp. The ir-HSP was further evaluated for proteome-wide identification of HSPs by using proteome datasets of eight different species, and ~50% of the predicted HSPs in each species were found to be annotated with InterPro HSP families/domains. Thus, the developed computational method is expected to supplement the currently available approaches for prediction of HSPs, to the extent of their families and sub-types. PMID:29379521
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuhl, D.E.
1993-06-01
This progress report describes accomplishments of four programs. The four programs are entitled (1) Faster, simpler processing of positron-computing precursors: New physicochemical approaches, (2) Novel solid phase reagents and methods to improve radiosynthesis and isotope production, (3) Quantitative evaluation of the extraction of information from PET images, and (4) Optimization of tracer kinetic methods for radioligand studies in PET.
A Contract Management Guide for Air Force Environmental Restoration
1991-09-01
literature in their area of interest in an attempt to locate a market niche. These topical studies often take the form of guides to specific areas of the...computer remote bulletin board system entitled the Hazardous Materials Information Exchange (HMIX). HMIX has information on training for response to... market for two reasons: the size of the appropriations under the Superfund Amendment and Reauthorization Act, and the huge number of contaminated
User Interface on the World Wide Web: How to Implement a Multi-Level Program Online
NASA Technical Reports Server (NTRS)
Cranford, Jonathan W.
1995-01-01
The objective of this Langley Aerospace Research Summer Scholars (LARSS) research project was to write a user interface that utilizes current World Wide Web (WWW) technologies for an existing computer program written in C, entitled LaRCRisk. The project entailed researching data presentation and script execution on the WWW and then writing input/output procedures for the database management portion of LaRCRisk.
20 CFR 410.215 - Duration of entitlement; parent, brother, or sister.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Duration of entitlement; parent, brother, or...; Duration of Entitlement; Filing of Claims and Evidence § 410.215 Duration of entitlement; parent, brother, or sister. (a) parent, brother, or sister is entitled to benefits beginning with the month all the...
20 CFR 410.214 - Conditions of entitlement; parent, brother, or sister.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Conditions of entitlement; parent, brother...; Duration of Entitlement; Filing of Claims and Evidence § 410.214 Conditions of entitlement; parent, brother, or sister. An individual is entitled to benefits if: (a) Such individual: (1) Is the parent, brother...
Stable isotope labelling methods in mass spectrometry-based quantitative proteomics.
Chahrour, Osama; Cobice, Diego; Malone, John
2015-09-10
Mass-spectrometry based proteomics has evolved as a promising technology over the last decade and is undergoing a dramatic development in a number of different areas, such as mass spectrometric instrumentation, peptide identification algorithms and bioinformatic computational data analysis. The improved methodology allows quantitative measurement of relative or absolute protein amounts, which is essential for gaining insights into their functions and dynamics in biological systems. Several different strategies involving stable isotope labels (ICAT, ICPL, IDBEST, iTRAQ, TMT, IPTL, SILAC), label-free statistical assessment approaches (MRM, SWATH) and absolute quantification methods (AQUA) are possible, each having specific strengths and weaknesses. Inductively coupled plasma mass spectrometry (ICP-MS), which is still widely recognised as an elemental detector, has recently emerged as a complementary technique to the previous methods. The new application area for ICP-MS is targeting the fast-growing field of proteomics related research, allowing absolute protein quantification using suitable elemental based tags. This document describes the different stable isotope labelling methods which incorporate metabolic labelling in live cells, ICP-MS based detection and post-harvest chemical label tagging for protein quantification, in addition to summarising their pros and cons. Copyright © 2015 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Roslyn N.; Sanford, James A.; Park, Jea H.
Towards developing a systems-level pathobiological understanding of Salmonella enterica, we performed a subcellular proteomic analysis of this pathogen grown under standard laboratory and infection-mimicking conditions in vitro. Analysis of proteins from cytoplasmic, inner membrane, periplasmic, and outer membrane fractions yielded coverage of over 30% of the theoretical proteome. Confident subcellular location could be assigned to over 1000 proteins, with good agreement between experimentally observed location and predicted/known protein properties. Comparison of protein location under the different environmental conditions provided insight into dynamic protein localization and possible moonlighting (multiple function) activities. Notable examples of dynamic localization were the response regulators of two-component regulatory systems (e.g., ArcB, PhoQ). The DNA-binding protein Dps that is generally regarded as cytoplasmic was significantly enriched in the outer membrane for all growth conditions examined, suggestive of moonlighting activities. These observations imply the existence of unknown transport mechanisms and novel functions for a subset of Salmonella proteins. Overall, this work provides a catalog of experimentally verified subcellular protein location for Salmonella and a framework for further investigations using computational modeling.
Jing, Li; Amster, I Jonathan
2009-10-15
Offline high performance liquid chromatography combined with matrix assisted laser desorption and Fourier transform ion cyclotron resonance mass spectrometry (HPLC-MALDI-FTICR/MS) provides the means to rapidly analyze complex mixtures of peptides, such as those produced by proteolytic digestion of a proteome. This method is particularly useful for making quantitative measurements of changes in protein expression by using (15)N-metabolic labeling. Proteolytic digestion of combined labeled and unlabeled proteomes produces complex mixtures with many mass overlaps when analyzed by HPLC-MALDI-FTICR/MS. A significant challenge to data analysis is the matching of pairs of peaks which represent an unlabeled peptide and its labeled counterpart. We have developed an algorithm and incorporated it into a computer program which significantly accelerates the interpretation of (15)N metabolic labeling data by automating the process of identifying unlabeled/labeled peak pairs. The algorithm takes advantage of the high resolution and mass accuracy of FTICR mass spectrometry. The algorithm is shown to be able to successfully identify the (15)N/(14)N peptide pairs and calculate peptide relative abundance ratios in highly complex mixtures from the proteolytic digest of a whole organism protein extract.
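The pairing step described in this abstract lends itself to a compact illustration. The following is a minimal sketch, not the authors' program: a fully 15N-enriched peptide with n nitrogen atoms is heavier than its 14N counterpart by n times the 15N-14N mass difference, so candidate pairs are peaks whose gap is an integer multiple of that shift within a ppm tolerance. All function names and tolerances here are illustrative assumptions.

```python
# Sketch of automated 14N/15N peak-pair detection, assuming full
# 15N enrichment. Names and tolerances are illustrative.
DELTA_15N = 0.997035  # mass difference 15N - 14N in Da

def find_pairs(masses, n_range=(1, 40), tol_ppm=5.0):
    """Return (light, heavy, n_nitrogens) triples whose mass gap
    equals an integer number of 15N-14N shifts within tolerance."""
    masses = sorted(masses)
    pairs = []
    for i, light in enumerate(masses):
        for heavy in masses[i + 1:]:
            gap = heavy - light
            n = round(gap / DELTA_15N)  # nearest integer nitrogen count
            if not (n_range[0] <= n <= n_range[1]):
                continue
            # ppm error of the observed gap vs. the ideal n*delta shift
            err_ppm = abs(gap - n * DELTA_15N) / light * 1e6
            if err_ppm <= tol_ppm:
                pairs.append((light, heavy, n))
    return pairs

peaks = [1000.500, 1005.485, 1200.700]  # 1005.485 ~ 1000.500 + 5 shifts
print(find_pairs(peaks))
```

The high mass accuracy of FTICR is what makes this workable: at 5 ppm the integer nitrogen count is nearly always unambiguous, whereas at low accuracy many spurious pairings would pass the gap test.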
Code of Federal Regulations, 2011 CFR
2011-10-01
... 42 Public Health 2 2011-10-01 2011-10-01 false Individual under age 65 who is entitled to social... is entitled to social security or railroad retirement disability benefits. (a) Basic requirements. An...) Entitled or deemed entitled to social security disability benefits as an insured individual, child, widow...
ERIC Educational Resources Information Center
Peirone, Amy; Maticka-Tyndale, Eleanor
2017-01-01
Academic entitlement, a term that defines students' expectations of academic success independent of performance, has been linked with a number of maladaptive behaviors. This study examined the potential relationship between academic entitlement and prospective workplace entitlement in a sample of Canadian students (N=1024) using an online survey.…
Code of Federal Regulations, 2013 CFR
2013-10-01
... 42 Public Health 2 2013-10-01 2013-10-01 false Individual under age 65 who is entitled to social... is entitled to social security or railroad retirement disability benefits. (a) Basic requirements. An...) Entitled or deemed entitled to social security disability benefits as an insured individual, child, widow...
Code of Federal Regulations, 2012 CFR
2012-10-01
... 42 Public Health 2 2012-10-01 2012-10-01 false Individual under age 65 who is entitled to social... is entitled to social security or railroad retirement disability benefits. (a) Basic requirements. An...) Entitled or deemed entitled to social security disability benefits as an insured individual, child, widow...
Code of Federal Regulations, 2010 CFR
2010-10-01
... 42 Public Health 2 2010-10-01 2010-10-01 false Individual under age 65 who is entitled to social... is entitled to social security or railroad retirement disability benefits. (a) Basic requirements. An...) Entitled or deemed entitled to social security disability benefits as an insured individual, child, widow...
The Air Force In Silico -- Computational Biology in 2025
2007-11-01
and chromosome) these new fields are commonly referred to as "~omics." Proteomics, transcriptomics, metabolomics, epigenomics, physiomics... Bioinformatics, 2006, http://journal.imbio.de/ http://www-bm.ipk-gatersleben.de/stable/php/journal/articles/pdf/jib-22.pdf (accessed 30 September...Chirino, G. Tansley and I. Dryden, "The implications for Bioinformatics of integration across physical scales," Journal of Integrative Bioinformatics
Molecules to maps: tools for visualization and interaction in support of computational biology.
Kraemer, E T; Ferrin, T E
1998-01-01
The volume of data produced by genome projects, X-ray crystallography, NMR spectroscopy, and electron and confocal microscopy presents the bioinformatics community with new challenges for analyzing, understanding, and exchanging this data. At the 1998 Pacific Symposium on Biocomputing, a track entitled 'Molecules to Maps: Tools for Visualization and Interaction in Computational Biology' provided tool developers and users with the opportunity to discuss advances in tools and techniques to assist scientists in evaluating, absorbing, navigating, and correlating this sea of information, through visualization and user interaction. In this paper we present these advances and discuss some of the challenges that remain to be solved.
A Computational Tool to Detect and Avoid Redundancy in Selected Reaction Monitoring
Röst, Hannes; Malmström, Lars; Aebersold, Ruedi
2012-01-01
Selected reaction monitoring (SRM), also called multiple reaction monitoring, has become an invaluable tool for targeted quantitative proteomic analyses, but its application can be compromised by nonoptimal selection of transitions. In particular, complex backgrounds may cause ambiguities in SRM measurement results because peptides with interfering transitions similar to those of the target peptide may be present in the sample. Here, we developed a computer program, the SRMCollider, that calculates nonredundant theoretical SRM assays, also known as unique ion signatures (UIS), for a given proteomic background. We show theoretically that UIS of three transitions suffice to conclusively identify 90% of all yeast peptides and 85% of all human peptides. Using predicted retention times, the SRMCollider also simulates time-scheduled SRM acquisition, which reduces the number of interferences to consider and leads to fewer transitions necessary to construct an assay. By integrating experimental fragment ion intensities from large scale proteome synthesis efforts (SRMAtlas) with the information content-based UIS, we combine two orthogonal approaches to create high quality SRM assays ready to be deployed. We provide a user friendly, open source implementation of an algorithm to calculate UIS of any order that can be accessed online at http://www.srmcollider.org to find interfering transitions. Finally, our tool can also simulate the specificity of novel data-independent MS acquisition methods in Q1–Q3 space. This allows us to predict parameters for these methods that deliver a specificity comparable with that of SRM. Using SRM interference information in addition to other sources of information can increase the confidence in an SRM measurement. We expect that the consideration of information content will become a standard step in SRM assay design and analysis, facilitated by the SRMCollider. PMID:22535207
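The unique ion signature (UIS) idea at the core of this abstract is easy to sketch: a set of transitions identifies the target peptide only if no co-eluting background peptide shares all of them within tolerance. The toy example below is not the SRMCollider code; the fragment lists, tolerance, and function names are invented for illustration.

```python
from itertools import combinations

def shares_all(transitions, fragments, tol=0.05):
    """True if every transition matches some background fragment."""
    return all(any(abs(t - f) <= tol for f in fragments) for t in transitions)

def min_uis_order(target_frags, background, max_order=3, tol=0.05):
    """Smallest transition set that no background peptide fully
    interferes with; None if no UIS up to max_order exists."""
    for order in range(1, max_order + 1):
        for combo in combinations(target_frags, order):
            if not any(shares_all(combo, bg, tol) for bg in background):
                return order, combo
    return None

# Target fragments and two interfering background peptides (toy m/z values)
target = [300.2, 450.3, 600.4]
background = [[300.21, 450.31], [600.41]]
print(min_uis_order(target, background))
```

Here no single transition is unique (each is shadowed by a background peptide), but the pair (300.2, 600.4) is, mirroring the paper's finding that small transition sets usually suffice once interference is accounted for.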
Graumann, Johannes; Scheltema, Richard A; Zhang, Yong; Cox, Jürgen; Mann, Matthias
2012-03-01
In the analysis of complex peptide mixtures by MS-based proteomics, many more peptides elute at any given time than can be identified and quantified by the mass spectrometer. This makes it desirable to optimally allocate peptide sequencing and narrow mass range quantification events. In computer science, intelligent agents are frequently used to make autonomous decisions in complex environments. Here we develop and describe a framework for intelligent data acquisition and real-time database searching and showcase selected examples. The intelligent agent is implemented in the MaxQuant computational proteomics environment, termed MaxQuant Real-Time. It analyzes data as it is acquired on the mass spectrometer, constructs isotope patterns and SILAC pair information as well as controls MS and tandem MS events based on real-time and prior MS data or external knowledge. Re-implementing a top10 method in the intelligent agent yields similar performance to the data dependent methods running on the mass spectrometer itself. We demonstrate the capabilities of MaxQuant Real-Time by creating a real-time search engine capable of identifying peptides "on-the-fly" within 30 ms, well within the time constraints of a shotgun fragmentation "topN" method. The agent can focus sequencing events onto peptides of specific interest, such as those originating from a specific gene ontology (GO) term, or peptides that are likely modified versions of already identified peptides. Finally, we demonstrate enhanced quantification of SILAC pairs whose ratios were poorly defined in survey spectra. MaxQuant Real-Time is flexible and can be applied to a large number of scenarios that would benefit from intelligent, directed data acquisition. Our framework should be especially useful for new instrument types, such as the quadrupole-Orbitrap, that are currently becoming available.
Expert system for computer-assisted annotation of MS/MS spectra.
Neuhauser, Nadin; Michalski, Annette; Cox, Jürgen; Mann, Matthias
2012-11-01
An important step in mass spectrometry (MS)-based proteomics is the identification of peptides by their fragment spectra. Regardless of the identification score achieved, almost all tandem-MS (MS/MS) spectra contain remaining peaks that are not assigned by the search engine. These peaks may be explainable by human experts but the scale of modern proteomics experiments makes this impractical. In computer science, Expert Systems are a mature technology to implement a list of rules generated by interviews with practitioners. We here develop such an Expert System, making use of literature knowledge as well as a large body of high mass accuracy and pure fragmentation spectra. Interestingly, we find that even with high mass accuracy data, rule sets can quickly become too complex, leading to over-annotation. Therefore we establish a rigorous false discovery rate, calculated by random insertion of peaks from a large collection of other MS/MS spectra, and use it to develop an optimized knowledge base. This rule set correctly annotates almost all peaks of medium or high abundance. For high resolution HCD data, median intensity coverage of fragment peaks in MS/MS spectra increases from 58% by search engine annotation alone to 86%. The resulting annotation performance surpasses a human expert, especially on complex spectra such as those of larger phosphorylated peptides. Our system is also applicable to high resolution collision-induced dissociation data. It is available both as a part of MaxQuant and via a webserver that only requires an MS/MS spectrum and the corresponding peptides sequence, and which outputs publication quality, annotated MS/MS spectra (www.biochem.mpg.de/mann/tools/). It provides expert knowledge to beginners in the field of MS-based proteomics and helps advanced users to focus on unusual and possibly novel types of fragment ions.
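The decoy-insertion FDR described above can be illustrated compactly. This sketch is not the published Expert System: it reduces "annotation" to matching theoretical fragment masses within a tolerance, then estimates the false annotation rate as the fraction of decoy peaks, drawn from unrelated spectra, that the rule still annotates. All names and values are illustrative assumptions.

```python
import random

def annotates(mz, theoretical, tol=0.02):
    """Toy annotation rule: peak matches a theoretical fragment mass."""
    return any(abs(mz - t) <= tol for t in theoretical)

def annotation_fdr(theoretical, decoy_pool, n_decoys=1000, tol=0.02, seed=1):
    """Fraction of randomly inserted decoy peaks that get annotated."""
    rng = random.Random(seed)
    decoys = [rng.choice(decoy_pool) for _ in range(n_decoys)]
    false_hits = sum(annotates(mz, theoretical, tol) for mz in decoys)
    return false_hits / n_decoys

theo = [200.10, 300.15, 400.20]          # theoretical fragments (toy)
pool = [200.10, 250.00, 333.33, 999.99]  # peaks from unrelated spectra
print(annotation_fdr(theo, pool))
```

The paper's point is that this rate climbs quickly as rules are added, so the knowledge base is pruned until the decoy annotation rate stays below a fixed threshold.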
Ji, Zhiwei; Wang, Bing; Yan, Ke; Dong, Ligang; Meng, Guanmin; Shi, Lei
2017-12-21
In recent years, the integration of 'omics' technologies, high performance computation, and mathematical modeling of biological processes marks that systems biology has started to fundamentally impact the way of approaching drug discovery. The LINCS public data warehouse provides detailed information about cell responses with various genetic and environmental stressors. It can be greatly helpful in developing new drugs and therapeutics, as well as improving the situations of lacking effective drugs, drug resistance and relapse in cancer therapies, etc. In this study, we developed a Ternary status based Integer Linear Programming (TILP) method to infer cell-specific signaling pathway network and predict compounds' treatment efficacy. The novelty of our study is that phosphoproteomic data and prior knowledge are combined for modeling and optimizing the signaling network. To test the power of our approach, a generic pathway network was constructed for a human breast cancer cell line MCF7; and the TILP model was used to infer MCF7-specific pathways with a set of phosphoproteomic data collected from ten representative small molecule chemical compounds (most of them were studied in breast cancer treatment). Cross-validation indicated that the MCF7-specific pathway network inferred by TILP was reliable in predicting a compound's efficacy. Finally, we applied TILP to re-optimize the inferred cell-specific pathways and predict the outcomes of five small compounds (carmustine, doxorubicin, GW-8510, daunorubicin, and verapamil), which were rarely used in the clinic for breast cancer. In the simulation, the proposed approach enables us to identify a compound's treatment efficacy qualitatively and quantitatively, and the cross validation analysis indicated good accuracy in predicting effects of five compounds. In summary, the TILP model is useful for discovering new drugs for clinical use, and also for elucidating the potential mechanisms of a compound to targets.
Expert System for Computer-assisted Annotation of MS/MS Spectra*
Neuhauser, Nadin; Michalski, Annette; Cox, Jürgen; Mann, Matthias
2012-01-01
An important step in mass spectrometry (MS)-based proteomics is the identification of peptides by their fragment spectra. Regardless of the identification score achieved, almost all tandem-MS (MS/MS) spectra contain remaining peaks that are not assigned by the search engine. These peaks may be explainable by human experts but the scale of modern proteomics experiments makes this impractical. In computer science, Expert Systems are a mature technology to implement a list of rules generated by interviews with practitioners. We here develop such an Expert System, making use of literature knowledge as well as a large body of high mass accuracy and pure fragmentation spectra. Interestingly, we find that even with high mass accuracy data, rule sets can quickly become too complex, leading to over-annotation. Therefore we establish a rigorous false discovery rate, calculated by random insertion of peaks from a large collection of other MS/MS spectra, and use it to develop an optimized knowledge base. This rule set correctly annotates almost all peaks of medium or high abundance. For high resolution HCD data, median intensity coverage of fragment peaks in MS/MS spectra increases from 58% by search engine annotation alone to 86%. The resulting annotation performance surpasses a human expert, especially on complex spectra such as those of larger phosphorylated peptides. Our system is also applicable to high resolution collision-induced dissociation data. It is available both as a part of MaxQuant and via a webserver that only requires an MS/MS spectrum and the corresponding peptides sequence, and which outputs publication quality, annotated MS/MS spectra (www.biochem.mpg.de/mann/tools/). It provides expert knowledge to beginners in the field of MS-based proteomics and helps advanced users to focus on unusual and possibly novel types of fragment ions. PMID:22888147
Graumann, Johannes; Scheltema, Richard A.; Zhang, Yong; Cox, Jürgen; Mann, Matthias
2012-01-01
In the analysis of complex peptide mixtures by MS-based proteomics, many more peptides elute at any given time than can be identified and quantified by the mass spectrometer. This makes it desirable to optimally allocate peptide sequencing and narrow mass range quantification events. In computer science, intelligent agents are frequently used to make autonomous decisions in complex environments. Here we develop and describe a framework for intelligent data acquisition and real-time database searching and showcase selected examples. The intelligent agent is implemented in the MaxQuant computational proteomics environment, termed MaxQuant Real-Time. It analyzes data as it is acquired on the mass spectrometer, constructs isotope patterns and SILAC pair information as well as controls MS and tandem MS events based on real-time and prior MS data or external knowledge. Re-implementing a top10 method in the intelligent agent yields similar performance to the data dependent methods running on the mass spectrometer itself. We demonstrate the capabilities of MaxQuant Real-Time by creating a real-time search engine capable of identifying peptides “on-the-fly” within 30 ms, well within the time constraints of a shotgun fragmentation “topN” method. The agent can focus sequencing events onto peptides of specific interest, such as those originating from a specific gene ontology (GO) term, or peptides that are likely modified versions of already identified peptides. Finally, we demonstrate enhanced quantification of SILAC pairs whose ratios were poorly defined in survey spectra. MaxQuant Real-Time is flexible and can be applied to a large number of scenarios that would benefit from intelligent, directed data acquisition. Our framework should be especially useful for new instrument types, such as the quadrupole-Orbitrap, that are currently becoming available. PMID:22171319
Integrating Omics Technologies to Study Pulmonary Physiology and Pathology at the Systems Level
Pathak, Ravi Ramesh; Davé, Vrushank
2014-01-01
Assimilation and integration of “omics” technologies, including genomics, epigenomics, proteomics, and metabolomics has readily altered the landscape of medical research in the last decade. The vast and complex nature of omics data can only be interpreted by linking molecular information at the organismic level, forming the foundation of systems biology. Research in pulmonary biology/medicine has necessitated integration of omics, network, systems and computational biology data to differentially diagnose, interpret, and prognosticate pulmonary diseases, facilitating improvement in therapy and treatment modalities. This review describes how to leverage this emerging technology in understanding pulmonary diseases at the systems level, called a “systomic” approach. Considering the operational wholeness of cellular and organ systems, the diseased genome, proteome, and metabolome need to be conceptualized at the systems level to understand disease pathogenesis and progression. Currently available omics technology and resources require a certain degree of training and proficiency in addition to dedicated hardware and applications, making them relatively less user friendly for the pulmonary biologist and clinicians. Herein, we discuss the various strategies, computational tools and approaches required to study pulmonary diseases at the systems level for biomedical scientists and clinical researchers. PMID:24802001
LXtoo: an integrated live Linux distribution for the bioinformatics community
2012-01-01
Background Recent advances in high-throughput technologies dramatically increase biological data generation. However, many research groups lack computing facilities and specialists. This is an obstacle that remains to be addressed. Here, we present a Linux distribution, LXtoo, to provide a flexible computing platform for bioinformatics analysis. Findings Unlike most of the existing live Linux distributions for bioinformatics limiting their usage to sequence analysis and protein structure prediction, LXtoo incorporates a comprehensive collection of bioinformatics software, including data mining tools for microarray and proteomics, protein-protein interaction analysis, and computationally complex tasks like molecular dynamics. Moreover, most of the programs have been configured and optimized for high performance computing. Conclusions LXtoo aims to provide well-supported computing environment tailored for bioinformatics research, reducing duplication of efforts in building computing infrastructure. LXtoo is distributed as a Live DVD and freely available at http://bioinformatics.jnu.edu.cn/LXtoo. PMID:22813356
LXtoo: an integrated live Linux distribution for the bioinformatics community.
Yu, Guangchuang; Wang, Li-Gen; Meng, Xiao-Hua; He, Qing-Yu
2012-07-19
Recent advances in high-throughput technologies dramatically increase biological data generation. However, many research groups lack computing facilities and specialists. This is an obstacle that remains to be addressed. Here, we present a Linux distribution, LXtoo, to provide a flexible computing platform for bioinformatics analysis. Unlike most of the existing live Linux distributions for bioinformatics limiting their usage to sequence analysis and protein structure prediction, LXtoo incorporates a comprehensive collection of bioinformatics software, including data mining tools for microarray and proteomics, protein-protein interaction analysis, and computationally complex tasks like molecular dynamics. Moreover, most of the programs have been configured and optimized for high performance computing. LXtoo aims to provide well-supported computing environment tailored for bioinformatics research, reducing duplication of efforts in building computing infrastructure. LXtoo is distributed as a Live DVD and freely available at http://bioinformatics.jnu.edu.cn/LXtoo.
NASA Astrophysics Data System (ADS)
Gennett, Zachary Andrew
Millennial Generation students bring significant learning and teaching challenges to the classroom, because of their unique learning styles, breadth of interests related to social and environmental issues, and intimate experiences with technology. As a result, there has been an increased willingness at many universities to experiment with pedagogical strategies that depart from a traditional "learning by listening" model, and move toward more innovative methods involving active learning through computer games. In particular, current students typically express a strong interest in sustainability in which economic concerns must be weighed relative to environmental and social responsibilities. A game-based setting could prove very effective for fostering an operational understanding of these tradeoffs, and especially the social dimension which remains largely underdeveloped relative to the economic and environmental aspects. Through an examination of the educational potential of computer games, this study hypothesizes that to acquire the skills necessary to manage and understand the complexities of sustainability, Millennial Generation students must be engaged in active learning exercises that present dynamic problems and foster a high level of social interaction. This has led to the development of an educational computer game, entitled Shortfall, which simulates a business milieu for testing alternative paths regarding the principles of sustainability. This study examines the evolution of Shortfall from an educational board game that teaches the principles of environmentally benign manufacturing, to a completely networked computer game, entitled Shortfall Online that teaches the principles of sustainability. A capital-based theory of sustainability is adopted to more accurately convey the tradeoffs and opportunity costs among economic prosperity, environmental preservation, and societal responsibilities. 
While the economic and environmental aspects of sustainability have received considerable attention in traditional pedagogical approaches, specific focus is provided for the social dimension of sustainability, as it had remained largely underdeveloped. To measure social sustainability and provide students with an understanding of its significance, a prospective metric utilizing a social capital peer-evaluation survey, unique to Shortfall, is developed.
Stekhoven, Daniel J; Omasits, Ulrich; Quebatte, Maxime; Dehio, Christoph; Ahrens, Christian H
2014-03-17
Proteomics data provide unique insights into biological systems, including the predominant subcellular localization (SCL) of proteins, which can reveal important clues about their functions. Here we analyzed data of a complete prokaryotic proteome expressed under two conditions mimicking interaction of the emerging pathogen Bartonella henselae with its mammalian host. Normalized spectral count data from cytoplasmic, total membrane, inner and outer membrane fractions allowed us to identify the predominant SCL for 82% of the identified proteins. The spectral count proportion of total membrane versus cytoplasmic fractions indicated the propensity of cytoplasmic proteins to co-fractionate with the inner membrane, and enabled us to distinguish cytoplasmic, peripheral inner membrane and bona fide inner membrane proteins. Principal component analysis and k-nearest neighbor classification training on selected marker proteins or predominantly localized proteins, allowed us to determine an extensive catalog of at least 74 expressed outer membrane proteins, and to extend the SCL assignment to 94% of the identified proteins, including 18% where in silico methods gave no prediction. Suitable experimental proteomics data combined with straightforward computational approaches can thus identify the predominant SCL on a proteome-wide scale. Finally, we present a conceptual approach to identify proteins potentially changing their SCL in a condition-dependent fashion. The work presented here describes the first prokaryotic proteome-wide subcellular localization (SCL) dataset for the emerging pathogen B. henselae (Bhen). The study indicates that suitable subcellular fractionation experiments combined with straight-forward computational analysis approaches assessing the proportion of spectral counts observed in different subcellular fractions are powerful for determining the predominant SCL of a large percentage of the experimentally observed proteins. 
This includes numerous cases where in silico prediction methods do not provide any prediction. Avoiding a treatment with harsh conditions, cytoplasmic proteins tend to co-fractionate with proteins of the inner membrane fraction, indicative of close functional interactions. The spectral count proportion (SCP) of total membrane versus cytoplasmic fractions allowed us to obtain a good indication about the relative proximity of individual protein complex members to the inner membrane. Using principal component analysis and k-nearest neighbor approaches, we were able to extend the percentage of proteins with a predominant experimental localization to over 90% of all expressed proteins and identified a set of at least 74 outer membrane (OM) proteins. In general, OM proteins represent a rich source of candidates for the development of urgently needed new therapeutics in combat of resurgence of infectious disease and multi-drug resistant bacteria. Finally, by comparing the data from two infection biology relevant conditions, we conceptually explore methods to identify and visualize potential candidates that may partially change their SCL in these different conditions. The data are made available to researchers as a SCL compendium for Bhen and as an assistance in further improving in silico SCL prediction algorithms. Copyright © 2014 Elsevier B.V. All rights reserved.
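The classification step described in this study can be sketched in a few lines: each protein becomes a vector of spectral-count proportions across subcellular fractions, and unknowns take the majority label of their k nearest marker proteins. The example below uses invented marker values, not the study's data, and plain Python rather than the authors' pipeline.

```python
from math import dist  # Euclidean distance (Python 3.8+)

def normalize(counts):
    """Spectral counts per fraction -> proportion vector."""
    total = sum(counts)
    return [c / total for c in counts]

def knn_label(x, markers, k=3):
    """Majority label among the k markers nearest to vector x.
    markers: list of (proportion_vector, label) pairs."""
    nearest = sorted(markers, key=lambda m: dist(x, m[0]))[:k]
    labels = [lab for _, lab in nearest]
    return max(set(labels), key=labels.count)

# fractions: (cytoplasmic, inner membrane, outer membrane) -- toy markers
markers = [
    ([0.9, 0.1, 0.0], "cytoplasm"),
    ([0.8, 0.2, 0.0], "cytoplasm"),
    ([0.1, 0.8, 0.1], "inner membrane"),
    ([0.0, 0.9, 0.1], "inner membrane"),
    ([0.0, 0.1, 0.9], "outer membrane"),
    ([0.1, 0.0, 0.9], "outer membrane"),
]
unknown = normalize([2, 3, 45])  # observed spectral counts per fraction
print(knn_label(unknown, markers))
```

Normalizing to proportions is what makes proteins of very different abundance comparable; the same idea underlies the paper's spectral count proportion (SCP) measure for ranking proximity to the inner membrane.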
Understanding Emergency Care Delivery Through Computer Simulation Modeling.
Laker, Lauren F; Torabi, Elham; France, Daniel J; Froehle, Craig M; Goldlust, Eric J; Hoot, Nathan R; Kasaie, Parastu; Lyons, Michael S; Barg-Walkow, Laura H; Ward, Michael J; Wears, Robert L
2018-02-01
In 2017, Academic Emergency Medicine convened a consensus conference entitled, "Catalyzing System Change through Health Care Simulation: Systems, Competency, and Outcomes." This article, a product of the breakout session on "understanding complex interactions through systems modeling," explores the role that computer simulation modeling can and should play in research and development of emergency care delivery systems. This article discusses areas central to the use of computer simulation modeling in emergency care research. The four central approaches to computer simulation modeling are described (Monte Carlo simulation, system dynamics modeling, discrete-event simulation, and agent-based simulation), along with problems amenable to their use and relevant examples to emergency care. Also discussed is an introduction to available software modeling platforms and how to explore their use for research, along with a research agenda for computer simulation modeling. Through this article, our goal is to enhance adoption of computer simulation, a set of methods that hold great promise in addressing emergency care organization and design challenges. © 2017 by the Society for Academic Emergency Medicine.
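Of the four approaches the article lists, discrete-event simulation is the most natural fit for patient flow. The minimal sketch below, with illustrative rates not taken from the article, simulates a single-provider queue (Lindley-style recursion with exponential interarrival and service times) and reports the mean wait before treatment begins.

```python
import random

def simulate_ed(n_patients=1000, arrival_rate=0.9, service_rate=1.0, seed=42):
    """Mean patient wait (arrival to service start) in a one-provider
    FIFO queue with exponential interarrival and service times."""
    rng = random.Random(seed)
    t, free_at, waits = 0.0, 0.0, []
    for _ in range(n_patients):
        t += rng.expovariate(arrival_rate)       # next patient arrives
        start = max(t, free_at)                  # wait if provider is busy
        waits.append(start - t)
        free_at = start + rng.expovariate(service_rate)  # service ends
    return sum(waits) / len(waits)

print(round(simulate_ed(), 2))
```

Even this toy model exhibits the nonlinearity that motivates simulation in ED design: pushing utilization from 90% toward 95% roughly doubles queueing delay, which a spreadsheet average would miss.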
Integrating cell biology and proteomic approaches in plants.
Takáč, Tomáš; Šamajová, Olga; Šamaj, Jozef
2017-10-03
Significant improvements of protein extraction, separation, mass spectrometry and bioinformatics nurtured advancements of proteomics during the past years. The usefulness of proteomics in the investigation of biological problems can be enhanced by integration with other experimental methods from cell biology, genetics, biochemistry, pharmacology, molecular biology and other omics approaches including transcriptomics and metabolomics. This review aims to summarize current trends integrating cell biology and proteomics in plant science. Cell biology approaches are most frequently used in proteomic studies investigating subcellular and developmental proteomes, however, they were also employed in proteomic studies exploring abiotic and biotic stress responses, vesicular transport, cytoskeleton and protein posttranslational modifications. They are used either for detailed cellular or ultrastructural characterization of the object subjected to proteomic study, validation of proteomic results or to expand proteomic data. In this respect, a broad spectrum of methods is employed to support proteomic studies including ultrastructural electron microscopy studies, histochemical staining, immunochemical localization, in vivo imaging of fluorescently tagged proteins and visualization of protein-protein interactions. Thus, cell biological observations on fixed or living cell compartments, cells, tissues and organs are feasible, and in some cases fundamental for the validation and complementation of proteomic data. Validation of proteomic data by independent experimental methods requires development of new complementary approaches. Benefits of cell biology methods and techniques are not sufficiently highlighted in current proteomic studies. This encouraged us to review most popular cell biology methods used in proteomic studies and to evaluate their relevance and potential for proteomic data validation and enrichment of purely proteomic analyses. 
We also provide examples of representative studies combining proteomic and cell biology methods for various purposes. Integrating cell biology approaches with proteomic ones allows validation and better interpretation of proteomic data. Moreover, cell biology methods remarkably extend the knowledge provided by proteomic studies and might be fundamental for the functional complementation of proteomic data. This review article summarizes the current literature linking proteomics with cell biology. Copyright © 2017 Elsevier B.V. All rights reserved.
Yates, John R
2015-11-01
Advances in computer technology and software have driven developments in mass spectrometry over the last 50 years. Computers and software have been impactful in three areas: the automation of difficult calculations to aid interpretation, the collection of data and control of instruments, and data interpretation. As the power of computers has grown, so too has the utility and impact on mass spectrometers and their capabilities. This has been particularly evident in the use of tandem mass spectrometry data to search protein and nucleotide sequence databases to identify peptide and protein sequences. This capability has driven the development of many new approaches to study biological systems, including the use of "bottom-up shotgun proteomics" to directly analyze protein mixtures.
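The database-search capability described above rests on predictable peptide fragmentation: a search engine computes theoretical fragment masses for candidate peptides and matches them against the observed MS/MS peaks. A minimal sketch of that theoretical step, computing singly charged b- and y-ion m/z values from standard monoisotopic residue masses (the peptide, the 1+ charge state, and the omission of modifications are illustrative simplifications):

```python
# Sketch: singly charged b- and y-ion m/z values for a peptide, the
# theoretical fragment masses a search engine compares to an MS/MS spectrum.
# MONO holds standard monoisotopic residue masses (Da).
MONO = {
    'G': 57.02146, 'A': 71.03711, 'S': 87.03203, 'P': 97.05276,
    'V': 99.06841, 'T': 101.04768, 'C': 103.00919, 'L': 113.08406,
    'I': 113.08406, 'N': 114.04293, 'D': 115.02694, 'Q': 128.05858,
    'K': 128.09496, 'E': 129.04259, 'M': 131.04049, 'H': 137.05891,
    'F': 147.06841, 'R': 156.10111, 'Y': 163.06333, 'W': 186.07931,
}
PROTON = 1.007276
WATER = 18.010565

def fragment_ions(peptide):
    """Return (b_ions, y_ions) as lists of singly charged m/z values."""
    b, y = [], []
    running = 0.0
    for aa in peptide[:-1]:           # b1 .. b(n-1), from the N-terminus
        running += MONO[aa]
        b.append(running + PROTON)
    running = WATER                   # y ions retain the C-terminal water
    for aa in reversed(peptide[1:]):  # y1 .. y(n-1), from the C-terminus
        running += MONO[aa]
        y.append(running + PROTON)
    return b, y

b, y = fragment_ions("PEPTIDE")
```

A search engine would score the overlap between these predicted m/z values and the measured spectrum for every candidate peptide drawn from the sequence database.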
DMA Modern Programming Environment Study.
1980-01-01
capabilities. The centers are becoming increasingly dependent upon the computer and digital data in the fulfillment of MC&G goals. Successful application...Microprocessors (C140) by Herbert Altero; Digital Communications (C141); Structured Design Workshop by Ned Chapin (KC 156); Digital Systems Engineering (CC 139)...on a programming environment. The study, which resulted in production of a paper entitled An EXEC 8 Programming Support Library, contends that most of
Ethics across the computer science curriculum: privacy modules in an introductory database course.
Appel, Florence
2005-10-01
This paper describes the author's experience of infusing an introductory database course with privacy content, and the on-going project entitled Integrating Ethics Into the Database Curriculum, that evolved from that experience. The project, which has received funding from the National Science Foundation, involves the creation of a set of privacy modules that can be implemented systematically by database educators throughout the database design thread of an undergraduate course.
ERIC Educational Resources Information Center
General Services Administration, Washington, DC.
Summaries of the welcoming and opening remarks for a symposium on the standards issues that will affect the federal government's planning, acquisition, and use of integrated computer and telecommunications systems over the next five years set the stage for the keynote address by Joseph Timko of IBM entitled "Standards--Perspectives and Evolution."…
MIT Laboratory for Computer Science Progress Report 20 - July 1982 - Jun 1983,
1984-07-01
system by the Programming Technology Group. Research in the second and largest area, entitled Machines, Languages, and Systems, strives to discover and...utilization and cost effectiveness. For example, the Programming Methodology Group and the Real Time Systems Group are developing languages and...100 Megabits per second when implemented with the 1.2 μm n-well CMOS process. 3. LANGUAGES 3.1. Demand Driven Evaluation In his engineer's thesis
Modeling Ni-Cd performance. Planned alterations to the Goddard battery model
NASA Technical Reports Server (NTRS)
Jagielski, J. M.
1986-01-01
The Goddard Space Flight Center (GSFC) currently has a preliminary computer model to simulate Nickel Cadmium (Ni-Cd) battery performance. The basic methodology of the model was described in the paper entitled Fundamental Algorithms of the Goddard Battery Model. At present, the model is undergoing alterations to increase its efficiency, accuracy, and generality. A review of the present battery model is given, and the planned changes to the model are described.
PNAC: a protein nucleolar association classifier
2011-01-01
Background Although primarily known as the site of ribosome subunit production, the nucleolus is involved in numerous and diverse cellular processes. Recent large-scale proteomics projects have identified thousands of human proteins that associate with the nucleolus. However, in most cases, we know neither the fraction of each protein pool that is nucleolus-associated nor whether their association is permanent or conditional. Results To describe the dynamic localisation of proteins in the nucleolus, we investigated the extent of nucleolar association of proteins by first collating an extensively curated literature-derived dataset. This dataset then served to train a probabilistic predictor which integrates gene and protein characteristics. Unlike most previous experimental and computational studies of the nucleolar proteome that produce large static lists of nucleolar proteins regardless of their extent of nucleolar association, our predictor models the fluidity of the nucleolus by considering different classes of nucleolar-associated proteins. The new method predicts all human proteins as either nucleolar-enriched, nucleolar-nucleoplasmic, nucleolar-cytoplasmic or non-nucleolar. Leave-one-out cross validation tests reveal sensitivity values for these four classes ranging from 0.72 to 0.90 and positive predictive values ranging from 0.63 to 0.94. The overall accuracy of the classifier was measured to be 0.85 on an independent literature-based test set and 0.74 using a large independent quantitative proteomics dataset. While the three nucleolar-association groups display vastly different Gene Ontology biological process signatures and evolutionary characteristics, they collectively represent the most well characterised nucleolar functions. Conclusions Our proteome-wide classification of nucleolar association provides a novel representation of the dynamic content of the nucleolus. 
This model of nucleolar localisation thus increases the coverage while providing accurate and specific annotations of the nucleolar proteome. It will be instrumental in better understanding the central role of the nucleolus in the cell and its interaction with other subcellular compartments. PMID:21272300
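The per-class sensitivity and positive predictive values reported above are simple functions of a confusion matrix. A sketch of how such per-class metrics are computed; the class names follow the paper's four categories, but the counts below are invented for illustration:

```python
# Sketch: per-class sensitivity (recall) and positive predictive value
# (precision) from a confusion matrix, the metrics PNAC reports for each
# nucleolar-association class. The counts are illustrative, not the paper's.
def per_class_metrics(confusion, classes):
    """confusion[true][pred] -> count; returns {class: (sensitivity, ppv)}."""
    out = {}
    for c in classes:
        tp = confusion[c][c]
        fn = sum(confusion[c][p] for p in classes if p != c)  # missed c
        fp = sum(confusion[t][c] for t in classes if t != c)  # wrongly called c
        sens = tp / (tp + fn) if tp + fn else 0.0
        ppv = tp / (tp + fp) if tp + fp else 0.0
        out[c] = (sens, ppv)
    return out

classes = ["enriched", "nucleoplasmic", "cytoplasmic", "non-nucleolar"]
confusion = {
    "enriched":      {"enriched": 18, "nucleoplasmic": 2,  "cytoplasmic": 0,  "non-nucleolar": 0},
    "nucleoplasmic": {"enriched": 3,  "nucleoplasmic": 14, "cytoplasmic": 2,  "non-nucleolar": 1},
    "cytoplasmic":   {"enriched": 0,  "nucleoplasmic": 1,  "cytoplasmic": 16, "non-nucleolar": 3},
    "non-nucleolar": {"enriched": 1,  "nucleoplasmic": 0,  "cytoplasmic": 2,  "non-nucleolar": 37},
}
metrics = per_class_metrics(confusion, classes)
```

In a leave-one-out setting, the confusion matrix is accumulated over all held-out proteins before these ratios are taken.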
Completed | Office of Cancer Clinical Proteomics Research
Prior to the current Clinical Proteomic Tumor Analysis Consortium (CPTAC), previously funded initiatives associated with clinical proteomics research included: Clinical Proteomic Tumor Analysis Consortium (CPTAC 2.0) Clinical Proteomic Technologies for Cancer Initiative (CPTC) Mouse Proteomic Technologies Initiative
Li, Ginny X H; Vogel, Christine; Choi, Hyungwon
2018-06-07
While tandem mass spectrometry can detect post-translational modifications (PTM) at the proteome scale, reported PTM sites are often incomplete and include false positives. Computational approaches can complement these datasets with additional predictions, but most available tools use prediction models pre-trained for a single PTM type by the developers, and it remains difficult to perform large-scale batch prediction for multiple PTMs with flexible user control, including the choice of training data. We developed an R package called PTMscape which predicts PTM sites across the proteome based on a unified and comprehensive set of descriptors of the physico-chemical microenvironment of modified sites, with additional downstream analysis modules to test enrichment of individual or pairs of PTMs in protein domains. PTMscape is flexible in its ability to process any major modification, such as phosphorylation and ubiquitination, while achieving sensitivity and specificity comparable to single-PTM methods and outperforming other multi-PTM tools. Applying this framework, we expanded proteome-wide coverage of five major PTMs affecting different residues by prediction, especially for lysine and arginine modifications. Using a combination of experimentally acquired sites (PSP) and newly predicted sites, we discovered that crosstalk among multiple PTMs occurs more frequently than expected by random chance in key protein domains such as histone, protein kinase, and RNA recognition motif domains, spanning various biological processes such as RNA processing, DNA damage response, signal transduction, and regulation of the cell cycle. These results provide a proteome-scale analysis of crosstalk among major PTMs and can be easily extended to other types of PTM.
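The microenvironment descriptors that PTMscape builds on start from fixed-width sequence windows around candidate modifiable residues. A sketch of that first step, using the Kyte-Doolittle hydrophobicity scale as one example descriptor; the window half-width, the padding character and the toy sequence are illustrative choices, not the package's actual defaults:

```python
# Sketch: extract a +/-3 residue window around each candidate lysine and
# encode it with Kyte-Doolittle hydrophobicity values, one example of a
# physico-chemical microenvironment descriptor. 'X' pads sequence ends.
KD = {'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5, 'Q': -3.5,
      'E': -3.5, 'G': -0.4, 'H': -3.2, 'I': 4.5, 'L': 3.8, 'K': -3.9,
      'M': 1.9, 'F': 2.8, 'P': -1.6, 'S': -0.8, 'T': -0.7, 'W': -0.9,
      'Y': -1.3, 'V': 4.2, 'X': 0.0}

def site_windows(seq, residue='K', half=3):
    """Yield (position, window, feature_vector) for each candidate site."""
    padded = 'X' * half + seq + 'X' * half
    for i, aa in enumerate(seq):
        if aa == residue:
            window = padded[i:i + 2 * half + 1]  # candidate site at center
            yield i, window, [KD[a] for a in window]

sites = list(site_windows("MKTAYKLV"))
```

A full feature set would stack many such scales (charge, volume, disorder propensity, and so on) per window position before feeding a classifier.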
Differentially delayed root proteome responses to salt stress in sugar cane varieties.
Pacheco, Cinthya Mirella; Pestana-Calsa, Maria Clara; Gozzo, Fabio Cesar; Mansur Custodio Nogueira, Rejane Jurema; Menossi, Marcelo; Calsa, Tercilio
2013-12-06
Soil salinity is a limiting factor for sugar cane crop development, although in general plants present variable mechanisms of tolerance to salinity stress. The molecular basis underlying these mechanisms can be inferred by using proteomic analysis. Thus, the objective of this work was to identify differentially expressed proteins in sugar cane plants submitted to salinity stress. For that, a greenhouse experiment was established with four sugar cane varieties and two salt conditions, 0 mM (control) and 200 mM NaCl. Physiological and proteomic analyses were performed after 2 and 72 h of stress induction by salt. Distinct physiological responses to salinity stress were observed in the varieties and linked to tolerance mechanisms. In the proteomic analysis, the root soluble protein fraction was extracted, quantified, and analyzed through bidimensional electrophoresis. Gel image analyses were performed computationally, where in each contrast only one variable was considered (salinity condition or variety). Differential spots were excised, digested by trypsin, and identified via mass spectrometry. The tolerant variety RB867515 showed the highest accumulation of proteins involved in growth, development, carbohydrate and energy metabolism, reactive oxygen species metabolization, protein protection, and membrane stabilization after 2 h of stress. On the other hand, the presence of these proteins in the sensitive variety was verified only in the stress treatment after 72 h. These data indicate that these stress response pathways play a role in the tolerance to salinity in sugar cane, and that their effectiveness for phenotypical tolerance depends on early stress detection and activation of the expression of the coding genes.
Detection of alternative splice variants at the proteome level in Aspergillus flavus.
Chang, Kung-Yen; Georgianna, D Ryan; Heber, Steffen; Payne, Gary A; Muddiman, David C
2010-03-05
Identification of proteins from proteolytic peptides or intact proteins plays an essential role in proteomics. Researchers use search engines to match the acquired peptide sequences to the target proteins. However, search engines depend on protein databases to provide candidates for consideration. Alternative splicing (AS), the mechanism whereby exons of pre-mRNAs can be spliced and rearranged to generate distinct mRNAs and therefore protein variants, enables higher eukaryotic organisms, with only a limited number of genes, to achieve the requisite complexity and diversity at the proteome level. Multiple alternative isoforms from one gene often share common segments of sequence. However, many protein databases include only a limited number of isoforms to minimize redundancy. As a result, the database search might not identify a target protein even with high-quality tandem MS data and an accurate intact precursor ion mass. We computationally predicted an exhaustive list of putative isoforms of Aspergillus flavus proteins from 20,371 expressed sequence tags to investigate whether an alternative splicing protein database can assign a greater proportion of mass spectrometry data. The newly constructed AS database provided 9807 new alternatively spliced variants in addition to 12,832 previously annotated proteins. Searches of the existing tandem MS spectra data set using the AS database identified 29 new proteins encoded by 26 genes. Nine fungal genes appeared to have multiple protein isoforms. In addition to the discovery of splice variants, the AS database also showed potential to improve genome annotation. In summary, the introduction of an alternative splicing database helps identify more proteins and unveils more information about a proteome.
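The benefit of an isoform-aware database comes down to which tryptic peptides it can offer the search engine. A sketch of in-silico tryptic digestion (cleave after K or R unless followed by P, the standard trypsin rule) used to index peptides to the isoforms containing them; the sequences and identifiers below are made up for illustration:

```python
import re

# Sketch: build a peptide -> isoform index from an alternative-splicing
# protein database via in-silico tryptic digestion. Cleavage occurs after
# K/R unless the next residue is P (standard trypsin rule).
def tryptic_peptides(seq, min_len=6):
    peptides = re.split(r'(?<=[KR])(?!P)', seq)
    return [p for p in peptides if len(p) >= min_len]

def build_index(isoforms):
    """isoforms: {id: sequence}; returns {peptide: set of isoform ids}."""
    index = {}
    for iso_id, seq in isoforms.items():
        for pep in tryptic_peptides(seq):
            index.setdefault(pep, set()).add(iso_id)
    return index

isoforms = {
    "geneA.1": "MSTAGKLLVRPEPTIDEK",
    "geneA.2": "MSTAGKWWQQVRPEPTIDEK",  # hypothetical alternative exon
}
index = build_index(isoforms)
```

Peptides mapping to a single isoform (here, the ones spanning the alternative exon) are the isoform-specific evidence an AS-aware search can recover that a minimal-redundancy database would miss.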
Predicting Chemical Toxicity from Proteomics and Computational Chemistry
2008-07-30
similarity spaces, BD Gute and SC Basak, SAR QSAR Environ. Res., 17, 37-51 (2006). Predicting pharmacological and toxicological activity of heterocyclic...affinity of dibenzofurans: a hierarchical QSAR approach, authored jointly by Basak and Mills; Division of Chemical Toxicology iii. Prediction of blood...biodescriptors vis-à-vis chemodescriptors in predictive toxicology e) Development of integrated QSTR models using the combined set of chemodescriptors and
NASA Astrophysics Data System (ADS)
Veltri, Pierangelo
The use of computer-based solutions for data management in biology and clinical science has contributed to improving quality of life and to gathering research results in a shorter time. Indeed, new algorithms and high-performance computation have been used in proteomics and genomics studies for treating chronic diseases (e.g., drug design) as well as for supporting clinicians both in diagnosis (e.g., image-based diagnosis) and in patient care (e.g., computer-based analysis of information gathered from the patient). In this paper we survey examples of computer-based techniques applied in both biological and clinical contexts. The reported applications are also the result of experience with real case applications at the University Medical School of Catanzaro, and are part of the experience of the national project Staywell SH 2.0, involving many research centers and companies aiming to study and improve citizen wellness.
20 CFR 410.202 - Duration of entitlement; miner.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 20 Employees' Benefits 2 2011-04-01 2011-04-01 false Duration of entitlement; miner. 410.202 Section 410.202 Employees' Benefits SOCIAL SECURITY ADMINISTRATION FEDERAL COAL MINE HEALTH AND SAFETY ACT OF 1969, TITLE IV-BLACK LUNG BENEFITS (1969- ) Requirements for Entitlement; Duration of Entitlement...
20 CFR 410.202 - Duration of entitlement; miner.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Duration of entitlement; miner. 410.202 Section 410.202 Employees' Benefits SOCIAL SECURITY ADMINISTRATION FEDERAL COAL MINE HEALTH AND SAFETY ACT OF 1969, TITLE IV-BLACK LUNG BENEFITS (1969- ) Requirements for Entitlement; Duration of Entitlement...
20 CFR 410.201 - Conditions of entitlement; miner.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 20 Employees' Benefits 2 2011-04-01 2011-04-01 false Conditions of entitlement; miner. 410.201 Section 410.201 Employees' Benefits SOCIAL SECURITY ADMINISTRATION FEDERAL COAL MINE HEALTH AND SAFETY ACT OF 1969, TITLE IV-BLACK LUNG BENEFITS (1969- ) Requirements for Entitlement; Duration of Entitlement...
20 CFR 410.201 - Conditions of entitlement; miner.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Conditions of entitlement; miner. 410.201 Section 410.201 Employees' Benefits SOCIAL SECURITY ADMINISTRATION FEDERAL COAL MINE HEALTH AND SAFETY ACT OF 1969, TITLE IV-BLACK LUNG BENEFITS (1969- ) Requirements for Entitlement; Duration of Entitlement...
Academic Entitlement: Relations to Perceptions of Parental Warmth and Psychological Control
ERIC Educational Resources Information Center
Turner, Lisa A.; McCormick, Wesley H.
2018-01-01
Academic entitlement characterises students who expect positive academic outcomes without personal effort. The current study examined the relations of perceived parental warmth and parental psychological control with two dimensions of academic entitlement (i.e., entitled expectations and externalised responsibility) among college students.…
CCOMP: An efficient algorithm for complex roots computation of determinantal equations
NASA Astrophysics Data System (ADS)
Zouros, Grigorios P.
2018-01-01
In this paper a free Python algorithm, entitled CCOMP (Complex roots COMPutation), is developed for the efficient computation of complex roots of determinantal equations inside a prescribed complex domain. The key to the method presented is the efficient determination of candidate points inside the domain in whose close neighborhood a complex root may lie. Once these points are detected, the algorithm proceeds to a two-dimensional minimization problem with respect to the minimum modulus eigenvalue of the system matrix. At the core of CCOMP are three sub-algorithms whose tasks are the efficient estimation of the minimum modulus eigenvalues of the system matrix inside the prescribed domain, the efficient computation of candidate points which guarantee the existence of minima, and finally, the computation of minima via bound-constrained minimization algorithms. Theoretical results and heuristics support the development and the performance of the algorithm, which is discussed in detail. CCOMP supports general complex matrices, and its efficiency, applicability and validity are demonstrated on a variety of microwave applications.
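The core idea, treating root-finding for det A(z) = 0 as minimization of the minimum-modulus eigenvalue of A(z) over the complex plane, can be sketched with a coarse grid scan followed by local grid refinement. This is a simplified stand-in for CCOMP's sub-algorithms, not the package itself, using an illustrative 2x2 matrix whose determinant vanishes at z = ±1:

```python
import numpy as np

# Sketch of the CCOMP idea: locate complex roots of det(A(z)) = 0 by
# scanning a rectangular domain for minima of the minimum-modulus
# eigenvalue of A(z), then refining each candidate by shrinking a local
# grid around the current best point.
def min_mod_eig(M):
    return np.abs(np.linalg.eigvals(M)).min()

def A(z):
    # illustrative matrix function: det = z^2 - 1, roots at z = +/-1
    return np.array([[z, 1.0], [1.0, z]])

def refine(z0, h, levels=30):
    """Repeatedly pick the best point on a 5x5 local grid, halving its span."""
    for _ in range(levels):
        offsets = [dx + 1j * dy
                   for dx in np.linspace(-h, h, 5)
                   for dy in np.linspace(-h, h, 5)]
        z0 = min((z0 + d for d in offsets), key=lambda z: min_mod_eig(A(z)))
        h /= 2.0
    return z0

# coarse scan of the domain [-2, 2] x [-2, 2]
grid = [x + 1j * y for x in np.linspace(-2, 2, 21) for y in np.linspace(-2, 2, 21)]
best = min(grid, key=lambda z: min_mod_eig(A(z)))
root = refine(best, h=0.2)
```

CCOMP replaces the naive local grid with bound-constrained minimization and adds machinery to guarantee that candidate points actually bracket minima; the sketch only conveys the scan-then-refine structure.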
Accurate Identification of Cancerlectins through Hybrid Machine Learning Technology.
Zhang, Jieru; Ju, Ying; Lu, Huijuan; Xuan, Ping; Zou, Quan
2016-01-01
Cancerlectins are cancer-related proteins that function as lectins. They have been identified through computational identification techniques, but these techniques have sometimes failed to identify proteins because of sequence diversity among the cancerlectins. Advanced machine learning identification methods, such as support vector machine and basic sequence features (n-gram), have also been used to identify cancerlectins. In this study, various protein fingerprint features and advanced classifiers, including ensemble learning techniques, were utilized to identify this group of proteins. We improved the prediction accuracy of the original feature extraction methods and classification algorithms by more than 10% on average. Our work provides a basis for the computational identification of cancerlectins and reveals the power of hybrid machine learning techniques in computational proteomics.
Jones, Alasdair; Goodman, Anna; Roberts, Helen; Steinbach, Rebecca; Green, Judith
2013-08-01
Access to transport is an important determinant of health, and concessionary fares for public transport are one way to reduce the 'transport exclusion' that can limit access. This paper draws on qualitative data from two groups typically at risk of transport exclusion: young people (12-18 years of age, n = 118) and older citizens (60+ years of age, n = 46). The data were collected in London, UK, where young people and older citizens are currently entitled to concessionary bus travel. We focus on how this entitlement is understood and enacted, and how different sources of entitlement mediate the relationship between transport and wellbeing. Both groups felt that their formal entitlement to travel for free reflected their social worth and was, particularly for older citizens, relatively unproblematic. The provision of a concessionary transport entitlement also helped to combat feelings of social exclusion by enhancing recipients' sense of belonging to the city and to a 'community'. However, informal entitlements to particular spaces on the bus reflected less valued social attributes such as need or frailty. Thus in the course of travelling by bus the enactment of entitlements to space and seats entailed the negotiation of social differences and personal vulnerabilities, and this carried with it potential threats to wellbeing. We conclude that the process, as well as the substance, of entitlement can mediate wellbeing; and that where the basis for providing a given entitlement is widely understood and accepted, the risks to wellbeing associated with enacting that entitlement will be reduced. Copyright © 2012 Elsevier Ltd. All rights reserved.
Rouillard, Andrew D.; Wang, Zichen; Ma’ayan, Avi
2015-01-01
With advances in genomics, transcriptomics, metabolomics and proteomics, and more expansive electronic clinical record monitoring, as well as advances in computation, we have entered the Big Data era in biomedical research. Data gathering is growing rapidly while only a small fraction of this data is converted to useful knowledge or reused in future studies. To improve this, an important concept that is often overlooked is data abstraction. To fuse and reuse biomedical datasets from diverse resources, data abstraction is frequently required. Here we summarize some of the major Big Data biomedical research resources for genomics, proteomics and phenotype data, collected from mammalian cells, tissues and organisms. We then suggest simple data abstraction methods for fusing this diverse but related data. Finally, we demonstrate examples of the potential utility of such data integration efforts, while warning about the inherent biases that exist within such data. PMID:26101093
Mani, D R; Abbatiello, Susan E; Carr, Steven A
2012-01-01
Multiple reaction monitoring mass spectrometry (MRM-MS) with stable isotope dilution (SID) is increasingly becoming a widely accepted assay for the quantification of proteins and peptides. These assays have shown great promise in relatively high throughput verification of candidate biomarkers. While the use of MRM-MS assays is well established in the small molecule realm, their introduction and use in proteomics is relatively recent. As such, statistical and computational methods for the analysis of MRM-MS data from proteins and peptides are still being developed. Based on our extensive experience with analyzing a wide range of SID-MRM-MS data, we set forth a methodology for analysis that encompasses significant aspects ranging from data quality assessment, assay characterization including calibration curves, limits of detection (LOD) and quantification (LOQ), and measurement of intra- and interlaboratory precision. We draw upon publicly available seminal datasets to illustrate our methods and algorithms.
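One concrete piece of the assay characterization described above is estimating the limits of detection and quantification from a calibration curve. A sketch using an ordinary least-squares fit and the common LOD = 3.3·s/slope and LOQ = 10·s/slope conventions, where s is the residual standard deviation; the paper's exact procedure may differ, and the data points below are synthetic:

```python
# Sketch: linear calibration fit plus LOD/LOQ estimates for an SID-MRM-MS
# assay. Uses the common 3.3*s/slope and 10*s/slope conventions; treat the
# constants and the synthetic data as illustrative, not the paper's method.
def calibrate(conc, response):
    n = len(conc)
    mx = sum(conc) / n
    my = sum(response) / n
    sxx = sum((x - mx) ** 2 for x in conc)
    slope = sum((x - mx) * (y - my) for x, y in zip(conc, response)) / sxx
    intercept = my - slope * mx
    resid = [y - (slope * x + intercept) for x, y in zip(conc, response)]
    s = (sum(r * r for r in resid) / (n - 2)) ** 0.5  # residual std dev
    return slope, intercept, 3.3 * s / slope, 10 * s / slope

conc = [0.5, 1, 2, 5, 10, 20]             # spiked amount (e.g., fmol on column)
resp = [0.9, 2.1, 3.9, 10.2, 19.8, 40.1]  # light/heavy peak-area ratio
slope, intercept, lod, loq = calibrate(conc, resp)
```

In practice a weighted fit is often preferred because variance tends to grow with concentration, and replicate injections at each level feed the intra- and interlaboratory precision estimates the abstract mentions.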
Complete fold annotation of the human proteome using a novel structural feature space.
Middleton, Sarah A; Illuminati, Joseph; Kim, Junhyong
2017-04-13
Recognition of protein structural fold is the starting point for many structure prediction tools and protein function inference. Fold prediction is computationally demanding and recognizing novel folds is difficult such that the majority of proteins have not been annotated for fold classification. Here we describe a new machine learning approach using a novel feature space that can be used for accurate recognition of all 1,221 currently known folds and inference of unknown novel folds. We show that our method achieves better than 94% accuracy even when many folds have only one training example. We demonstrate the utility of this method by predicting the folds of 34,330 human protein domains and showing that these predictions can yield useful insights into potential biological function, such as prediction of RNA-binding ability. Our method can be applied to de novo fold prediction of entire proteomes and identify candidate novel fold families.
Single Cell Proteomics in Biomedicine: High-dimensional Data Acquisition, Visualization and Analysis
Su, Yapeng; Shi, Qihui; Wei, Wei
2017-01-01
New insights into cellular heterogeneity over the last decade have provoked the development of a variety of single-cell omics tools at a lightning pace. The resultant high-dimensional single-cell data generated by these tools require new theoretical approaches and analytical algorithms for effective visualization and interpretation. In this review, we briefly survey the state-of-the-art single-cell proteomic tools with a particular focus on data acquisition and quantification, followed by an elaboration of a number of statistical and computational approaches developed to date for dissecting the high-dimensional single-cell data. The underlying assumptions, unique features and limitations of the analytical methods, along with the designated biological questions they seek to answer, will be discussed. Particular attention will be given to those information theoretical approaches that are anchored in a set of first principles of physics and can yield detailed (and often surprising) predictions. PMID:28128880
Murugaiyan, Jayaseelan; Eravci, Murat; Weise, Christoph; Roesler, Uwe
2017-06-01
Here, we provide the dataset associated with our research article 'Label-free quantitative proteomic analysis of harmless and pathogenic strains of infectious microalgae, Prototheca spp.' (Murugaiyan et al., 2017) [1]. This dataset describes liquid chromatography-mass spectrometry (LC-MS)-based protein identification and quantification of a non-infectious strain, Prototheca zopfii genotype 1, and two strains associated with severe and mild infections, respectively, P. zopfii genotype 2 and Prototheca blaschkeae. Protein identification and label-free quantification were carried out by analysing MS raw data using the MaxQuant-Andromeda software suite. The expression-level differences of the identified proteins among the strains were computed using Perseus software and the results were presented in [1]. This DiB provides the MaxQuant output file and raw data deposited in the PRIDE repository with the dataset identifier PXD005305.
Evidence for a vast peptide overlap between West Nile virus and human proteomes.
Capone, Giovanni; Pagoni, Maria; Delfino, Antonella Pesce; Kanduc, Darja
2013-10-01
The primary amino acid sequence of the West Nile virus (WNV) polyprotein, GenBank accession number M12294, was analyzed by computational biology. WNV is a mosquito-borne neurotropic flavivirus that has emerged globally as a significant cause of viral encephalitis in humans. Using pentapeptides as scanning units and the perfect peptide match program from the PIR International Protein Sequence Database, we compared the WNV polyprotein and the human proteome. The WNV polyprotein showed significant sequence similarities to a number of human proteins. Several of these proteins are involved in embryogenesis, neurite outgrowth, cortical neuron branching, formation of mature synapses, semaphorin interactions, and voltage-dependent L-type calcium channel subunits. This biocomputational study suggests that common amino acid segments might represent a potential platform for further studies on the neurological pathophysiology of WNV infections. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
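The pentapeptide-matching procedure can be sketched as a set intersection of 5-mer windows; the sequences below are invented for illustration, whereas the study scanned the full WNV polyprotein against the human proteome:

```python
# Sketch: slide a 5-residue window along a viral sequence and report the
# pentapeptides that also occur in a set of human protein sequences.
# Toy sequences; a real run would use the M12294 polyprotein and all
# human protein sequences.
def kmers(seq, k=5):
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def shared_pentapeptides(viral_seq, human_seqs, k=5):
    human = set()
    for seq in human_seqs:
        human |= kmers(seq, k)
    return sorted(kmers(viral_seq, k) & human)

viral = "MSKKRGGNEGSIMWLASLA"
human_proteins = ["AAGGNEGSIMLLK", "QQWLASLAPPT"]
shared = shared_pentapeptides(viral, human_proteins)
```

Mapping each shared pentapeptide back to the human proteins that carry it is what lets the study relate viral segments to host functions such as neurite outgrowth or channel subunits.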
Big Biomedical data as the key resource for discovery science
DOE Office of Scientific and Technical Information (OSTI.GOV)
Toga, Arthur W.; Foster, Ian; Kesselman, Carl
Modern biomedical data collection is generating exponentially more data in a multitude of formats. This flood of complex data poses significant opportunities to discover and understand the critical interplay among such diverse domains as genomics, proteomics, metabolomics, and phenomics, including imaging, biometrics, and clinical data. The Big Data for Discovery Science Center is taking an “-ome to home” approach to discover linkages between these disparate data sources by mining existing databases of proteomic and genomic data, brain images, and clinical assessments. In support of this work, the authors developed new technological capabilities that make it easy for researchers to manage, aggregate, manipulate, integrate, and model large amounts of distributed data. Guided by biological domain expertise, the Center’s computational resources and software will reveal relationships and patterns, aiding researchers in identifying biomarkers for the most confounding conditions and diseases, such as Parkinson’s and Alzheimer’s.
Big biomedical data as the key resource for discovery science
Toga, Arthur W; Foster, Ian; Kesselman, Carl; Madduri, Ravi; Chard, Kyle; Deutsch, Eric W; Price, Nathan D; Glusman, Gustavo; Heavner, Benjamin D; Dinov, Ivo D; Ames, Joseph; Van Horn, John; Kramer, Roger; Hood, Leroy
2015-01-01
Modern biomedical data collection is generating exponentially more data in a multitude of formats. This flood of complex data poses significant opportunities to discover and understand the critical interplay among such diverse domains as genomics, proteomics, metabolomics, and phenomics, including imaging, biometrics, and clinical data. The Big Data for Discovery Science Center is taking an “-ome to home” approach to discover linkages between these disparate data sources by mining existing databases of proteomic and genomic data, brain images, and clinical assessments. In support of this work, the authors developed new technological capabilities that make it easy for researchers to manage, aggregate, manipulate, integrate, and model large amounts of distributed data. Guided by biological domain expertise, the Center’s computational resources and software will reveal relationships and patterns, aiding researchers in identifying biomarkers for the most confounding conditions and diseases, such as Parkinson’s and Alzheimer’s. PMID:26198305
NASA Technical Reports Server (NTRS)
MacConochie, Ian O.; White, Nancy H.; Mills, Janelle C.
2004-01-01
A program, entitled Weights, Areas, and Mass Properties (or WAMI), is centered around an array of menus that contain constants that can be used in various mass estimating relationships for the ultimate purpose of obtaining the mass properties of Earth-to-Orbit Transports. Current Shuttle mass property data were relied upon heavily for baseline equation constant values, from which other options were derived.
14 CFR 36.6 - Incorporation by reference.
Code of Federal Regulations, 2010 CFR
2010-01-01
... No. 179, entitled “Precision Sound Level Meters,” dated 1973. (ii) IEC Publication No. 225, entitled... 1966. (iii) IEC Publication No. 651, entitled “Sound Level Meters,” first edition, dated 1979. (iv) IEC... edition, dated 1976. (v) IEC Publication No. 804, entitled “Integrating-averaging Sound Level Meters...
42 CFR 411.163 - Coordination of benefits: Dual entitlement situations.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 42 Public Health 2 2012-10-01 2012-10-01 false Coordination of benefits: Dual entitlement... Health Plans § 411.163 Coordination of benefits: Dual entitlement situations. (a) Basic rule. Coordination of benefits is governed by this section if an individual is eligible for or entitled to Medicare...
42 CFR 411.163 - Coordination of benefits: Dual entitlement situations.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 42 Public Health 2 2011-10-01 2011-10-01 false Coordination of benefits: Dual entitlement... Health Plans § 411.163 Coordination of benefits: Dual entitlement situations. (a) Basic rule. Coordination of benefits is governed by this section if an individual is eligible for or entitled to Medicare...
42 CFR 411.163 - Coordination of benefits: Dual entitlement situations.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 42 Public Health 2 2010-10-01 2010-10-01 false Coordination of benefits: Dual entitlement... Health Plans § 411.163 Coordination of benefits: Dual entitlement situations. (a) Basic rule. Coordination of benefits is governed by this section if an individual is eligible for or entitled to Medicare...
42 CFR 411.163 - Coordination of benefits: Dual entitlement situations.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 42 Public Health 2 2013-10-01 2013-10-01 false Coordination of benefits: Dual entitlement... Health Plans § 411.163 Coordination of benefits: Dual entitlement situations. (a) Basic rule. Coordination of benefits is governed by this section if an individual is eligible for or entitled to Medicare...
42 CFR 411.163 - Coordination of benefits: Dual entitlement situations.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 42 Public Health 2 2014-10-01 2014-10-01 false Coordination of benefits: Dual entitlement... Health Plans § 411.163 Coordination of benefits: Dual entitlement situations. (a) Basic rule. Coordination of benefits is governed by this section if an individual is eligible for or entitled to Medicare...
Appel, R D; Palagi, P M; Walther, D; Vargas, J R; Sanchez, J C; Ravier, F; Pasquali, C; Hochstrasser, D F
1997-12-01
Although two-dimensional electrophoresis (2-DE) computer analysis software packages have existed ever since 2-DE technology was developed, it is only now that the hardware and software technology allows large-scale studies to be performed on low-cost personal computers or workstations, and that setting up a 2-DE computer analysis system in a small laboratory is no longer considered a luxury. After a first attempt in the seventies and early eighties to develop 2-DE analysis software systems on hardware that had poor or even no graphical capabilities, followed in the late eighties by a wave of innovative software developments that were possible thanks to new graphical interface standards such as XWindows, a third generation of 2-DE analysis software packages has now come to maturity. It can be run on a variety of low-cost, general-purpose personal computers, thus making the purchase of a 2-DE analysis system easily attainable for even the smallest laboratory that is involved in proteome research. Melanie II 2-D PAGE, developed at the University Hospital of Geneva, is such a third-generation software system for 2-DE analysis. Based on unique image processing algorithms, this user-friendly object-oriented software package runs on multiple platforms, including Unix, MS-Windows 95 and NT, and Power Macintosh. It provides efficient spot detection and quantitation, state-of-the-art image comparison, statistical data analysis facilities, and is Internet-ready. Linked to proteome databases such as those available on the World Wide Web, it represents a valuable tool for the "Virtual Lab" of the post-genome era.
Deshmukh, Rupesh K.; Sonah, Humira; Bélanger, Richard R.
2016-01-01
Aquaporins (AQPs) are channel-forming integral membrane proteins that facilitate the movement of water and many other small molecules. Compared to animals, plants contain a much higher number of AQPs in their genome. Homology-based identification of AQPs in sequenced species is feasible because of the high level of conservation of protein sequences across plant species. Genome-wide characterization of AQPs has highlighted several important aspects such as distribution, genetic organization, evolution and conserved features governing solute specificity. From a functional point of view, the understanding of the AQP transport system has expanded rapidly with the help of transcriptomics and proteomics data. The efficient analysis of enormous amounts of data generated through omic-scale studies has been facilitated through computational advancements. Prediction of protein tertiary structures, pore architecture, cavities, phosphorylation sites, heterodimerization, and co-expression networks has become more sophisticated and accurate with increasing computational tools and pipelines. However, the effectiveness of computational approaches is based on the understanding of physiological and biochemical properties, transport kinetics, solute specificity, molecular interactions, sequence variations, phylogeny and evolution of aquaporins. For this purpose, tools like Xenopus oocyte assays, yeast expression systems, artificial proteoliposomes, and lipid membranes have been efficiently exploited to study the many facets that influence solute transport by AQPs. In the present review, we discuss genome-wide identification of AQPs in plants in relation to recent advancements in analytical tools, and their availability and technological challenges as they apply to AQPs. An exhaustive review of omics resources available for AQP research is also provided in order to optimize their efficient utilization.
Finally, a detailed catalog of computational tools and analytical pipelines is offered as a resource for AQP research. PMID:28066459
A new reference implementation of the PSICQUIC web service.
del-Toro, Noemi; Dumousseau, Marine; Orchard, Sandra; Jimenez, Rafael C; Galeota, Eugenia; Launay, Guillaume; Goll, Johannes; Breuer, Karin; Ono, Keiichiro; Salwinski, Lukasz; Hermjakob, Henning
2013-07-01
The Proteomics Standard Initiative Common QUery InterfaCe (PSICQUIC) specification was created by the Human Proteome Organization Proteomics Standards Initiative (HUPO-PSI) to enable computational access to molecular-interaction data resources by means of a standard Web Service and query language. Currently providing >150 million binary interaction evidences from 28 servers globally, the PSICQUIC interface allows the concurrent search of multiple molecular-interaction information resources using a single query. Here, we present an extension of the PSICQUIC specification (version 1.3), which has been released to be compliant with the enhanced standards in molecular interactions. The new release also includes a new reference implementation of the PSICQUIC server available to the data providers. It offers augmented web service capabilities and improves the user experience. PSICQUIC has been running for almost 5 years, with a user base growing from only 4 data providers to 28 (April 2013) allowing access to 151 310 109 binary interactions. The power of this web service is shown in PSICQUIC View web application, an example of how to simultaneously query, browse and download results from the different PSICQUIC servers. This application is free and open to all users with no login requirement (http://www.ebi.ac.uk/Tools/webservices/psicquic/view/main.xhtml).
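The concurrent-search capability described above rests on every PSICQUIC server exposing the same REST pattern: a MIQL query is URL-encoded and appended to a `.../search/query/` endpoint, with paging parameters. A minimal sketch of composing such a request follows; the IntAct base URL and the `firstResult`/`maxResults` parameter names reflect my understanding of the PSICQUIC REST convention and should be checked against the PSICQUIC registry for live servers:

```python
# Sketch of building a PSICQUIC REST query URL. The base URL below
# (IntAct's PSICQUIC endpoint) is illustrative; any registered PSICQUIC
# server should accept the same ".../search/query/<MIQL>" pattern.
from urllib.parse import quote

def psicquic_url(base, miql, first=0, max_results=100):
    """Compose a PSICQUIC query URL with paging parameters."""
    return (f"{base.rstrip('/')}/search/query/{quote(miql)}"
            f"?firstResult={first}&maxResults={max_results}")

base = ("https://www.ebi.ac.uk/Tools/webservices/psicquic/"
        "intact/webservices/current")
url = psicquic_url(base, "identifier:P04637 AND species:human")
print(url)
```

Because the interface is uniform, querying all 28 providers amounts to iterating this function over the base URLs listed in the PSICQUIC registry.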
MinOmics, an Integrative and Immersive Tool for Multi-Omics Analysis.
Maes, Alexandre; Martinez, Xavier; Druart, Karen; Laurent, Benoist; Guégan, Sean; Marchand, Christophe H; Lemaire, Stéphane D; Baaden, Marc
2018-06-21
Proteomic and transcriptomic technologies have produced massive biological datasets whose interpretation requires sophisticated computational strategies. Efficient and intuitive real-time analysis remains challenging. We use proteomic data on 1417 proteins of the green microalga Chlamydomonas reinhardtii to investigate physicochemical parameters governing selectivity of three cysteine-based redox post-translational modifications (PTM): glutathionylation (SSG), nitrosylation (SNO) and disulphide bonds (SS) reduced by thioredoxins. We aim to understand underlying molecular mechanisms and structural determinants through integration of redox proteome data from gene to structural level. Our interactive visual analytics approach on an 8.3 m² display wall of 25 MPixel resolution features stereoscopic three-dimensional (3D) representation performed by UnityMol WebGL. Virtual reality headsets complement the range of usage configurations for fully immersive tasks. Our experiments confirm that fast access to a rich cross-linked database is necessary for immersive analysis of structural data. We emphasize the possibility to display complex data structures and relationships in 3D, intrinsic to molecular structure visualization, but less common for omics-network analysis. Our setup is powered by MinOmics, an integrated analysis pipeline and visualization framework dedicated to multi-omics analysis. MinOmics integrates data from various sources into a materialized physical repository. We evaluate its performance, a design criterion for the framework.
Chen, Xiang; Velliste, Meel; Murphy, Robert F.
2010-01-01
Proteomics, the large scale identification and characterization of many or all proteins expressed in a given cell type, has become a major area of biological research. In addition to information on protein sequence, structure and expression levels, knowledge of a protein’s subcellular location is essential to a complete understanding of its functions. Currently subcellular location patterns are routinely determined by visual inspection of fluorescence microscope images. We review here research aimed at creating systems for automated, systematic determination of location. These employ numerical feature extraction from images, feature reduction to identify the most useful features, and various supervised learning (classification) and unsupervised learning (clustering) methods. These methods have been shown to perform significantly better than human interpretation of the same images. When coupled with technologies for tagging large numbers of proteins and high-throughput microscope systems, the computational methods reviewed here enable the new subfield of location proteomics. This subfield will make critical contributions in two related areas. First, it will provide structured, high-resolution information on location to enable Systems Biology efforts to simulate cell behavior from the gene level on up. Second, it will provide tools for Cytomics projects aimed at characterizing the behaviors of all cell types before, during and after the onset of various diseases. PMID:16752421
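The pipeline the review describes, numerical feature extraction from images followed by supervised classification, can be sketched in a toy form. The two features and the nearest-neighbour classifier below are deliberately simplistic stand-ins; real location-proteomics systems use much richer texture and morphology features and stronger learners:

```python
# Toy version of automated subcellular location determination: extract
# numerical features from a 2D intensity grid, then classify by the
# nearest labelled example in feature space.
from math import dist
from statistics import mean, pstdev

def image_features(pixels):
    """Flatten a 2D intensity grid into (mean, spread) features."""
    flat = [p for row in pixels for p in row]
    return (mean(flat), pstdev(flat))

def classify(pixels, labelled):
    """Nearest neighbour; `labelled` is a list of (features, label)."""
    feats = image_features(pixels)
    return min(labelled, key=lambda fl: dist(feats, fl[0]))[1]

nuclear  = [[9, 9], [9, 9]]     # bright, uniform pattern
punctate = [[9, 0], [0, 9]]     # high-variance pattern
labelled = [(image_features(nuclear), "nuclear"),
            (image_features(punctate), "punctate")]
print(classify([[8, 9], [9, 8]], labelled))  # -> nuclear
```

The feature-reduction and clustering steps mentioned in the abstract would sit between extraction and classification in a full pipeline.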
Exploring the Spatial and Temporal Organization of a Cell’s Proteome
Beck, Martin; Topf, Maya; Frazier, Zachary; Tjong, Harianto; Xu, Min; Zhang, Shihua; Alber, Frank
2013-01-01
To increase our current understanding of cellular processes, such as cell signaling and division, knowledge is needed about the spatial and temporal organization of the proteome at different organizational levels. These levels cover a wide range of length and time scales: from the atomic structures of macromolecules for inferring their molecular function, to the quantitative description of their abundance, and distribution in the cell. Emerging new experimental technologies are greatly increasing the availability of such spatial information on the molecular organization in living cells. This review addresses three fields that have significantly contributed to our understanding of the proteome’s spatial and temporal organization: first, methods for the structure determination of individual macromolecular assemblies, specifically the fitting of atomic structures into density maps generated from electron microscopy techniques; second, research that visualizes the spatial distributions of these complexes within the cellular context using cryo electron tomography techniques combined with computational image processing; and third, methods for the spatial modeling of the dynamic organization of the proteome, specifically those methods for simulating reaction and diffusion of proteins and complexes in crowded intracellular fluids. The long-term goal is to integrate the varied data about a proteome’s organization into a spatially explicit, predictive model of cellular processes. PMID:21094684
David, Matthieu; Fertin, Guillaume; Rogniaux, Hélène; Tessier, Dominique
2017-08-04
The analysis of discovery proteomics experiments relies on algorithms that identify peptides from their tandem mass spectra. The near-exhaustive interpretation of these spectra remains an unresolved issue. At present, a substantial number of missing interpretations are probably due to peptides displaying post-translational modifications and variants, which yield spectra that are particularly difficult to interpret. However, the emergence of a new generation of mass spectrometers providing high fragment-ion accuracy has paved the way for more efficient algorithms. We present a new software tool, SpecOMS, that can handle the computational complexity of pairwise spectrum comparison at large volumes. SpecOMS can compare a whole set of experimental spectra generated by a discovery proteomics experiment against a whole set of theoretical spectra deduced from a protein database in a few minutes on a standard workstation. SpecOMS exploits these capabilities to improve the peptide identification process, allowing strong competition between all possible peptides for spectrum interpretation. Remarkably, this software resolves the drawbacks (i.e., efficiency problems and decreased sensitivity) that usually accompany open modification searches. We highlight this promising approach using results obtained from the analysis of a public human dataset downloaded from the PRIDE (PRoteomics IDEntification) database.
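The core operation in such pairwise spectrum comparison is counting fragment peaks shared between two spectra within a mass tolerance. The sketch below is the naive two-pointer version for a single pair, not SpecOMS's actual data structure, which is engineered for all-against-all comparison at scale; peak values are made up for illustration:

```python
# Count fragment peaks shared between two spectra within a tolerance.
# Naive single-pair comparison; an all-vs-all tool needs a specialised
# index to avoid the quadratic blow-up this sketch ignores.
def shared_peaks(spec_a, spec_b, tol=0.02):
    """Count m/z values of two spectra that match within `tol` Da."""
    a, b = sorted(spec_a), sorted(spec_b)
    i = j = count = 0
    while i < len(a) and j < len(b):
        if abs(a[i] - b[j]) <= tol:
            count += 1
            i += 1
            j += 1
        elif a[i] < b[j]:
            i += 1
        else:
            j += 1
    return count

experimental = [114.09, 245.13, 376.17, 504.26]
theoretical  = [114.10, 245.13, 390.19, 504.25]
print(shared_peaks(experimental, theoretical))  # -> 3
```

In an open modification search, a high shared-peak count despite a precursor-mass offset is exactly the signal that flags a modified peptide, which is why tight fragment-ion accuracy makes the approach viable.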
The role of internal duplication in the evolution of multi-domain proteins.
Nacher, J C; Hayashida, M; Akutsu, T
2010-08-01
Many proteins consist of several structural domains. These multi-domain proteins have likely been generated by selective genome growth dynamics during evolution to perform new functions as well as to create structures that fold on a biologically feasible time scale. Domain units frequently evolved through a variety of genetic shuffling mechanisms. Here we examine the protein domain statistics of more than 1000 organisms including eukaryotic, archaeal and bacterial species. The analysis extends earlier findings on asymmetric statistical laws for the proteome to a wider variety of species. While proteins are composed of a wide range of domains, displaying a power-law decay, the computation of domain families for each protein reveals an exponential distribution, characterizing a protein universe composed of a small number of unique families. Structural studies in proteomics have shown that domain repeats, or internally duplicated domains, represent a small but significant fraction of the genome. In spite of its importance, this observation has been largely overlooked until recently. We model the evolutionary dynamics of the proteome and demonstrate that these distinct distributions are in fact rooted in an internal duplication mechanism. This process generates the contemporary protein structural domain universe, determines its limited breadth, and tames its growth.
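The internal-duplication growth dynamics invoked above can be illustrated with a toy simulation: at each step a domain is chosen uniformly at random and duplicated within its own protein, so proteins with more domains grow preferentially. The parameters are illustrative and not fitted to any proteome, and this is a caricature of the authors' model, not a reimplementation of it:

```python
# Toy internal-duplication growth model: each step duplicates one domain
# inside the protein that contains it, so a protein is chosen with
# probability proportional to its current domain count.
import random

def grow(n_proteins=50, steps=500, seed=1):
    random.seed(seed)
    sizes = [1] * n_proteins               # domains per protein
    for _ in range(steps):
        r = random.randrange(sum(sizes))   # pick a domain uniformly
        acc = 0
        for k, s in enumerate(sizes):
            acc += s
            if r < acc:
                sizes[k] += 1              # internal duplication
                break
    return sizes

sizes = grow()
assert sum(sizes) == 50 + 500              # every step adds one domain
print(sorted(sizes, reverse=True)[:5])     # a few proteins dominate
```

Rich-get-richer dynamics of this kind are the standard route to heavy-tailed domain-count distributions, which is the qualitative behaviour the abstract attributes to internal duplication.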
Turewicz, Michael; Kohl, Michael; Ahrens, Maike; Mayer, Gerhard; Uszkoreit, Julian; Naboulsi, Wael; Bracht, Thilo; Megger, Dominik A; Sitek, Barbara; Marcus, Katrin; Eisenacher, Martin
2017-11-10
The analysis of high-throughput mass spectrometry-based proteomics data must address the specific challenges of this technology. To this end, the comprehensive proteomics workflow offered by the de.NBI service center BioInfra.Prot provides indispensable components for the computational and statistical analysis of this kind of data. These components include tools and methods for spectrum identification and protein inference, protein quantification, expression analysis as well as data standardization and data publication. All particular methods of the workflow which address these tasks are state-of-the-art or cutting edge. As has been shown in previous publications, each of these methods is adequate to solve its specific task and gives competitive results. However, the methods included in the workflow are continuously reviewed, updated and improved to adapt to new scientific developments. All of these particular components and methods are available as stand-alone BioInfra.Prot services or as a complete workflow. Since BioInfra.Prot provides manifold fast communication channels to get access to all components of the workflow (e.g., via the BioInfra.Prot ticket system: bioinfraprot@rub.de) users can easily benefit from this service and get support by experts. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Reese Sorenson's Individual Professional Page
NASA Technical Reports Server (NTRS)
Sorenson, Reese; Nixon, David (Technical Monitor)
1998-01-01
The subject document is a World Wide Web (WWW) page entitled, "Reese Sorenson's Individual Professional Page." It can be accessed at "http://george.arc.nasa.gov/sorenson/personal/index.html". The purpose of this page is to make the reader aware of me, who I am, and what I do. It lists my work assignments, my computer experience, my place in the NASA hierarchy, publications by me, awards received by me, my education, and how to contact me. Writing this page was a learning experience, pursuant to an element in my Job Description which calls for me to be able to use the latest computers. This web page contains very little technical information, none of which is classified or sensitive.
Secretome profiling of primary human skeletal muscle cells.
Hartwig, Sonja; Raschke, Silja; Knebel, Birgit; Scheler, Mika; Irmler, Martin; Passlack, Waltraud; Muller, Stefan; Hanisch, Franz-Georg; Franz, Thomas; Li, Xinping; Dicken, Hans-Dieter; Eckardt, Kristin; Beckers, Johannes; de Angelis, Martin Hrabe; Weigert, Cora; Häring, Hans-Ulrich; Al-Hasani, Hadi; Ouwens, D Margriet; Eckel, Jürgen; Kotzka, Jorg; Lehr, Stefan
2014-05-01
The skeletal muscle is a metabolically active tissue that secretes various proteins. These so-called myokines have been proposed to affect muscle physiology and to exert systemic effects on other tissues and organs. Yet, changes in the secretory profile may participate in the pathophysiology of metabolic diseases. The present study aimed at characterizing the secretome of differentiated primary human skeletal muscle cells (hSkMC) derived from healthy, adult donors combining three different mass spectrometry based non-targeted approaches as well as one antibody based method. This led to the identification of 548 non-redundant proteins in conditioned media from hSkMC. For 501 proteins, significant mRNA expression could be demonstrated. Applying stringent consecutive filtering using SignalP, SecretomeP and ER_retention signal databases, 305 proteins were assigned as potential myokines of which 12 proteins containing a secretory signal peptide were not previously described. This comprehensive profiling study of the human skeletal muscle secretome expands our knowledge of the composition of the human myokinome and may contribute to our understanding of the role of myokines in multiple biological processes. This article is part of a Special Issue entitled: Biomarkers: A Proteomic Challenge. © 2013.
Photosystems and global effects of oxygenic photosynthesis.
Nelson, Nathan
2011-08-01
Because life on earth is governed by the second law of thermodynamics, it is subject to increasing entropy. Oxygenic photosynthesis, the earth's major producer of both oxygen and organic matter, is a principal player in the development and maintenance of life, and thus results in increased order. The primary steps of oxygenic photosynthesis are driven by four multi-subunit membrane protein complexes: photosystem I, photosystem II, cytochrome b(6)f complex, and F-ATPase. Photosystem II generates the most positive redox potential found in nature and is thus capable of extracting electrons from water. Photosystem I generates the most negative redox potential found in nature; thus, it largely determines the global amount of enthalpy in living systems. The recent structural determination of PSII and PSI complexes from cyanobacteria and plants sheds light on the evolutionary forces that shaped oxygenic photosynthesis. This newly available structural information complements knowledge gained from genomic and proteomic data, allowing for a more precise description of the scenario in which the evolution of life systems took place. This article is part of a Special Issue entitled: Regulation of Electron Transport in Chloroplasts. Copyright © 2010 Elsevier B.V. All rights reserved.
Time crawls when you're not having fun: feeling entitled makes dull tasks drag on.
O'Brien, Edward H; Anastasio, Phyllis A; Bushman, Brad J
2011-10-01
All people have to complete dull tasks, but individuals who feel entitled may be more inclined to perceive them as a waste of their "precious" time, resulting in the perception that time drags. This hypothesis was confirmed in three studies. In Study 1, participants with higher trait entitlement (controlling for related variables) thought dull tasks took longer to complete; no link was found for fun tasks. In Study 2, participants exposed to entitled messages thought taking a dull survey was a greater waste of time and took longer to complete. In Study 3, participants subliminally exposed to entitled words thought dull tasks were less interesting, thought they took longer to complete, and walked away faster when leaving the laboratory. Like most resources, time is a resource valued more by entitled individuals. A time-entitlement link provides novel insight into mechanisms underlying self-focus and prosocial dynamics.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-21
... (Obligation To Report Factors Affecting Entitlement) Activity Under OMB Review AGENCY: Veterans Benefits... Report Factors Affecting Entitlement (38 CFR 3.204(a)(1), 38 CFR 3.256(a) and 38 CFR 3.277(b)). OMB... benefits must report changes in their entitlement factors. Individual factors such as income, marital...
20 CFR 410.213 - Duration of entitlement; child.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 20 Employees' Benefits 2 2011-04-01 2011-04-01 false Duration of entitlement; child. 410.213...; Filing of Claims and Evidence § 410.213 Duration of entitlement; child. (a) An individual is entitled to benefits as a child for each month beginning with the first month in which all of the conditions of...
20 CFR 725.218 - Conditions of entitlement; child.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 20 Employees' Benefits 4 2014-04-01 2014-04-01 false Conditions of entitlement; child. 725.218... Conditions of entitlement; child. (a) An individual is entitled to benefits where he or she meets the... the child of a deceased miner who: (1) Is determined to have died due to pneumoconiosis; or (2) Filed...
20 CFR 725.218 - Conditions of entitlement; child.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 20 Employees' Benefits 4 2012-04-01 2012-04-01 false Conditions of entitlement; child. 725.218... Conditions of entitlement; child. (a) An individual is entitled to benefits where he or she meets the... the child of a deceased miner who: (1) Was receiving benefits under section 415 or part C of title IV...
20 CFR 725.218 - Conditions of entitlement; child.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 20 Employees' Benefits 3 2011-04-01 2011-04-01 false Conditions of entitlement; child. 725.218... Conditions of entitlement; child. (a) An individual is entitled to benefits where he or she meets the... the child of a deceased miner who: (1) Was receiving benefits under section 415 or part C of title IV...
20 CFR 410.213 - Duration of entitlement; child.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Duration of entitlement; child. 410.213...; Filing of Claims and Evidence § 410.213 Duration of entitlement; child. (a) An individual is entitled to benefits as a child for each month beginning with the first month in which all of the conditions of...
20 CFR 410.212 - Conditions of entitlement; child.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 20 Employees' Benefits 2 2011-04-01 2011-04-01 false Conditions of entitlement; child. 410.212...; Filing of Claims and Evidence § 410.212 Conditions of entitlement; child. (a) An individual is entitled to benefits if such individual: (1) Is the child or stepchild (see § 410.330) of (i) a deceased miner...
20 CFR 725.219 - Duration of entitlement; child.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 20 Employees' Benefits 4 2013-04-01 2013-04-01 false Duration of entitlement; child. 725.219... of entitlement; child. (a) An individual is entitled to benefits as a child for each month beginning... month in which any one of the following events first occurs: (1) The child dies; (2) The child marries...
20 CFR 725.219 - Duration of entitlement; child.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 20 Employees' Benefits 4 2012-04-01 2012-04-01 false Duration of entitlement; child. 725.219... of entitlement; child. (a) An individual is entitled to benefits as a child for each month beginning... month in which any one of the following events first occurs: (1) The child dies; (2) The child marries...
20 CFR 725.219 - Duration of entitlement; child.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 20 Employees' Benefits 4 2014-04-01 2014-04-01 false Duration of entitlement; child. 725.219... of entitlement; child. (a) An individual is entitled to benefits as a child for each month beginning... month in which any one of the following events first occurs: (1) The child dies; (2) The child marries...
20 CFR 725.219 - Duration of entitlement; child.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 20 Employees' Benefits 3 2011-04-01 2011-04-01 false Duration of entitlement; child. 725.219... of entitlement; child. (a) An individual is entitled to benefits as a child for each month beginning... month in which any one of the following events first occurs: (1) The child dies; (2) The child marries...
20 CFR 410.212 - Conditions of entitlement; child.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Conditions of entitlement; child. 410.212...; Filing of Claims and Evidence § 410.212 Conditions of entitlement; child. (a) An individual is entitled to benefits if such individual: (1) Is the child or stepchild (see § 410.330) of (i) a deceased miner...
20 CFR 725.218 - Conditions of entitlement; child.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false Conditions of entitlement; child. 725.218... Conditions of entitlement; child. (a) An individual is entitled to benefits where he or she meets the... the child of a deceased miner who: (1) Was receiving benefits under section 415 or part C of title IV...
20 CFR 725.219 - Duration of entitlement; child.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false Duration of entitlement; child. 725.219... of entitlement; child. (a) An individual is entitled to benefits as a child for each month beginning... month in which any one of the following events first occurs: (1) The child dies; (2) The child marries...
20 CFR 725.218 - Conditions of entitlement; child.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 20 Employees' Benefits 4 2013-04-01 2013-04-01 false Conditions of entitlement; child. 725.218... Conditions of entitlement; child. (a) An individual is entitled to benefits where he or she meets the... the child of a deceased miner who: (1) Was receiving benefits under section 415 or part C of title IV...
ERIC Educational Resources Information Center
Hong, Fu-Yuan; Huang, Der-Hsiang; Lin, Min-Pei; Lin, Hung-Yu
2017-01-01
This study measured the level of academic entitlement in college students using a performance promotion goal questionnaire, an academic entitlement group norm questionnaire, a cultural value orientation questionnaire, and an academic entitlement questionnaire, with 297 college students. The research findings of this study could be used to identify…
Code of Federal Regulations, 2010 CFR
2010-04-01
... intentional homicide on entitlement to benefits. 725.228 Section 725.228 Employees' Benefits EMPLOYMENT... intentional homicide on entitlement to benefits. An individual who has been convicted of the felonious and intentional homicide of a miner or other beneficiary shall not be entitled to receive any benefits payable...
Computational biology in the cloud: methods and new insights from computing at scale.
Kasson, Peter M
2013-01-01
The past few years have seen both explosions in the size of biological data sets and the proliferation of new, highly flexible on-demand computing capabilities. The sheer amount of information available from genomic and metagenomic sequencing, high-throughput proteomics, and experimental and simulation datasets on molecular structure and dynamics affords an opportunity for greatly expanded insight, but it creates new challenges of scale for computation, storage, and interpretation of petascale data. Cloud computing resources have the potential to help solve these problems by offering a utility model of computing and storage: near-unlimited capacity, the ability to burst usage, and cheap and flexible payment models. Effective use of cloud computing on large biological datasets requires dealing with non-trivial problems of scale and robustness, since performance-limiting factors can change substantially when a dataset grows by a factor of 10,000 or more. New computing paradigms are thus often needed. The use of cloud platforms also creates new opportunities to share data, reduce duplication, and provide easy reproducibility by making the datasets and computational methods easily available.
Parallel programming of industrial applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heroux, M; Koniges, A; Simon, H
1998-07-21
In the introductory material, we overview the typical MPP environment for real application computing and the special tools available such as parallel debuggers and performance analyzers. Next, we draw from a series of real applications codes and discuss the specific challenges and problems that are encountered in parallelizing these individual applications. The application areas drawn from include biomedical sciences, materials processing and design, plasma and fluid dynamics, and others. We show how it was possible to get a particular application to run efficiently and what steps were necessary. Finally we end with a summary of the lessons learned from these applications and predictions for the future of industrial parallel computing. This tutorial is based on material from a forthcoming book entitled "Industrial Strength Parallel Computing" to be published by Morgan Kaufmann Publishers (ISBN 1-55860-54).
NASA Technical Reports Server (NTRS)
Keith, T. G., Jr.; Afjeh, A. A.; Jeng, D. R.; White, J. A.
1985-01-01
A description of a computer program entitled VORTEX that may be used to determine the aerodynamic performance of horizontal axis wind turbines is given. The computer code implements a vortex method from finite-span wing theory and determines the induced velocity at the rotor disk by integrating the Biot-Savart law. It is assumed that the trailing helical vortex filaments form a wake of constant diameter (the rigid wake assumption) and travel downstream at the free stream velocity. The program can handle rotors having any number of blades, which may be arbitrarily shaped and twisted. Many numerical details associated with the program are presented. A complete listing of the program is provided and all program variables are defined. An example problem illustrating input and output characteristics is solved.
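The Biot-Savart evaluation at the heart of such a vortex method can be sketched in a few lines. This is an illustrative reimplementation, not the VORTEX listing itself; the straight-segment form of the law, the function name and the singularity cutoff are assumptions:

```python
import numpy as np

def segment_induced_velocity(p, a, b, gamma):
    """Velocity induced at point p by a straight vortex filament segment
    from a to b carrying circulation gamma (Biot-Savart law)."""
    r1 = p - a
    r2 = p - b
    cross = np.cross(r1, r2)
    denom = np.linalg.norm(cross) ** 2
    if denom < 1e-12:  # evaluation point lies on the filament axis
        return np.zeros(3)
    r0 = b - a
    k = gamma / (4 * np.pi) * (np.dot(r0, r1) / np.linalg.norm(r1)
                               - np.dot(r0, r2) / np.linalg.norm(r2))
    return k * cross / denom

# Example: a very long unit-circulation segment along the x-axis,
# evaluated one unit away; the result approaches the infinite-filament
# value gamma / (2 * pi * h).
v = segment_induced_velocity(np.array([0.0, 1.0, 0.0]),
                             np.array([-1000.0, 0.0, 0.0]),
                             np.array([1000.0, 0.0, 0.0]),
                             1.0)
```

Summing this contribution over every filament segment in the discretized helical wake gives the induced velocity at the rotor disk.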
1999-05-14
The Food and Drug Administration (FDA) is announcing the availability of a new compliance policy guide (CPG) entitled "Year 2000 (Y2K) Computer Compliance" (section 160-800). This guidance document represents the agency's current thinking on the manufacturing and distribution of domestic and imported products regulated by FDA using computer systems that may not perform properly before, or during, the transition to the year 2000 (Y2K). The text of the CPG is included in this notice. This compliance guidance document is an update to the Compliance Policy Guides Manual (August 1996 edition). It is a new CPG, and it will be included in the next printing of the Compliance Policy Guides Manual. This CPG is intended for FDA personnel, and it is available electronically to the public.
tRNAmodpred: a computational method for predicting posttranscriptional modifications in tRNAs
Machnicka, Magdalena A.; Dunin-Horkawicz, Stanislaw; de Crécy-Lagard, Valerie; Bujnicki, Janusz M.
2016-01-01
tRNA molecules contain numerous chemically altered nucleosides, which are formed by enzymatic modification of the primary transcripts during the complex tRNA maturation process. Some of the modifications are introduced by single reactions, while others require complex series of reactions carried out by several different enzymes. The location and distribution of various types of modifications vary greatly between different tRNA molecules, organisms and organelles. We have developed a computational method, tRNAmodpred, for predicting modifications in tRNA sequences. Briefly, our method takes as input one or more unmodified tRNA sequences and a set of protein sequences corresponding to the proteome of a cell. Subsequently it identifies homologs of known tRNA modification enzymes in the proteome, predicts tRNA modification activities and maps them onto known pathways of RNA modification from the MODOMICS database. Thereby, theoretically possible modification pathways are identified, and the products of these modification reactions are proposed for the query tRNAs. This method allows modification patterns to be predicted for newly sequenced genomes, as well as checking the tentative modification status of tRNAs from one species treated with enzymes from another source, e.g. to predict the possible modifications of eukaryotic tRNAs expressed in bacteria. tRNAmodpred is freely available as a web server at http://genesilico.pl/trnamodpred/. PMID:27016142
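The two-step logic described in the abstract (detect which modification enzymes have homologs in the query proteome, then walk the modification pathway graph using only those enzymes) can be mimicked with a toy breadth-first search. The enzyme names and pathway edges below are illustrative placeholders, not MODOMICS data:

```python
from collections import deque

# Pathway edges: (substrate nucleoside, product nucleoside, required enzyme).
# Real pathways come from MODOMICS; these three edges are invented examples.
PATHWAY = [
    ("U", "psi", "TruB"),    # pseudouridine synthase
    ("G", "m1G", "Trm5"),    # m1G methyltransferase
    ("m1G", "yW", "Tyw1"),   # wybutosine pathway, collapsed to one step
]

def predicted_modifications(enzymes_in_proteome):
    """Nucleosides reachable from unmodified U/G given the detected enzymes."""
    reachable = {"U", "G"}
    queue = deque(reachable)
    while queue:
        state = queue.popleft()
        for sub, prod, enz in PATHWAY:
            if sub == state and enz in enzymes_in_proteome and prod not in reachable:
                reachable.add(prod)
                queue.append(prod)
    return reachable - {"U", "G"}

# Proteome with TruB and Trm5 homologs but no Tyw1: yW cannot be reached.
mods = predicted_modifications({"TruB", "Trm5"})
```

The reachable set shrinks or grows as enzymes are added to or removed from the detected-homolog set, which is what lets the method flag modification pathways that become possible when tRNAs are expressed in a foreign host.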
Genomics, proteomics, MEMS and SAIF: which role for diagnostic imaging?
Grassi, R; Lagalla, R; Rotondo, A
2008-09-01
In these three words--genomics, proteomics and nanotechnologies--is the future of medicine of the third millennium, which will be characterised by more careful attention to disease prevention, diagnosis and treatment. Molecular imaging appears to satisfy this requirement. It is emerging as a new science that brings together molecular biology and in vivo imaging and represents the key for the application of personalized medicine. Micro-PET (positron emission tomography), micro-SPECT (single photon emission computed tomography), micro-CT (computed tomography), micro-MR (magnetic resonance), micro-US (ultrasound) and optical imaging are all molecular imaging techniques, several of which are applied only in preclinical settings on animal models. Others, however, are applied routinely in both clinical and preclinical setting. Research on small animals allows investigation of the genesis and development of diseases, as well as drug efficacy and the development of personalized therapies, through the study of biological processes that precede the expression of common symptoms of a pathology. Advances in molecular imaging were made possible only by collaboration among scientists in the fields of radiology, chemistry, molecular and cell biology, physics, mathematics, pharmacology, gene therapy and oncology. Although until now researchers have traditionally limited their interactions, it is only by increasing these connections that the current gaps in terminology, methods and approaches that inhibit scientific progress can be eliminated.
Uddin, Reaz; Sufian, Muhammad
2016-01-01
Infections caused by Salmonella enterica, a Gram-negative facultative anaerobic bacterium belonging to the family Enterobacteriaceae, are major threats to the health of humans and animals. The recent availability of complete genome data for pathogenic strains of S. enterica opens new avenues for the identification of drug targets and drug candidates. We have used the genomic and metabolic pathway data to identify pathways and proteins essential to the pathogen and absent from the host. We took the whole-proteome sequence data of 42 strains of S. enterica and Homo sapiens along with KEGG-annotated metabolic pathway data, clustered protein sequences using CD-HIT, identified essential genes using the DEG database, discarded S. enterica homologs of human proteins in unique metabolic pathways (UMPs) and characterized hypothetical proteins with SVM-Prot and InterProScan. Through this core proteomic analysis we have identified enzymes essential to the pathogen. The identification of 73 enzymes common to all 42 strains of S. enterica is the real strength of the current study. We propose all 73 unexplored enzymes as potential drug targets against the infections caused by S. enterica. The study is comprehensive around S. enterica and simultaneously considers every possible pathogenic strain. This comprehensiveness makes the current study significant since, to the best of our knowledge, it is the first subtractive core proteomic analysis of unique metabolic pathways applied to any pathogen for the identification of drug targets. We applied extensive computational methods to shortlist a few potential drug targets according to druggability criteria, e.g., non-homologous to the human host, essential to the pathogen, and playing a significant role in essential metabolic pathways of the pathogen (i.e., S. enterica). In the current study, subtractive proteomics was applied through a novel approach, i.e., by considering only proteins of the unique metabolic pathways of the pathogen and mining the proteomic data of all completely sequenced strains of the pathogen, thus improving the quality and applicability of the results. We believe that sharing the knowledge from this study will eventually lead to novel and unique therapeutic regimens against the infections caused by S. enterica.
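The subtractive filtering described above reduces, at its core, to set operations over protein accessions. A minimal sketch, with toy accession sets standing in for the CD-HIT clustering, DEG essentiality and BLAST-versus-human results (none of these identifiers are real S. enterica data):

```python
def subtractive_targets(core_proteome, essential, human_homologs, ump_members):
    """Core proteins that are essential, belong to unique metabolic pathways
    (UMPs) and have no human homolog, i.e. candidate drug targets."""
    return (core_proteome & essential & ump_members) - human_homologs

core = {"P1", "P2", "P3", "P4"}   # shared by all strains (CD-HIT clusters)
essential = {"P1", "P2", "P4"}    # hits against the DEG essential-gene database
human = {"P2"}                    # homologous to a human protein (discarded)
ump = {"P1", "P2", "P3"}          # members of pathogen-unique pathways

targets = subtractive_targets(core, essential, human, ump)  # -> {'P1'}
```

Each intersection or difference corresponds to one druggability criterion from the abstract: conservation across strains, essentiality, pathway uniqueness, and non-homology to the host.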
Top-Down Proteomics and Farm Animal and Aquatic Sciences.
Campos, Alexandre M O; de Almeida, André M
2016-12-21
Proteomics is a field of growing importance in animal and aquatic sciences. Top-down proteomics is slowly making its way into the vast array of proteomic approaches that researchers have access to. This opinion and mini-review article is dedicated to top-down proteomics and how its use can be of importance to animal and aquatic sciences. Herein, we include an overview of the principles of top-down proteomics and how it differs from other, more commonly used proteomic methods, especially bottom-up proteomics. In addition, we provide relevant sections on how the approach was or can be used as a research tool and conclude with our opinions on its future use in animal and aquatic sciences.
Tiered Human Integrated Sequence Search Databases for Shotgun Proteomics.
Deutsch, Eric W; Sun, Zhi; Campbell, David S; Binz, Pierre-Alain; Farrah, Terry; Shteynberg, David; Mendoza, Luis; Omenn, Gilbert S; Moritz, Robert L
2016-11-04
The results of analysis of shotgun proteomics mass spectrometry data can be greatly affected by the selection of the reference protein sequence database against which the spectra are matched. For many species there are multiple sources from which somewhat different sequence sets can be obtained. This can lead to confusion about which database is best in which circumstances, a problem especially acute in human sample analysis. All sequence databases are genome-based, with sequences for the predicted genes and their protein translation products compiled. Our goal is to create a set of primary sequence databases that comprise the union of sequences from many of the different available sources and make the result easily available to the community. We have compiled a set of four sequence databases of varying sizes, from a small database consisting of only the ∼20,000 primary isoforms plus contaminants to a very large database that includes almost all nonredundant protein sequences from several sources. This set of tiered, increasingly complete human protein sequence databases suitable for mass spectrometry proteomics sequence database searching is called the Tiered Human Integrated Search Proteome set. In order to evaluate the utility of these databases, we have analyzed two different data sets, one from the HeLa cell line and the other from normal human liver tissue, with each of the four tiers of database complexity. The result is that approximately 0.8%, 1.1%, and 1.5% additional peptides can be identified for Tiers 2, 3, and 4, respectively, as compared with the Tier 1 database, at substantially increasing computational cost. This increase in computational cost may be worth bearing if the identification of sequence variants or the discovery of sequences that are not present in the reviewed knowledge base entries is an important goal of the study.
We find that it is useful to search a data set against a simpler database, and then check the uniqueness of the discovered peptides against a more complex database. We have set up an automated system that downloads all the source databases on the first of each month and automatically generates a new set of search databases and makes them available for download at http://www.peptideatlas.org/thisp/ .
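The recommended two-stage strategy, searching against a small tier and then re-checking peptide uniqueness against a larger one, can be illustrated with a toy substring check. The tier contents below are invented stand-ins; real THISP tiers are FASTA files and uniqueness is assessed per protein entry:

```python
def peptide_is_unique(peptide, proteins):
    """True if the peptide sequence occurs in exactly one protein entry."""
    return sum(peptide in seq for seq in proteins.values()) == 1

# Toy Tier 1: primary isoforms only. Toy Tier 4: adds a redundant variant
# entry that shares the same stretch of sequence.
tier1 = {"PROT_A": "MKWVTFISLLLLFSSAYSR",
         "PROT_B": "GLSDGEWQLVLNVWGK"}
tier4 = dict(tier1, VARIANT_A="MKWVTFISLLLLFSSAYSR")

pep = "TFISLLLLFSSAYS"
unique_in_tier1 = peptide_is_unique(pep, tier1)  # unique in the small tier
unique_in_tier4 = peptide_is_unique(pep, tier4)  # shared once variants appear
```

A peptide that looks protein-unique in Tier 1 can thus lose its uniqueness when re-checked against the larger tier, which is exactly why the authors suggest the second-pass check.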
Curty, N; Kubitschek-Barreira, P H; Neves, G W; Gomes, D; Pizzatti, L; Abdelhay, E; Souza, G H M F; Lopes-Bezerra, L M
2014-01-31
Blood vessel invasion is a key feature of invasive aspergillosis. This angioinvasion process contributes to tissue thrombosis, which can impair the access of leukocytes and antifungal drugs to the site of infection. It has been demonstrated that human umbilical vein endothelial cells (HUVECs) are activated and assume a prothrombotic phenotype following contact with Aspergillus fumigatus hyphae or germlings, a process that is independent of fungal viability. However, the molecular mechanisms by which this pathogen can activate endothelial cells, together with the endothelial pathways that are involved in this process, remain unknown. Using a label-free approach by High Definition Mass Spectrometry (HDMS(E)), differentially expressed proteins were identified during the HUVEC-A. fumigatus interaction. Among these, 89 proteins were determined to be up- or down-regulated, and another 409 proteins were exclusive to one experimental condition: the HUVEC control or the HUVEC:AF interaction. The in silico predictions provided a general view of which biological processes and/or pathways were regulated during the HUVEC:AF interaction; they mainly included cell signaling, immune response and hemostasis pathways. This work describes the first global proteomic analysis of HUVECs following interaction with A. fumigatus germlings, the fungus morphotype that represents the first step of invasion and dissemination within the host. A. fumigatus is the main cause of opportunistic invasive fungal infection in neutropenic hematologic patients. One of the key steps in the establishment of invasive aspergillosis is angioinvasion, but the mechanism underlying the interaction of A. fumigatus with the vascular endothelium remains unknown.
The identification of up- and down-regulated proteins expressed by human endothelial cells in response to the fungal infection can help reveal the mechanism of the endothelial response and further the understanding of the pathophysiology of this high-mortality disease. This article is part of a Special Issue entitled: Trends in Microbial Proteomics. © 2013 Elsevier B.V. All rights reserved.
Jadhav, Snehal; Sevior, Danielle; Bhave, Mrinal; Palombo, Enzo A
2014-01-31
Conventional methods used for primary detection of Listeria monocytogenes from foods and subsequent confirmation of presumptive positive samples involve prolonged incubation and biochemical testing, which generally require four to five days to obtain a result. In the current study, a simple and rapid proteomics-based MALDI-TOF MS approach was developed to detect L. monocytogenes directly from selective enrichment broths. Milk samples spiked with single-species and multiple-species cultures were incubated in a selective enrichment broth for 24 h, followed by an additional 6 h of secondary enrichment. As few as 1 colony-forming unit (cfu) of L. monocytogenes per mL of initial selective broth culture could be detected within 30 h. On applying the same approach to solid foods previously implicated in listeriosis, namely chicken pâté, cantaloupe and Camembert cheese, detection was achieved within the same time interval at inoculation levels of 10 cfu/mL. Unlike the routine application of MALDI-TOF MS for identification of bacteria from solid media, this study proposes a cost-effective and time-saving detection scheme for direct identification of L. monocytogenes from broth cultures. This article is part of a Special Issue entitled: Trends in Microbial Proteomics. Globally, foodborne diseases are major causes of illness and fatalities in humans. Hence, there is a continual need for reliable and rapid means of pathogen detection from food samples. Recent applications of MALDI-TOF MS for diagnostic microbiology have focused on detection of microbes from clinical specimens. However, the current study has emphasized its use as a tool for detecting the major foodborne pathogen, Listeria monocytogenes, directly from selective enrichment broths. This proof-of-concept study proposes a detection scheme that is more rapid and simple compared to conventional methods of Listeria detection.
Very low levels of the pathogen could be identified from different food samples post-enrichment in selective enrichment broths. Use of this scheme will facilitate rapid and cost-effective testing for this important foodborne pathogen. © 2013.
Castellanos-Martínez, Sheila; Diz, Angel P; Álvarez-Chaver, Paula; Gestal, Camino
2014-06-13
The immune system of cephalopods is poorly known to date. The lack of genomic information makes it difficult to understand vital processes such as immune defense mechanisms and their interaction with pathogens at the molecular level. The common octopus Octopus vulgaris has high economic relevance and potential for aquaculture. However, disease outbreaks provoke serious reductions in production with potentially severe economic losses. In this study, a proteomic approach is used to analyze the immune response of O. vulgaris against the coccidian Aggregata octopiana, a gastrointestinal parasite which impairs the cephalopod's nutritional status. The hemocyte and plasma proteomes were compared by 2-DE between sick and healthy octopus. The identities of 12 differentially expressed spots and another 27 spots without significant alteration from hemocytes, plus 5 spots from plasma, were determined by mass spectrometry analysis aided by a six-reading-frame translation of an octopus hemocyte RNA-seq database and also public databases. Principal component analysis pointed to 7 proteins from hemocytes as the major contributors to the overall difference between levels of infection, which could therefore be considered potential biomarkers. In particular, filamin, fascin and peroxiredoxin are highlighted because of their implication in octopus immune defense activity. From the octopus plasma, hemocyanin was identified. This work represents a first step toward characterizing the protein profile of O. vulgaris hemolymph, providing important information for subsequent studies of the octopus immune system at the molecular level and for understanding the basis of octopus tolerance or resistance to A. octopiana.
The study presented herein focuses on the comprehension of the octopus immune defense against a parasite infection. In particular, it is centered on the host-parasite relationship between the octopus and the protozoan A. octopiana, which induces severe gastrointestinal injuries in the octopus that produce a malabsorption syndrome. The common octopus is a commercially important species with high potential for aquaculture in semi-open systems, and this pathology reduces the condition of octopus populations on-growing in open-water systems, resulting in important economic losses. This is the first proteomic approach developed on this host-parasite relationship; the contribution of this work is therefore i) ecological, since this particular relationship is becoming established as a model of host-parasite interaction in natural populations; ii) evolutionary, due to the characterization of immune molecules that could contribute to understanding the functioning of the immune defense in these highly evolved mollusks; and iii) economic. The results of this study provide an overview of the octopus hemolymph proteome. Furthermore, proteins influenced by the level of infection and implicated in the octopus cellular response are also shown. Consequently, a set of biomarkers for disease resistance is suggested for further research that could be valuable for the improvement of octopus culture, taking into account their high economic value, the decline of landings and the need for diversification of reared species in order to ensure the growth of aquaculture activity. Although cephalopods are model species for biomedical studies and possess potential in aquaculture, their genomes have not been sequenced yet, which limits the application of genomic data to research on important biological processes. Similarly, the octopus proteome, like those of other non-model organisms, is poorly represented in public databases.
Most of the proteins were identified from an octopus hemocyte RNA-seq database that we have generated, which will be the object of another manuscript in preparation. The need to increase molecular data from non-model organisms is therefore highlighted here; in particular, expanding knowledge of cephalopod genomics is encouraged in order to increase successful protein identifications. This article is part of a Special Issue entitled: Proteomics of non-model organisms. Copyright © 2013 Elsevier B.V. All rights reserved.
Serang, Oliver; MacCoss, Michael J.; Noble, William Stafford
2010-01-01
The problem of identifying proteins from a shotgun proteomics experiment has not been definitively solved. Identifying the proteins in a sample requires ranking them, ideally with interpretable scores. In particular, “degenerate” peptides, which map to multiple proteins, have made such a ranking difficult to compute. The problem of computing posterior probabilities for the proteins, which can be interpreted as confidence in a protein’s presence, has been especially daunting. Previous approaches have either ignored the peptide degeneracy problem completely, addressed it by computing a heuristic set of proteins or heuristic posterior probabilities, or by estimating the posterior probabilities with sampling methods. We present a probabilistic model for protein identification in tandem mass spectrometry that recognizes peptide degeneracy. We then introduce graph-transforming algorithms that facilitate efficient computation of protein probabilities, even for large data sets. We evaluate our identification procedure on five different well-characterized data sets and demonstrate our ability to efficiently compute high-quality protein posteriors. PMID:20712337
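One of the graph transformations such protein-inference models rely on is splitting the bipartite peptide-protein graph into independent connected components, so that posteriors can be computed per component rather than over the whole data set. A sketch of just that partitioning step (peptide and protein names are placeholders; the scoring model itself is out of scope):

```python
from collections import defaultdict

def connected_components(peptide_to_proteins):
    """Split the bipartite peptide-protein graph into connected components."""
    adj = defaultdict(set)
    for pep, prots in peptide_to_proteins.items():
        for prot in prots:
            adj[pep].add(prot)
            adj[prot].add(pep)
    seen, comps = set(), []
    for node in adj:
        if node in seen:
            continue
        stack, comp = [node], set()
        while stack:  # iterative depth-first traversal
            n = stack.pop()
            if n in seen:
                continue
            seen.add(n)
            comp.add(n)
            stack.extend(adj[n] - seen)
        comps.append(comp)
    return comps

# "pep2" is a degenerate peptide: it maps to both ProtA and ProtB, so those
# proteins end up in the same component and must be scored jointly.
comps = connected_components({"pep1": ["ProtA"],
                              "pep2": ["ProtA", "ProtB"],
                              "pep3": ["ProtC"]})
```

Degenerate peptides glue proteins into shared components, which is precisely why their probabilities cannot be computed independently per protein.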
Student Entitlement: Issues and Strategies for Confronting Entitlement in the Classroom and Beyond
ERIC Educational Resources Information Center
Lippmann, Stephen; Bulanda, Ronald E.; Wagenaar, Theodore C.
2009-01-01
While not representative of all students, those who demonstrate a sense of entitlement demand a great deal of instructors' time and energy. Our article places student entitlement in its social context, with specific attention to the prevalence of the consumer mentality, grade inflation, and the self-esteem of the student generation. We then…
22 CFR 40.203 - Alien entitled to A, E, or G nonimmigrant classification.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 22 Foreign Relations 1 2013-04-01 2013-04-01 false Alien entitled to A, E, or G nonimmigrant... § 40.203 Alien entitled to A, E, or G nonimmigrant classification. An alien entitled to nonimmigrant... to receive an immigrant visa until the alien executes a written waiver of all rights, privileges...
22 CFR 40.203 - Alien entitled to A, E, or G nonimmigrant classification.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 22 Foreign Relations 1 2012-04-01 2012-04-01 false Alien entitled to A, E, or G nonimmigrant... § 40.203 Alien entitled to A, E, or G nonimmigrant classification. An alien entitled to nonimmigrant... to receive an immigrant visa until the alien executes a written waiver of all rights, privileges...
22 CFR 40.203 - Alien entitled to A, E, or G nonimmigrant classification.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 22 Foreign Relations 1 2011-04-01 2011-04-01 false Alien entitled to A, E, or G nonimmigrant... § 40.203 Alien entitled to A, E, or G nonimmigrant classification. An alien entitled to nonimmigrant... to receive an immigrant visa until the alien executes a written waiver of all rights, privileges...
Self-Compassion as a Predictor of Psychological Entitlement in Turkish University Students
ERIC Educational Resources Information Center
Sahranç, Ümit
2015-01-01
The purpose of this study is to examine the predictive role of self-compassion on psychological entitlement. Participants were 331 university students (205 women, 126 men, M age = 20.5 years.). In this study, the Self-compassion Scale and the Psychological Entitlement Scale were used to assess self-compassion and psychological entitlement. The…
38 CFR 10.35 - Claim of mother entitled by reason of unmarried status.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 38 Pensions, Bonuses, and Veterans' Relief 1 2011-07-01 2011-07-01 false Claim of mother entitled... OF VETERANS AFFAIRS ADJUSTED COMPENSATION Adjusted Compensation; General § 10.35 Claim of mother entitled by reason of unmarried status. Claim of a mother for the benefits to which she may be entitled by...
38 CFR 10.35 - Claim of mother entitled by reason of unmarried status.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 38 Pensions, Bonuses, and Veterans' Relief 1 2010-07-01 2010-07-01 false Claim of mother entitled... OF VETERANS AFFAIRS ADJUSTED COMPENSATION Adjusted Compensation; General § 10.35 Claim of mother entitled by reason of unmarried status. Claim of a mother for the benefits to which she may be entitled by...
20 CFR 408.816 - When does SVB entitlement end due to death?
Code of Federal Regulations, 2014 CFR
2014-04-01
... 20 Employees' Benefits 2 2014-04-01 2014-04-01 false When does SVB entitlement end due to death... CERTAIN WORLD WAR II VETERANS Suspensions and Terminations Termination § 408.816 When does SVB entitlement end due to death? Your SVB entitlement ends with the month in which you die. Payments are terminated...